



hyPACK-2013 : HPC GPU Cluster : NVIDIA CUDA - Health Monitoring

The research efforts on the HPC GPU Cluster are aimed at addressing some of the challenges of building and running GPU clusters in parallel computing environments. The focus is on cluster architecture, per-node power consumption under various workloads, resource allocation and sharing as part of the management software, health monitoring and data security, and programming models and applications. The prototype cluster can be made "adaptive" to the application it is running, assigning the most effective resources in real time as the application demands, without requiring modifications to applications written using different programming models.


Example 1.1

Write an MPI-OpenCL program to perform a device query on each GPU of every node in an HPC GPU Cluster
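
A minimal sketch of such a program is given below. It assumes one MPI rank per node, the OpenCL headers and runtime on every node, and illustrative upper bounds of 8 platforms and 16 devices; each rank reports the name, compute units and global memory of every GPU it can see.

/* devquery.c - sketch: MPI + OpenCL device query, one MPI rank per node assumed */
#include <mpi.h>
#include <CL/cl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &namelen);

    cl_uint nplat = 0;
    cl_platform_id plats[8];                      /* illustrative upper bound */
    clGetPlatformIDs(8, plats, &nplat);
    if (nplat > 8) nplat = 8;

    for (cl_uint p = 0; p < nplat; p++) {
        cl_uint ndev = 0;
        cl_device_id devs[16];
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 16, devs, &ndev) != CL_SUCCESS)
            continue;                             /* no GPU on this platform */
        if (ndev > 16) ndev = 16;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_uint cu = 0;
            cl_ulong gmem = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cu), &cu, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(gmem), &gmem, NULL);
            printf("rank %d (%s): platform %u device %u: %s, %u compute units, %lu MB global memory\n",
                   rank, host, p, d, name, cu, (unsigned long)(gmem >> 20));
        }
    }

    MPI_Finalize();
    return 0;
}

Compile with, for example, mpicc devquery.c -o devquery -lOpenCL and launch one rank per node with the cluster's MPI launcher.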

Example 1.2

Demonstrate health monitoring of a large homogeneous HPC GPU Cluster using OpenCL device queries, and check the GPU memory on every GPU of the cluster (Assignment)
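
Since this is an assignment, only a starting point is sketched here: each rank (one per node assumed) sums CL_DEVICE_GLOBAL_MEM_SIZE over its GPUs, and rank 0 gathers the values and flags nodes that differ from node 0, which on a homogeneous cluster suggests a failed or missing GPU. The helper name node_gpu_mem and the comparison policy are assumptions.

/* memcheck.c - sketch: gather per-node GPU memory on rank 0 and flag mismatches */
#include <mpi.h>
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical helper: total GPU global memory of this node, in MB */
static unsigned long node_gpu_mem(void)
{
    cl_platform_id plat;
    cl_device_id devs[16];
    cl_uint ndev = 0;
    unsigned long mb = 0;

    if (clGetPlatformIDs(1, &plat, NULL) != CL_SUCCESS) return 0;
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 16, devs, &ndev) != CL_SUCCESS) return 0;
    if (ndev > 16) ndev = 16;
    for (cl_uint d = 0; d < ndev; d++) {
        cl_ulong gmem = 0;
        clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(gmem), &gmem, NULL);
        mb += (unsigned long)(gmem >> 20);
    }
    return mb;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    unsigned long mem = node_gpu_mem();
    unsigned long *all = NULL;
    if (rank == 0) all = malloc(size * sizeof(unsigned long));

    MPI_Gather(&mem, 1, MPI_UNSIGNED_LONG, all, 1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int r = 0; r < size; r++)    /* a node reporting different memory than node 0 is suspect */
            printf("node %d: %lu MB GPU memory%s\n", r, all[r],
                   all[r] == all[0] ? "" : "  <-- differs from node 0");
        free(all);
    }
    MPI_Finalize();
    return 0;
}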

Example 1.3

Write an MPI-OpenCL program to measure the host-to-device bandwidth on each node of an HPC GPU Cluster with one or more GPUs enabled
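
A possible outline is sketched below: each rank times repeated blocking clEnqueueWriteBuffer calls on the first GPU of its node and reports the sustained host-to-device bandwidth. The 64 MB transfer size and 20 repetitions are arbitrary choices, and pageable host memory is used; a pinned-memory variant would normally be measured as well.

/* bandwidth.c - sketch: host-to-device copy bandwidth on the first GPU of each node */
#include <mpi.h>
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBYTES (64u << 20)   /* 64 MB per transfer */
#define REPS   20

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    void *host = malloc(NBYTES);
    memset(host, 1, NBYTES);
    cl_mem dbuf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, NBYTES, NULL, &err);

    /* one warm-up copy, then timed blocking writes */
    clEnqueueWriteBuffer(q, dbuf, CL_TRUE, 0, NBYTES, host, 0, NULL, NULL);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++)
        clEnqueueWriteBuffer(q, dbuf, CL_TRUE, 0, NBYTES, host, 0, NULL, NULL);
    clFinish(q);
    double t = MPI_Wtime() - t0;

    printf("rank %d: host-to-device bandwidth = %.2f GB/s\n",
           rank, (double)NBYTES * REPS / t / 1.0e9);

    clReleaseMemObject(dbuf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(host);
    MPI_Finalize();
    return 0;
}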

Example 1.4

Write a test suite based on GPU device virtualization to execute a few hundred OpenCL kernels on an HPC GPU Cluster using an OpenCL wrapper test (Assignment)
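
One possible starting point for this assignment is sketched below: a small wrapper builds and launches a trivial kernel and checks its result, and the driver cycles a few hundred logical test instances round-robin over the physical GPUs of each node as a crude stand-in for device virtualization. The kernel, test count and mapping scheme are all illustrative assumptions.

/* kerneltest.c - sketch: run many trivial OpenCL kernels, cycling over the GPUs of a node */
#include <mpi.h>
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void inc(__global int *a) { a[get_global_id(0)] += 1; }";

/* hypothetical wrapper: build, launch and check one kernel instance on device 'dev' */
static int run_one_test(cl_device_id dev)
{
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "inc", &err);

    int data[256] = {0};
    size_t gsz = 256;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    int ok = (data[0] == 1);            /* pass only if the kernel really ran */
    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return ok;
}

int main(int argc, char **argv)
{
    int rank, passed = 0, ntests = 200;   /* "a few hundred" kernels per node */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cl_platform_id plat;
    cl_uint ndev = 0;
    cl_device_id devs[16];
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 16, devs, &ndev);
    if (ndev > 16) ndev = 16;

    for (int t = 0; t < ntests && ndev > 0; t++)
        passed += run_one_test(devs[t % ndev]);   /* round-robin over physical GPUs */

    printf("rank %d: %d of %d kernel tests passed\n", rank, passed, ntests);
    MPI_Finalize();
    return 0;
}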

Example 1.5

Write an MPI-Pthread-OpenCL test program to demonstrate pre-job node allocation and post-job de-allocation features in an HPC GPU Cluster environment
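
A minimal interpretation is sketched below: after MPI_Init_thread, one POSIX thread per GPU creates an OpenCL context and command queue before the job (pre-job allocation) and releases them afterwards (post-job de-allocation). The job body is left as a placeholder, and this allocation policy is an assumption.

/* prepost.c - sketch: per-GPU pthreads acquire and release OpenCL resources around a job */
#include <mpi.h>
#include <pthread.h>
#include <CL/cl.h>
#include <stdio.h>

static void *worker(void *arg)
{
    cl_device_id dev = *(cl_device_id *)arg;
    cl_int err;

    /* pre-job: allocate per-GPU resources */
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* ... job body (kernel launches) would go here ... */
    clFinish(q);

    /* post-job: de-allocate */
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cl_platform_id plat;
    cl_uint ndev = 0;
    cl_device_id devs[16];
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 16, devs, &ndev);
    if (ndev > 16) ndev = 16;

    pthread_t tid[16];
    for (cl_uint d = 0; d < ndev; d++)
        pthread_create(&tid[d], NULL, worker, &devs[d]);
    for (cl_uint d = 0; d < ndev; d++)
        pthread_join(tid[d], NULL);

    printf("rank %d: allocated and released %u GPU(s)\n", rank, ndev);
    MPI_Finalize();
    return 0;
}

Build with, for example, mpicc prepost.c -o prepost -lpthread -lOpenCL.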



Application Kernels & Matrix Computations - Benchmarks

Example 1.7

Write an MPI-OpenCL program to demonstrate the maximum achieved performance of DGEMM using OpenCL BLAS libraries on each GPU of an HPC GPU Cluster
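
A sketch of the GPU timing loop is given below, assuming the clBLAS library (clBLAS.h, clblasSetup/clblasDgemm) as the OpenCL BLAS implementation; the matrix order N = 2048 is illustrative. Performance is reported as 2*N^3 floating-point operations divided by the measured time.

/* dgemm_gpu.c - sketch: time clBLAS DGEMM on the first GPU of each node and report GFLOP/s */
#include <mpi.h>
#include <clBLAS.h>
#include <stdio.h>
#include <stdlib.h>

#define N 2048

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cl_platform_id plat;  cl_device_id dev;  cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
    clblasSetup();

    size_t bytes = (size_t)N * N * sizeof(double);
    double *hA = malloc(bytes), *hB = malloc(bytes), *hC = malloc(bytes);
    for (size_t i = 0; i < (size_t)N * N; i++) { hA[i] = 1.0; hB[i] = 2.0; hC[i] = 0.0; }

    cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY  | CL_MEM_COPY_HOST_PTR, bytes, hA, &err);
    cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY  | CL_MEM_COPY_HOST_PTR, bytes, hB, &err);
    cl_mem C = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, bytes, hC, &err);

    double t0 = MPI_Wtime();
    clblasDgemm(clblasRowMajor, clblasNoTrans, clblasNoTrans,
                N, N, N, 1.0, A, 0, N, B, 0, N, 0.0, C, 0, N,
                1, &q, 0, NULL, NULL);
    clFinish(q);
    double t = MPI_Wtime() - t0;

    printf("rank %d: GPU DGEMM %dx%d  %.1f GFLOP/s\n",
           rank, N, N, 2.0 * N * N * N / t / 1.0e9);

    clblasTeardown();
    clReleaseMemObject(A); clReleaseMemObject(B); clReleaseMemObject(C);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    free(hA); free(hB); free(hC);
    MPI_Finalize();
    return 0;
}

Link against -lclBLAS -lOpenCL; for peak numbers a warm-up call and several timed repetitions would normally be added.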

Example 1.8

Write an MPI-OpenCL program to calculate the maximum achieved performance of DGEMM on each node of an HPC GPU Cluster, using ATLAS on the host CPU and OpenCL BLAS libraries on the GPUs
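
The host-CPU half can be timed with ATLAS's cblas_dgemm as sketched below and combined with the OpenCL BLAS sketch from Example 1.7 for the GPU half; the matrix order and input data are again illustrative.

/* host_dgemm.c - sketch: time ATLAS cblas_dgemm on each node and report GFLOP/s */
#include <mpi.h>
#include <cblas.h>
#include <stdio.h>
#include <stdlib.h>

#define N 2048

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t n2 = (size_t)N * N;
    double *A = malloc(n2 * sizeof(double)), *B = malloc(n2 * sizeof(double)),
           *C = malloc(n2 * sizeof(double));
    for (size_t i = 0; i < n2; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    double t0 = MPI_Wtime();
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N, 1.0, A, N, B, N, 0.0, C, N);
    double t = MPI_Wtime() - t0;

    printf("rank %d: host-CPU DGEMM %dx%d  %.1f GFLOP/s\n",
           rank, N, N, 2.0 * N * N * N / t / 1.0e9);

    free(A); free(B); free(C);
    MPI_Finalize();
    return 0;
}

Link with the ATLAS CBLAS libraries (typically -lcblas -latlas; names vary between installations).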

Example 1.9

Write an MPI-OpenCL program that performs communication among MPI processes using OpenCL 1.2 features in an HPC GPU Cluster environment, for the solution of PDEs by the finite difference method
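
A sketch of the communication step is given below for a 1-D domain decomposition: each rank stages its boundary values from the device with clEnqueueReadBuffer, exchanges them with its neighbours via MPI_Sendrecv, and writes the received ghost cells back with clEnqueueWriteBuffer. The stencil-update kernel is omitted, and the local problem size and iteration count are illustrative.

/* halo.c - sketch: host-staged halo exchange between the GPU buffers of neighbouring ranks */
#include <mpi.h>
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOC 1024                       /* interior points per rank */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = rank > 0        ? rank - 1 : MPI_PROC_NULL;
    int right = rank < size - 1 ? rank + 1 : MPI_PROC_NULL;

    cl_platform_id plat;  cl_device_id dev;  cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* device array with one ghost cell at each end: u[0] and u[NLOC+1] */
    size_t bytes = (NLOC + 2) * sizeof(double);
    double *h = calloc(NLOC + 2, sizeof(double));
    for (int i = 1; i <= NLOC; i++) h[i] = (double)rank;   /* arbitrary initial data */
    cl_mem du = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, bytes, h, &err);

    for (int step = 0; step < 10; step++) {
        double sendL, sendR, recvL = 0.0, recvR = 0.0;

        /* stage boundary values device -> host */
        clEnqueueReadBuffer(q, du, CL_TRUE, 1 * sizeof(double), sizeof(double), &sendL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, du, CL_TRUE, NLOC * sizeof(double), sizeof(double), &sendR, 0, NULL, NULL);

        /* exchange with neighbours */
        MPI_Sendrecv(&sendL, 1, MPI_DOUBLE, left,  0, &recvR, 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&sendR, 1, MPI_DOUBLE, right, 1, &recvL, 1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* write ghost cells host -> device */
        clEnqueueWriteBuffer(q, du, CL_TRUE, 0, sizeof(double), &recvL, 0, NULL, NULL);
        clEnqueueWriteBuffer(q, du, CL_TRUE, (NLOC + 1) * sizeof(double), sizeof(double), &recvR, 0, NULL, NULL);

        /* ... finite-difference update kernel on u[1..NLOC] would be enqueued here ... */
    }

    printf("rank %d: completed halo exchange with ranks %d and %d\n", rank, left, right);
    clReleaseMemObject(du);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    free(h);
    MPI_Finalize();
    return 0;
}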


Centre for Development of Advanced Computing