
CUDA Overview

CUDA is a feature set for programming NVIDIA GPUs. We have 16 nodes with NVIDIA Tesla M2050 GPUs. Each of these GPUs has 448 cores running at 1.15 GHz and is very fast at floating-point math - over a teraFLOP! However, programming in CUDA is difficult for the uninitiated.

Training videos

CUDA Programming Model Overview:

<HTML5video type="youtube" width="800" height="480" autoplay="false">aveYOlBSe-Y</HTML5video>

CUDA Programming Basics Part I (Host functions):

<HTML5video type="youtube" width="800" height="480" autoplay="false">79VARRFwQgY</HTML5video>

CUDA Programming Basics Part II (Device functions):

<HTML5video type="youtube" width="800" height="480" autoplay="false">G5-iI1ogDW4</HTML5video>

Compiling CUDA Applications

nvcc is the compiler for CUDA applications. When compiling your applications manually, you will need to keep the following in mind:

  • The CUDA development headers are located here: /opt/cuda/sdk/C/common/inc
  • The CUDA architecture is: sm_20
  • The CUDA SDK is currently not available on the headnode. (compile on the nodes with CUDA, either in your jobscript or via qrsh -l cuda=TRUE)
  • Do not run your CUDA applications on the headnode. We cannot guarantee they will run, and they will give you terrible results if they do.

Putting it all together, you can compile CUDA applications as follows:

nvcc -I /opt/cuda/sdk/C/common/inc -arch sm_20 <source>.cu -o <output>
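Since nvcc is only installed on the CUDA nodes, a quick sanity check before compiling is to make sure the compiler responds (if this command is not found, you are probably still on the headnode):

nvcc --version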


Create your Application

Copy the following application into Beocat as vecadd.cu:

#include <stdio.h>

#define SIZE 10

// Kernel definition, see also section 4.2.3 of Nvidia Cuda Programming Guide
__global__ void vecAdd(float* A, float* B, float* C)
{
   // threadIdx.x is a built-in variable provided by CUDA at runtime
   int i = threadIdx.x;
   C[i] = A[i] + B[i];
}

int main()
{
   int N = SIZE;
   float A[SIZE], B[SIZE], C[SIZE];
   float *devPtrA;
   float *devPtrB;
   float *devPtrC;
   int memsize = SIZE * sizeof(float);

   // Fill the host input arrays with sample data
   for (int i = 0; i < SIZE; i++) {
      A[i] = i;
      B[i] = 2 * i;
   }

   // Allocate device memory and copy the inputs to the GPU
   cudaMalloc((void**)&devPtrA, memsize);
   cudaMalloc((void**)&devPtrB, memsize);
   cudaMalloc((void**)&devPtrC, memsize);
   cudaMemcpy(devPtrA, A, memsize, cudaMemcpyHostToDevice);
   cudaMemcpy(devPtrB, B, memsize, cudaMemcpyHostToDevice);

   // __global__ functions are called: Func<<< Dg, Db, Ns >>>(parameter);
   vecAdd<<<1, N>>>(devPtrA, devPtrB, devPtrC);

   // Copy the result back to the host and print it
   cudaMemcpy(C, devPtrC, memsize, cudaMemcpyDeviceToHost);
   for (int i = 0; i < SIZE; i++)
      printf("C[%d]=%f\n", i, C[i]);

   // Release the device memory
   cudaFree(devPtrA);
   cudaFree(devPtrB);
   cudaFree(devPtrC);
   return 0;
}
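The vecAdd<<<1, N>>> call above launches a single block of N threads, which only works while N fits in one block (at most 1024 threads on sm_20 hardware). As a sketch of how larger arrays are usually handled (vecAddLarge is a hypothetical name, not part of the program above), each thread computes a global index from its block and thread IDs:

// Hypothetical kernel for arrays larger than one block:
// each thread computes its global index and guards against overrun.
__global__ void vecAddLarge(float* A, float* B, float* C, int n)
{
   int i = blockIdx.x * blockDim.x + threadIdx.x;
   if (i < n)
      C[i] = A[i] + B[i];
}

// Called with enough 256-thread blocks to cover n elements:
//    int threads = 256;
//    int blocks = (n + threads - 1) / threads;
//    vecAddLarge<<<blocks, threads>>>(devPtrA, devPtrB, devPtrC, n);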



Gain Access to a CUDA-capable Node

qrsh -l cuda=TRUE
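Once you have a shell on the node, you can verify that a GPU is visible with the nvidia-smi utility (assuming the NVIDIA driver tools are in your path):

nvidia-smi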

Compile Your Application

nvcc -I /opt/cuda/sdk/C/common/inc -arch sm_20 vecadd.cu -o vecadd

This will create a program with the name 'vecadd' (specified by the '-o' flag).

Run Your Application

Run the program as you usually would, namely

./vecadd

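With the sample data the listing above fills into A and B (A[i] = i and B[i] = 2*i), the output should look like:

C[0]=0.000000
C[1]=3.000000
C[2]=6.000000
C[3]=9.000000
C[4]=12.000000
C[5]=15.000000
C[6]=18.000000
C[7]=21.000000
C[8]=24.000000
C[9]=27.000000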
If you don't want to run the program interactively (for example, because it is a large job), you can submit it via qsub; just be sure to add the '-l cuda=TRUE' request, as in the example jobscript below.
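A minimal jobscript might look like the following sketch (the memory and runtime requests are illustrative placeholders; adjust them to fit your job):

#!/bin/bash
#$ -l cuda=TRUE       # request a node with a CUDA-capable GPU
#$ -l mem=1G          # illustrative memory request
#$ -l h_rt=1:00:00    # illustrative runtime limit
#$ -cwd               # run from the submission directory

./vecadd

Save this as, say, vecadd.sh and submit it with 'qsub vecadd.sh'.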