CUDA Overview
CUDA is a feature set for programming NVIDIA GPUs. We have many Dwarf nodes that are CUDA-enabled with 1-2 GPUs each, and most of the Wizard nodes have 4 GPUs each. Most of these are consumer-grade NVIDIA 1080 Ti graphics cards, which are good at accelerating 32-bit calculations. Dwarf38 has two NVIDIA 980 Ti graphics cards and dwarf39 has two NVIDIA 1080 Ti graphics cards that are available for anybody to use; to use them, email beocat@cs.ksu.edu to request being added to the GPU priority group, then submit jobs with --partition=ksu-gen-gpu.q. Wizard20 and wizard21 each have two NVIDIA P100 cards, which are much more costly than the consumer-grade 1080 Ti cards but accelerate 64-bit calculations much better.
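For example, a submission targeting those nodes might look like the following (the jobscript name myjob.sh is an illustrative placeholder):
sbatch --partition=ksu-gen-gpu.q --gres=gpu:1 myjob.sh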
Training videos
CUDA Programming Model Overview: http://www.youtube.com/watch?v=aveYOlBSe-Y
CUDA Programming Basics Part I (Host functions): http://www.youtube.com/watch?v=79VARRFwQgY
CUDA Programming Basics Part II (Device functions): http://www.youtube.com/watch?v=G5-iI1ogDW4
Compiling CUDA Applications
nvcc is the compiler for CUDA applications. When compiling your applications manually, you will need to keep four things in mind:
- The CUDA development headers are located here: /opt/cuda/sdk/common/inc
- The CUDA architecture is: sm_30
- The CUDA SDK is currently not available on the headnode. (compile on the nodes with CUDA, either in your jobscript or interactively via srun -n 1 --gres=gpu:1 --pty bash)
- Do not run your CUDA applications on the headnode. I cannot guarantee they will run, and they will give you terrible results if they do.
Putting it all together you can compile CUDA applications as follows:
nvcc -I /opt/cuda/sdk/common/inc -arch sm_30 <source>.cu -o <output>
Example
Create your Application
Copy the following application into Beocat as vecadd.cu
#include <stdio.h>

// Kernel definition, see also section 4.2.3 of the NVIDIA CUDA Programming Guide
__global__ void vecAdd(float* A, float* B, float* C)
{
// threadIdx.x is a built-in variable provided by CUDA at runtime
int i = threadIdx.x;
A[i]=0;
B[i]=i;
C[i] = A[i] + B[i];
}
#define SIZE 10
int main()
{
int N=SIZE;
float A[SIZE], B[SIZE], C[SIZE];
float *devPtrA;
float *devPtrB;
float *devPtrC;
int memsize = SIZE * sizeof(float);
// Allocate device memory for the three arrays
cudaMalloc((void**)&devPtrA, memsize);
cudaMalloc((void**)&devPtrB, memsize);
cudaMalloc((void**)&devPtrC, memsize);
// Copy the host arrays to the device
cudaMemcpy(devPtrA, A, memsize, cudaMemcpyHostToDevice);
cudaMemcpy(devPtrB, B, memsize, cudaMemcpyHostToDevice);
// __global__ functions are called: Func<<< Dg, Db, Ns >>>(parameter);
vecAdd<<<1, N>>>(devPtrA, devPtrB, devPtrC);
// Copy the result back to the host
cudaMemcpy(C, devPtrC, memsize, cudaMemcpyDeviceToHost);
for (int i=0; i<SIZE; i++)
printf("C[%d]=%f\n",i,C[i]);
// Free the device memory
cudaFree(devPtrA);
cudaFree(devPtrB);
cudaFree(devPtrC);
}
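The example above ignores the status codes returned by the CUDA runtime, so a failed allocation or kernel launch would go unnoticed. A minimal sketch of error checking using the standard cudaGetLastError/cudaGetErrorString/cudaDeviceSynchronize runtime calls (the CUDA_CHECK macro shown here is a common convention, not part of the example above):

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Abort with a descriptive message if a CUDA runtime call fails.
#define CUDA_CHECK(call)                                          \
    do {                                                          \
        cudaError_t err = (call);                                 \
        if (err != cudaSuccess) {                                 \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",          \
                    __FILE__, __LINE__, cudaGetErrorString(err)); \
            exit(1);                                              \
        }                                                         \
    } while (0)

// Example use around the calls from vecadd.cu:
//   CUDA_CHECK(cudaMalloc((void**)&devPtrA, memsize));
//   vecAdd<<<1, N>>>(devPtrA, devPtrB, devPtrC);
//   CUDA_CHECK(cudaGetLastError());       // detects kernel launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());  // detects errors during kernel execution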
Gain Access to a CUDA-capable Node
See our advanced scheduler documentation
Compile Your Application
nvcc -I /opt/cuda/sdk/common/inc -arch sm_30 vecadd.cu -o vecadd
This will create a program with the name 'vecadd' (specified by the '-o' flag).
Run Your Application
Run the program as you usually would, namely
./vecadd
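Because the kernel sets A[i]=0 and B[i]=i before summing, the program should print:
C[0]=0.000000
C[1]=1.000000
...
C[9]=9.000000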
Assuming you don't want to run the program interactively because this is a large job, you can submit a jobscript via sbatch instead; just be sure to request a GPU with the '--gres=gpu:1' directive.
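For example, a minimal jobscript might look like this (the job name and time limit are illustrative placeholders; adjust them for your job):

#!/bin/bash
#SBATCH --job-name=vecadd   # illustrative job name
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1        # request one GPU, as described above
#SBATCH --time=0:10:00      # illustrative time limit

./vecadd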