Intro to GPGPU Programming with CUDA
Rob Gillen
Speaker notes:
  • Sparse linear algebra is interesting both because many science and engineering codes rely on it, and because it was traditionally assumed to be something GPUs would not be good at (due to irregular data-access patterns). We have shown that GPUs are in fact extremely good at sparse matrix-vector multiply (SpMV), the basic building block of sparse linear algebra. The code and an accompanying white paper are available on the CUDA forums and on research.nvidia.com. The comparison is against an extremely well-studied, well-optimized SpMV implementation from a widely respected Supercomputing 2007 paper. That paper reported only double-precision results for CPUs; our single-precision results are even more impressive in comparison.
  • Compared to highly optimized Fortran code from an oceanography researcher at UCLA.
  • The current implementation uses a short-stack approach; the top elements of the stack are cached in registers.
  • RTAPI enables implementation of many different raytracing flavors. Left to right, top to bottom: procedural materials, ambient occlusion, Whitted raytracer (thin-shell glass and metallic spheres), path tracer (Cornell box), refractions, Cook-style distribution raytracing. Could also do non-rendering work, e.g. GIS (line of sight, say) and physics (collision/proximity detection).
Transcript:
1. Rob Gillen
   Intro to GPGPU Programming with CUDA
2. CodeStock is proudly partnered with RecruitWise and Staff with Excellence (www.recruitwise.jobs).
   Send instant feedback on this session via Twitter: send a direct message with the room number to @CodeStock, e.g. "d codestock 411 This guy is Amazing!"
   For more information on sending feedback using Twitter while at CodeStock, please see the "CodeStock README" in your CodeStock guide.
4. Intro to GPGPU Programming with CUDA
   Rob Gillen
5. Welcome!
   Goals:
   - Overview of GPGPU with CUDA
   - "Vision casting" for how you can use GPUs to improve your application
   Outline:
   - Why GPGPUs?
   - Applications
   - Tooling
   - Hands-on: matrix multiplication
   Rating: http://spkr8.com/t/7714
6. CPU vs. GPU
   The GPU devotes more transistors to data processing.
7. NVIDIA Fermi
   ~1.5 TFLOPS single precision / ~800 GFLOPS double precision
   230 GB/s DRAM bandwidth
8. Motivation
   Floating-point operations per second (FLOPS) and memory bandwidth for the CPU and GPU.
9. Example: Sparse Matrix-Vector
   CPU results from "Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms", Williams et al., Supercomputing 2007.
10. Rayleigh-Bénard Results
   - Double precision
   - 384 x 384 x 192 grid (the maximum that fits in 4 GB)
   - Vertical slice of temperature at y = 0
   - Transition from stratified (left) to turbulent (right)
   - Regime depends on the Rayleigh number: Ra = gαΔT L³ / (κν), with L the layer depth
   - 8.5x speedup versus Fortran code running on an 8-core 2.5 GHz Xeon
11. G80 Characteristics
   - 367 GFLOPS peak performance (25-50 times that of current high-end microprocessors)
   - 265 GFLOPS sustained for applications such as VMD
   - Massively parallel: 128 cores, 90 W
   - Massively threaded: sustains thousands of threads per application
   - 30-100x speedup over high-end microprocessors on scientific and media applications: medical imaging, molecular dynamics
12. Supercomputer Comparison
13. Applications
   Exciting applications in the future mass-computing market have traditionally been considered "supercomputing applications": molecular dynamics simulation; video and audio coding and manipulation; 3D imaging and visualization; consumer game physics; and virtual reality products.
   These "super-apps" represent and model a physical, concurrent world.
   Various granularities of parallelism exist, but:
   - the programming model must not hinder parallel implementation, and
   - data delivery needs careful management.
14. *Not* for All Applications
   SPMD (Single Program, Multiple Data) workloads are the best fit (data parallel).
   Operations need to be of sufficient size to overcome overhead. Think millions of operations.
15. Raytracing
16. NVIRT: CUDA Ray Tracing API
17. Tooling
   - Visual Studio 2010 C++ (Express is OK... sort of)
   - NVIDIA CUDA-capable GPU
   - NVIDIA CUDA Toolkit (v4+)
   - NVIDIA CUDA Tools (v4+)
   - GPU Computing SDK
   - NVIDIA Parallel Nsight
18. Parallel Debugging
19. Parallel Analysis
20. VS Project Templates
21. VS Project Templates (cont.)
22. Before We Get Too Excited...
   - Host vs. device
   - Kernels: __global__, __device__, __host__
   - Thread/block control: <<<x, y>>>
   - Multi-dimensioned coordinate objects
   - Memory management/movement
   - Thread management: think 1,000s or 1,000,000s of threads
   (A minimal sketch tying these together follows.)
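   As a minimal sketch of how these pieces fit together (the scale kernel, its sizes, and its arguments are illustrative, not from the deck):

    #include <cuda_runtime.h>

    // __global__ marks a kernel: device code launched from the host.
    __global__ void scale(float* data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard the tail block
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;
        const int size = n * sizeof(float);
        float* d_data;
        cudaMalloc((void**)&d_data, size);   // device-side allocation

        // <<<blocks, threadsPerBlock>>> is the execution configuration.
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
        cudaDeviceSynchronize();             // wait for the kernel to finish

        cudaFree(d_data);
        return 0;
    }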
23. Block IDs and Threads
   Each thread uses its IDs to decide what data to work on:
   - Block ID: 1D or 2D
   - Thread ID: 1D, 2D, or 3D
   This simplifies memory addressing when processing multidimensional data, e.g. image processing (see the index sketch below).
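   For example, a 2D image kernel typically combines block and thread IDs into global pixel coordinates. A sketch (the invert kernel is illustrative, not from the deck):

    // Map a 2D grid of 2D blocks onto image pixels.
    __global__ void invert(unsigned char* img, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // column
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // row
        if (x < width && y < height)         // edge blocks may overhang
            img[y * width + x] = 255 - img[y * width + x];
    }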
24. CUDA Thread Block
   All threads in a block execute the same kernel program (SPMD).
   The programmer declares the block:
   - block size: 1 to 512 concurrent threads
   - block shape: 1D, 2D, or 3D
   - block dimensions given in threads
   Threads have thread-ID numbers within their block; the thread program uses its thread ID to select work and address shared data.
   Threads in the same block share data and synchronize while doing their share of the work; threads in different blocks cannot cooperate.
   Each block can execute in any order relative to other blocks!
   [Figure: a CUDA thread block with thread IDs 0, 1, 2, 3, ..., m, all running the same thread program.]
   (A small shared-memory/synchronization sketch follows.)
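   To illustrate intra-block cooperation, a sketch (not from the deck; the kernel and its segment size are assumptions): threads stage data in __shared__ memory and meet at a __syncthreads() barrier.

    #define BLOCK 256

    // Each block reverses its own 256-element segment in place,
    // staging it through on-chip shared memory.
    __global__ void reverseSegments(float* data)
    {
        __shared__ float buf[BLOCK];          // visible to this block only
        int t = threadIdx.x;
        int base = blockIdx.x * BLOCK;

        buf[t] = data[base + t];              // cooperative load
        __syncthreads();                      // all loads finish before any read

        data[base + t] = buf[BLOCK - 1 - t];  // write back reversed
    }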
25. Transparent Scalability
   The hardware is free to assign blocks to any processor at any time, so a kernel scales across any number of parallel processors.
   [Figure: the same eight-block kernel grid (Block 0 through Block 7) running over time on a device that executes two blocks at once and on a device that executes four at once.]
   Each block can execute in any order relative to other blocks.
26. A Simple Running Example: Matrix Multiplication
   A simple matrix-multiplication example that illustrates the basic features of memory and thread management in CUDA programs:
   - leave shared-memory usage until later
   - local, register usage
   - thread-ID usage
   - memory data-transfer API between host and device
   - assume square matrices for simplicity
27. Programming Model: Square Matrix Multiplication Example
   P = M * N, each of size WIDTH x WIDTH.
   Without tiling:
   - one thread calculates one element of P
   - M and N are loaded WIDTH times from global memory
   [Figure: matrices M, N, and P, each WIDTH x WIDTH.]
28. Memory Layout of Matrix in C
   C stores matrices in row-major order. The 4 x 4 matrix

      M0,0 M0,1 M0,2 M0,3
      M1,0 M1,1 M1,2 M1,3
      M2,0 M2,1 M2,2 M2,3
      M3,0 M3,1 M3,2 M3,3

   is laid out linearly in memory as

      M0,0 M0,1 M0,2 M0,3 M1,0 M1,1 M1,2 M1,3 M2,0 M2,1 M2,2 M2,3 M3,0 M3,1 M3,2 M3,3
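   In code, element (row, col) of a WIDTH-wide matrix therefore lives at a single linear offset, as a one-line sketch:

    // Row-major indexing into a flat array: advancing one row skips
    // Width elements; advancing one column skips one element.
    float element = M[row * Width + col];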
29. Simple Matrix Multiplication (CPU)

    void MatrixMulOnHost(float* M, float* N, float* P, int Width)
    {
        for (int i = 0; i < Width; ++i) {
            for (int j = 0; j < Width; ++j) {
                float sum = 0;
                for (int k = 0; k < Width; ++k) {
                    float a = M[i * Width + k];
                    float b = N[k * Width + j];
                    sum += a * b;
                }
                P[i * Width + j] = sum;
            }
        }
    }

   [Figure: row i of M combines with column j of N to produce element (i, j) of P.]
30. Simple Matrix Multiplication (GPU)

    void MatrixMulOnDevice(float* M, float* N, float* P, int Width)
    {
        int size = Width * Width * sizeof(float);
        float *Md, *Nd, *Pd;
        …
        // 1. Allocate and load M, N into device memory
        cudaMalloc((void**)&Md, size);
        cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
        cudaMalloc((void**)&Nd, size);
        cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

        // Allocate P on the device
        cudaMalloc((void**)&Pd, size);
31. Simple Matrix Multiplication (GPU)

        // 2. Kernel invocation code – to be shown later
        …

        // 3. Read P from the device
        cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);

        // Free device matrices
        cudaFree(Md);
        cudaFree(Nd);
        cudaFree(Pd);
    }
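   In production code each of these runtime calls returns a cudaError_t worth checking. A common pattern, as a sketch (the CUDA_CHECK macro name is an illustration, not from the deck):

    #include <stdio.h>
    #include <stdlib.h>

    // Wrap runtime calls so failures surface immediately with location info.
    #define CUDA_CHECK(call)                                          \
        do {                                                          \
            cudaError_t err = (call);                                 \
            if (err != cudaSuccess) {                                 \
                fprintf(stderr, "CUDA error %s at %s:%d\n",           \
                        cudaGetErrorString(err), __FILE__, __LINE__); \
                exit(EXIT_FAILURE);                                   \
            }                                                         \
        } while (0)

    // Usage: CUDA_CHECK(cudaMalloc((void**)&Md, size));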
32. Kernel Function

    // Matrix multiplication kernel – per-thread code
    __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
    {
        // Pvalue is used to store the element of the matrix
        // that is computed by the thread
        float Pvalue = 0;
33. Kernel Function (contd.)

        for (int k = 0; k < Width; ++k) {
            float Melement = Md[threadIdx.y * Width + k];
            float Nelement = Nd[k * Width + threadIdx.x];
            Pvalue += Melement * Nelement;
        }
        Pd[threadIdx.y * Width + threadIdx.x] = Pvalue;
    }

   [Figure: thread (tx, ty) walks row ty of Md and column tx of Nd to produce one element of Pd.]
34. Kernel Function (full)

    // Matrix multiplication kernel – per-thread code
    __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
    {
        // Pvalue is used to store the element of the matrix
        // that is computed by the thread
        float Pvalue = 0;
        for (int k = 0; k < Width; ++k) {
            float Melement = Md[threadIdx.y * Width + k];
            float Nelement = Nd[k * Width + threadIdx.x];
            Pvalue += Melement * Nelement;
        }
        Pd[threadIdx.y * Width + threadIdx.x] = Pvalue;
    }
35. Kernel Invocation (Host Side)

        // Set up the execution configuration
        dim3 dimGrid(1, 1);
        dim3 dimBlock(Width, Width);

        // Launch the device computation threads!
        MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
36. Only One Thread Block Used
   One block of threads computes matrix Pd; each thread computes one element of Pd.
   Each thread:
   - loads a row of matrix Md
   - loads a column of matrix Nd
   - performs one multiply and one addition for each pair of Md and Nd elements
   The compute-to-off-chip-memory-access ratio is close to 1:1 (not very high).
   The size of the matrix is limited by the number of threads allowed in a thread block.
   [Figure: Grid 1 holds a single block; thread (2, 2) of Block 1 computes its element of Pd; WIDTH = 48.]
37. Handling Arbitrary-Sized Square Matrices
   Have each 2D thread block compute a (TILE_WIDTH)² sub-matrix (tile) of the result matrix; each block has (TILE_WIDTH)² threads.
   Generate a 2D grid of (WIDTH/TILE_WIDTH)² blocks.
   You still need to put a loop around the kernel call for cases where WIDTH/TILE_WIDTH is greater than the maximum grid size (64K)!
   [Figure: Pd divided into TILE_WIDTH x TILE_WIDTH tiles; block (bx, by), thread (tx, ty) addresses one element.]
   (A sketch of the revised indexing follows.)
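   The kernel body stays the same; only the index math changes so that each thread combines its block and thread IDs. A sketch reusing the deck's names (the deck itself stops short of showing this version):

    __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
    {
        // Global row/column: block offset plus position within the tile.
        int Row = blockIdx.y * blockDim.y + threadIdx.y;
        int Col = blockIdx.x * blockDim.x + threadIdx.x;

        float Pvalue = 0;
        for (int k = 0; k < Width; ++k)
            Pvalue += Md[Row * Width + k] * Nd[k * Width + Col];
        Pd[Row * Width + Col] = Pvalue;
    }

   with the matching host-side configuration (assuming WIDTH divides evenly by TILE_WIDTH):

        dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);
        dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
        MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);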
38. Small Example
   [Figure: TILE_WIDTH = 2. The 4 x 4 result Pd is divided among four blocks, Block(0,0), Block(1,0), Block(0,1), and Block(1,1), each computing a 2 x 2 tile of Pd from the corresponding rows of Md and columns of Nd.]
39. Cleanup Topics
   - Memory management: pinned memory (zero-transfer), portable pinned memory
   - Multi-GPU
   - Wrappers (Python, Java, .NET)
   - Kernels
   - Atomics
   - Thread synchronization (staged reductions)
   - NVCC
   (A pinned-memory sketch follows.)
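   As one example from this list, pinned (page-locked) host memory is allocated with cudaMallocHost and speeds up host-to-device copies. A minimal sketch (the buffer and its size are assumptions):

    float* h_buf;
    int bytes = 1 << 20;

    // Page-locked host allocation: the driver can DMA directly from it,
    // so cudaMemcpy to/from the device runs faster than from pageable memory.
    cudaMallocHost((void**)&h_buf, bytes);

    // ... fill h_buf, then cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice) ...

    cudaFreeHost(h_buf);   // pinned memory has its own free call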
40. Questions?
   rob@gillenfamily.net / @argodev
   http://rob.gillenfamily.net
   Rate: http://spkr8.com/t/7714
