This document is a tutorial introduction to GPGPU computation with NVIDIA CUDA. It opens with a brief overview and a warning about the huge numbers involved in GPGPU, then covers general-purpose GPU computing with CUDA and optimization topics such as memory bandwidth. Key aspects of CUDA programming are introduced, including the CUDA memory model, GPU compute capabilities, and profiling tools. Examples show simple CUDA kernels and how to configure kernel launches over grids and blocks of threads, along with optimization techniques such as choosing block and grid sizes to maximize occupancy.
GPGPU Computation
1. GPGPU Computation
Introduction, Performance Analysis, and Optimization
A Tutorial
Giannis Tsagatakis
jtsagata@gmail.com
MSc in Informatics & Multimedia
Department of Informatics Engineering, TEI of Crete
2. Warning
In GP-GPU computing we deal with HUGE numbers:
– in the number of threads
– in teraflops
– in the number of “cores”
– in throughput
– and in the number of slides
This is more a tutorial / handout; there are better slides out there.
3. Do you have a Sound Blaster card in your PC?
Why not? Remember the days when you had one?
Do you have a graphics card in your PC? Why?
4. Agenda
● General-purpose GPU computing
  – Not about computer graphics
● Vendor specific (NVIDIA CUDA)
  – With a bit of OpenCL and OpenACC
● Not focused on parallel algorithms
● Touches some optimization topics
  – Mostly memory bandwidth optimization
● I try to be self-contained
  – No previous experience needed
  – But there is a lot to cover
9. The road to GP-GPU Computing
● Let’s use shaders to do computation!
● Problems:
  – The computation had to be described in the graphics-native language (textures, shaders)
  – Poor floating-point arithmetic
  – Limited memory access patterns
● GP-GPU:
  – Better hardware
  – CUDA, OpenCL, OpenACC
Where is my Sound Blaster?
10. A Brief History
● The fixed graphics pipeline era
  – 1980s: OpenGL, expensive graphics workstations ($50,000)
  – 1990s: DirectX, PC graphics ($200)
● The programmable graphics pipeline era
  – 2001: NVIDIA NV20 (GeForce 3)
  – 2002: ATI Radeon 9700
  – Shader languages: Cg, GLSL, HLSL
  – DirectX 8/9
● The unified graphics and computing era
  – 2006: GeForce 8800
  – CUDA, OpenCL, OpenACC
  – DirectX 10, Vulkan, DirectCompute, OpenGL 4.x
● Deep learning
● The bright future
  – GPUs? TPUs? FPGAs?
12. The Death of CPU Scaling
● Intel(R) Xeon(R) CPU E5-2680
  – Cores: 14
  – Threads: 28
● Single-thread utilization
  – Advanced Vector Extensions: AVX2 is 256 bits wide = 8 x 32-bit lanes
  – 28 threads x 8 lanes = 224 lanes; x2 (e.g. with fused multiply-add) = 448 operations per cycle
  – A single scalar thread can use at most 1/448 ≈ 0.22% of that
  – AVX-512 doubles the lane count again
● Is that a lot of computation?
24. What is CUDA?
● Development tools
● Performance analysis tools
● CUDA
  – Compute Unified Device Architecture
  – A parallel computing platform and API
  – C/C++/Fortran
  – OpenACC- and OpenCL-compatible
37. SIMD
// How does SIMD hardware execute a branch?
if (x > a) {
  y = 2*x;
} else {
  y = 2*(x+1);
}
// Both paths are executed, predicated on z:
/* t=0 */ z  = bool(x > a);
/* t=1 */ y1 = 2 * x;
/* t=2 */ t  = x + 1;
/* t=3 */ y2 = 2 * t;
/* t=4 */ y  = z * y1;
/* t=5 */ y += not(z) * y2;
Branches: if the threads choose different paths, we have to do double the work.
In CUDA every 32 threads form a warp. It is good to avoid thread divergence within a warp.
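The same idea in CUDA terms: a small illustrative kernel (my addition, not from the deck) that computes both sides of the branch and selects the result arithmetically, so all 32 threads of a warp follow one path:

__global__ void predicated(int n, float a, const float *x, float *y)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    float v  = x[i];
    float p  = (v > a) ? 1.0f : 0.0f;  // predicate as 0/1 (compiles to a select)
    float y1 = 2.0f * v;               // "then" branch
    float y2 = 2.0f * (v + 1.0f);      // "else" branch
    y[i] = p * y1 + (1.0f - p) * y2;   // select without diverging
  }
}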
39. The Big Picture
● Optimize throughput, not latency
● Choose an algorithm that
  – keeps all threads busy
  – keeps all SMs busy
  – optimizes memory transfers
● You must know parallel algorithms
  – Not the same as serial ones
  – Not in your Data Structures and Algorithms book
Image from: Efficient Parallel Algorithms (CS329), Warwick University
41. A Simple CUDA Kernel
SAXPY stands for “Single-Precision A·X Plus Y”. It is a function in the standard Basic Linear Algebra Subprograms (BLAS) library. SAXPY is a combination of scalar multiplication and vector addition, and it’s very simple: it takes as input two vectors X and Y of 32-bit floats with N elements each, and a scalar value A. It multiplies each element X[i] by A and adds the result to Y[i].

__global__
void saxpy(int n, float a, float *x, float *y)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) y[i] = a*x[i] + y[i];
}

// Perform SAXPY on 1M elements
int N = 1<<20;
saxpy<<<4096, 256>>>(N, 2.0f, d_x, d_y);
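Kernel launches are asynchronous and do not return an error status directly; a minimal check right after the launch (my addition, not part of the original slide) could look like this:

saxpy<<<4096, 256>>>(N, 2.0f, d_x, d_y);
cudaError_t err = cudaGetLastError();  // catches launch-configuration errors
if (err != cudaSuccess)
  printf("saxpy launch failed: %s\n", cudaGetErrorString(err));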
42. Kernel instance parameters
● gridDim.{x,y,z}: the dimensions of the grid
● blockDim.{x,y,z}: the dimensions of the block
● blockIdx.{x,y,z}: the index of the current block within the grid
● threadIdx.{x,y,z}: the index of the current thread within the block
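To see these built-ins concretely, here is a tiny illustrative kernel (my addition, not from the deck) that prints its coordinates; device-side printf requires compute capability 2.0 or later:

__global__ void whoami(void)
{
  // Global 1D index built from the built-in launch variables:
  int id = blockIdx.x * blockDim.x + threadIdx.x;
  printf("block %d of %d, thread %d of %d, global id %d\n",
         blockIdx.x, gridDim.x, threadIdx.x, blockDim.x, id);
}
// Launch example: whoami<<<2, 4>>>(); cudaDeviceSynchronize();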
43. Configuring the kernel launch
// Run kernel on 1M elements on the GPU
int N = 1<<20;
int blockSize = 256;  // threads per block (max 1024 on newer GPUs)
int numBlocks = (N + blockSize - 1) / blockSize;  // number of blocks to run
square<<<numBlocks, blockSize>>>(N, input, output);  // (square is sketched below)

Conceptual model:
● All threads start at the same time
● Threads execute in warps (usually 32 threads each)
● Synchronization and memory sharing within a block
● Threads can be executed in any order
● Scalability by using a more expensive GPU
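The square kernel launched above is not defined on the slide; a plausible definition (my sketch), following the same indexing pattern as saxpy:

__global__ void square(int n, float *input, float *output)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)  // guard: the last block may be only partially full
    output[i] = input[i] * input[i];
}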
44. Block grid configurations
/** 1D grid of 1D blocks **/
__device__ int getGlobalIdx_1D_1D()
{
  return blockIdx.x * blockDim.x + threadIdx.x;
}

/** 1D grid of 2D blocks **/
/** 1D grid of 3D blocks **/
/** 2D grid of 1D blocks **/
(one completion of these elided cases is sketched after this slide)

/* 2D grid of 2D blocks */
__device__ int getGlobalIdx_2D_2D()
{
  int blockId = blockIdx.x + blockIdx.y * gridDim.x;
  int threadId = blockId * (blockDim.x * blockDim.y)
               + (threadIdx.y * blockDim.x) + threadIdx.x;
  return threadId;
}

/* . . . . . . . . . . */

/* 3D grid of 3D blocks */
__device__ int getGlobalIdx_3D_3D()
{
  int blockId = blockIdx.x
              + blockIdx.y * gridDim.x
              + gridDim.x * gridDim.y * blockIdx.z;
  int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
               + (threadIdx.z * (blockDim.x * blockDim.y))
               + (threadIdx.y * blockDim.x)
               + threadIdx.x;
  return threadId;
}
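One possible completion of the elided cases, e.g. a 1D grid of 2D blocks, following the same pattern (my sketch, not on the original slide):

/** 1D grid of 2D blocks **/
__device__ int getGlobalIdx_1D_2D()
{
  // Flatten the 2D thread index inside the block, then offset by the block:
  return blockIdx.x * (blockDim.x * blockDim.y)
       + threadIdx.y * blockDim.x + threadIdx.x;
}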
45. Choosing a size
● Match the data organization
● Maximize occupancy (active threads); it is limited by
  – shared memory usage
  – register usage
● Use multiples of 32 (the warp size)
● A black art (but see the occupancy-API sketch below)
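Rather than guessing, the runtime can suggest a block size. A sketch using the occupancy API (available since CUDA 6.5), assuming the saxpy kernel from earlier:

int minGridSize = 0, blockSize = 0;
// Ask the runtime for the block size that maximizes occupancy for saxpy:
cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, saxpy, 0, 0);
int numBlocks = (N + blockSize - 1) / blockSize;
saxpy<<<numBlocks, blockSize>>>(N, 2.0f, d_x, d_y);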
46. The 3 steps to CUDA acceleration
Step 1: Copy input data to the device.
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
Images from NVIDIA, and Tasos Maragkos (tasmar)
47. The 3 steps to CUDA acceleration
Step 2: Launch the kernel.
saxpy<<<100, 256>>>(N, 2.0f, d_x, d_y);
48. The 3 steps to CUDA acceleration
Step 3: Copy the results back to the host.
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
49. Calling the kernel, the hard way
int main(void) {
  int N = 1<<20;
  float *x, *y, *d_x, *d_y;
  x = (float*)malloc(N*sizeof(float));
  y = (float*)malloc(N*sizeof(float));
  cudaMalloc(&d_x, N*sizeof(float));
  cudaMalloc(&d_y, N*sizeof(float));
  for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
  cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
  cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
  saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y);
  cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
  float maxError = 0.0f;
  for (int i = 0; i < N; i++) { maxError = max(maxError, abs(y[i]-4.0f)); }
  printf("Max error: %f\n", maxError);
  cudaFree(d_x); cudaFree(d_y); free(x); free(y);
}
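The slide omits error handling; a common pattern (my addition, not from the deck) wraps every runtime call in a checking macro:

#define CUDA_CHECK(call) do {                               \
    cudaError_t e = (call);                                 \
    if (e != cudaSuccess) {                                 \
      fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,    \
              cudaGetErrorString(e));                       \
      exit(1);                                              \
    }                                                       \
  } while (0)

// e.g. CUDA_CHECK(cudaMalloc(&d_x, N*sizeof(float)));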
51. Calling the kernel, the easy way
int main(void)
{
  int N = 1<<20; // 1M elements
  // Allocate unified memory -- accessible from CPU or GPU
  float *x, *y;
  cudaMallocManaged(&x, N*sizeof(float));
  cudaMallocManaged(&y, N*sizeof(float));
  // Init arrays ....
  // Perform SAXPY on 1M elements
  saxpy<<<(N+255)/256, 256>>>(N, 2.0f, x, y);
  // Wait for GPU to finish before accessing on host
  cudaDeviceSynchronize();
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = max(maxError, abs(y[i]-4.0f));
  printf("Max error: %f\n", maxError);
  // Free memory
  cudaFree(x);
  cudaFree(y);
}

Unified memory architecture
https://devblogs.nvidia.com/even-easier-introduction-cuda/
https://devblogs.nvidia.com/unified-memory-in-cuda-6/
54. Speedup! Initialize with a kernel
__global__ void init_kernel(int n, NUMBER *x, NUMBER *y)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }
}
https://devblogs.nvidia.com/unified-memory-cuda-beginners/
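Initializing on the device keeps the managed pages GPU-resident before saxpy runs. A hedged launch sketch (my addition); cudaMemPrefetchAsync (CUDA 8+, Pascal-class GPUs and later) is the alternative when the data must be initialized on the CPU:

int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
init_kernel<<<numBlocks, blockSize>>>(N, x, y);  // pages now live on the GPU

// Alternative when initializing on the host: prefetch before the kernel.
int device; cudaGetDevice(&device);
cudaMemPrefetchAsync(x, N*sizeof(float), device, 0);
cudaMemPrefetchAsync(y, N*sizeof(float), device, 0);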
55. Speedup! The fastest
__global__
void barrier_kernel(int n, NUMBER a, NUMBER *x, NUMBER *y)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) {
    x[i] = 1.0f;
    y[i] = 2.0f;
    // Not really needed here
    __syncthreads();
    y[i] = a*x[i] + y[i];
  }
}
https://devblogs.nvidia.com/unified-memory-cuda-beginners/
56. The Timings
[Chart: min/max/mean execution times for the CPU version and the Simple Kernel, Easy Kernel, In-place Easy, In-place Prefetched, and Barrier variants.]
61. OpenCL
● Khronos Group
  – Apple, Altera, AMD, ARM, Xilinx, Intel, Creative, ...
● Heterogeneous platform
  – CPUs, GPUs, DSPs, FPGAs
  – Accelerators (Intel Movidius, Adapteva)
● Active development
  – OpenCL 2.2 (2017)
  – To be merged with Vulkan
● There are also OpenGL compute shaders
● More complex to code
  – It maximizes portability and is easy for vendors to implement
62. An OpenCL Kernel
● The Kernel
__kernel void vector_add(__global const int *A, __global const int *B, __global int *C) {
  // Get the index of the current element to be processed
  int i = get_global_id(0);
  // Do the operation
  C[i] = A[i] + B[i];
}

● The host code (an excerpt of roughly 100 lines)
// Create a program from the kernel source
cl_program program = clCreateProgramWithSource(context, 1,
    (const char **)&source_str, (const size_t *)&source_size, &ret);
// Build the program
ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
// Create the OpenCL kernel
cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);
// Set the arguments of the kernel
ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);
// Execute the OpenCL kernel on the list
size_t global_item_size = LIST_SIZE; // Process the entire lists
size_t local_item_size = 64; // Divide work items into groups of 64
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
    &global_item_size, &local_item_size, 0, NULL, NULL);

● The Setup: AMD SDK, Intel OpenCL SDK, CUDA, Xilinx SDAccel
● The Driver
https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/
63. The C++ Way: THRUST
● Analogous to the C++ STL
  – Containers, iterators, algorithms
● Brings template power to CUDA programming
● thrust::device_vector
  – sorting: thrust::sort and thrust::sort_by_key
  – transformations: thrust::transform
  – reductions: thrust::reduce and thrust::transform_reduce
  – scans: thrust::inclusive_scan, thrust::exclusive_scan, thrust::transform_inclusive_scan
● Backends: CUDA, OpenMP, TBB
● Apache License v2.0
https://docs.nvidia.com/cuda/thrust/index.html
64. Alternative Ways to SAXPY: Thrust & cuBLAS

The C++ STL way (Thrust):
using namespace thrust::placeholders;
int N = 1<<20;
thrust::host_vector<float> x(N), y(N);
...
// alloc and copy host to device
thrust::device_vector<float> d_x = x;
thrust::device_vector<float> d_y = y;
// Perform SAXPY
thrust::transform(d_x.begin(), d_x.end(),
                  d_y.begin(), d_y.begin(), 2.0f * _1 + _2);
// copy results to the host vector
y = d_y;

cuBLAS:
int N = 1<<20;
cublasInit();
cublasSetVector(N, sizeof(x[0]), x, 1, d_x, 1);
cublasSetVector(N, sizeof(y[0]), y, 1, d_y, 1);
// Perform SAXPY on 1M elements
cublasSaxpy(N, 2.0, d_x, 1, d_y, 1);
cublasGetVector(N, sizeof(y[0]), d_y, 1, y, 1);
cublasShutdown();

https://devblogs.nvidia.com/six-ways-saxpy/
65. OpenACC
● Directive based
● Cray, NVIDIA, PGI
● Similar in spirit to OpenMP
  – The two may eventually be merged

void saxpy(int n, float a, float * restrict x,
           float * restrict y) {
  #pragma acc kernels
  for (int i = 0; i < n; ++i)
    y[i] = a*x[i] + y[i];
}
...
// Perform SAXPY on 1M elements
// Looks like a normal C call
saxpy(1<<20, 2.0, x, y);

Directives:
#pragma acc kernels
#pragma acc parallel
#pragma acc data
#pragma acc loop
#pragma acc cache
#pragma acc update
#pragma acc declare
#pragma acc wait

https://www.openacc.org/
66. Even More Ways to SAXPY: Python & Fortran

CUDA Fortran:
module mymodule
contains
  attributes(global) subroutine saxpy(n, a, x, y)
    real :: x(:), y(:), a
    integer :: n, i
    attributes(value) :: a, n
    i = threadIdx%x + (blockIdx%x-1) * blockDim%x
    if (i <= n) y(i) = a*x(i) + y(i)
  end subroutine saxpy
end module mymodule

program main
  use cudafor; use mymodule
  real, device :: x_d(2**20), y_d(2**20)
  x_d = 1.0; y_d = 2.0
  ! Perform SAXPY on 1M elements
  call saxpy<<<4096, 256>>>(2**20, 2.0, x_d, y_d)
end program main

Copperhead (Python):
from copperhead import *
import numpy as np

@cu
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

x = np.arange(2**20, dtype=np.float32)
y = np.arange(2**20, dtype=np.float32)

with places.gpu0:
    gpu_result = saxpy(2.0, x, y)

with places.openmp:
    cpu_result = saxpy(2.0, x, y)

https://devblogs.nvidia.com/six-ways-saxpy/
69. Matrix Transpose Problem: Some Parallelism
// 1 thread per row
__global__ void
transpose_parallel_per_row(float in[], float out[])
{
  int i = threadIdx.x;
  for (int j = 0; j < N; j++)
    out[j + i*N] = in[i + j*N];  // out(j,i) = in(i,j)
}
transpose_parallel_per_row<<<1,N>>>(d_in, d_out);

1 block, 1024 threads, 1 loop: some parallelism.
Why not transpose_parallel_per_row<<<N,1>>>(d_in, d_out)?
(One thread per block would leave 31 of each warp's 32 lanes idle.)
70. Matrix Transpose Problem: First performance gains
Serial: 173.164 ms
per_row: 1.37914 ms
What is the next optimization step?
71. Matrix Transpose Problem: Going full parallel
__global__ void
transpose_parallel_per_element(float in[], float out[]) {
  int i = blockIdx.x * K + threadIdx.x;
  int j = blockIdx.y * K + threadIdx.y;
  out[j + i*N] = in[i + j*N];
}

dim3 blocks(N/K, N/K);  // 32 x 32 blocks
dim3 threads(K, K);     // 32 x 32 threads/block
transpose_parallel_per_element<<<blocks,threads>>>(d_in, d_out);

Maximum parallelism, no loops.
Warning: maximum parallelism does not always give the best performance.
72. Matrix Transpose Problem: Full parallel performance
Serial: 173.164 ms
per_row: 1.37914 ms
par_per_el: 0.090304 ms
Can we get more performance?
Warning: maximum parallelism does not always give the best performance.
73. Lemon juice the GPU
Can we get more performance? What is the next optimization step?
74. More Optimizations... NOT!
● Wait! Time to stop optimizing and start thinking.
● Did we get the performance we sought?
● Do we still have the same hot spot?
75. Memory bandwidth
./deviceQuery (GeForce GTX 1080 Ti)
GPU Max Clock rate: 1683 MHz (1.68 GHz)
Memory Clock rate: 5505 MHz
Memory Bus Width: 352-bit

Memory clock: 5.505 * 10^9 clocks/sec
Memory bus: 352 bits = 44 bytes
Maximum bandwidth: 44 * 5.505 * 10^9 ≈ 242.2 GB/sec

Measured: N = 1024, time = 0.67 ms
1024 * 1024 * 4 * 2 / (0.67 * 10^-3) = 1.25 * 10^10 B/sec = 12.5 GB/sec
That is only about 5% of peak. Achieved fraction of peak bandwidth:
< 40% Bad | 40-60% OK | 60-75% Good | > 75% Excellent!
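The same arithmetic packaged as a small helper (illustrative names, my addition; assumes the kernel time was measured in milliseconds, e.g. with cudaEvent timers):

// Transpose traffic: N*N floats read + N*N floats written.
double achieved_gb_per_sec(int n, double ms)
{
  double bytes = 2.0 * n * (double)n * sizeof(float);
  return bytes / (ms * 1e-3) / 1e9;  // bytes per second -> GB/s
}
// achieved_gb_per_sec(1024, 0.67) -> ~12.5 GB/s, far below the 242 GB/s peak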
76. Memory Coalescing
Achieved bandwidth as a fraction of peak:
– CPU: xx GB/sec (0.1%)
– Serial: xx GB/sec (1%)
– per_row: xx GB/sec (4.5%)
– per_elem: xx GB/sec (31%)
[Diagram: coalesced accesses (GOOD!) vs strided/scattered accesses (BAD!)]
https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html
83. Tiling Code
const int N = 1024;
const int K = 32;

__global__ void
transpose_parallel_per_element_tiled(float in[], float out[])
{
  // (i,j) locations of the tile corners for input & output matrices:
  int in_corner_i  = blockIdx.x * K, in_corner_j  = blockIdx.y * K;
  int out_corner_i = blockIdx.y * K, out_corner_j = blockIdx.x * K;
  int x = threadIdx.x, y = threadIdx.y;
  __shared__ float tile[K][K];
  // coalesced read from global mem, TRANSPOSED write into shared mem:
  tile[y][x] = in[(in_corner_i + x) + (in_corner_j + y)*N];
  __syncthreads();
  // read from shared mem, coalesced write to global mem:
  out[(out_corner_i + x) + (out_corner_j + y)*N] = tile[x][y];
}

dim3 blocks(N/K, N/K);  // blocks per grid
dim3 threads(K, K);     // threads per block
transpose_parallel_per_element_tiled<<<blocks,threads>>>(d_in, d_out);

Shared memory + synchronization.
Increase the number of blocks per streaming multiprocessor; reduce the number of threads.
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
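The comparison chart on slide 86 mentions a "conflict-free transpose". The usual trick, not shown in this deck, pads the tile by one column so the strided tile[x][y] reads fall into distinct shared-memory banks:

// K+1 columns give each row a stride of 33 floats, so threads of a warp
// reading down a column (tile[x][y] with varying x) hit different banks:
__shared__ float tile[K][K + 1];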
85. Comparison Analysis
[Chart, log scale (0.01 to 1000): execution times for the Serial, Per Row, Per Element, Tiled 32, and Tiled 16 versions.]
86. Memory Coalescing in transpose_parallel_per_element
const int N = 1024; // matrix size is NxN
const int K = 32;   // tile size is KxK

__global__ void
transpose_parallel_per_element(float in[], float out[])
{
  int i = blockIdx.x * K + threadIdx.x;
  int j = blockIdx.y * K + threadIdx.y;
  out[j + i*N] = in[i + j*N]; // coalesced read, strided write (BAD!)
}

Most GPU codes are bandwidth limited.
[Chart: effective bandwidth (0-400 GB/s) for copy, shared memory copy, naive transpose, coalesced transpose, conflict-free transpose.]
< 40% Bad | 40-60% OK | 60-75% Good | > 75% Excellent!
87. Optimization Verdict
● Never optimize in a vacuum. Know when to stop.
● Use existing robust libraries.
● Measure & improve memory bandwidth
  – Assure sufficient occupancy
  – Coalesce global memory accesses
  – Minimize latency between accesses
● Minimize thread divergence
  – Avoid branchy code
  – Avoid thread workload imbalance
● Use fast math intrinsics; use double precision only where needed
● Split the workload into streams
● Learn the NVIDIA CUDA Visual Profiler (nvvp)
Udacity CS344: Intro to Parallel Programming
88. All of CUDA... NOT!
Topics not covered here:
● CUDA atomic operations
● CUDA streams
● CUDA textures
● CUDA dynamic parallelism
● Floating-point calculations
● Intrinsics
● Important algorithms: map, reduce, scan, gather, scatter, stencil, histogram
● Multi-GPU programming
● Multi-GPU/CPU and OpenMP
89. For more info
● CUDA C Best Practices Guide
● CUDA Documentation
● Heterogeneous Parallel Programming, University of Illinois, Coursera
● Udacity CS344: Intro to Parallel Programming (NVIDIA)
91. A genie gives you 1,000x more computing power. How are you going to use it?
What if it gives you 100,000x more?
92. GeForce GTX 1080 Ti
./deviceQuery
CUDA Driver Version / Runtime Version 9.2 / 9.2
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11177 MBytes
(28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1683 MHz (1.68 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536),
3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)