GPGPU Computation
Introduction, Performance Analysis
and optimization
A Tutorial
Giannis Tsagatakis
jtsagata@gmail.com
MSc in Informatics & Multimedia
Department of Informatics Engineering TEI of Crete
2
Warning
In GP-GPU computing we deal with HUGE numbers
– In number of threads
– In Teraflops
– In number of “cores”
– In throughput
– And in number of slides
It’s more a tutorial / handout
There are better slides out there
Do you have a
sound blaster card
in your PC ?
Why not ?
Remember the days you had one?
Do you have a graphics card in your PC?
Why ?
Agenda
●
General purpose GPU Computing
– Not about computer graphics
●
Vendor Specific (NVIDIA CUDA)
– With a bit of OpenCL and OpenACC
●
Not focused on parallel algorithms
●
Touch some optimization topics
– Mostly on memory bandwidth optimization
●
I try to be self-contained
– No previous experience needed
– But there is a lot to cover
5
G is for Graphics
6
The OpenGL pipeline
7
Shader Technology
8
GPU is for Computation
9
The road to GP GPU Computing
●
Let’s use shaders to do computation
●
Problems:
– Problems had to be described in a graphics language
– Poor floating-point support
– Limited memory access patterns
●
GP GPU
– Better hardware
– CUDA, OpenCL, OpenACC
Where is my
Sound Blaster ?
10
A brief History
●
The fixed graphics
pipeline era
– 1980s: OpenGL, expensive
graphics workstations
($50,000)
– 1990s: DirectX, PC graphics
($200)
●
The programmable
Graphics pipeline era
– 2001, NVIDIA NV20 (GeForce 3)
– 2002 ATI Radeon 9700
– Shader Languages
Cg, GLSL, HLSL
– Direct X8/9
●
Unified Graphics and
Computing era
– 2006, GeForce 8800
– CUDA, OpenCL
– OpenACC
– Direct X10,
– Vulkan
– Direct Compute
– OpenGL 4.X
●
Deep Learning
●
The bright future
– GPUs ? TPUs? FPGAs ?
11
NVIDIA Timeline
●
1999 GeForce 256
●
2001 GeForce 3
– Programmable shaders
●
2004 Scalable Link
Interface
●
2006 CUDA
●
2007 Tesla
– Unified shader model
●
2009 Fermi
– Fused multiply add
●
2013 Kepler
– Dynamic parallelism
– Unified memory
●
2014 Maxwell
●
2016 Pascal
– Page migration engine
– High bandwidth memory
– NVLink
●
2017 Volta
– Tensor cores
12
The death of CPU Scaling
●
Intel(R) Xeon(R) CPU E5-
2680
– Cores 14
– Threads 28
●
Single thread utilization
– Advanced Vector
Extensions
– AVX2 (256 bits)
8 x 32-bit lanes
28 threads x 8 lanes = 224; x2 with FMA = 448
– Single thread: at most ~0.22% (≈1/448) of the machine
– AVX-512
●
Is that a lot of
computation?
13
The rise (and limits) of Many-Core
14
High Performance Computing (HPC)
GPUs, TPUs, FPGAs, ASICs
15
CPU vs GPU
Latency oriented design Throughput oriented design
16
Massive Parallelism
Pascal GP100 Block Diagram
17
Streaming Multiprocessor
Pascal GP100 Streaming Multiprocessor
Special
Function
Unit
LD/ST
Load/Store
Unit
Double
Precision
Unit
18
Warps
19
Tensor Cores on Volta
https://devblogs.nvidia.com/programming-tensor-cores-cuda-9/
20
Volta GV100
●
5,376 32-bit integer cores
●
5,376 32-bit floating point cores
●
2,688 64-bit floating point cores
●
672 Tensor Cores
●
336 texture units
●
4096bit memory bus width
●
21.1 billion transistors / 815 mm² / 12 nm
●
300 Watt
●
$9,269.00 & FREE shipping 32GB Bulk (Amazon).
21
Communication Patterns
Eight GPU hybrid cube mesh architecture with NVLink.
22
CPU vs GPU
23
NVidia Accelerating Computing
24
What is CUDA
●
Development Tools
●
Performance
Analysis Tools
●
CUDA
– Compute Unified
Device Architecture
– Parallel computing
platform and API
– C/C++/Fortran
– OpenACC, OpenCL
compatible
25
CUDA Parallel Computing Platform
26
The many ways to GPU Acceleration
●
Libraries
– Drop in replacements
– Minimal code changes
●
OpenACC Directives
– Easily Accelerate
Applications
●
Programming
Languages
– C/C++, Fortran, LLVM
– MATLAB, Mathematica,
LabVIEW
– Maximum Flexibility
●
Parallel languages
– Copperhead (python)
– Halide (image
processing)
●
Software stacks
Deep learning stack
– Keras
●
Tensor Flow
– CUDA libraries
●
cuDNN, DIGITS, cuBLAS
27
Domain Specific Libraries
Heterogeneous CPU
GPU-based architectures
CUDA, Intel Xeon Phi,
OpenCL
From the writers of
LAPACK
BSD License
28
Domain Specific Libraries
29
Domain Specific Libraries
30
Great Artists Steal
31
GPU Computing Applications
32
Cuda Internals
33
Heterogeneous Architectures
34
Cuda memory model
35
Compute Capabilities
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities
Compute capability describes feature support, not computational power
36
GeForce GTX 1080 Ti
37
SIMD
// SIMD ???
if (x>a) {
y=2*x;
} else {
y=2*(x+1);
}
/* t=0 */ z = bool(x > a);
/* t=1 */ y1 = 2 * x;
/* t=2 */ t = x + 1;
/* t=3 */ y2 = 2 * t;
/* t=4 */ y = z * y1;
/* t=5 */ y += not(z) * y2;
Branches
If threads within a warp
choose different paths,
the hardware executes
both paths: double work.
In CUDA every 32 threads form a warp.
It is good not to have thread divergence within a warp; a sketch of warp-aligned branching follows.
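A minimal sketch (hypothetical kernels, not from the original slides) of the same branch written two ways; in the first, even and odd threads inside each warp diverge, while in the second every warp takes a single path:

__global__ void divergent(float *y, const float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)                  // even/odd threads split inside each warp:
        y[i] = 2.0f * x[i];          // the warp executes both paths serially
    else
        y[i] = 2.0f * (x[i] + 1.0f);
}
__global__ void warp_aligned(float *y, const float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if ((i / warpSize) % 2 == 0)     // all 32 threads of a warp agree,
        y[i] = 2.0f * x[i];          // so only one path is executed
    else
        y[i] = 2.0f * (x[i] + 1.0f);
}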
38
Systematic Optimization
Weak Scaling: run a larger problem
Strong Scaling: run a problem faster
39
The Big Picture
●
Optimize throughput
– Not latency
●
Choose an algorithm that
– Keeps all threads busy
– Keeps all SMs busy
– Optimize memory transfers
●
Must know parallel algorithms
– Not the same as serial ones
– Not in your Data Structures and Algorithms book
Image from : Efficient Parallel Algorithms (CS329) Warwick University
40
Appetizer
A First Taste of Cuda
41
A Simple Cuda Kernel
SAXPY stands for “Single-Precision A·X Plus Y”. It is a function
in the standard Basic Linear Algebra Subprograms (BLAS) library.
SAXPY is a combination of scalar multiplication and vector
addition, and it’s very simple: it takes as input two vectors of
32-bit floats X and Y with N elements each, and a scalar value A.
It multiplies each element X[i] by A and adds the result to Y[i].
__global__
void saxpy(int n, float a, float *x, float *y)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < n) y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
int N = 1<<20;
saxpy<<<4096, 256>>>(N, 2.0f, d_x, d_y);
42
Kernel instance parameters
●
gridDim.{x,y,z}
The dimensions of the grid
●
blockDim.{x,y,z}
The dimensions of the block
●
blockIdx.{x,y,z}
The index of the current block within the grid
●
threadIdx.{x,y,z}
The index of the current thread within the block
43
Configuring the kernel launch
// Run kernel on 1M elements on the GPU
int N = 1<<20;
int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
square<<<numBlocks, blockSize>>>(N, input, output);
Number of blocks to run
Maximum Number of
Threads/block
(max 1024 on newer GPUs)
Conceptual model:
●
All threads start at the same time
●
Threads execute in warps (usually 32 threads)
●
Synchronization and memory sharing within a block
●
Threads can execute in any order
●
Scalability by using a more expensive GPU
44
Block grid configurations
/** 1D grid of 1D blocks **/
__device__ int getGlobalIdx_1D_1D()
{
return blockIdx.x * blockDim.x + threadIdx.x;
}
/** 1D grid of 2D blocks **/
/** 1D grid of 3D blocks **/
/** 2D grid of 1D blocks **/
/* 2D grid of 2D blocks */
__device__ int getGlobalIdx_2D_2D()
{
int blockId = blockIdx.x + blockIdx.y * gridDim.x;
int threadId = blockId * (blockDim.x * blockDim.y) +
(threadIdx.y * blockDim.x) + threadIdx.x;
return threadId;
}
/* . . . . . . . . . . */
/* 3D grid of 3D blocks */
__device__ int getGlobalIdx_3D_3D()
{
int blockId = blockIdx.x
+ blockIdx.y * gridDim.x
+ gridDim.x * gridDim.y * blockIdx.z;
int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
+ (threadIdx.z * (blockDim.x * blockDim.y))
+ (threadIdx.y * blockDim.x)
+ threadIdx.x;
return threadId;
}
45
Choose size
●
Match data organization
●
Maximize occupancy (active threads)
– Shared memory usage
– Register usage
– Multiples of 32 (warp size)
●
A black art
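Part of the black art can be delegated to the runtime. A minimal sketch using the occupancy API (available since CUDA 6.5), applied here to the saxpy kernel shown earlier:

int minGridSize = 0, blockSize = 0;
// Ask the runtime for a block size that maximizes occupancy for saxpy
cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, saxpy, 0, 0);
int numBlocks = (N + blockSize - 1) / blockSize; // round up to cover all N elements
saxpy<<<numBlocks, blockSize>>>(N, 2.0f, d_x, d_y);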
46
The 3 steps to CUDA acceleration
cudaMemcpy(d_x, x, N*sizeof(float),
cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float),
cudaMemcpyHostToDevice);
Images from NVIDIA, and Tasos Maragkos (tasmar)
47
The 3 steps to CUDA acceleration
saxpy<<<100, 256>>>(N, 2.0f, d_x, d_y);
48
The 3 steps to CUDA acceleration
cudaMemcpy(y, d_y, N*sizeof(float),
cudaMemcpyDeviceToHost);
49
Calling the Kernel, The hard way
int main(void) {
int N = 1<<20;
float *x, *y, *d_x, *d_y;
x = (float*)malloc(N*sizeof(float));
y = (float*)malloc(N*sizeof(float));
cudaMalloc(&d_x, N*sizeof(float));
cudaMalloc(&d_y, N*sizeof(float));
for (int i = 0; i < N; i++) {x[i] = 1.0f; y[i] = 2.0f;}
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y);
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
float maxError = 0.0f;
for (int i = 0; i < N; i++){ maxError = max(maxError, abs(y[i]-4.0f));}
printf("Max error: %f\n", maxError);
cudaFree(d_x); cudaFree(d_y); free(x); free(y);
}
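None of the CUDA calls above are checked for errors. A common pattern, shown here as a sketch rather than as part of the original code, wraps every runtime call in a checking macro:

#include <stdio.h>
#include <stdlib.h>
#define CUDA_CHECK(call) \
  do { \
    cudaError_t err = (call); \
    if (err != cudaSuccess) { \
      fprintf(stderr, "CUDA error %s at %s:%d\n", \
              cudaGetErrorString(err), __FILE__, __LINE__); \
      exit(EXIT_FAILURE); \
    } \
  } while (0)
// Usage:
//   CUDA_CHECK(cudaMalloc(&d_x, N*sizeof(float)));
//   saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y);
//   CUDA_CHECK(cudaGetLastError());       // catches launch configuration errors
//   CUDA_CHECK(cudaDeviceSynchronize());  // catches asynchronous kernel errors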
50
Compiling a CUDA program
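A minimal sketch of the usual nvcc invocation (the exact flags are an assumption; -arch=sm_61 matches the compute capability 6.1 of the GTX 1080 Ti used later):

nvcc -O3 -arch=sm_61 saxpy.cu -o saxpy
nvcc -O3 -arch=sm_61 -lineinfo saxpy.cu -o saxpy   # keep source line info for the profiler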
51
Calling the Kernel, The easy way
int main(void)
{
int N = 1<<20; // 1M elements
// Allocate Unified Memory -- accessible from CPU or GPU
float *x, *y;
cudaMallocManaged(&x, N*sizeof(float));
cudaMallocManaged(&y, N*sizeof(float));
// Init arrays ....
// Perform SAXPY on 1M elements
saxpy<<<(N+255)/256, 256>>>(N, 2.0f, x, y);
// Wait for GPU to finish before accessing on host
cudaDeviceSynchronize();
float maxError = 0.0f;
for (int i = 0; i < N; i++)
maxError = max(maxError, abs(y[i]-4.0f));
printf("Max error: %f\n", maxError);
// Free memory
cudaFree(x);
cudaFree(y);
}
https://devblogs.nvidia.com/even-easier-introduction-cuda/
Unified memory architecture
https://devblogs.nvidia.com/unified-memory-in-cuda-6/
52
Unified memory architecture
●
Kepler GPU: no page fault support, limited virtual space, CUDA-6
●
Pascal GPU: page fault support, extended virtual address space (48-bit),
CUDA-8
●
Volta GPU: Access counters, NVLINK2
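A sketch of the “Prefetched” variant that appears in the timings below, assuming a Pascal or newer GPU (cudaMemPrefetchAsync, CUDA 8+): migrate the managed pages to the GPU before the launch, so the kernel does not stall on page faults.

int device = -1;
cudaGetDevice(&device);
cudaMemPrefetchAsync(x, N*sizeof(float), device, 0); // move pages to the GPU
cudaMemPrefetchAsync(y, N*sizeof(float), device, 0); // before the kernel runs
saxpy<<<(N+255)/256, 256>>>(N, 2.0f, x, y);
cudaDeviceSynchronize();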
53
“Basic” Profiling with nvprof
Lots of other metrics
nvprof --query-metrics
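For example (metric names as listed by the CUDA 9 era nvprof; treat them as an assumption for other toolkit versions):

nvprof ./saxpy   # summary of kernel and memcpy times
nvprof --metrics gld_efficiency,gst_efficiency,achieved_occupancy ./saxpy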
54
Speedup! Initialize with a Kernel
__global__ void init_kernel(int n, NUMBER *x, NUMBER *y)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < n) {
x[i] = 1.0f;
y[i] = 2.0f;
}
}
https://devblogs.nvidia.com/unified-memory-cuda-beginners/
55
Speedup! The fastest
__global__
void barrier_kernel(int n, NUMBER a, NUMBER *x, NUMBER *y)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < n) {
x[i] = 1.0f;
y[i] = 2.0f;
// Not really needed here
__syncthreads();
y[i] = a*x[i] + y[i];
}
}
https://devblogs.nvidia.com/unified-memory-cuda-beginners/
56
The Timings
Variants: CPU, Simple Kernel, Easy Kernel, In-place, Easy In-place, Prefetched, Barrier
[Chart: min/max/mean execution time per variant, scale 0–12,000,000]
57
Advanced Profiling using
NVIDIA Visual Profiler
nvvp
or
Nsight IDE
58
Verdict : Memory Bandwidth
59
Verdict : Memory statistics
60
Other programming models
61
OpenCL
●
Khronos Group
– Apple, Altera, AMD, ARM, Xilinx, Intel, Creative, ...
●
Heterogeneous Platform
– CPUs, GPUs, DSPs, FPGAs,
– Accelerators (Intel Movidius, Adapteva)
●
Active development
– OpenCL 2.2 (2017)
●
To be merged with Vulkan
●
There are also OpenGL Compute Shaders
●
More complex to code
– Trades programming convenience for maximum portability
62
An OpenCL Kernel
●
The Kernel
__kernel void vector_add(__global const int *A, __global const int *B, __global int *C) {
// Get the index of the current element to be processed
int i = get_global_id(0);
// Do the operation
C[i] = A[i] + B[i];
}
// 100 Lines of Code
// Create a program from the kernel source
cl_program program = clCreateProgramWithSource(context, 1,
(const char **)&source_str, (const size_t *)&source_size, &ret);
// Build the program
ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
// Create the OpenCL kernel
cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);
// Set the arguments of the kernel
ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);
// Execute the OpenCL kernel on the list
size_t global_item_size = LIST_SIZE; // Process the entire lists
size_t local_item_size = 64; // Divide work items into groups of 64
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
&global_item_size, &local_item_size, 0, NULL, NULL);
https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/
●
The Setup
– AMD SDK
– Intel OpenCL
SDK
– Cuda
– Xilinx
SDAccel
●
The Driver
63
The C++ Way: THRUST
●
Analogous to C++ STL
– Containers, iterators, algorithms
●
Template power to CUDA programming
● thrust::device_vector
– sorting: thrust::sort and thrust::sort_by_key
– transformations: thrust::transform
– reductions: thrust::reduce and thrust::transform_reduce
– scans: thrust::inclusive_scan, thrust::exclusive_scan,
thrust::transform_inclusive_scan
●
CUDA, OpenMP, TBB
●
Apache License v 2.0
https://docs.nvidia.com/cuda/thrust/index.html
64
Alternative Ways to SAXPY
Thrust & cuBLAS
using namespace thrust::placeholders;
int N = 1<<20;
thrust::host_vector<float> x(N), y(N);
...
// alloc and copy host to device
thrust::device_vector<float> d_x = x;
thrust::device_vector<float> d_y = y;
// Perform SAXPY, the C++ STL way
thrust::transform(d_x.begin(), d_x.end(),
d_y.begin(), d_y.begin(), 2.0f * _1 + _2);
// copy results to the host vector
y = d_y;
int N = 1<<20;
cublasInit();
cublasSetVector(N, sizeof(x[0]), x, 1, d_x, 1);
cublasSetVector(N, sizeof(y[0]), y, 1, d_y, 1);
// Perform SAXPY on 1M elements
cublasSaxpy(N, 2.0, d_x, 1, d_y, 1);
cublasGetVector(N, sizeof(y[0]), d_y, 1, y, 1);
cublasShutdown();
https://devblogs.nvidia.com/six-ways-saxpy/
Thrust
65
OpenACC
void
saxpy(int n, float a, float * restrict x,
float * restrict y) {
#pragma acc kernels
for (int i = 0; i < n; ++i)
y[i] = a*x[i] + y[i];
}
...
// Perform SAXPY on 1M elements
// Looks like a normal C call
saxpy(1<<20, 2.0, x, y);
#pragma acc kernels
#pragma acc parallel
#pragma acc data
#pragma acc loop
#pragma acc cache
#pragma acc update
#pragma acc declare
#pragma acc wait
https://www.openacc.org/
●
Directive based
●
Cray, Nvidia, PGI
●
Similar in spirit to OpenMP
– May eventually be merged with it
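As a sketch, building the saxpy example with the PGI compiler might look like this (flags are an assumption; other OpenACC compilers use different switches):

pgcc -acc -ta=tesla -Minfo=accel saxpy.c -o saxpy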
66
Even More Ways to SAXPY
Python & Fortran
module mymodule
contains
attributes(global) subroutine saxpy(n, a, x, y)
real :: x(:), y(:), a
integer :: n, i
attributes(value) :: a, n
i = threadIdx%x+(blockIdx%x-1)*blockDim%x
if (i<=n) y(i) = a*x(i)+y(i)
end subroutine saxpy
end module mymodule
program main
use cudafor; use mymodule
real, device :: x_d(2**20), y_d(2**20)
x_d = 1.0; y_d = 2.0
! Perform SAXPY on 1M elements
call saxpy<<<4096, 256>>>(2**20, 2.0, x_d, y_d)
end program main
from copperhead import *
import numpy as np
@cu
def saxpy(a, x, y):
return [a * xi + yi for xi, yi in zip(x, y)]
x = np.arange(2**20, dtype=np.float32)
y = np.arange(2**20, dtype=np.float32)
with places.gpu0:
gpu_result = saxpy(2.0, x, y)
with places.openmp:
cpu_result = saxpy(2.0, x, y)
https://devblogs.nvidia.com/six-ways-saxpy/
Copperhead (Python) | Fortran
67
Main Dish
Optimizing Memory Transfers
68
Matrix Transpose Problem
// CPU code
void transpose_CPU(float in[], float out[]) {
for(int j=0; j < N; j++)
for(int i=0; i < N; i++)
// out(j,i) = in(i,j)
out[j + i*N] = in[i + j*N];
}
// Single Thread
__global__ void
transpose_serial(float in[], float out[])
{
for(int j=0; j < N; j++)
for(int i=0; i < N; i++)
out[j + i*N] = in[i + j*N];
}
transpose_serial<<<1,1>>>(d_in, d_out);
N = 1024
https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc/
1 Thread
2 Inner Loops
No parallelism
69
Matrix Transpose Problem
Some Parallelism
// 1 Thread per row
__global__ void
transpose_parallel_per_row(float in[], float out[])
{
int i = threadIdx.x;
for(int j=0; j < N; j++)
// out(j,i) = in(i,j)
out[j + i*N] = in[i + j*N];
}
transpose_parallel_per_row<<<1,N>>>(d_in, d_out);
Why not transpose_parallel_per_row<<<N,1>>>(d_in, d_out)?
1 Block
1024 Threads
1 Loop
Some Parallelism
70
Matrix Transpose Problem
First performance gains
Serial: 173.164 ms
per_row: 1.37914 ms
What is the
next optimization step ?
71
Matrix Transpose Problem
Going full parallel
__global__ void
transpose_parallel_per_element(float in[], float out[]) {
int i = blockIdx.x * K + threadIdx.x;
int j = blockIdx.y * K + threadIdx.y;
out[j + i*N] = in[i + j*N];
}
dim3 blocks(N/K,N/K);
dim3 threads(K,K);
transpose_parallel_per_element<<<blocks,threads>>>(d_in, d_out);
Warning: maximum parallelism does not always give the best performance
32 X 32 Blocks
32 X 32 Threads/Block
Maximum parallelism
No Loops
72
Matrix Transpose Problem
Full parallel performance
Warning: maximum parallelism does not always give the best performance
Serial: 173.164 ms
per_row: 1.37914 ms
par_per_el: 0.090304 ms
Can we get more performance ?
73
Lemon juice the GPU
Warning: maximum parallelism does not always give the best performance
What is the
next optimization step ?
Can we get more performance ?
74
More Optimizations... NOT!
●
Wait!
Time to Stop Optimizing and
Start Thinking
●
Did we get the performance we sought?
●
Do we still have the same hot-spot?
75
Memory bandwidth
./deviceQuery
GPU Max Clock rate: 1683 MHz (1.68 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
GeForce GTX 1080 Ti
Memory Clock: 5.505 × 10⁹ clocks/sec
Memory bus: 352 bits = 44 bytes
Maximum bandwidth: 242.220 GB/sec
< 40% Bad | 40-60% OK | 60-75% Good | > 75% Excellent!
N = 1024, time = 0.67 ms
1024 × 1024 × 4 bytes × 2 / (0.67 × 10⁻³ s) = 1.25 × 10¹⁰ B/s = 12.5 GB/sec
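As a sketch, such a number can be measured with CUDA events around the transpose kernel from the earlier slides, then converted to effective bandwidth:

cudaEvent_t start, stop;
cudaEventCreate(&start); cudaEventCreate(&stop);
cudaEventRecord(start);
transpose_parallel_per_element<<<blocks, threads>>>(d_in, d_out);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);          // elapsed time in milliseconds
double gb = 2.0 * N * N * sizeof(float) / 1e9;   // bytes read + bytes written
printf("%.3f ms, %.1f GB/sec\n", ms, gb / (ms / 1e3));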
76
Memory Coalescing
CPU: xx GB/sec (0.1% of peak)
Serial: xx GB/sec (1%)
per_row: xx GB/sec (4.5%)
per_elem: xx GB/sec (31%)
https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html
[Diagram callouts: GOOD! GOOD! BAD!]
77
NVidia Visual Profiler
●
Demo
78
Tiling
●
Problem: coalesced reads, scattered writes
●
Goal: coalesced reads, coalesced writes
●
Solution: stage K×K tiles in shared memory (K = 32)
[Diagram: Input → Shared memory tile → Output]
79
Tiling Code
const int N= 1024; // matrix size is NxN
const int K= 32; // tile size is KxK
__global__ void
transpose_parallel_per_element_tiled(float in[], float out[])
{
// (i,j) locations of the tile corners for input & output matrices:
int in_corner_i = blockIdx.x * K, in_corner_j = blockIdx.y * K;
int out_corner_i = blockIdx.y * K, out_corner_j = blockIdx.x * K;
int x = threadIdx.x, y = threadIdx.y;
__shared__ float tile[K][K];
// coalesced read from global mem, TRANSPOSED write into shared mem:
tile[y][x] = in[(in_corner_i + x) + (in_corner_j + y)*N];
__syncthreads();
// read from shared mem, coalesced write to global mem:
out[(out_corner_i + x) + (out_corner_j + y)*N] = tile[x][y];
}
dim3 blocks(N/K, N/K); // blocks per grid
dim3 threads(K, K); // threads per block
transpose_parallel_per_element_tiled<<<blocks,threads>>>(d_in, d_out);
// to be launched with one thread per element, in (tilesize)x(tilesize) threadblocks
// thread blocks read & write tiles, in coalesced fashion
// adjacent threads read adjacent input elements, write adjacent output elmts
Callouts: shared memory, synchronize
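The “conflict-free transpose” in the bandwidth chart later differs by a single detail, the padding trick from the NVIDIA transpose post cited earlier: pad the tile by one column so the column-wise reads tile[x][y] fall into different shared memory banks.

__shared__ float tile[K][K + 1]; // K x (K+1): the extra column avoids bank conflicts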
80
NVidia Visual Profiler
●
Demo
81
Little’s Law
Udacity: Intro to Parallel Programming
82
Little’s Law
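Applied to a memory system, Little's Law can be read as: bytes in flight = memory latency × delivered bandwidth. Hiding hundreds of cycles of DRAM latency at hundreds of GB/sec therefore requires many outstanding memory transactions, which is why a GPU wants far more active threads than it has cores.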
83
Tiling Code
const int N= 1024;
const int K= 32;
__global__ void
transpose_parallel_per_element_tiled(float in[], float out[])
{
// (i,j) locations of the tile corners for input & output matrices:
int in_corner_i = blockIdx.x * K, in_corner_j = blockIdx.y * K;
int out_corner_i = blockIdx.y * K, out_corner_j = blockIdx.x * K;
int x = threadIdx.x, y = threadIdx.y;
__shared__ float tile[K][K];
// coalesced read from global mem, TRANSPOSED write into shared mem:
tile[y][x] = in[(in_corner_i + x) + (in_corner_j + y)*N];
__syncthreads();
// read from shared mem, coalesced write to global mem:
out[(out_corner_i + x) + (out_corner_j + y)*N] = tile[x][y];
}
dim3 blocks(N/K, N/K); // blocks per grid
dim3 threads(K, K); // threads per block
transpose_parallel_per_element_tiled<<<blocks,threads>>>(d_in, d_out);
Callouts: shared memory, synchronize
Increase the number of blocks per streaming multiprocessor
Reduce the number of threads per block
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
84
Tile size experiment
85
Comparison Analysis
[Chart: execution time (log scale, 0.01–1000) for Serial, Per Row, Per Element, Tiled 32, Tiled 16]
86
Memory Coalescing
in transpose_parallel_per_element
const int N= 1024; // matrix size is NxN
const int K= 32; // tile size is KxK
__global__ void
transpose_parallel_per_element(float in[], float out[])
{
int i = blockIdx.x * K + threadIdx.x;
int j = blockIdx.y * K + threadIdx.y;
out[j + i*N] = in[i + j*N];
}
N = 1024, 32×32 thread blocks: the strided global writes are BAD!
Most GPU codes are bandwidth limited.
[Chart: effective bandwidth (GB/sec, 0–400) for copy, shared memory copy, naive transpose, coalesced transpose, conflict-free transpose]
< 40% Bad | 40-60% OK | 60-75% Good | > 75% Excellent!
87
Optimization Verdict
●
Never optimize in a vacuum. Know when to stop
●
Use existing robust libraries
●
Measure & improve memory bandwidth
– Assure sufficient occupancy
– Coalesce global memory accesses
– Minimize latency between accesses
●
Minimize thread divergence
– Avoid branchy code
– Avoid thread workload imbalance
●
Use fast math intrinsics; use double precision only when needed
●
Split workload into streams
●
Learn Nvidia CUDA Visual Profiler (nvvp)
Udacity CS344: Intro to Parallel Programming
88
All of CUDA
●
CUDA atomic operations
●
CUDA streams
●
CUDA textures
●
CUDA Dynamic parallelism
●
Floating point calculations
●
Intrinsics
●
Important algorithms
map, reduce, scan, gather, scatter, stencil, histogram
●
Multi GPU programming
●
Multi GPU/CPU and OpenMP
NOT
89
For more Info
Cuda C Best Practices
Cuda Documentation
Heterogeneous Parallel Programming, University of Illinois (Coursera)
Udacity CS344: Intro to Parallel Programming (NVidia)
Will your next
computer have a
graphics card ?
Why ?
A genie gives you
1,000× more
computing power.
How are you going to use it?
What if it gives you 100,000× more?
92
GeForce GTX 1080 Ti
CUDA Driver Version / Runtime Version 9.2 / 9.2
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11177 MBytes
(28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1683 MHz (1.68 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536),
3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
./deviceQuery
93
Thanks for your patience
Any Questions?
More Related Content

What's hot

FPGA Hardware Accelerator for Machine Learning
FPGA Hardware Accelerator for Machine Learning FPGA Hardware Accelerator for Machine Learning
FPGA Hardware Accelerator for Machine Learning Dr. Swaminathan Kathirvel
 
Graphic Processing Unit (GPU)
Graphic Processing Unit (GPU)Graphic Processing Unit (GPU)
Graphic Processing Unit (GPU)Jafar Khan
 
Introduction to parallel computing using CUDA
Introduction to parallel computing using CUDAIntroduction to parallel computing using CUDA
Introduction to parallel computing using CUDAMartin Peniak
 
Graphics Processing Unit by Saurabh
Graphics Processing Unit by SaurabhGraphics Processing Unit by Saurabh
Graphics Processing Unit by SaurabhSaurabh Kumar
 
Deep learning: Hardware Landscape
Deep learning: Hardware LandscapeDeep learning: Hardware Landscape
Deep learning: Hardware LandscapeGrigory Sapunov
 
GPU Architecture NVIDIA (GTX GeForce 480)
GPU Architecture NVIDIA (GTX GeForce 480)GPU Architecture NVIDIA (GTX GeForce 480)
GPU Architecture NVIDIA (GTX GeForce 480)Fatima Qayyum
 
Basic of AI Accelerator Design using Verilog HDL
Basic of AI Accelerator Design using Verilog HDLBasic of AI Accelerator Design using Verilog HDL
Basic of AI Accelerator Design using Verilog HDLJoohan KIM
 
Presentation on graphics processing unit (GPU)
Presentation on graphics processing unit (GPU)Presentation on graphics processing unit (GPU)
Presentation on graphics processing unit (GPU)MuntasirMuhit
 
Electronic Hardware Design with FPGA
Electronic Hardware Design with FPGAElectronic Hardware Design with FPGA
Electronic Hardware Design with FPGAKrishna Gaihre
 
Simd programming introduction
Simd programming introductionSimd programming introduction
Simd programming introductionChamp Yen
 
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化NVIDIA Japan
 
Hardware Acceleration for Machine Learning
Hardware Acceleration for Machine LearningHardware Acceleration for Machine Learning
Hardware Acceleration for Machine LearningCastLabKAIST
 
3D V-Cache
3D V-Cache 3D V-Cache
3D V-Cache AMD
 
AMD Chiplet Architecture for High-Performance Server and Desktop Products
AMD Chiplet Architecture for High-Performance Server and Desktop ProductsAMD Chiplet Architecture for High-Performance Server and Desktop Products
AMD Chiplet Architecture for High-Performance Server and Desktop ProductsAMD
 
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor Core
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor CoreZen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor Core
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor CoreAMD
 
High performance computing with accelarators
High performance computing with accelaratorsHigh performance computing with accelarators
High performance computing with accelaratorsEmmanuel college
 

What's hot (20)

FPGA Hardware Accelerator for Machine Learning
FPGA Hardware Accelerator for Machine Learning FPGA Hardware Accelerator for Machine Learning
FPGA Hardware Accelerator for Machine Learning
 
Graphic Processing Unit (GPU)
Graphic Processing Unit (GPU)Graphic Processing Unit (GPU)
Graphic Processing Unit (GPU)
 
Introduction to parallel computing using CUDA
Introduction to parallel computing using CUDAIntroduction to parallel computing using CUDA
Introduction to parallel computing using CUDA
 
Parallel Computing on the GPU
Parallel Computing on the GPUParallel Computing on the GPU
Parallel Computing on the GPU
 
GPU Programming
GPU ProgrammingGPU Programming
GPU Programming
 
Graphics Processing Unit by Saurabh
Graphics Processing Unit by SaurabhGraphics Processing Unit by Saurabh
Graphics Processing Unit by Saurabh
 
GPU Computing
GPU ComputingGPU Computing
GPU Computing
 
Deep learning: Hardware Landscape
Deep learning: Hardware LandscapeDeep learning: Hardware Landscape
Deep learning: Hardware Landscape
 
GPU Architecture NVIDIA (GTX GeForce 480)
GPU Architecture NVIDIA (GTX GeForce 480)GPU Architecture NVIDIA (GTX GeForce 480)
GPU Architecture NVIDIA (GTX GeForce 480)
 
Basic of AI Accelerator Design using Verilog HDL
Basic of AI Accelerator Design using Verilog HDLBasic of AI Accelerator Design using Verilog HDL
Basic of AI Accelerator Design using Verilog HDL
 
Presentation on graphics processing unit (GPU)
Presentation on graphics processing unit (GPU)Presentation on graphics processing unit (GPU)
Presentation on graphics processing unit (GPU)
 
Electronic Hardware Design with FPGA
Electronic Hardware Design with FPGAElectronic Hardware Design with FPGA
Electronic Hardware Design with FPGA
 
Simd programming introduction
Simd programming introductionSimd programming introduction
Simd programming introduction
 
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化
NVIDIA cuQuantum SDK による量子回路シミュレーターの高速化
 
Hardware Acceleration for Machine Learning
Hardware Acceleration for Machine LearningHardware Acceleration for Machine Learning
Hardware Acceleration for Machine Learning
 
3D V-Cache
3D V-Cache 3D V-Cache
3D V-Cache
 
AMD Chiplet Architecture for High-Performance Server and Desktop Products
AMD Chiplet Architecture for High-Performance Server and Desktop ProductsAMD Chiplet Architecture for High-Performance Server and Desktop Products
AMD Chiplet Architecture for High-Performance Server and Desktop Products
 
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor Core
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor CoreZen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor Core
Zen 2: The AMD 7nm Energy-Efficient High-Performance x86-64 Microprocessor Core
 
Cuda tutorial
Cuda tutorialCuda tutorial
Cuda tutorial
 
High performance computing with accelarators
High performance computing with accelaratorsHigh performance computing with accelarators
High performance computing with accelarators
 

Similar to GPGPU Computation

Nvidia cuda tutorial_no_nda_apr08
Nvidia cuda tutorial_no_nda_apr08Nvidia cuda tutorial_no_nda_apr08
Nvidia cuda tutorial_no_nda_apr08Angela Mendoza M.
 
Graphics processing uni computer archiecture
Graphics processing uni computer archiectureGraphics processing uni computer archiecture
Graphics processing uni computer archiectureHaris456
 
lecture11_GPUArchCUDA01.pptx
lecture11_GPUArchCUDA01.pptxlecture11_GPUArchCUDA01.pptx
lecture11_GPUArchCUDA01.pptxssuser413a98
 
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCAccelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCinside-BigData.com
 
Introduction to Accelerators
Introduction to AcceleratorsIntroduction to Accelerators
Introduction to AcceleratorsDilum Bandara
 
Monte Carlo G P U Jan2010
Monte  Carlo  G P U  Jan2010Monte  Carlo  G P U  Jan2010
Monte Carlo G P U Jan2010John Holden
 
Computing using GPUs
Computing using GPUsComputing using GPUs
Computing using GPUsShree Kumar
 
Optimizing Parallel Reduction in CUDA : NOTES
Optimizing Parallel Reduction in CUDA : NOTESOptimizing Parallel Reduction in CUDA : NOTES
Optimizing Parallel Reduction in CUDA : NOTESSubhajit Sahu
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
CUDA and Caffe for deep learning
CUDA and Caffe for deep learningCUDA and Caffe for deep learning
CUDA and Caffe for deep learningAmgad Muhammad
 
lecture_GPUArchCUDA02-CUDAMem.pdf
lecture_GPUArchCUDA02-CUDAMem.pdflecture_GPUArchCUDA02-CUDAMem.pdf
lecture_GPUArchCUDA02-CUDAMem.pdfTigabu Yaya
 
Introduction to CUDA
Introduction to CUDAIntroduction to CUDA
Introduction to CUDARaymond Tay
 
Lrz kurs: gpu and mic programming with r
Lrz kurs: gpu and mic programming with rLrz kurs: gpu and mic programming with r
Lrz kurs: gpu and mic programming with rFerdinand Jamitzky
 
Hardware & Software Platforms for HPC, AI and ML
Hardware & Software Platforms for HPC, AI and MLHardware & Software Platforms for HPC, AI and ML
Hardware & Software Platforms for HPC, AI and MLinside-BigData.com
 

Similar to GPGPU Computation (20)

Nvidia cuda tutorial_no_nda_apr08
Nvidia cuda tutorial_no_nda_apr08Nvidia cuda tutorial_no_nda_apr08
Nvidia cuda tutorial_no_nda_apr08
 
Graphics processing uni computer archiecture
Graphics processing uni computer archiectureGraphics processing uni computer archiecture
Graphics processing uni computer archiecture
 
lecture11_GPUArchCUDA01.pptx
lecture11_GPUArchCUDA01.pptxlecture11_GPUArchCUDA01.pptx
lecture11_GPUArchCUDA01.pptx
 
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCAccelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
 
Introduction to Accelerators
Introduction to AcceleratorsIntroduction to Accelerators
Introduction to Accelerators
 
Monte Carlo G P U Jan2010
Monte  Carlo  G P U  Jan2010Monte  Carlo  G P U  Jan2010
Monte Carlo G P U Jan2010
 
Computing using GPUs
Computing using GPUsComputing using GPUs
Computing using GPUs
 
Optimizing Parallel Reduction in CUDA : NOTES
Optimizing Parallel Reduction in CUDA : NOTESOptimizing Parallel Reduction in CUDA : NOTES
Optimizing Parallel Reduction in CUDA : NOTES
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
CUDA and Caffe for deep learning
CUDA and Caffe for deep learningCUDA and Caffe for deep learning
CUDA and Caffe for deep learning
 
Cuda Architecture
Cuda ArchitectureCuda Architecture
Cuda Architecture
 
Can FPGAs Compete with GPUs?
Can FPGAs Compete with GPUs?Can FPGAs Compete with GPUs?
Can FPGAs Compete with GPUs?
 
lecture_GPUArchCUDA02-CUDAMem.pdf
lecture_GPUArchCUDA02-CUDAMem.pdflecture_GPUArchCUDA02-CUDAMem.pdf
lecture_GPUArchCUDA02-CUDAMem.pdf
 
Introduction to CUDA
Introduction to CUDAIntroduction to CUDA
Introduction to CUDA
 
Reduction
ReductionReduction
Reduction
 
Lrz kurs: gpu and mic programming with r
Lrz kurs: gpu and mic programming with rLrz kurs: gpu and mic programming with r
Lrz kurs: gpu and mic programming with r
 
Hardware & Software Platforms for HPC, AI and ML
Hardware & Software Platforms for HPC, AI and MLHardware & Software Platforms for HPC, AI and ML
Hardware & Software Platforms for HPC, AI and ML
 

More from jtsagata

Advanced Notes on Pointers
Advanced Notes on PointersAdvanced Notes on Pointers
Advanced Notes on Pointersjtsagata
 
Eισαγωγή στο TDD
Eισαγωγή στο TDDEισαγωγή στο TDD
Eισαγωγή στο TDDjtsagata
 
Παιγνίδια με Πίνακες και Δείκτες
Παιγνίδια με Πίνακες και ΔείκτεςΠαιγνίδια με Πίνακες και Δείκτες
Παιγνίδια με Πίνακες και Δείκτεςjtsagata
 
Linux and C
Linux and CLinux and C
Linux and Cjtsagata
 
Greek utf8
Greek utf8Greek utf8
Greek utf8jtsagata
 
Function pointers in C
Function pointers in CFunction pointers in C
Function pointers in Cjtsagata
 
Why computers can' compute
Why computers can' computeWhy computers can' compute
Why computers can' computejtsagata
 
Τι είναι υπολογισμός
Τι είναι υπολογισμόςΤι είναι υπολογισμός
Τι είναι υπολογισμόςjtsagata
 
IEEE 754 Floating point
IEEE 754 Floating pointIEEE 754 Floating point
IEEE 754 Floating pointjtsagata
 
Η Τέχνη του TeX/LaTeX
Η Τέχνη του TeX/LaTeXΗ Τέχνη του TeX/LaTeX
Η Τέχνη του TeX/LaTeXjtsagata
 
Unikernels
UnikernelsUnikernels
Unikernelsjtsagata
 
FPGA on the Cloud
FPGA on the Cloud FPGA on the Cloud
FPGA on the Cloud jtsagata
 
Evolutionary keyboard Layout
Evolutionary keyboard LayoutEvolutionary keyboard Layout
Evolutionary keyboard Layoutjtsagata
 
Το εργαλείο
Το εργαλείοΤο εργαλείο
Το εργαλείοjtsagata
 

More from jtsagata (17)

Advanced Notes on Pointers
Advanced Notes on PointersAdvanced Notes on Pointers
Advanced Notes on Pointers
 
C locales
C localesC locales
C locales
 
Eισαγωγή στο TDD
Eισαγωγή στο TDDEισαγωγή στο TDD
Eισαγωγή στο TDD
 
Παιγνίδια με Πίνακες και Δείκτες
Παιγνίδια με Πίνακες και ΔείκτεςΠαιγνίδια με Πίνακες και Δείκτες
Παιγνίδια με Πίνακες και Δείκτες
 
Linux and C
Linux and CLinux and C
Linux and C
 
Git intro
Git introGit intro
Git intro
 
Greek utf8
Greek utf8Greek utf8
Greek utf8
 
Function pointers in C
Function pointers in CFunction pointers in C
Function pointers in C
 
Why computers can' compute
Why computers can' computeWhy computers can' compute
Why computers can' compute
 
Τι είναι υπολογισμός
Τι είναι υπολογισμόςΤι είναι υπολογισμός
Τι είναι υπολογισμός
 
IEEE 754 Floating point
IEEE 754 Floating pointIEEE 754 Floating point
IEEE 754 Floating point
 
Η Τέχνη του TeX/LaTeX
Η Τέχνη του TeX/LaTeXΗ Τέχνη του TeX/LaTeX
Η Τέχνη του TeX/LaTeX
 
Unikernels
UnikernelsUnikernels
Unikernels
 
FPGA on the Cloud
FPGA on the Cloud FPGA on the Cloud
FPGA on the Cloud
 
Evolutionary keyboard Layout
Evolutionary keyboard LayoutEvolutionary keyboard Layout
Evolutionary keyboard Layout
 
Omilia
OmiliaOmilia
Omilia
 
Το εργαλείο
Το εργαλείοΤο εργαλείο
Το εργαλείο
 

Recently uploaded

"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...Sri Ambati
 
AI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekAI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekCzechDreamin
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Product School
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIES VE
 
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...CzechDreamin
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutesconfluent
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Thierry Lestable
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...Product School
 
Introduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationIntroduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationZilliz
 
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo Diehl
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo DiehlFuture Visions: Predictions to Guide and Time Tech Innovation, Peter Udo Diehl
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo DiehlPeter Udo Diehl
 
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...CzechDreamin
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxAbida Shariff
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaRTTS
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...Elena Simperl
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Julian Hyde
 
10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka DoktorováCzechDreamin
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
 

Recently uploaded (20)

"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
AI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekAI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří Karpíšek
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and Planning
 
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutes
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Introduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationIntroduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG Evaluation
 
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo Diehl
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo DiehlFuture Visions: Predictions to Guide and Time Tech Innovation, Peter Udo Diehl
Future Visions: Predictions to Guide and Time Tech Innovation, Peter Udo Diehl
 
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
 
10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 

GPGPU Computation

  • 1. GPGPU Computation Introduction, Performance Analysis and optimization A Tutorial Giannis Tsagatakis jtsagata@gmail.com Msc in Informatics & Multimedia Department of Informatics Engineering TEI of Crete
  • 2. 2 Warning In GP-GPU computing we deal with HUGE numbers – In number of threads – In Teraflops – In number of “cores” – In throughput – And in number of slides It’s more a tutorial / handout There are better slides out there
  • 3. Do you have a sound blaster card in your PC ? Why not ? Remember the days you have one? Do you have a graphics card in your PC? Why ?
  • 4. Agenda ● General purpose GPU Computing – Not about computer graphics ● Vendor Specific (NVIDIA CUDA) – With a bit of OpenCL and OpenACC ● Not focus on parallel algorithms ● Touch some optimization topics – Mostly on memory bandwidth optimization ● I try to be self contained – No previous experience need it – But there is a lot to cover
  • 5. 5 G is for Graphics
  • 8. 8 GPU is for Computation
  • 9. 9 The road to GP GPU Computing ● Let’s use shaders to do computation ● Problems: – Describe problem in native language – Bad floating point computations – Limited memory access patterns ● GP GPU – Better hardware – CUDA, OpenCL, openACC Where is my Sound Blaster ?
  • 10. 10 A brief History ● The fixed graphics pipeline era – 1980 OpenGL Expensive Graphics Workstations (50.000$) – 1990 DirectX PC graphics (200$) ● The programmable Graphics pipeline era – 2001, NVIDIA NV20 (GeForce 3) – 2002 ATI Radeon 9700 – Shader Languages Cg, GLSL, HLSL – Direct X8/9 ● Unified Graphics and Computing era – 2006, GeForce 8800 – CUDA, OpenCL – OpenACC – Direct X10, – Vulkan – Direct Compute – OpenGL 4.X ● Deep Learning ● The bright future – GPUs ? TPUs? FPGAs ?
  • 11. 11 NVIDIA Timeline ● 1999 GeForce 256 ● 2001 GeForce 2 Programmable ● 2004 Scalable Link Interface ● 2006 CUDA ● 2007 Tesla – Unified shader model ● 2009 Fermi – Fused multiply add ● 2013 Kepler – Dynamic parallelism – Unified memory ● 2014 Maxwell ● 2016 Pascal – Page mitigation engine – High bandwidth memory – NVLink ● 2017 Volta – Tensor cores
  • 12. 12 The death of CPU Scaling ● Intel(R) Xeon(R) CPU E5- 2680 – Cores 14 – Threads 28 ● Single thread utilization – Advanced Vector Extensions – AVX2 (256 bits) 8 x 32bit 28 x 8 =224, X2 = 448 – Single Thread: 0.22% max – AVX-512 ● Is that a lot of computation ;
  • 13. 13 The rise (and limits) of Many-Core
  • 14. 14 High Performance Computing (HPA) GPUs, TPUs, FPGAs, ASICs
  • 15. 15 CPU vs GPU Latency oriented design Throughput oriented design
  • 17. 17 Streaming Multiprocessor Pascal GP100 Streaming Multiprocessor Special Function Unit LD/ST Load/Store Unit Double Precision Unit
  • 19. 19 Tensor Cores on Volta https://devblogs.nvidia.com/programming-tensor-cores-cuda-9/
  • 20. 20 Volta GV100 ● 5,376 32-bit integer cores ● 5,376 32-bit floating point cores ● 2,688 64-bit floating point cores ● 672 Tensor Cores ● 336 texture units ● 4096bit memory bus width ● 21.1 Billion Transistors / 815 mm2 / 12nm ● 300 Watt ● $9,269.00 & FREE shipping 32GB Bulk (Amazon).
  • 21. 21 Communication Patterns Eight GPU hybrid cube mesh architecture with NVLink.
  • 24. 24 What is CUDA ● Development Tools ● Performance Analysis Tools ● CUDA – Compute Unified Device Architecture – Parallel computing platform and API – C/C++/Fortran – OpenACC, OpenCL compatible
  • 26. 26 The many ways to GPU Acceleration ● Libraries – Drop in replacements – Minimal code changes ● OpenACC Directives – Easily Accelerate Applications ● Programming Languages – C/C++, Fortran, LLVM – MATLAB, Mathematica, LabVIEW – Maximum Flexibility ● Parallel languages – Copperhead (python) – Halide (image processing) ● Software stacks Deep learning stack – Keras ● Tensor Flow – CUDA libararies ● CUDAnn, Digits, cuBlas
  • 27. 27 Domain Specific Libraries Heterogeneous CPU GPU-based architectures CUDA, Intel Xeon Phi, OpenCL From the writers of LAPACK BSD License
  • 37. 37 SIMD // SIMD ??? if (x>a) { y=2*x; } else { y=2*(x+1); } /* t=0 */ z=bool(x>a); /* t=1 */ y1 = 2 * x; /* t=2 */ t = x +1; /* t=3 */ y2 = 2 *t; /* t=4 */ y = z *y1; /* t=5 */ y += not(z) *y1; Branches If every thread choose a different path we have to do double work. In CUDA every 32 threads form a warp. Its good not to have thread diversity within a wrap.
  • 38. 38 Systematic Optimization Weak ScalingWeak Scaling: Run a larger problem Strong ScalingStrong Scaling : Run a problem faster
  • 39. 39 The Big Picture ● Optimize throughput – Not latency ● Choose an algorithm that – Keeps all threads busy – Keeps all SMs busy – Optimize memory transfers ● Must know parallel algorithms – Not the same as serial ones – Not in your Data Structures and Algorithms book Image from : Efficient Parallel Algorithms (CS329) Warwick University
  • 41. 41 A Simple Cuda Kernel SAXPY stands for “Single-Precision A·X Plus Y”. It is a function in the standard Basic Linear Algebra Subroutines (BLAS)library. SAXPY is a combination of scalar multiplication and vector addition, and it’s very simple: it takes as input two vectors of 32-bit floats X and Y with N elements each, and a scalar value A. It multiplies each element X[i] by A and adds the result to Y[i]. __global__ void saxpy(int n, float a, float *x, float *y) { int i = blockIdx.x*blockDim.x + threadIdx.x; if (i < n) y[i] = a*x[i] + y[i]; } // Perform SAXPY on 1M elements int N = 1<<20; saxpy<<<<<<40964096, 256, 256>>>>>>(N, 2.0f, d_x, d_y);
  • 42. 42 Kernel instance parameters ● gridDim.{x,y,z} The dimensions of the grid ● blockDim.{x,y,z} The dimensions of the block ● blockIdx.{x,y,z} The index of the current block within the grid ● threadIdx.{x,y,z} The index of the current thread with the block
  • 43. 43 Configuring the kernel launch // Run kernel on 1M elements on the GPU int N = 1<<20; int blockSize = 256; int numBlocks = (N + blockSize - 1) / blockSize; squaresquare<<<<<<numBlocks, blockSizenumBlocks, blockSize>>>>>>(N, input, output);(N, input, output); Number of blocks to run Maximum Number of Threads/block (max 1024 on newer GPUs) Conceptual model: ● All threads starts at the same time ● Threads wraps (n usually 32) ● Synchronization and memory sharing within a block ● Threads can be execute in any order ● Scalability by using a more expensive GPU
  • 44. 44 Block grid configurations /** 1D grid of 1D blocks **/ __device__ int getGlobalIdx_1D_1D() { return blockIdxblockIdx.x *blockDimblockDim.x + threadIdxthreadIdx.x; } /** 1D grid of 2D blocks **/ /** 1D grid of 3D blocks **/ /** 2D grid of 1D blocks **/ /* 2D grid of 2D blocks */ __device__ int getGlobalIdx_2D_2D() { int blockId = blockIdx.x + blockIdx.y * gridDim.x; int threadId = blockId * (blockDim.x * blockDim.y) + (threadIdx.y * blockDim.x) + threadIdx.x; return threadId; } /* . . . . . . . . . . */ /* 3D grid of 3D blocks */ __device__ int getGlobalIdx_3D_3D() { int blockId = blockIdx.x + blockIdx.y * gridDim.x + gridDim.x * gridDim.y * blockIdx.z; int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z) + (threadIdx.z * (blockDim.x * blockDim.y)) + (threadIdx.y * blockDim.x) + threadIdx.x; return threadId; }
  • 45. 45 Choose size ● Match data organization ● Maximize occupancy (active threads) – Shared memory usage – Register usage – Multiplies of 32 (warp size) ● A black art
  • 46. 46 The 3 steps to CUDA acceleration cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevicecudaMemcpyHostToDevice); cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevicecudaMemcpyHostToDevice); Images from NVIDIA, and Tasos Maragkos (tasmar)
  • 47. 47 The 3 steps to CUDA accelaration saxpy<<<100, 256>>>(N, 2.0f, d_x, d_y);
  • 48. 48 The 3 steps to CUDA accelaration v cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHostcudaMemcpyDeviceToHost);
  • 49. 49 Calling the Kernel, The hard way int main(void) { int N = 1<<20; float *x, *y, *d_x, *d_y; x = (float*)malloc(N*sizeof(float)); y = (float*)malloc(N*sizeof(float)); cudaMalloc(&d_x, N*sizeof(float)); cudaMalloc(&d_y, N*sizeof(float)); for (int i = 0; i < N; i++) {x[i] = 1.0f; y[i] = 2.0f;} cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice); cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice); saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y); cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost); float maxError = 0.0f; for (int i = 0; i < N; i++){ maxError = max(maxError, abs(y[i]-4.0f));} printf("Max error: %fn", maxError); cudaFree(d_x); cudaFree(d_y); free(x); free(y); }
  • 51. 51 Calling the Kernel, The easy way int main(void) { int N = 1<<20; // 1M elements // Allocate Unified Memory -- accessible from CPU or GPU float *x, *y; cudaMallocManaged(&x, N*sizeof(float)); cudaMallocManaged(&y, N*sizeof(float)); // Init arrays .... // Perform SAXPY on 1M elements saxpy<<<(N+255)/256, 256>>>(N, 2.0f, x, y); // Wait for GPU to finish before accessing on host cudaDeviceSynchronize(); float maxError = 0.0f; for (int i = 0; i < N; i++) maxError = max(maxError, abs(y[i]-4.0f)); printf("Max error: %fn", maxError); // Free memory cudaFree(x); cudaFree(y); } https://devblogs.nvidia.com/even-easier-introduction-cuda/ Unified memoryUnified memory architecturearchitecture https://devblogs.nvidia.com/unified-memory-in-cuda-6/
  • 52. 52 Unified memory architecture ● Kepler GPU: no page fault support, limited virtual space, CUDA-6 ● Pascal GPU: page fault support, extended virtual address space (48-bit), CUDA-8 ● Volta GPU: Access counters, NVLINK2
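On Pascal and later, managed pages migrate on demand, so the first kernel to touch an array pays for page faults. The "Prefetched" variant in the timings a few slides below presumably avoids this with cudaMemPrefetchAsync (CUDA 8 and later); a minimal sketch, reusing the managed x and y from the easy-way example:

int device = -1;
cudaGetDevice(&device);
// Migrate both arrays to the GPU before the kernel runs,
// so no page faults occur inside the kernel
cudaMemPrefetchAsync(x, N*sizeof(float), device, 0);
cudaMemPrefetchAsync(y, N*sizeof(float), device, 0);
saxpy<<<(N+255)/256, 256>>>(N, 2.0f, x, y);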
  • 53. 53 “Basic” Profiling with nvprof Lots of other metrics nvprof –-query-metrics
• 54. 54 Speedup! Initialize with a Kernel

// NUMBER is presumably a typedef or macro for float in the original source
__global__ void init_kernel(int n, NUMBER *x, NUMBER *y) {
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) { x[i] = 1.0f; y[i] = 2.0f; }
}

https://devblogs.nvidia.com/unified-memory-cuda-beginners/
• 55. 55 Speedup! The fastest

__global__ void barrier_kernel(int n, NUMBER a, NUMBER *x, NUMBER *y) {
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) {
    x[i] = 1.0f; y[i] = 2.0f;  // initialize in the same kernel
    __syncthreads();           // not really needed here
    y[i] = a*x[i] + y[i];      // then perform the SAXPY
  }
}

https://devblogs.nvidia.com/unified-memory-cuda-beginners/
• 56. 56 The Timings (chart: min/mean/max execution time for the CPU, Simple Kernel, Easy Kernel, In-place, Easy In-place, Prefetched, and Barrier variants)
• 57. 57 Advanced Profiling using the NVIDIA Visual Profiler (nvvp) or the Nsight IDE
• 58. 58 Verdict: Memory Bandwidth
• 59. 59 Verdict: Memory statistics
• 61. 61 OpenCL ● Khronos Group – Apple, Altera, AMD, ARM, Xilinx, Intel, Creative, ... ● Heterogeneous platform – CPUs, GPUs, DSPs, FPGAs – Accelerators (Intel Movidius, Adapteva) ● Active development – OpenCL 2.2 (2017) ● To be merged with Vulkan ● There are also OpenGL Compute Shaders ● More complex to code – the design maximizes portability and ease of implementation for vendors
• 62. 62 An OpenCL Kernel

● The Kernel
__kernel void vector_add(__global const int *A, __global const int *B, __global int *C) {
  // Get the index of the current element to be processed
  int i = get_global_id(0);
  // Do the operation
  C[i] = A[i] + B[i];
}

● The host side (~100 lines of code; excerpt)
// Create a program from the kernel source
cl_program program = clCreateProgramWithSource(context, 1,
    (const char **)&source_str, (const size_t *)&source_size, &ret);
// Build the program
ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
// Create the OpenCL kernel
cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);
// Set the arguments of the kernel
ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);
// Execute the OpenCL kernel on the list
size_t global_item_size = LIST_SIZE; // Process the entire lists
size_t local_item_size = 64;         // Divide work items into groups of 64
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
    &global_item_size, &local_item_size, 0, NULL, NULL);

https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/

● The Setup – AMD SDK – Intel OpenCL SDK – CUDA – Xilinx SDAccel ● The Driver
• 63. 63 The C++ Way: THRUST ● Analogous to the C++ STL – Containers, iterators, algorithms ● Brings template power to CUDA programming (see the sketch below) ● thrust::device_vector – sorting: thrust::sort and thrust::sort_by_key – transformations: thrust::transform – reductions: thrust::reduce and thrust::transform_reduce – scans: thrust::inclusive_scan, thrust::exclusive_scan, thrust::transform_inclusive_scan ● Backends: CUDA, OpenMP, TBB ● Apache License v2.0 https://docs.nvidia.com/cuda/thrust/index.html
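A minimal sketch of the container/algorithm style (standard Thrust headers assumed; x is a host array of N floats):

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>

// Copy the host array to the GPU, then run parallel algorithms on it
thrust::device_vector<float> d(x, x + N);
thrust::sort(d.begin(), d.end());                      // parallel sort on the device
float sum = thrust::reduce(d.begin(), d.end(), 0.0f);  // parallel reduction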
• 64. 64 Alternative Ways to SAXPY: Thrust & cuBLAS

Thrust (the C++ STL way):
using namespace thrust::placeholders;
int N = 1<<20;
thrust::host_vector<float> x(N), y(N);
... // alloc and copy host to device
thrust::device_vector<float> d_x = x;
thrust::device_vector<float> d_y = y;
// Perform SAXPY
thrust::transform(d_x.begin(), d_x.end(), d_y.begin(), d_y.begin(),
                  2.0f * _1 + _2);
// copy results to the host vector
y = d_y;

cuBLAS:
int N = 1<<20;
cublasInit();
cublasSetVector(N, sizeof(x[0]), x, 1, d_x, 1);
cublasSetVector(N, sizeof(y[0]), y, 1, d_y, 1);
// Perform SAXPY on 1M elements
cublasSaxpy(N, 2.0, d_x, 1, d_y, 1);
cublasGetVector(N, sizeof(y[0]), d_y, 1, y, 1);
cublasShutdown();

https://devblogs.nvidia.com/six-ways-saxpy/
• 65. 65 OpenACC

void saxpy(int n, float a, float * restrict x, float * restrict y) {
  #pragma acc kernels
  for (int i = 0; i < n; ++i)
    y[i] = a*x[i] + y[i];
}
...
// Perform SAXPY on 1M elements
// Looks like a normal C call
saxpy(1<<20, 2.0, x, y);

Directives: #pragma acc kernels, parallel, data, loop, cache, update, declare, wait

https://www.openacc.org/

● Directive based ● Cray, NVIDIA, PGI ● Similar in spirit to OpenMP – a merger of the two has been discussed
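The directives are ignored until an OpenACC-aware compiler is asked to honour them; with the PGI compiler, for example (invocation assumed, not from the slides):

pgcc -acc -Minfo=accel saxpy.c   # -Minfo=accel reports what the compiler parallelized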
• 66. 66 Even More Ways to SAXPY: Python & Fortran

CUDA Fortran:
module mymodule
contains
  attributes(global) subroutine saxpy(n, a, x, y)
    real :: x(:), y(:), a
    integer :: n, i
    attributes(value) :: a, n
    i = threadIdx%x + (blockIdx%x-1) * blockDim%x
    if (i<=n) y(i) = a*x(i) + y(i)
  end subroutine saxpy
end module mymodule

program main
  use cudafor; use mymodule
  real, device :: x_d(2**20), y_d(2**20)
  x_d = 1.0; y_d = 2.0
  ! Perform SAXPY on 1M elements
  call saxpy<<<4096, 256>>>(2**20, 2.0, x_d, y_d)
end program main

Copperhead (Python):
from copperhead import *
import numpy as np

@cu
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

x = np.arange(2**20, dtype=np.float32)
y = np.arange(2**20, dtype=np.float32)

with places.gpu0:
    gpu_result = saxpy(2.0, x, y)

with places.openmp:
    cpu_result = saxpy(2.0, x, y)

https://devblogs.nvidia.com/six-ways-saxpy/
• 68. 68 Matrix Transpose Problem (N = 1024)

// CPU code
void transpose_CPU(float in[], float out[]) {
  for(int j=0; j < N; j++)
    for(int i=0; i < N; i++)
      out[j + i*N] = in[i + j*N]; // out(j,i) = in(i,j)
}

// Single GPU thread
__global__ void transpose_serial(float in[], float out[]) {
  for(int j=0; j < N; j++)
    for(int i=0; i < N; i++)
      out[j + i*N] = in[i + j*N];
}
transpose_serial<<<1,1>>>(d_in, d_out);

1 thread, 2 inner loops, no parallelism.
https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc/
• 69. 69 Matrix Transpose Problem: Some Parallelism

// 1 thread per row
__global__ void transpose_parallel_per_row(float in[], float out[]) {
  int i = threadIdx.x;
  for(int j=0; j < N; j++)
    out[j + i*N] = in[i + j*N]; // out(j,i) = in(i,j)
}
transpose_parallel_per_row<<<1,N>>>(d_in, d_out);

Why not transpose_parallel_per_row<<<N,1>>>(d_in, d_out)? (One thread per block would leave 31 of the 32 lanes in each warp idle.)

1 block, 1024 threads, 1 loop. Some parallelism.
• 70. 70 Matrix Transpose Problem: First performance gains Serial: 173.164 ms per_row: 1.37914 ms What is the next optimization step?
• 71. 71 Matrix Transpose Problem: Going full parallel

__global__ void transpose_parallel_per_element(float in[], float out[]) {
  int i = blockIdx.x * K + threadIdx.x;
  int j = blockIdx.y * K + threadIdx.y;
  out[j + i*N] = in[i + j*N];
}
dim3 blocks(N/K, N/K);  // 32 x 32 blocks
dim3 threads(K, K);     // 32 x 32 threads/block
transpose_parallel_per_element<<<blocks,threads>>>(d_in, d_out);

Maximum parallelism, no loops. Warning: maximum parallelism does not always give the best performance.
• 72. 72 Matrix Transpose Problem: Full parallel performance Serial: 173.164 ms per_row: 1.37914 ms par_per_el: 0.090304 ms Can we get more performance?
• 73. 73 Lemon juice the GPU What is the next optimization step? Can we get more performance?
• 74. 74 More Optimizations... NOT! Wait: time to stop optimizing and start thinking ● Did we get the performance we were seeking? ● Do we still have the same hot-spot?
• 75. 75 Memory bandwidth

./deviceQuery (GeForce GTX 1080 Ti)
GPU Max Clock rate: 1683 MHz (1.68 GHz)
Memory Clock rate: 5505 MHz
Memory Bus Width: 352-bit

Theoretical maximum bandwidth:
Memory clock: 5.505 * 10^9 clocks/sec
Memory bus: 352 bits = 44 bytes
Maximum bandwidth: 5.505 * 10^9 * 44 = 242.220 GB/sec

Achieved bandwidth for the transpose, N = 1024, time = 0.67 ms:
1024 * 1024 * 4 * 2 / (0.67 * 10^-3) = 1.25 * 10^10 = 12.5 GB/sec

As a fraction of peak: < 40% Bad, 40-60% OK, 60-75% Good, > 75% Excellent!
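A sketch of how a timing like the 0.67 ms above can be measured, using CUDA events around the kernel launch (blocks and threads as configured for the per-element kernel earlier):

cudaEvent_t start, stop;
cudaEventCreate(&start); cudaEventCreate(&stop);
cudaEventRecord(start);
transpose_parallel_per_element<<<blocks, threads>>>(d_in, d_out);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
float ms = 0;
cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
// N*N floats read plus N*N floats written:
double gbps = 2.0 * N * N * sizeof(float) / (ms * 1e-3) / 1e9;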
• 76. 76 Memory Coalescing Coalesced access: GOOD! Scattered/strided access: BAD! Achieved bandwidth as a fraction of peak: CPU: xx GB/sec 0.1% Serial: xx GB/sec 1% per_row: xx GB/sec 4.5% per_elem: xx GB/sec 31% https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html
• 78. 78 Tiling ● Problem: coalesced reads, scattered writes ● Goal: coalesced reads, coalesced writes ● Solution: stage each KxK tile through shared memory (diagram: Input, shared-memory tile, Output; K = 32)
• 79. 79 Tiling Code (shared memory, synchronize)

const int N = 1024; // matrix size is NxN
const int K = 32;   // tile size is KxK

__global__ void transpose_parallel_per_element_tiled(float in[], float out[]) {
  // (i,j) locations of the tile corners for input & output matrices:
  int in_corner_i = blockIdx.x * K, in_corner_j = blockIdx.y * K;
  int out_corner_i = blockIdx.y * K, out_corner_j = blockIdx.x * K;
  int x = threadIdx.x, y = threadIdx.y;
  __shared__ float tile[K][K];
  // coalesced read from global mem, TRANSPOSED write into shared mem:
  tile[y][x] = in[(in_corner_i + x) + (in_corner_j + y)*N];
  __syncthreads();
  // read from shared mem, coalesced write to global mem:
  out[(out_corner_i + x) + (out_corner_j + y)*N] = tile[x][y];
}

dim3 blocks(N/K, N/K);  // blocks per grid
dim3 threads(K, K);     // threads per block
transpose_parallel_per_element_tiled<<<blocks,threads>>>(d_in, d_out);

// launched with one thread per element, in (tilesize)x(tilesize) threadblocks
// thread blocks read & write tiles, in coalesced fashion
// adjacent threads read adjacent input elements, write adjacent output elements
• 81. 81 Little's Law (Udacity: Intro to Parallel Programming)
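Little's Law, applied to memory: the amount of data that must be in flight equals latency times throughput. For illustration (numbers assumed, not from the slides): to sustain roughly 240 GB/sec against a memory latency of about 400 ns, some 240e9 * 400e-9 = 96 KB must be in flight at every moment, which is why a GPU needs thousands of resident threads just to keep its memory bus busy.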
• 83. 83 Tiling Code, tuned for occupancy

Same kernel as slide 79 (transpose_parallel_per_element_tiled, N = 1024, K = 32), but consider the launch configuration against the device limits:
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024

With K = 32 each block has 1024 threads, so at most 2 blocks fit per streaming multiprocessor. Reducing the number of threads per block (e.g. K = 16, 256 threads) increases the number of blocks per streaming multiprocessor.
• 85. 85 Comparison Analysis (log-scale timing chart comparing the Serial, Per Row, Per Element, Tiled 32, and Tiled 16 kernels)
• 86. 86 Memory Coalescing in transpose_parallel_per_element

const int N = 1024; // matrix size is NxN
const int K = 32;   // tile size is KxK
__global__ void transpose_parallel_per_element(float in[], float out[]) {
  int i = blockIdx.x * K + threadIdx.x;
  int j = blockIdx.y * K + threadIdx.y;
  out[j + i*N] = in[i + j*N]; // coalesced read (GOOD!), strided write (BAD!)
}

Most GPU codes are bandwidth limited. (Chart: effective bandwidth of copy, shared-memory copy, naive transpose, coalesced transpose, and conflict-free transpose kernels; rating scale: < 40% Bad, 40-60% OK, 60-75% Good, > 75% Excellent!)
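The conflict-free transpose in the chart above removes one last bottleneck: with a [K][K] tile and 32 shared-memory banks, reading tile[x][y] walks down a column whose elements all map to the same bank. The standard fix, described in the NVIDIA transpose post cited earlier, is to pad each row by one element:

__shared__ float tile[K][K+1];  // K+1 columns: successive elements of a column
                                // now fall into different banks, so the
                                // transposed read is conflict-free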
• 87. 87 Optimization Verdict ● Never optimize in a vacuum; know when to stop ● Use existing robust libraries ● Measure & improve memory bandwidth – Assure sufficient occupancy – Coalesce global memory accesses – Minimize latency between accesses ● Minimize thread divergence – Avoid branchy code – Avoid thread workload imbalance ● Use fast math intrinsics; use double precision only when you need it ● Split workload into streams ● Learn the NVIDIA CUDA Visual Profiler (nvvp) Udacity CS344: Intro to Parallel Programming
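On the fast-math point: single-precision intrinsics trade a few bits of accuracy for much higher throughput, and nvcc's -use_fast_math flag applies such substitutions globally. A minimal sketch (kernel name and use case assumed):

__global__ void activate(int n, float *x) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
    x[i] = __expf(x[i]);  // hardware special-function exp: faster, slightly
                          // less accurate than the standard expf()
}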
• 88. 88 All of CUDA? NOT quite; not covered here: ● CUDA atomic operations ● CUDA streams ● CUDA textures ● CUDA dynamic parallelism ● Floating point calculations ● Intrinsics ● Important algorithms: map, reduce, scan, gather, scatter, stencil, histogram ● Multi-GPU programming ● Multi GPU/CPU and OpenMP
• 89. 89 For more Info ● CUDA C Best Practices Guide ● CUDA Documentation ● Heterogeneous Parallel Programming, University of Illinois (Coursera) ● Udacity CS344: Intro to Parallel Programming (NVIDIA)
• 90. Will your next computer have a graphics card? Why?
• 91. A genie gives you 1,000 times more computing power. How are you going to use it? What if it gives you 100,000 times more?
  • 92. 92 GeForce GTX 1080 Ti CUDA Driver Version / Runtime Version 9.2 / 9.2 CUDA Capability Major/Minor version number: 6.1 Total amount of global memory: 11177 MBytes (28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores GPU Max Clock rate: 1683 MHz (1.68 GHz) Memory Clock rate: 5505 Mhz Memory Bus Width: 352-bit L2 Cache Size: 2883584 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 2048 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) ./deviceQuery
• 93. 93 Thanks for your patience Any Questions?