Monte Carlo simulation is one of the most important numerical methods in financial derivative pricing and risk management. As exotic derivative models grow more sophisticated, Monte Carlo has become the method of choice for numerical implementation because of its flexibility in high-dimensional problems. However, the method used to discretize the underlying stochastic differential equation (SDE) has a significant effect on convergence. In addition, the choice of computing platform and the exploitation of parallelism offer further efficiency gains. We consider here the effect of higher order discretization methods, together with the possibilities opened up by the advent of programmable graphics processing units (GPUs), on the overall performance of Monte Carlo and quasi-Monte Carlo methods.
1. Monte Carlo Simulation
and its Efficient Implementation
Robert Tong
28 January 2010
Experts in numerical algorithms
and HPC services
2. Outline
Why use Monte Carlo simulation?
Higher order methods and convergence
GPU acceleration
The need for numerical libraries
3. Why use Monte Carlo methods?
Essential for high dimensional problems – many
degrees of freedom
For applications with uncertainty in inputs
In finance
Important in risk modelling
Pricing/hedging derivatives
4. The elements of Monte Carlo simulation
Derivative pricing
Simulate a path of asset values
Compute payoff from path
Compute option value
Numerical components
Pseudo-random number generator
Discretization scheme
5. The demand for ever increasing performance
In the past
Faster solution has been provided by increasing processor
speeds
Want a quicker solution? Buy a new processor
Present
Multi-core/Many-core architectures, without increased
processor clock speeds
A major challenge for existing numerical algorithms
The escalator has stopped... or gone into reverse!
Existing codes may well run slower on multi-core
6. Ways to improve performance in Monte Carlo
simulation
1. Use higher order discretization
2. Keep low order (Euler) discretization –
make use of multi-core potential
e.g. GPU (Graphics Processing Unit)
3. Use high order discretization on GPU
4. Use quasi-random sequence (Sobol’, …) and
Brownian Bridge
5. Implement Sobol’ sequence and Brownian Bridge
on GPU
14. GPU acceleration
Retain low order Euler discretization
Use multi-core GPU architecture to achieve speed-up
15. The Emergence of GPGPU Computing
Initially – computation carried out by CPU (scalar,
serial execution)
CPU
evolves to add cache, SSE instructions, ...
GPU
added to speed graphics display – driven by gaming needs
multi-core, SIMT, limited flexibility
CPU and GPU move closer
CPU becomes multi-core
GPU becomes General Purpose (GPGPU) – fully
programmable
19. Programming GPUs – CUDA and OpenCL
CUDA (Compute Unified Device Architecture,
developed by NVIDIA)
Extension of C to enable programming of GPU devices
Allows easy management of parallel threads executing on
GPU
Handles communication with ‘host’ CPU
OpenCL
Standard language for multi-device programming
Not tied to a particular company
Will open up GPU computing
Incorporates elements of CUDA
20. First step – obtaining and installing CUDA
FREE download from
http://www.nvidia.com/object/cuda_learn.html
See: Quickstart Guide
Require:
CUDA capable GPU – GeForce 8, 9, 200, Tesla, many Quadro
Recent version of NVIDIA driver
CUDA Toolkit – essential components to compile and build applications
CUDA SDK – example projects
Update environment variables (Linux default shown)
PATH /usr/local/cuda/bin
LD_LIBRARY_PATH /usr/local/cuda/lib
CUDA compiler nvcc works with gcc (Linux) MS VC++ (Windows)
21. Host (CPU) – Device (GPU) Relationship
Application program initiated on Host (CPU)
Device ‘kernels’ execute on GPU in SIMT (Single
Instruction Multiple Thread) manner
Host program
Transfers data from Host memory to Device (GPU)
memory
Specifies number and organisation of threads on Device
Calls Device ‘kernel’ as a C function, passing parameters
Copies output from Device back to Host memory
22. Organisation of threads on GPU
SM (Streaming Multiprocessor) manages up to 1024
threads
Each thread is identified by an index
Threads execute as Warps of 32 threads
Threads are grouped in blocks (user specifies
number of threads per block)
Blocks make up a grid
23. Memory hierarchy
• On the device, code can
• Read/write per-thread registers and local memory
• Read/write per-block shared memory
• Read/write per-grid global memory
• Read (only) per-grid constant memory
• The host (CPU) can
• Read/write per-grid global memory
• Read/write per-grid constant memory
24. CUDA terminology
‘kernel’ – C function executing on the GPU
__global__ declares function as a kernel
Executed on the Device
Callable only from the Host
void return type
__device__ declares function that is
Executed on the Device
Callable only from the Device
25. Application to Monte Carlo simulation
Monte Carlo paths lead to highly parallel
algorithms
• Applications in finance e.g. simulation based
on SDE (return on asset)
dS_t / S_t = mu dt + sigma dW_t   (drift + Brownian motion)
• Requires fast pseudorandom or
Quasi-random number generator
• Additional techniques improve efficiency:
Brownian Bridge, stratified sampling, …
26. Random Number Generators:
choice of algorithm
Must be highly parallel
Implementation must satisfy statistical
tests of randomness
Some common generators do not
guarantee randomness properties when
split into parallel streams
A suitable choice: MRG32k3a (L’Ecuyer)
27. MRG32k3a: skip ahead
Generator combines 2 recurrences:
x_{n,1} = (a_1 x_{n-2,1} + b_1 x_{n-3,1}) mod m_1
x_{n,2} = (a_2 x_{n-1,2} + b_2 x_{n-3,2}) mod m_2
Each recurrence has the form (M Giles, note on implementation)
y_n = (x_n, x_{n-1}, x_{n-2})^T,   y_{n+1} = A y_n mod m
Precompute A^p in O(log p) operations on CPU, then
y_{n+p} = A^p y_n mod m
28. MRG32k3a: modulus
Combined and individual recurrences
z_n = (x_{n,1} - x_{n,2}) mod m_1
Can compute using double precision divide – slow
Use 64 bit integers (supported on GPU) – avoid
divide
Bit shift – faster (used in CPU implementations)
Note: speed of different possibilities subject to
change as NVIDIA updates floating point
capability
29. MRG32k3a: memory coalescence
GPU performance limited by memory access
Require memory coalescence for fast transfer of data
Order RNG output to retain consecutive memory
accesses
x_{n,t,b}, the n-th element generated by thread t in block b,
is stored at index t + N_t n + N_t N_p b
(sequential ordering would give n + N_p t + N_t N_p b)
N_t = number of threads per block, N_p = number of points per thread
(Implementation by M Giles)
30. MRG32k3a: single – double precision
L’Ecuyer’s example implementation in double
precision floating point
Double precision on high end GPUs – but
arithmetic operations much slower in execution
than single precision
GPU implementation in integers – final output
cast to double
Note: output to single precision gives sequence
that does not pass randomness tests
31. MRG32k3a: GPU benchmarking –
double precision
GPU – NVIDIA Tesla C1060
CPU – serial version of integer implementation running on
single core of quad-core Xeon
VSL – Intel Library MRG32k3a
ICC – Intel C/C++ compiler
VC++ – Microsoft Visual C++
              GPU        CPU-ICC    CPU-VC++   VSL-ICC    VSL-VC++
Samples/sec   3.00E+09   3.46E+07   4.77E+07   9.35E+07   9.32E+07
32. MRG32k3a: GPU benchmarking –
single precision
Note: for double precision all sequences were identical
For single precision GPU and CPU identical
GPU and VSL differ
max abs err 5.96E-008
Which output preferred?
use statistical tests of randomness
              GPU        CPU-ICC    CPU-VC++   VSL-ICC    VSL-VC++
Samples/sec   3.49E+09   3.58E+07   5.24E+07   1.02E+08   9.75E+07
33. LIBOR Market Model on GPU
Equally weighted portfolio of 15 swaptions, each with the
same maturity but different lengths and different strikes
34. Numerical Libraries for GPUs
The problem
The time-consuming work of writing basic numerical
components should not be repeated
The general user should not need to spend many days
writing each application
The solution
Standard numerical components should be available as
libraries for GPUs
43. Example program – kernel function
__global__ void mrg32k3a_kernel(int np, FP *d_P){
    unsigned int v1[3], v2[3];
    int n, i0;
    FP x;
    // initialisation for first point
    nag_gpu_mrg32k3a_stream_init(v1, v2, np);
    // now do points, storing with a stride of blockDim.x
    // so that accesses by a warp are coalesced
    i0 = threadIdx.x + np*blockDim.x*blockIdx.x;
    for (n=0; n<np; n++) {
        nag_gpu_mrg32k3a_next_uniform(v1, v2, x);
        d_P[i0] = x;
        i0 += blockDim.x;
    }
}
44. Library issues: Auto-tuning
Performance affected by mapping of algorithm to
GPU via threads, blocks and warps
Implement a code generator to produce variants
using the relevant parameters
Determine optimal performance
Li, Dongarra & Tomov (2009)
45. Early Success with BNP Paribas
Working with Fixed Income Research & Strategies
Team (FIRST)
NAG mrg32k3a works well in BNP Paribas CUDA “Local Vol
Monte-Carlo”
Passes rigorous statistical tests for randomness properties
(Diehard, Dieharder, TestU01)
Performance good
Being able to match the GPU random numbers with the
CPU version of mrg32k3a has been very valuable for
establishing validity of output
47. And with Bank of America Merrill Lynch
“The NAG GPU libraries are helping us enormously
by providing us with fast, good quality algorithms.
This has let us concentrate on our models and
deliver GPGPU based pricing much more quickly.”
48. “A N Other Tier 1” Risk Group
“Thank you for the GPU code, we have achieved
speed ups of x120”
In a simple uncorrelated loss simulation:
Number of simulations 50,000
Time taken in seconds 2.373606
Simulations per second 21065
Simulated default rate 311.8472
Theoretical default rate 311.9125
24 trillion numbers in 6 hours
49. NAG routines for GPUs – 1
Currently available
Random Number Generator (L’Ecuyer mrg32k3a)
Uniform distribution
Normal distribution
Exponential distribution
Gamma distribution
Sobol sequence for Quasi-Monte Carlo (to 19,000
dimensions)
Brownian Bridge
50. NAG routines for GPUs – 2
Future plans
Random Number Generator – Mersenne Twister
Linear algebra components for PDE option pricing
methods
Time series analysis – wavelets ...
51. Summary
GPUs offer high performance computing for specific
massively parallel algorithms such as Monte Carlo
simulations
GPUs are lower cost and require less power than
corresponding CPU configurations
Numerical libraries for GPUs will make these an
important computing resource
Higher order methods for GPUs being considered
52. Acknowledgments
Mike Giles (Mathematical Institute, University of
Oxford) – algorithmic input
Technology Strategy Board through Knowledge
Transfer Partnership with Smith Institute
NVIDIA for technical support and supply of Tesla
C1060 and Quadro FX 5800
See
www.nag.co.uk/numeric/GPUs/
Contact:
francois.cassier@nag.com