1. Reservoir engineering in a HPC (zettaflops) world: a 'disruptive' presentation
(and: how long would it take to run the BOSIM Brent FFM on today's HPC?)
Hans Haringa
GameChanger & Reservoir Engineer (retired)
IRD - Emerging Technologies
Shell Global Solutions International B.V.
4. Supercomputing is the biggest, fastest computing right this minute
Likewise, a supercomputer is one of the biggest, fastest computers right this minute
So, the 'definition' of supercomputing is constantly changing
Whatever happens in supercomputing today will be on your desktop in about 1/3 to 1/2 the length of a 'regular Shell career'
So, if you have experience with supercomputing now, you'll be ahead of the curve when things get to the desktop
Jargon: supercomputing is also called High Performance Computing (HPC)
5. Size: many problems that are interesting to scientists and engineers can't fit on a PC – usually because they need more than a few GB of RAM, or more than a few TB of disk
Speed: many problems that are interesting to scientists and engineers would take a very, very long time to run on a (GID) PC: months or even years
Why bother: HPC gives the ability to do bigger, better, more exciting science. If your code can run faster, you can tackle much bigger problems in the same amount of time that you used to need for smaller problems
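The "bigger problems in the same time" point can be made quantitative with Gustafson's law (my illustration, not from the deck): if a hypothetical serial fraction s of the work cannot be parallelised, the scaled speedup on N processors is S(N) = N - s(N - 1).

```python
# Gustafson's law: with more processors you grow the problem so that
# wall-clock time stays constant. 'serial_fraction' is an assumed,
# illustrative value, not a measurement of any real simulator.

def gustafson_speedup(n_procs: int, serial_fraction: float) -> float:
    """Scaled speedup S(N) = N - s * (N - 1)."""
    return n_procs - serial_fraction * (n_procs - 1)

if __name__ == "__main__":
    for n in (1, 16, 1024):
        # With s = 0.05, 1024 processors still give ~973x more science
        # per unit wall-clock time.
        print(n, gustafson_speedup(n, serial_fraction=0.05))
```

Even a modest serial fraction caps the gain, which is why the deck's later slides worry about parallelism overheads.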
7. The CDC 7600 breaks the Megaflop/s barrier with 1.24 Megaflop/s in 1971
The Cray-2 (boasting 2 GB of memory) exceeds the Gigaflop/s barrier in 1986
Intel's ASCI Red broke the Teraflop/s barrier in 1997
IBM Roadrunner (3 Megawatt, $133 Million) shattered the Petaflop/s barrier in 2008
flop/s: floating-point operation execution rate; refers to 64-bit floating-point additions or multiplications (Linpack measure)
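To get a feel for what the flop/s measure means, achieved rates can be estimated by timing a known number of 64-bit additions. This pure-Python sketch (function name is my own) under-reports real hardware enormously, since Linpack-class benchmarks use optimised kernels:

```python
# Rough flop/s estimate: time a known count of 64-bit floating-point adds.
# Interpreted Python carries huge per-operation overhead, so this shows
# the *method* (operations / elapsed time), not the machine's peak.
import time

def estimate_flops(n_ops: int = 10_000_000) -> float:
    """Return achieved additions per second for a simple summation loop."""
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        acc += 1.0          # one 64-bit floating-point addition
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

if __name__ == "__main__":
    print(f"~{estimate_flops():.2e} flop/s (interpreted Python)")
```

The same operations-per-second arithmetic underlies every barrier on this slide, from the CDC 7600's 1.24 Megaflop/s to Roadrunner's Petaflop/s.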
12. Giant reservoirs AND more data result in models growing from mega to giga cells, whilst HPC went from mega to peta
[Chart: model size, in millions of cells, over time]
15. zettaflops +/- 2028 - Black Swans?
Power consumption (energy per operation)
Limited feature-size headroom as we converge to the nanoscale
Latency for global interaction, measured in local clock cycles
Execution and program parallelism to handle the resulting extremes of physical concurrency
Bandwidth - including both system-wide bandwidth and memory-access bandwidth
The overhead of managing fine-grain parallel resources and concurrent actions
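The latency point above can be put in numbers (assumed figures, not from the deck): at a 3 GHz local clock, one cycle is about 0.33 ns, in which light covers only ~10 cm, so even a light-speed signal crossing a 30 m machine room costs hundreds of cycles.

```python
# How many local clock cycles does a signal need just to cross the machine?
# Assumed figures (illustrative, not from the deck): 3 GHz clock, 30 m span.
SPEED_OF_LIGHT = 3.0e8      # m/s in vacuum; fibre and copper are slower still
CLOCK_HZ = 3.0e9            # 3 GHz local clock
SPAN_M = 30.0               # distance across a large machine room

cycle_time_s = 1.0 / CLOCK_HZ            # ~0.33 ns per cycle
transit_s = SPAN_M / SPEED_OF_LIGHT      # one-way, light-speed transit
cycles = transit_s / cycle_time_s
print(f"{cycles:.0f} cycles one-way")    # 300 cycles, before any switching
```

Real interconnects add switching and serialisation delay on top, which is why global interaction latency is listed as a potential black swan.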
17. zettaflops in support of 'solving' many of the Grand Challenges
Let's briefly focus on one close to RE:
Modelling multiple processes at interacting scales (time and dimension)
23. BOSIM Brent FFM study, mid-'80s
Some 35,000 cells
Cray X-MP, 2-CPU version, 400 Megaflop/s
About 1,700 CPU-seconds per simulated year
10-year run => about 5 hrs clock time
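The deck's opening question - how long would this run take on today's HPC - can be answered by naive rescaling (my sketch; it ignores parallel efficiency, memory bandwidth and serial fractions, so it is a lower bound, not a prediction): the mid-'80s study did roughly 17,000 CPU-seconds of work at 400 Megaflop/s.

```python
# Naive rescaling of the BOSIM Brent FFM run to faster machines.
# Assumes the work is pure flops and scales perfectly - a lower bound
# on wall-clock time, not a prediction of real simulator performance.
CRAY_XMP_FLOPS = 400e6              # 400 Megaflop/s (from the deck)
RUN_SECONDS = 1700 * 10             # 1,700 CPU-s per simulated year, 10 years
total_flops = CRAY_XMP_FLOPS * RUN_SECONDS   # 6.8e12 flop for the whole study

for name, rate in [("Teraflop/s", 1e12), ("Petaflop/s", 1e15)]:
    print(f"{name}: {total_flops / rate:.3g} s")   # ~6.8 s and ~6.8 ms
```

By this back-of-the-envelope measure, the entire five-hour 1980s study is milliseconds of work for a petaflop machine - which is precisely why model sizes grew from mega to giga cells instead.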