1
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Lawrence Livermore National Laboratory , P. O. Box 808, Livermore, CA 94551
ASCI-00-003.1
ASCI Terascale Simulation
Requirements and
Deployments
David A. Nowak
ASCI Program Leader
Mark Seager
ASCI Terascale Systems Principal Investigator
Lawrence Livermore National Laboratory
University of California
*
Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
2
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Overview
ASCI program background
Applications requirements
Balanced terascale computing
environment
Red Partnership and CPLANT
Blue-Mountain partnership
Sustained Stewardship TeraFLOP/s
(SST)
Blue-Pacific partnership
Sustained Stewardship TeraFLOP/s
(SST)
3
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
A successful Stockpile Stewardship
Program requires a successful ASCI
B61 W62 W76 W78 W80 B83 W84 W87 W88
Primary Certification
Materials Dynamics
Advanced Radiography
Secondary Certification
Enhanced Surety
ICF
Enhanced Surveillance
Hostile Environments
Weapons System Engineering
Directed Stockpile Work
4
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
ASCI’s major technical elements meet
Stockpile Stewardship requirements
Defense Applications
& Modeling
University
Partnerships
Integration
Simulation
& Computer
Science
Integrated
Computing Systems
Applications
Development
Physical
& Materials
Modeling
V&V
DisCom2
PSE
Physical
Infrastructure
& Platforms
Operation of
Facilities
Alliances
Institutes
VIEWS
PathForward
5
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Example terascale computing
environment in CY00 with ASCI White at
LLNL
[Chart: projected balanced computing environment driven by Programs and Application Performance, plotted against Year ('96, '97, '00, 2004). Axes track Computing Speed (TFLOPS), Memory (TeraBytes), Memory BW (TeraBytes/sec), Cache BW (TeraBytes/sec), Interconnect (TeraBytes/sec), Parallel I/O (GigaBytes/sec), Disk (TeraBytes), Archive BW (GigaBytes/sec), and Archival Storage (PetaBytes).]
ASCI is achieving programmatic objectives, but the computing environment will not be in balance at LLNL for the ASCI White platform.
Computational Resource Scaling for ASCI Physics Applications, per FLOPS of peak compute (a worked sizing example follows the list):
• 16 Bytes/s / FLOPS Cache BW
• 1 Byte / FLOPS Memory
• 3 Bytes/s / FLOPS Memory BW
• 0.5 Bytes/s / FLOPS Interconnect BW
• 50 Bytes / FLOPS Local Disk
• 0.002 Bytes/s / FLOPS Parallel I/O BW
• 0.0001 Bytes/s / FLOPS Archive BW
• 1,000 Bytes / FLOPS Archive
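A minimal sketch (not part of the original slides) of how these per-FLOPS ratios translate into absolute targets for a balanced system; the 10 TFLOPS peak used here is an illustrative assumption, roughly the ASCI White class of machine.

/* Minimal sketch: apply the ASCI physics-application scaling ratios above
 * to a hypothetical peak rate to size a balanced system.
 * The 10 TFLOPS peak is an assumption chosen for illustration. */
#include <stdio.h>

int main(void)
{
    double peak_flops = 10e12;               /* assumed peak: 10 TFLOPS  */

    printf("Cache BW:        %6.1f TB/s\n", 16.0    * peak_flops / 1e12);
    printf("Memory:          %6.1f TB\n",    1.0    * peak_flops / 1e12);
    printf("Memory BW:       %6.1f TB/s\n",  3.0    * peak_flops / 1e12);
    printf("Interconnect BW: %6.2f TB/s\n",  0.5    * peak_flops / 1e12);
    printf("Local disk:      %6.0f TB\n",   50.0    * peak_flops / 1e12);
    printf("Parallel I/O BW: %6.1f GB/s\n",  0.002  * peak_flops / 1e9);
    printf("Archive BW:      %6.1f GB/s\n",  0.0001 * peak_flops / 1e9);
    printf("Archive:         %6.1f PB\n",  1000.0   * peak_flops / 1e15);
    return 0;
}

For a 10 TFLOPS peak this yields, for example, 30 TB/s of memory bandwidth, 5 TB/s of interconnect bandwidth, 500 TB of local disk, and 10 PB of archive.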
6
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
SNL/Intel ASCI Red
• Good interconnect bandwidth
• Poor memory/NIC bandwidth
Kestrel compute board (~2,332 in system): two nodes per board, each with 2 CPUs, a NIC, and 256 MB memory; runs the Cougar operating system; cache coherent (cc) on a node, not cc across the board
Eagle I/O board (~100 in system): CPU + NIC nodes with 128 MB memory each; runs the Terascale Operating System (TOS)
Storage: classified and unclassified ends (38 and 32 switches respectively in the diagram), 6.25 TB each, ~2 GB/s sustained each (total system)
Links: 800 MB/s bi-directional per link; shortest hop ~13 µs, longest hop ~16 µs
Aggregate link bandwidth = 1.865 TB/s
The memory/NIC path is the bottleneck; the interconnect itself is good.
7
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
SNL/Compaq ASCI C-Plant
C-Plant is located at Sandia National Laboratories
Currently a hypercube; C-Plant will be reconfigured as a mesh in 2000
C-Plant has 50 scalable units
Linux operating system
C-Plant is a “Beowulf” configuration
Scalable unit:
•16 “boxes”
•2 16-port Myricom switches
•160 MB/s each direction, 320 MB/s total
“Box”:
•500 MHz Compaq eV56 (Alpha) processor
•192 MB SDRAM
•55 MB NIC
•Serial port
•Ethernet port
•______ disk space
This is a research project — a long way from being a production system.
8
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
LANL/SGI/Cray ASCI Blue Mountain
3.072 TeraOPS peak
• Compute: 48 x 128-CPU SMPs = 3 TF
• Memory: 48 x 32 GB/SMP = 1.5 TB
• Local disk: 48 x 1.44 TB/SMP = 70 TB
• Inter-SMP compute fabric bandwidth = 48 x 12 HIPPI/SMP = 115,200 MB/s bi-directional (worked arithmetic below)
• HPSS: 20(14) HPSS movers = 40(27) HPSS tape drives of 10-20 MB/s each = 400-800 MB/s bandwidth; HPSS metadata server
• LAN bandwidth = 48 x 1 HIPPI/SMP = 9,600 MB/s bi-directional
• Link to LAN and link to legacy MXs: bandwidth = 8 HIPPI = 1,600 MB/s bi-directional
• HPSS bandwidth = 4 HIPPI = 800 MB/s bi-directional
• RAID bandwidth = 48 x 10 FC/SMP = 96,000 MB/s bi-directional
Aggregate link bandwidth = 0.115 TB/s
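A worked form of the compute-fabric figure above, using the 200 MB/s per-HIPPI link bandwidth quoted on the following GSN slide:

48 SMPs x 12 HIPPI/SMP = 576 links; 576 links x 200 MB/s per link = 115,200 MB/s ≈ 0.115 TB/s aggregate.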
9
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Blue Mountain
Planned GSN Compute Fabric
9 separate 32x32 crossbar switch networks; 3 groups of 16 computers each
Expected improvements:
• Throughput: 115,200 MB/s => 460,800 MB/s (4x)
• Link bandwidth: 200 MB/s => 1,600 MB/s (8x)
• Round-trip latency: 110 µs => ~10 µs (11x)
Aggregate link bandwidth = 0.461 TB/s
10
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
LLNL/IBM Blue-Pacific
3.889 TeraOP/s Peak
Three sectors (S, Y, K), each connected to the HPGN by 24 links; external HiPPI and FDDI connections
Each SP sector comprises:
• 488 Silver nodes
• 24 HPGN links
System parameters:
• 3.89 TFLOP/s peak
• 2.6 TB memory
• 62.5 TB global disk
Per-sector memory and disk:
• One sector: 2.5 GB/node memory, 24.5 TB global disk, 8.3 TB local disk
• Each of the other two sectors: 1.5 GB/node memory, 20.5 TB global disk, 4.4 TB local disk
SST achieved >1.2 TFLOP/s on sPPM, on a problem >70x larger than ever solved before!
Aggregate link bandwidth = 0.439 TB/s
11
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
I/O Hardware Architecture of
SST
System data and control networks; each 488-node IBM SP sector = 56 GPFS servers + 432 Silver compute nodes, with 24 SP links to the second-level switch
Each SST sector:
• Has local and global I/O file systems
• 2.2 GB/s delivered global I/O performance
• 3.66 GB/s delivered local I/O performance
• Separate SP first-level switches
• Independent command and control
• Link bandwidth = 300 MB/s bi-directional
Full system mode:
• Application launch over full 1,464 Silver nodes
• 1,048 MPI/US tasks, 2,048 MPI/IP tasks
• High-speed, low-latency communication between all nodes
• Single STDIO interface
12
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Partisn (SN-Method) Scaling
[Two scaling plots vs. number of processors (1 to 10,000) at a constant number of cells per processor, comparing RED, Blue Pac (1p/n), Blue Pac (4p/n), Blue Mtn (UPS), Blue Mtn (MPI), Blue Mtn (MPI.2s2r), and Blue Mtn (UPS.offbox).]
13
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
The JEEP calculation adds to our understanding of the performance of insensitive high explosives
• This calculation involved 600 atoms (the largest number ever at such high resolution) with 1,920 electrons, using about 3,840 processors
• This simulation provides crucial insight into the detonation properties of IHE at high pressures and temperatures.
• Relevant experimental data (e.g., shock wave data) on hydrogen fluoride (HF) are almost nonexistent because of its corrosive nature.
• Quantum-level simulations, like this one, of HF-H2O mixtures can substitute for such experiments.
14
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Silver Node delivered memory
bandwidth is around 150-200
MB/s/process
[Chart: delivered memory bandwidth per process (0-350 MB/s) vs. active processes per node (1P, 2P, 4P, 8P) for E4000, E6000, Silver, AS4100, AS8400, Octane, and Onyx2.]
Silver peak Bytes:FLOP/s ratio is 1.3/2.565 = 0.51
Silver delivered B:F ratio is 200/114 = 1.75
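A minimal sketch (an assumption for illustration, not the benchmark behind the chart) of the kind of STREAM-style triad loop typically used to measure delivered per-process memory bandwidth figures like the 150-200 MB/s quoted above.

/* Minimal sketch: measure delivered memory bandwidth with a STREAM-style
 * triad.  Illustrative only; not the benchmark used for this slide. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (8L * 1024 * 1024)   /* large enough to defeat the caches */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];                 /* triad: 2 loads + 1 store */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    double mbytes = 3.0 * N * sizeof(double) / 1e6;   /* bytes moved */
    printf("triad: %.1f MB/s per process\n", mbytes / secs);
    free(a); free(b); free(c);
    return 0;
}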
15
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
MPI_SEND/US delivers low latency and high aggregate bandwidth, but shows counter-intuitive behavior per MPI task
[Chart: point-to-point MPI_SEND/US performance (0-120 MB/s) vs. message size (0 to 3.5 MB) for One Pair, Two Pair, and Four Pair of tasks, plus the Two-Pair and Four-Pair aggregates.]
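A minimal MPI ping-pong sketch (an illustration, not the actual test code behind the chart) of how per-task bandwidth versus message size is typically measured between a pair of tasks.

/* Minimal ping-pong bandwidth sketch between ranks 0 and 1 (illustrative). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int reps = 20;
    for (long n = 1024; n <= 3500000; n *= 2) {
        char *buf = malloc(n);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++) {
            if (rank == 0) {
                MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;
        if (rank == 0)   /* each rep moves n bytes each way, 2n per round trip */
            printf("%8ld bytes  %8.1f MB/s\n", n, 2.0 * n * reps / t / 1e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}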
16
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
LLNL/IBM White
10.2 TeraOPS Peak
Aggregate link bandwidth = 2.048 TB/s, five times better than the SST; peak is three times better, so the Bytes:FLOPS ratio is improving
MuSST (PERF) system
• 8 PDEBUG nodes w/16 GB SDRAM
• ~484 PBATCH nodes w/8 GB SDRAM
• 12.8 GB/s delivered global I/O performance
• 5.12 GB/s delivered local I/O performance
• 16 Gigabit Ethernet external network
• Up to 8 HIPPI-800
Programming/usage model
• Application launch over ~492 NH-2 nodes
• 16-way MuSPPA, shared memory, 32-bit MPI
• 4,096 MPI/US tasks
• Likely usage is 4 MPI tasks/node with 4 threads/MPI task (see the hybrid sketch below)
• Single STDIO interface
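A minimal hybrid MPI + OpenMP sketch (an assumption about the programming style, not code from the program) illustrating the "4 MPI tasks per node with 4 threads per MPI task" usage model above; in practice 4 tasks would be launched per node with OMP_NUM_THREADS=4.

/* Hybrid MPI + OpenMP sketch of the 4-tasks-x-4-threads-per-node model
 * (illustrative only). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);              /* one MPI task per node "slot"  */
    int rank, ntasks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    double local = 0.0;
    #pragma omp parallel reduction(+:local)   /* e.g. 4 threads per task  */
    {
        local += omp_get_thread_num() + 1;    /* stand-in for real work   */
    }

    double total = 0.0;                  /* combine partials across tasks */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d tasks, %d threads/task, total = %g\n",
               ntasks, omp_get_max_threads(), total);

    MPI_Finalize();
    return 0;
}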
17
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Interconnect issues for future
machines
— Why Optical? —
Need to increase the Bytes:FLOPS ratio
Memory bandwidth (cache line) utilization will be dramatically lower for codes that use arbitrarily connected meshes and adaptive refinement, i.e., indirect addressing
Interconnect bandwidth must be increased and latency must be reduced to allow a broader range of applications and packages to scale well
To get very large configurations (30, 70, 100 TeraOPS), larger SMPs will be deployed
For a fixed B:F interconnect ratio, this means more bandwidth coming out of each SMP
Multiple pipes/planes will be used; optical reduces cable count
Machine footprint is growing; 24,000 square feet may require optical
Network interface paradigm (a one-sided put/get sketch follows)
Virtual memory direct memory access
Low-latency remote get/put
Reliability, Availability, and Serviceability (RAS)
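The remote get/put paradigm above can be illustrated with a minimal MPI-2 one-sided sketch (an illustration of the programming model, not the NIC interface being proposed): one task exposes a window of memory and a peer deposits data into it directly.

/* Minimal MPI-2 one-sided (remote put) sketch illustrating the low-latency
 * remote get/put paradigm (illustrative only; needs at least 2 ranks). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf = 0.0;                    /* memory exposed for remote access */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof buf, sizeof buf,
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open an access epoch             */
    if (rank == 0) {
        double value = 3.14;             /* deposit directly into rank 1     */
        MPI_Put(&value, 1, MPI_DOUBLE, 1 /* target */, 0 /* displacement */,
                1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);               /* complete the epoch               */

    if (rank == 1)
        printf("rank 1 received %g via remote put\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}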
18
Oak Ridge Interconnects Workshop - November 1999
ASCI-00-003.
Lawrence Livermore National Laboratory , P. O. Box 808, Livermore, CA 94551
ASCI-00-003.1
ASCI Terascale Simulation
Requirements and
Deployments
David A. Nowak
ASCI Program Leader
Mark Seager
ASCI Terascale Systems Principal Investigator
Lawrence Livermore National Laboratory
University of California
*
Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.