WE START WITH YES.
HPC ACCELERATING
COMBUSTION
ENGINE DESIGN
18th April, 2017
65th HPC USER FORUM
SIBENDU SOM
Lead Computational Scientist
Computational Fellow
Argonne National Laboratory
OUTLINE
Introduction
Role of M & S For Engine Design
Need for Exascale for Engine Combustion Simulations
Capacity & Capability computing
– Integration with workflow manager “Swift” for high throughput
Sensitivity Analysis and Uncertainty Quantification for Engine
Combustion Simulations
Example uses of HPC for Industry
Pioneering the use of Machine Learning tools in Engine Simulations
ROI (Cluster vs. Supercomputer)
Argonne’s VERIFI program for working with Industry
Take Home Message!
THE EDISONIAN APPROACH IS GREAT, BUT 21ST CENTURY
PROBLEMS REQUIRE MORE
Engine design has historically been performed through extensive hardware testing, which is slow, difficult, expensive, and not guaranteed to reach an optimal solution.
 Computer simulations provide a potentially transformative new strategy
 Multi-objective optimizations can identify the "best" solution
 Uncertainty and sensitivity analysis can identify paths to improve the optimizations and establish safety margins
 Simulations can complement experiments, especially for problems related to design and optimization
HIGH-FIDELITY MODELS ACROSS THE DESIGN SPACE vs. HIGH THROUGHPUT SIMULATIONS TO EXPLORE THE FULL DESIGN SPACE
Challenges: In-nozzle and spray physics, Cyclic variability (need for high-fidelity turbulence
models), details of combustion chemistry for different fuels, grid convergence, NOx and soot
modeling, after-treatment, material issues (Conjugate Heat Transfer), moving boundaries,
Lagrangian parcels, adaptive meshing, small time steps, densely embedded mesh
INTERNAL COMBUSTION ENGINES
Traditional Engine simulations involve:
 Unresolved Nozzle flow
 Simplified combustion models
 Coarse mesh => grid-dependence
 Simplified turbulence models
 Poor load-balancing algorithms
Transformational Approach:
 Detailed chemistry based combustion models
 Fine mesh => grid-convergence
 High-fidelity turbulence models: LES based
 Two-phase physics based fuel spray model
 In-nozzle-flow models
 Scaling on Exascale systems
Towards Predictive Simulation of the Internal Combustion Engine
Reality: extensive tuning to match experimental data
ROLE OF MODELING & SIMULATIONS FOR ENGINES
Dream: Full in silico engine and fuel co-design with quantified uncertainties that dramatically reduces the need for physical prototyping, allows rapid characterization and testing of novel engine and fuel designs, and reduces design costs and time-to-market of new concepts.
Reality: We can perform simplified simulations on engine and fuel components. The uncertainties are poorly understood.
Transformation: Exploit emerging computational capabilities through the development of automated, highly parallelized CFD and combustion chemistry codes with fully integrated uncertainty quantification.
REVOLUTIONIZING THE CO-DESIGN OF ENGINES AND FUELS IS BIGGER THAN AN EXASCALE PROBLEM
According to John Deur, Director of Engine Research at Cummins Inc., the following levels of speedup are needed to significantly impact today's engine development*:
[10x] 360-degree cylinder geometry
[10x] multiple cycle variations
[10x] more accurate turbulence model (LES)
[10x] accurate spray dynamics
[50x] detailed chemical kinetics for real transportation fuels
TOTAL speedup needed: 10 × 10 × 10 × 10 × 50 = 500,000 times today's standard (today's industry standard is 64 cores with a 24-hour turnaround).
OUR PROBLEM IS EXASCALE => ~30 million cores for a 24-hour turnaround!
* DOE Engine Simulation Roadmap Workshop, August 18, 2014
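The arithmetic behind these figures can be checked directly (a quick sketch; the factor labels mirror the speedup list above):

```python
# Speedup factors quoted on the slide: geometry, cycles, turbulence (LES),
# spray dynamics, and detailed chemistry.
factors = [10, 10, 10, 10, 50]

total_speedup = 1
for f in factors:
    total_speedup *= f

baseline_cores = 64  # today's industry standard, 24-hour turnaround
cores_needed = baseline_cores * total_speedup  # same turnaround assumed

print(total_speedup)  # 500000
print(cores_needed)   # 32000000, i.e. the "~30 million cores" on the slide
```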
COMPUTATIONAL RESOURCES
Clusters:
Fusion Cluster
 320 compute nodes
 2,560 compute processors
 12.5 terabytes memory
 500 terabytes disk
 25.9 teraflops peak
Blues Cluster
 310 compute nodes
 4,960 compute processors
 107.8 teraflops peak
Super-Computer:
Next-gen Super-Computer in 2019: 50 racks, 180 petaflops (0.18 exaflops: still not exascale!)
EXAMPLES OF CURRENT STATE-OF-THE-ART FOR ENGINE M & S
HIGH-FIDELITY MODELS ACROSS THE DESIGN SPACE (Capability Computing): a few large high-fidelity simulations on 1000s of processors, run for 2-3 weeks
HIGH THROUGHPUT SIMULATIONS TO EXPLORE THE FULL DESIGN SPACE (Capacity Computing): 1000s of medium-fidelity simulations on ~10k processors, run for 2-3 weeks
[Figure: "string type" cavitation in nozzle holes #1-#3]
ENABLING HIGH THROUGHPUT COMPUTING ON MIRA*
Typically engines are designed and
operating conditions are optimized
based on experimental studies and
engineer’s intuition
The full design space may not be explored, and engineering intuition may not work for a completely new concept designed from scratch
Goal: Develop capabilities to use
leadership class machines (e.g., Mira)
for high throughput computing to run
~10k simulations in two weeks
Approach:
 Port the engine simulation code (CONVERGE) to Mira
 Profile the code to identify computational bottlenecks and remove them
 Incorporate advanced load-balancing schemes
 Improve inter-processor communication
 Improve I/O with MPI
Mira's 768k cores allow thousands of high-fidelity simulations to be set up and run. A workflow manager (Swift) dispatches cases 1 through 10,000 and provides:
 Job-launching scripts for optimum throughput
 Automated restarts
 Error handling
 Pre/post-processing
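As a rough illustration (not the actual Swift scripts; `run_case` is a hypothetical stand-in for submitting one CONVERGE case and polling its status), the automated-restart and error-handling logic can be sketched as:

```python
# Hypothetical sketch of the ensemble logic a workflow manager automates:
# launch thousands of independent cases, restart failures, cap retries.
from concurrent.futures import ThreadPoolExecutor

MAX_RETRIES = 2

def run_case(case_id, attempt):
    """Stand-in for submitting one case; returns True on success."""
    # Pretend every 1000th case fails on its first attempt.
    return not (case_id % 1000 == 0 and attempt == 0)

def run_with_restarts(case_id):
    for attempt in range(MAX_RETRIES + 1):
        if run_case(case_id, attempt):
            return case_id, attempt  # attempt > 0 means an automated restart
    return case_id, -1  # gave up after MAX_RETRIES restarts

with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_with_restarts, range(1, 10001)))

restarted = [cid for cid, att in results if att > 0]
failed = [cid for cid, att in results if att == -1]
print(len(results), len(restarted), len(failed))  # 10000 10 0
```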
Grey lines: individual cycles
Black line: average of 2048 cycles
Gasoline Compression Ignition engine from S. Ciatti (Argonne):
• ~0.5 million cells
• ~2k simulations run in 4 days on Mira
• On a cluster this may take up to a month
* ~60 Million core hours provided through the ALCC program of ASCR
GLOBAL SENSITIVITY ANALYSIS on CFD Simulations of
GASOLINE COMPRESSION IGNITION (GCI)*
Simulation details
 48 species, 152 reactions (Liu et al. mechanism): 87% iso-octane, 13% n-heptane
 Peak cell count: 0.5 million cells
 RANS simulations on a sector mesh
 Wall-clock time:
 6 hours on 64 processors of a cluster
 30-40 hours on 64 processors on Mira
Note: Mira is ~6x slower than the cluster on a per-processor basis
 2 operating conditions simulated: idle, low-load
 ~1024 simulations per load condition
 34 uncertain inputs identified
1.9 L GM diesel engine
* Experimental campaign led by Dr. S. Ciatti @ ANL
CFD sector mesh
Targets of Interest
 Combustion characteristics (CA10, CA50)
 Emissions (HC, CO, NOx, soot)
 Performance (IMEP, PRR)
INPUT UNCERTAINTIES AND RANGE OF PERTURBATION

Boundary conditions / Fuel properties (Parameter, Mean, Min., Max.):
T_piston 400 385 415
T_cylinder 380 370 390
T_cylinder head 400 385 415
rpm 1500 1495 1505
swirl ratio 2.2 2 2.4
tkei 1.6 1.2 2
length scale (mm) 8.2 3 13
EGR% 0.1 0.08 0.12
tempi 389.5 386.5 392.5
presi (Pa) 138000 135000 140000
critical temp (iso) 540 530 550
density 1 0.95 1.05
heat of vaporization 1 0.9 1.1
vapor pressure 1 0.9 1.1
viscosity 1 0.7 1.3
diameter_noz 141 139 143
angle_xz_noz 74 72 76
discharge_coeff 0.8 0.75 0.85
start_inject -21 -21.2 -20.8
duration of injection 9 8.8 9.2
mass injected (mg) 1.383 1.314 1.452
temp_injected 353 348 358

Model constants (Parameter, Mean, Min., Max.):
ceps1 1.42 1.35 1.5
ceps2 1.68 1.6 1.76
c_s 0 0 1.5
c_ps 0.03 0 0.16
schmidt 0.78 0.6 0.9
prandtl 0.9 0.85 0.95
Balpha 0.6 0.55 0.65
kh_cnst2 7 5 9
rt_cnst2b 1 0.5 2
cnst3rt 0.1 0.1 1
cone_noz 9 7 12
new parcel cut off 0.05 0.03 0.07
 34 inputs perturbed for each operating condition (idle, low-load)
 The cases distribute the input values across a 34-dimensional hypercube
 Roughly 30 samples per input, i.e., ~1024 simulations per operating condition
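One common way to fill such a hypercube is Latin hypercube sampling. The sketch below is illustrative only (plain NumPy, with three of the 34 inputs and their table ranges); the slide does not name the sampling tool actually used.

```python
# Minimal Latin hypercube sampler: one point per stratum in every dimension.
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, bounds):
    """Sample n_samples points in len(bounds) dimensions, scaled to [min, max]."""
    n_dims = len(bounds)
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    # Shuffle stratum order per dimension, then jitter within each stratum.
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, n_dims))) / n_samples
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# Three of the 34 perturbed inputs, with (min, max) from the table:
bounds = [(385.0, 415.0),        # T_piston
          (2.0, 2.4),            # swirl ratio
          (135000.0, 140000.0)]  # presi (Pa)
samples = latin_hypercube(1024, bounds)
print(samples.shape)  # (1024, 3)
```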
SENSITIVE PARAMETERS AT IDLE CONDITIONS
[Bar charts of normalized sensitivity indices for the four targets; the ten parameters appearing in each chart:]
CA10: IVC Pressure, ceps2, ceps1, Length scale, Inclusion Angle, IVC Temp, Piston Temperature, Swirl Ratio, cnst3rt, Prandtl Number
NOx: ceps2, Schmidt Number, ceps1, Nozzle Cone, Cd, Length scale, Swirl Ratio, Inclusion Angle, Piston Temperature, Fuel density
HC: Piston Temperature, IVC Temp, Inclusion Angle, ceps1, IVC Pressure, Fuel Mass, ceps2, New Parcel Cutoff, Prandtl Number, Length scale
CO: Inclusion Angle, ceps1, IVC Temp, Piston Temperature, ceps2, IVC Pressure, c_s, Fuel Mass, Injection Duration, Cd
 Different input parameters have different sensitivities towards various targets
 GSA can be used as a screening tool to reduce the design space and then DoE can be performed
HPC ENABLING OPTIMIZATION OF GCI INJECTION TIMING
Visualization, courtesy J. Insley (ALCF, ANL)
IDENTIFYING STOCHASTIC EVENTS IN ENGINES THROUGH
DATA ANALYTIC TOOLS
 Cycle-to-Cycle Variation (CCV) is a concern for SI engine operation
 We are pioneering the use of ML techniques to identify the key factors determining if a
cycle is a “high” cycle or a “low” cycle by mining LES engine data
 Standard regression techniques can be challenged by complex interactions of factors
Collaboration with P. Balaprakash at MCS, Argonne
[Figure: example of a "high" cycle and a "low" cycle]
ML analysis (Random Forest) tells us that cycles with high skewness of the flame in the Z direction and high eccentricity of the flame center of mass (COM) tend to be low cycles
Random Forest
ranks factors by
their importance
in affecting peak
cylinder pressure
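A minimal sketch of the Random-Forest ranking described above; the features and data here are synthetic stand-ins for the mined LES cycle metrics, not the actual study data.

```python
# Rank hypothetical per-cycle features by their importance for peak pressure.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_cycles = 200
# Synthetic features: [flame skewness in Z, COM eccentricity, a nuisance variable]
X = rng.random((n_cycles, 3))
# Construct peak pressure so it drops with skewness and eccentricity,
# as the slide reports for low cycles.
y = 60.0 - 8.0 * X[:, 0] - 5.0 * X[:, 1] + 0.5 * rng.standard_normal(n_cycles)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = dict(zip(["skewness_z", "com_eccentricity", "nuisance"],
                       model.feature_importances_))
print(sorted(importances, key=importances.get, reverse=True))  # most important first
```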
* Experimental engine data from Politecnico di Torino; LES funded by GTI and PoliTo
HIGH THROUGHPUT COMPUTING ON MIRA FOR CCV
 128 parallel simulations were run simultaneously on the Mira supercomputer, for 2 consecutive cycles each
 The wall-clock time is ~60 hours per cycle on 64 cores on Mira
 Including wait time, simulating 2 cycles across 128 parallel simulations took approximately 10 days on Mira
50 cycles in 2.5 months (cluster) vs. 256 cycles in 10 days (Mira)
 LES based turbulence model is
employed to understand cyclic
variability during the gas
exchange process
 50 consecutive cycles were simulated on a cluster, which took 2.5 months
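The quoted throughput gap follows from simple arithmetic (assuming ~30 days per month):

```python
# Cycles-per-day comparison behind "50 cycles in 2.5 months vs. 256 cycles in 10 days"
mira_cycles = 128 * 2                         # 128 parallel simulations, 2 cycles each
mira_days = 10
cluster_cycles, cluster_days = 50, 2.5 * 30   # 2.5 months at ~30 days/month

mira_rate = mira_cycles / mira_days           # 25.6 cycles/day
cluster_rate = cluster_cycles / cluster_days  # ~0.67 cycles/day
print(mira_cycles, round(mira_rate / cluster_rate, 1))  # 256 38.4
```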
ROI: COST BENEFIT ANALYSIS (12,000 cases, 64 cores/case)

                                | Mira     | cluster (best case) | cluster (worst case)
cases per ensemble              | 12,000   | 16                  | 3
cores per ensemble              | 768,000  | 1,024               | 192
walltime per ensemble           | 36 hr    | 6 hr                | 6 hr
time per ensemble (incl. queue) | 3-4 days | 6 hr (no queue)     | 6 hr (no queue)
number of waves needed          | 1        | 750                 | 4,000
time to science                 | 4 days   | 6 months            | 3 years
number of core-hours            | 27 M     | 4.6 M               | 4.6 M
cost per core-hr                | 1 cent   | 5 cents             | 5 cents
total core-hr cost              | $270K    | $230K               | $230K
design time                     | 1X       | 45X                 | 250X
Total cost = core-hrs + design time cost
supercomputer = $ 270 K+ 4-5 days
cluster (best case) = $ 230 K+ 6 months
cluster (worst case) = $ 230 K+ 3 years
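The core-hour line items above follow directly from the quoted rates:

```python
# Core-hour cost arithmetic: core-hours times cents per core-hour
core_hours = {"supercomputer": 27_000_000, "cluster": 4_600_000}
cents_per_hr = {"supercomputer": 1, "cluster": 5}

for name in core_hours:
    dollars = core_hours[name] * cents_per_hr[name] / 100
    print(name, dollars)  # supercomputer 270000.0, cluster 230000.0
```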
http://www.verifi.anl.gov/
3rd workshop in November 2017
Predicting Cyclic Variability through Simulations
LEVERAGING COLLABORATIONS BETWEEN DIFFERENT DOE
OFFICES TO AID INDUSTRY WORK
DOE Office of Science DOE EERE: VTP Industry
Extensive Collaborations between Argonne (basic and applied areas) and Industry
aiding the development of engine simulation tools for solving real-world problems
 Automotive OEMs
 Heavy-duty OEMs
 Software vendors
 Oil and energy companies
CORE CAPABILITIES & COLLABORATIONS
Large variety of engine platforms and fuels
• Light- and heavy-duty engines
• CI – SI – LTC – Dual-fuel
• Gasoline – Diesel – Biofuels
• Gasoline Compression Ignition
Model Development
• Nozzle flow for liquid and gaseous fuels
• Diesel Sprays
• Gasoline Sprays
• Detailed chemistry, Mechanism reduction
• Large Eddy Simulations
• Spark Modeling
• Parallel approach to capture Cyclic variability
• Turbulence chemistry interaction
• Optimization
• Uncertainty Analysis
• Conjugate Heat Transfer
• Develop lower-order models based on high-
fidelity CFD
Computational resources
• Clusters and Super-Computer Facility
HPC Enabler for Simulation based Engine Design
Use of High-spatial and temporal resolution
Robust turbulence models
Use of detailed chemistry based combustion models
Solving “one-of-a-kind” problem
Benefits
Unprecedented insights into the combustion process
Grid-convergent results => Increased predictive capability
Modify “best practices” in industry
Enable the use of next-generation computational architectures
HPC Innovation award by IDC in Collaboration with Caterpillar and Convergent Science Inc., 2014
FLC Award for Excellence in Technology Transfer to Convergent Science Inc., 2015
Sponsors: U.S. Department of Energy, Office of Vehicle Technology
under the management of Gurpreet Singh, Leo Breton, Kevin Stork
We gratefully acknowledge the computing resources provided by the ASCR
Leadership Computing Challenge (ALCC) award of 60 million core-hours on Mira
supercomputer at the ALCF at Argonne National Laboratory
Argonne National Laboratory
J. Kodavasal, K. Saha, M. Ameen, N. Van Dam, P. Kundu, R. Torelli, P. Pal
A. Wagner (Chemical Science and Engineering)
K. Harms, J. Insley, M. Garcia, R. Bair (Leadership Computing Facility)
Past: Y. Pei, Q. Xue, Z. Wang, Prof. Michele Battistoni (U. Perugia)
Convergent Science Inc.
GM R&D
Acknowledgements
www.anl.gov
THANK YOU
ssom@anl.gov
Back-Up Slides
THE ENERGY OUTLOOK
COURTESY: INTERNATIONAL ENERGY AGENCY, 2013
 The transportation sector relies heavily on oil and gas today
– Forecasts for 2030 and 2050 show that more than 50% of the world's total energy will still come from fossil fuels
Let us burn these fossil fuels more efficiently and cleanly!
GOVERNING EQUATIONS

Continuity:
\[ \frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = s \]

Momentum:
\[ \frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j} = -\frac{\partial P}{\partial x_i} + \frac{\partial \sigma_{ij}}{\partial x_j} + s_i \]

Energy:
\[ \frac{\partial (\rho e)}{\partial t} + \frac{\partial (\rho u_i e)}{\partial x_i} = -P \frac{\partial u_i}{\partial x_i} + \sigma_{ij} \frac{\partial u_i}{\partial x_j} + \frac{\partial}{\partial x_i}\left( k \frac{\partial T}{\partial x_i} \right) + \frac{\partial}{\partial x_i}\left( \rho \sum_m D h_m \frac{\partial Y_m}{\partial x_i} \right) + s \]

Species transport:
\[ \frac{\partial \rho_m}{\partial t} + \frac{\partial (\rho_m u_i)}{\partial x_i} = \frac{\partial}{\partial x_i}\left( \rho D \frac{\partial Y_m}{\partial x_i} \right) + s_m \]

where \(\rho\) is the density of the mixture, \(\rho_m\) is the density of species m, \(\sigma_{ij}\) denotes the stress tensor, \(k\) stands for the conductivity, \(Y_m\) is the mass fraction of species m, \(D\) is the mass diffusion coefficient, \(h_m\) denotes the species enthalpy, \(e\) stands for the specific internal energy, and \(s\) is the respective source term.
Adaptive Meshing: a boon for engine simulations, a bane for high-performance computing
• AMR makes it possible to refine the mesh anywhere in the flow field, as desired
• Load distribution is extremely challenging with embedding levels higher than 4, especially with moving boundaries (piston, valves, needle, etc.)
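For reference, fixed embedding in CONVERGE-style codes typically halves the cell size per embedding level; this one-line rule (an assumption stated here, not taken from the slide) is consistent with the 1 mm / 0.5 mm / 0.25 mm values used elsewhere in these slides:

```python
# Embedded cell size: each embedding level halves the base cell size,
# dx = base / 2**level  (assumed convention, matching the setup table values)
def embedded_cell_size(base_mm, level):
    return base_mm / 2 ** level

print(embedded_cell_size(1.0, 1))  # 0.5  (valve seat, piston liner)
print(embedded_cell_size(1.0, 2))  # 0.25 (spark plug region)
```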
CFD DOMAIN, BCs AND MESH ARRANGEMENT
[Figure: CFD domain with intake and exhaust boundaries; mesh sizes 1 mm (base) and 0.25 mm]
1D GT-Power modeling results are used to define boundary conditions for the 3D CFD simulation
Code                   | CONVERGE v2.1
Computational grid     | 3D, structured, without Adaptive Mesh Refinement
Spatial discretization | 2nd-order, finite volume
Grid size              | 1 mm base grid in cylinder; fixed embedding in the valve seat region (0.5 mm), piston liner (0.5 mm) and spark plug region (0.25 mm)
Peak cell count        | ~1.1 million
Time discretization    | PISO (Pressure Implicit with Splitting of Operators)
Subgrid-scale model    | 1-Eq. Dynamic Structure
Number of cores        | 48

40 consecutive cycles are simulated.
SENSITIVITY ANALYSIS
 Uncertainty arises from different sources of error:
• the data and the parameter-estimation procedure
• alternative model structures
 These uncertainties are propagated through the model for uncertainty analysis
 Their relative importance is quantified via sensitivity analysis, which helps to:
• design the experiments
• obtain more accurate predictions
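A minimal sketch of a normalized sensitivity index over sampled inputs; the slides do not specify the estimator used, so a simple correlation-based index on a toy model is shown here purely for illustration.

```python
# Normalized sensitivity index: |corr(input_j, output)| scaled to sum to 1.
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    # Toy stand-in for a CFD target (e.g., CA10) driven mostly by input 0
    return 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(len(X))

X = rng.random((2048, 2))
y = model(X)
raw = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(2)])
normalized = raw / raw.sum()
print(normalized.round(2))  # input 0 carries most of the sensitivity
```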
Transforming Private 5G NetworksTransforming Private 5G Networks
Transforming Private 5G Networks
inside-BigData.com
 
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
inside-BigData.com
 
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
inside-BigData.com
 
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
inside-BigData.com
 
HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networks
inside-BigData.com
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
inside-BigData.com
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecasts
inside-BigData.com
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Update
inside-BigData.com
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19
inside-BigData.com
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
inside-BigData.com
 
State of ARM-based HPC
State of ARM-based HPCState of ARM-based HPC
State of ARM-based HPC
inside-BigData.com
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Acceleration
inside-BigData.com
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
inside-BigData.com
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Era
inside-BigData.com
 
Introducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi ClusterIntroducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi Cluster
inside-BigData.com
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
inside-BigData.com
 
Data Parallel Deep Learning
Data Parallel Deep LearningData Parallel Deep Learning
Data Parallel Deep Learning
inside-BigData.com
 
Making Supernovae with Jets
Making Supernovae with JetsMaking Supernovae with Jets
Making Supernovae with Jets
inside-BigData.com
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolvers
inside-BigData.com
 

More from inside-BigData.com (20)

Major Market Shifts in IT
Major Market Shifts in ITMajor Market Shifts in IT
Major Market Shifts in IT
 
Transforming Private 5G Networks
Transforming Private 5G NetworksTransforming Private 5G Networks
Transforming Private 5G Networks
 
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
 
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
 
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
 
HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networks
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecasts
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Update
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
 
State of ARM-based HPC
State of ARM-based HPCState of ARM-based HPC
State of ARM-based HPC
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Acceleration
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Era
 
Introducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi ClusterIntroducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi Cluster
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
 
Data Parallel Deep Learning
Data Parallel Deep LearningData Parallel Deep Learning
Data Parallel Deep Learning
 
Making Supernovae with Jets
Making Supernovae with JetsMaking Supernovae with Jets
Making Supernovae with Jets
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolvers
 

Recently uploaded

zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
Alex Pruden
 
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
saastr
 
Fueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte WebinarFueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte Webinar
Zilliz
 
Energy Efficient Video Encoding for Cloud and Edge Computing Instances
Energy Efficient Video Encoding for Cloud and Edge Computing InstancesEnergy Efficient Video Encoding for Cloud and Edge Computing Instances
Energy Efficient Video Encoding for Cloud and Edge Computing Instances
Alpen-Adria-Universität
 
Introduction of Cybersecurity with OSS at Code Europe 2024
Introduction of Cybersecurity with OSS  at Code Europe 2024Introduction of Cybersecurity with OSS  at Code Europe 2024
Introduction of Cybersecurity with OSS at Code Europe 2024
Hiroshi SHIBATA
 
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge GraphGraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
Neo4j
 
Apps Break Data
Apps Break DataApps Break Data
Apps Break Data
Ivo Velitchkov
 
JavaLand 2024: Application Development Green Masterplan
JavaLand 2024: Application Development Green MasterplanJavaLand 2024: Application Development Green Masterplan
JavaLand 2024: Application Development Green Masterplan
Miro Wengner
 
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
Edge AI and Vision Alliance
 
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk
"Frontline Battles with DDoS: Best practices and Lessons Learned",  Igor Ivaniuk"Frontline Battles with DDoS: Best practices and Lessons Learned",  Igor Ivaniuk
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk
Fwdays
 
Astute Business Solutions | Oracle Cloud Partner |
Astute Business Solutions | Oracle Cloud Partner |Astute Business Solutions | Oracle Cloud Partner |
Astute Business Solutions | Oracle Cloud Partner |
AstuteBusiness
 
Dandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity serverDandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity server
Antonios Katsarakis
 
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptxNordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptx
MichaelKnudsen27
 
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
Jason Yip
 
AppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSFAppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSF
Ajin Abraham
 
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
Pitangent Analytics & Technology Solutions Pvt. Ltd
 
June Patch Tuesday
June Patch TuesdayJune Patch Tuesday
June Patch Tuesday
Ivanti
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
akankshawande
 
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframeDigital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
Precisely
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Safe Software
 

Recently uploaded (20)

zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...
 
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
9 CEO's who hit $100m ARR Share Their Top Growth Tactics Nathan Latka, Founde...
 
Fueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte WebinarFueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte Webinar
 
Energy Efficient Video Encoding for Cloud and Edge Computing Instances
Energy Efficient Video Encoding for Cloud and Edge Computing InstancesEnergy Efficient Video Encoding for Cloud and Edge Computing Instances
Energy Efficient Video Encoding for Cloud and Edge Computing Instances
 
Introduction of Cybersecurity with OSS at Code Europe 2024
Introduction of Cybersecurity with OSS  at Code Europe 2024Introduction of Cybersecurity with OSS  at Code Europe 2024
Introduction of Cybersecurity with OSS at Code Europe 2024
 
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge GraphGraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
 
Apps Break Data
Apps Break DataApps Break Data
Apps Break Data
 
JavaLand 2024: Application Development Green Masterplan
JavaLand 2024: Application Development Green MasterplanJavaLand 2024: Application Development Green Masterplan
JavaLand 2024: Application Development Green Masterplan
 
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
 
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk
"Frontline Battles with DDoS: Best practices and Lessons Learned",  Igor Ivaniuk"Frontline Battles with DDoS: Best practices and Lessons Learned",  Igor Ivaniuk
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk
 
Astute Business Solutions | Oracle Cloud Partner |
Astute Business Solutions | Oracle Cloud Partner |Astute Business Solutions | Oracle Cloud Partner |
Astute Business Solutions | Oracle Cloud Partner |
 
Dandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity serverDandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity server
 
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptxNordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptx
 
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...
 
AppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSFAppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSF
 
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
Crafting Excellence: A Comprehensive Guide to iOS Mobile App Development Serv...
 
June Patch Tuesday
June Patch TuesdayJune Patch Tuesday
June Patch Tuesday
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
 
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframeDigital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
 

HPC Accelerating Combustion Engine Design

  • 1. WE START WITH YES. HPC ACCELERATING COMBUSTION ENGINE DESIGN. 18th April, 2017, 65th HPC USER FORUM. SIBENDU SOM, Lead Computational Scientist, Computational Fellow, Argonne National Laboratory
  • 2. OUTLINE
    Introduction
    Role of M & S For Engine Design
    Need for Exascale for Engine Combustion Simulations
    Capacity & Capability computing
    – Integration with workflow manager “Swift” for high throughput
    Sensitivity Analysis and Uncertainty Quantification for Engine Combustion Simulations
    Example uses of HPC for Industry
    Pioneering the use of Machine Learning tools in Engine Simulations
    ROI (Cluster vs. Supercomputer)
    Argonne’s VERIFI program for working with Industry
    Take Home Message!
  • 3. THE EDISONIAN APPROACH IS GREAT, BUT 21ST CENTURY PROBLEMS REQUIRE MORE
    New engine design has historically been performed by extensive hardware testing, which is too slow, too hard, and too expensive, and is not guaranteed to reach an optimum solution.
     Computer simulations provide a potentially transformative new strategy
     Multi-objective optimizations can identify the "best" solution
     Uncertainty and sensitivity analysis can identify paths to improve the optimizations and establish safety margins
     Simulations can complement experiments, especially for problems related to design and optimization
    HIGH-FIDELITY MODELS ACROSS THE DESIGN SPACE vs. HIGH-THROUGHPUT SIMULATIONS TO EXPLORE THE FULL DESIGN SPACE
  • 4. INTERNAL COMBUSTION ENGINES
    Challenges: in-nozzle and spray physics, cyclic variability (need for high-fidelity turbulence models), details of combustion chemistry for different fuels, grid convergence, NOx and soot modeling, after-treatment, material issues (conjugate heat transfer), moving boundaries, Lagrangian parcels, adaptive meshing, small time steps, densely embedded mesh
  • 5. ROLE OF MODELING & SIMULATIONS FOR ENGINES
    Traditional engine simulations involve:
     Unresolved nozzle flow
     Simplified combustion models
     Coarse mesh => grid dependence
     Simplified turbulence models
     Poor load-balancing algorithms
    Transformational approach:
     Detailed chemistry-based combustion models
     Fine mesh => grid convergence
     High-fidelity turbulence models: LES-based
     Two-phase, physics-based fuel spray model
     In-nozzle-flow models
     Scaling on Exascale systems
    Towards predictive simulation of the internal combustion engine. Reality: extensive tuning to match experimental data.
  • 6. Dream: Full in-silico engine and fuel co-design with quantified uncertainties that dramatically reduces the need for physical prototyping, allows rapid characterization and testing of novel engine and fuel designs, and reduces design costs and time-to-market of new concepts.
    Reality: We can perform simplified simulations on engine and fuel components. The uncertainties are poorly understood.
    Transformation: Exploit emerging computational capabilities through the development of automated, highly parallelized CFD and combustion chemistry codes with fully integrated uncertainty quantification.
  • 7. REVOLUTIONIZING THE CO-DESIGN OF ENGINES AND FUELS IS BIGGER THAN AN EXASCALE PROBLEM
    According to John Deur, Director of Engine Research at Cummins Inc., the following levels of speedup are needed to significantly impact today’s engine development*:
    Uncertainty Quantification across:
    [10x] 360 degree cylinder geometry
    [10x] multiple cycle variations
    [10x] more accurate turbulence model (LES)
    [10x] accurate spray dynamics
    [50x] detailed chemical kinetics for real transportation fuels
    TOTAL speedup needs are 500,000 times TODAY’S standard (today’s industry standard is 64 cores with 24-hour turnaround)
    OUR PROBLEM IS EXASCALE => 30 million cores for 24-hour turnaround!
    * DOE Engine Simulation Roadmap Workshop, August 18, 2014
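The multipliers on this slide compose into the quoted target; as a quick sanity check (a sketch, with the factor labels copied from the slide):

```python
# Multiply the slide's speedup factors to recover the 500,000x target.
factors = {
    "360 degree cylinder geometry": 10,
    "multiple cycle variations": 10,
    "more accurate turbulence model (LES)": 10,
    "accurate spray dynamics": 10,
    "detailed chemical kinetics for real transportation fuels": 50,
}

total_speedup = 1
for multiplier in factors.values():
    total_speedup *= multiplier
print(total_speedup)        # 500000, i.e. 500,000x today's standard

# Today's industry standard is 64 cores with a 24-hour turnaround,
# so the same turnaround at full fidelity needs roughly:
cores_needed = 64 * total_speedup
print(cores_needed)         # 32000000, in line with "~30 million cores"
```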
  • 8. COMPUTATIONAL RESOURCES
    Clusters:
    Fusion Cluster (2013-15): 320 compute nodes, 2560 compute processors, 12.5 terabytes memory, 500 terabytes disk, 25.9 teraflops peak
    Blues Cluster (2016-18): 310 compute nodes, 4960 compute processors, 107.8 teraflops peak
    Super-Computer:
    Next-gen super-computer in 2019: 50 racks, 180 petaflops. 0.18 Exaflops: still not Exascale!
  • 9. EXAMPLES OF CURRENT STATE-OF-THE-ART FOR ENGINE M & S
    HIGH-FIDELITY MODELS ACROSS THE DESIGN SPACE (Capability Computing): few large high-fidelity simulations on 1000s of processors, run for 2-3 weeks (figure: “string type” cavitation across nozzle holes #1-#3)
    HIGH-THROUGHPUT SIMULATIONS TO EXPLORE THE FULL DESIGN SPACE (Capacity Computing): 1000s of medium-fidelity simulations on ~10k processors, run for 2-3 weeks
  • 10. ENABLING HIGH-THROUGHPUT COMPUTING ON MIRA*
    Typically, engines are designed and operating conditions are optimized based on experimental studies and engineers’ intuition. The full design space may not be explored, and engineering intuition may not work for a completely new concept being designed from scratch.
    Goal: Develop capabilities to use leadership-class machines (e.g., Mira) for high-throughput computing to run ~10k simulations in two weeks.
    Approach:
     Port the engine simulation code (CONVERGE) to Mira
     Profile the code to identify computational bottlenecks and remove them
     Incorporate advanced load-balancing schemes
     Improve inter-processor communication
     Improve I/O with MPI
    Workflow manager (Swift), handling case 1 through case 10,000:
     Job-launching scripts for optimum throughput
     Automated restarts
     Error handling
     Pre/post processing
    768k cores on Mira allow for thousands of high-fidelity simulations. Example: Gasoline Compression Ignition engine from S. Ciatti (at Argonne): ~0.5 million cells; ~2k simulations run in 4 days on Mira; on a cluster it may take up to a month. (Figure: grey lines are individual cycles; black line is the average of 2048 cycles.)
    * ~60 million core-hours provided through the ALCC program of ASCR
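The actual workflow used Swift on Mira; the launch/restart/error-handling duties it lists can be sketched in a few lines of pure Python (names like run_case and run_ensemble are illustrative stand-ins, not the real workflow's API):

```python
import random

def run_case(case_id, attempt):
    """Stand-in for submitting one simulation case. It fails at random to
    exercise the restart path; a real launcher would call the scheduler
    and check the solver's exit status."""
    rng = random.Random(case_id * 1000 + attempt)
    return rng.random() > 0.2       # ~80% chance a given attempt succeeds

def run_ensemble(case_ids, max_retries=3):
    """Launch every case and automatically restart failures, mirroring
    the job-launching / automated-restart / error-handling duties
    listed for the workflow manager."""
    completed, failed = [], []
    for case_id in case_ids:
        for attempt in range(max_retries):
            if run_case(case_id, attempt):
                completed.append(case_id)
                break
        else:                       # all retries exhausted
            failed.append(case_id)
    return completed, failed

done, lost = run_ensemble(range(1, 101))
print(len(done), len(lost))
```

In the real campaign the inner call submits a CONVERGE job and the loop runs over ~10,000 cases in parallel waves rather than serially.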
  • 11. GLOBAL SENSITIVITY ANALYSIS ON CFD SIMULATIONS OF GASOLINE COMPRESSION IGNITION (GCI)*
    Simulation details:
     48 species, 152 reactions (Liu et al. mechanism); 87% iso-octane, 13% n-heptane
     Peak cell count: 0.5 million cells
     RANS simulations on a sector mesh (1.9 L GM diesel engine)
     Wall-clock time: 6 hours on 64 processors of a cluster; 30-40 hours on 64 processors on Mira (note: Mira is ~6x slower than the cluster on a per-processor basis)
     2 operating conditions simulated: idle, low-load
     ~1024 simulations per load condition
     34 uncertain inputs identified
    Targets of interest:
     Combustion characteristics (CA10, CA50)
     Emissions (HC, CO, NOx, soot)
     Performance (IMEP, PRR)
    * Experimental campaign led by Dr. S. Ciatti @ ANL
  • 12. INPUT UNCERTAINTIES AND RANGE OF PERTURBATION
    Boundary conditions / fuel properties:
    Parameter               Mean     Min.     Max.
    T_piston                400      385      415
    T_cylinder              380      370      390
    T_cylinder head         400      385      415
    rpm                     1500     1495     1505
    swirl ratio             2.2      2        2.4
    tkei                    1.6      1.2      2
    length scale (mm)       8.2      3        13
    EGR%                    0.1      0.08     0.12
    tempi                   389.5    386.5    392.5
    presi (pa)              138000   135000   140000
    critical temp (iso)     540      530      550
    density                 1        0.95     1.05
    heat of vaporization    1        0.9      1.1
    vapor pressure          1        0.9      1.1
    viscosity               1        0.7      1.3
    diameter_noz            141      139      143
    angle_xz_noz            74       72       76
    discharge_coeff         0.8      0.75     0.85
    start_inject            -21      -21.2    -20.8
    duration of injection   9        8.8      9.2
    mass injected (mg)      1.383    1.314    1.452
    temp_injected           353      348      358
    Model constants:
    Parameter               Mean     Min.     Max.
    ceps1                   1.42     1.35     1.5
    ceps2                   1.68     1.6      1.76
    c_s                     0        0        1.5
    c_ps                    0.03     0        0.16
    schmidt                 0.78     0.6      0.9
    prandtl                 0.9      0.85     0.95
    Balpha                  0.6      0.55     0.65
    kh_cnst2                7        5        9
    rt_cnst2b               1        0.5      2
    cnst3rt                 0.1      0.1      1
    cone_noz                9        7        12
    new parcel cut off      0.05     0.03     0.07
     34 inputs perturbed for each operating condition (idle, low-load)
     These cases have the values of their N inputs distributed in an n-dimensional hypercube
     Factor of ~30 per input, i.e., 1024 simulations per operating condition
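Distributing the perturbed inputs across an n-dimensional hypercube can be sketched with a minimal Latin-hypercube-style sampler (a sketch, not the tool actually used; the two ranges are examples copied from the table above):

```python
import random

def latin_hypercube(ranges, n_samples, seed=0):
    """One stratified point per interval for each input, shuffled so the
    columns pair up at random: every input's full min-max range is
    covered evenly across the sample set."""
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in ranges.items():
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        columns[name] = pts
    return [{name: col[i] for name, col in columns.items()}
            for i in range(n_samples)]

# Two of the 34 perturbed inputs, with min/max taken from the table
ranges = {"T_piston": (385.0, 415.0), "swirl ratio": (2.0, 2.4)}
samples = latin_hypercube(ranges, 1024)   # 1024 cases, as on the slide
print(len(samples))
```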
  • 13. SENSITIVE PARAMETERS AT IDLE CONDITIONS
    (Four bar charts rank the inputs by normalized sensitivity index for the targets CA10, NOx, HC, and CO. Recurring top-ranked inputs include IVC pressure and temperature, ceps1, ceps2, turbulence length scale, inclusion angle, piston temperature, swirl ratio, Schmidt and Prandtl numbers, fuel mass, injection duration, and discharge coefficient.)
     Different input parameters have different sensitivities towards various targets
     GSA can be used as a screening tool to reduce the design space, and then a DoE can be performed
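The idea behind a normalized, variance-based sensitivity index can be illustrated with a crude binned correlation ratio on synthetic data (this is only a toy illustration of the concept, not the GSA method actually used):

```python
import random

def sensitivity_index(xs, ys, bins=10):
    """Fraction of the target's variance explained by the bin means of
    one input: a crude first-order, variance-based sensitivity index."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    mean_y = sum(ys) / n
    total_var = sum((y - mean_y) ** 2 for y in ys) / n
    explained = 0.0
    for b in range(bins):
        chunk = [y for _, y in pairs[b * n // bins:(b + 1) * n // bins]]
        if chunk:
            m = sum(chunk) / len(chunk)
            explained += len(chunk) * (m - mean_y) ** 2
    return (explained / n) / total_var

rng = random.Random(1)
x1 = [rng.random() for _ in range(2000)]   # strongly drives the target
x2 = [rng.random() for _ in range(2000)]   # nearly irrelevant
y = [3 * a + 0.1 * b + rng.gauss(0, 0.2) for a, b in zip(x1, x2)]
s1, s2 = sensitivity_index(x1, y), sensitivity_index(x2, y)
print(round(s1, 2), round(s2, 2))          # s1 large, s2 near zero
```

Ranking the 34 real inputs by such indices is exactly the screening step the slide describes: drop the insensitive inputs, then run a DoE on the survivors.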
  • 14. HPC ENABLING OPTIMIZATION OF GCI INJECTION TIMING (visualization courtesy J. Insley, ALCF, ANL)
  • 15. IDENTIFYING STOCHASTIC EVENTS IN ENGINES THROUGH DATA ANALYTIC TOOLS
     Cycle-to-cycle variation (CCV) is a concern for SI engine operation
     We are pioneering the use of ML techniques to identify the key factors determining whether a cycle is a “high” cycle or a “low” cycle by mining LES engine data
     Standard regression techniques can be challenged by complex interactions of factors
     Random Forest ranks factors by their importance in affecting peak cylinder pressure
     ML analysis (Random Forest) tells us that cycles that have high skewness of the flame in the Z direction, and also eccentricity of the flame center of mass (COM), tend to be low cycles
    Collaboration with P. Balaprakash at MCS, Argonne
    * Expt. engine data from Politecnico di Torino; LES funded by GTI and Polito
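The actual analysis used Random Forests; the core quantity a forest aggregates into its factor ranking, the variance reduction of the best single split per factor, can be sketched on toy data (feature names echo the quantities mentioned above but the data and coefficients are invented):

```python
import random

def best_split_gain(xs, ys):
    """Largest reduction in target variance from one threshold split on a
    factor: the per-node score a Random Forest accumulates into its
    feature-importance ranking."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    def var(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)
    base = var([y for _, y in pairs])
    best = 0.0
    for cut in range(1, n):
        left = [y for _, y in pairs[:cut]]
        right = [y for _, y in pairs[cut:]]
        gain = base - (len(left) * var(left) + len(right) * var(right)) / n
        best = max(best, gain)
    return best

rng = random.Random(7)
n = 400
skew_z = [rng.random() for _ in range(n)]    # flame skewness in Z
com_ecc = [rng.random() for _ in range(n)]   # flame COM eccentricity
noise = [rng.random() for _ in range(n)]     # irrelevant factor
# Toy "peak pressure": low when skewness is high, mildly tied to eccentricity
p_max = [60 - 20 * s - 5 * e + rng.gauss(0, 2)
         for s, e in zip(skew_z, com_ecc)]
gains = {name: best_split_gain(col, p_max)
         for name, col in [("skew_z", skew_z),
                           ("com_ecc", com_ecc),
                           ("noise", noise)]}
top = max(gains, key=gains.get)
print(top)
```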
HIGH THROUGHPUT COMPUTING ON MIRA FOR CCV
 An LES-based turbulence model is employed to understand cyclic variability during the gas exchange process
 On a cluster, 50 consecutive cycles were simulated serially, which took 2.5 months
 On the Mira supercomputer, 128 parallel simulations were run simultaneously for 2 consecutive cycles each, on 64 cores per simulation
 The wallclock time is ~60 hours per cycle; including wait time, simulating 2 cycles across the 128 parallel simulations took approximately 10 days
 Net result: 50 cycles in 2.5 months (cluster) vs. 256 cycles in 10 days (Mira)
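The ensemble pattern above (many independent cycle simulations launched concurrently) can be sketched with Python's standard-library executor. On Mira this was driven by the Swift workflow manager over 64-core CFD jobs; here `run_cycle` is a hypothetical stand-in that a real workflow would replace with a solver invocation.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_cycle(case_id, n_cycles=2):
    """Stand-in for one engine-cycle simulation (a real workflow would
    launch the CFD solver here, e.g. via subprocess or a scheduler API)."""
    return {"case": case_id, "cycles_done": n_cycles}

def run_ensemble(n_cases=128, max_workers=8):
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_cycle, i) for i in range(n_cases)]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

results = run_ensemble()
print(len(results), sum(r["cycles_done"] for r in results))  # 128 cases, 256 cycles
```

The key point is that the 128 cases are independent, so wall time is set by the slowest member of the wave, not by the sum of all cases.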
ROI: COST BENEFIT ANALYSIS (12,000 cases, 64 cores/case)

                                  Mira       cluster (best case)  cluster (worst case)
cases per ensemble                12,000     16                   3
cores per ensemble                768,000    1,024                192
walltime per ensemble             36 hr      6 hr                 6 hr
time per ensemble (incl. queue)   3-4 days   6 hr (no queue)      6 hr (no queue)
number of waves needed            1          750                  4,000
time to science                   4 days     6 months             3 years
number of core-hours              27 mil     4.6 mil              4.6 mil
cost per core-hr                  1 cent     5 cents              5 cents
total core-hr cost                $270K      $230K                $230K
design time                       1X         45X                  250X

Total cost = core-hour cost + design-time cost
 Supercomputer = $270K + 4-5 days
 Cluster (best case) = $230K + 6 months
 Cluster (worst case) = $230K + 3 years
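The table's figures follow from simple arithmetic on the slide's stated inputs (12,000 cases, 64 cores/case, walltimes and per-core-hour rates as given; the slide rounds Mira's $276K to $270K):

```python
cases, cores_per_case = 12_000, 64

def campaign(walltime_hr, concurrent_cases, cost_per_core_hr):
    """Core-hours, dollar cost, and number of waves for one campaign."""
    core_hours = cases * cores_per_case * walltime_hr
    waves = cases // concurrent_cases
    return core_hours, core_hours * cost_per_core_hr, waves

mira  = campaign(36, 12_000, 0.01)  # whole ensemble fits in one wave
best  = campaign(6, 16, 0.05)       # 16 concurrent cases on the cluster
worst = campaign(6, 3, 0.05)        # only 3 concurrent cases

print(mira)   # (27,648,000 core-hr, ~$276K, 1 wave)
print(best)   # (4,608,000 core-hr, ~$230K, 750 waves)
print(worst)  # (4,608,000 core-hr, ~$230K, 4000 waves)
```

Note the asymmetry: the supercomputer burns ~6x the core-hours (36 hr walltime per case vs. 6 hr) but, at 1 cent/core-hr and one wave, costs about the same in dollars while cutting time-to-science from months or years to days.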
http://www.verifi.anl.gov/
3rd VERIFI workshop in November 2017: Predicting Cyclic Variability through Simulations
LEVERAGING COLLABORATIONS BETWEEN DIFFERENT DOE OFFICES TO AID INDUSTRY WORK
DOE Office of Science | DOE EERE: VTP | Industry
Extensive collaborations between Argonne (basic and applied areas) and industry aid the development of engine simulation tools for solving real-world problems:
 Automotive OEMs
 Heavy-duty OEMs
 Software vendors
 Oil and energy companies
CORE CAPABILITIES & COLLABORATIONS
Large variety of engine platforms and fuels:
• Light- and heavy-duty engines
• CI – SI – LTC – Dual-fuel
• Gasoline – Diesel – Biofuels
• Gasoline Compression Ignition
Model development:
• Nozzle flow for liquid and gaseous fuels
• Diesel sprays
• Gasoline sprays
• Detailed chemistry, mechanism reduction
• Large Eddy Simulations
• Spark modeling
• Parallel approach to capture cyclic variability
• Turbulence-chemistry interaction
• Optimization
• Uncertainty analysis
• Conjugate heat transfer
• Lower-order models based on high-fidelity CFD
Computational resources:
• Clusters and supercomputer facility
HPC: ENABLER FOR SIMULATION-BASED ENGINE DESIGN
 Use of high spatial and temporal resolution
 Robust turbulence models
 Use of detailed-chemistry-based combustion models
 Solving "one-of-a-kind" problems
Benefits:
 Unprecedented insights into the combustion process
 Grid-convergent results => increased predictive capability
 Modified "best practices" in industry
 Enables the use of next-generation computational architectures
HPC Innovation Award by IDC, in collaboration with Caterpillar and Convergent Science Inc., 2014
FLC Award for Excellence in Technology Transfer to Convergent Science Inc., 2015
ACKNOWLEDGEMENTS
Sponsors: U.S. Department of Energy, Office of Vehicle Technology, under the management of Gurpreet Singh, Leo Breton, and Kevin Stork
We gratefully acknowledge the computing resources provided by the ASCR Leadership Computing Challenge (ALCC) award of 60 million core-hours on the Mira supercomputer at the ALCF at Argonne National Laboratory
Argonne National Laboratory: J. Kodavasal, K. Saha, M. Ameen, N. Van Dam, P. Kundu, R. Torelli, P. Pal; A. Wagner (Chemical Science and Engineering); K. Harms, J. Insley, M. Garcia, R. Bair (Leadership Computing Facility)
Past: Y. Pei, Q. Xue, Z. Wang, Prof. Michele Battistoni (U. Perugia)
Convergent Science Inc.; GM R&D
THE ENERGY OUTLOOK
Courtesy: International Energy Agency, 2013
 The transportation sector relies heavily on oil and gas today
– Forecasts for 2030 and 2050 show that more than 50% of the world's total energy will still come from fossil fuels
Let us burn these fossil fuels more efficiently and cleanly!
GOVERNING EQUATIONS

Continuity:
$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = s$$

Momentum:
$$\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j} = -\frac{\partial P}{\partial x_i} + \frac{\partial \sigma_{ij}}{\partial x_j} + s_i$$

Energy:
$$\frac{\partial (\rho e)}{\partial t} + \frac{\partial (\rho u_i e)}{\partial x_i} = -P\frac{\partial u_i}{\partial x_i} + \sigma_{ij}\frac{\partial u_i}{\partial x_j} + \frac{\partial}{\partial x_i}\!\left(k \frac{\partial T}{\partial x_i}\right) + \frac{\partial}{\partial x_i}\!\left(\rho \sum_m D\, h_m \frac{\partial Y_m}{\partial x_i}\right) + s$$

Species:
$$\frac{\partial \rho_m}{\partial t} + \frac{\partial (\rho_m u_i)}{\partial x_i} = \frac{\partial}{\partial x_i}\!\left(\rho D \frac{\partial Y_m}{\partial x_i}\right) + s_m$$

where ρ is the density of the mixture, ρ_m is the density of species m, σ_ij is the stress tensor, k is the thermal conductivity, Y_m is the mass fraction of species m, D is the mass diffusion coefficient, h_m is the species enthalpy, e is the specific internal energy, and s is the respective source term.

Adaptive meshing — a boon for engine simulations, a bane for high-performance computing:
• AMR provides mesh resolution anywhere in the flow field, as needed
• Load distribution is extremely challenging with embedding levels higher than 4, especially with moving boundaries (piston, valves, needle, etc.)
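The load-balancing difficulty at high embedding levels follows from simple geometry: each embedding level halves the cell size, so in 3D one base cell spawns 2^3 = 8 children per level. A quick sketch of this growth (illustrative only; actual CONVERGE cell counts depend on where embedding is applied):

```python
def embedded_cells(base_cells, embed_level):
    """Cells produced by uniformly embedding `base_cells` to `embed_level`
    in 3D, where each level halves the cell size (8 children per cell)."""
    return base_cells * 8 ** embed_level

for level in range(1, 6):
    print(level, embedded_cells(1, level))
# One base cell becomes 4096 cells at level 4 and 32768 at level 5
```

Because these dense patches appear and disappear as the mesh adapts around moving boundaries, the work per rank swings by orders of magnitude, which is exactly why static domain decomposition breaks down beyond embed level 4.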
CFD DOMAIN, BCS AND MESH ARRANGEMENT
[Figure: CFD domain showing intake and exhaust boundaries, with mesh sizes of 1 mm and 0.25 mm indicated]
1D GT-Power modeling results are used to define boundary conditions for the 3D CFD simulation

Code                     CONVERGE v2.1
Computational grid       3D, structured, without Adaptive Mesh Refinement
Spatial discretization   2nd-order, finite volume
Grid size                Base grid: 1 mm in cylinder; fixed embedding in the valve seat region (0.5 mm), piston liner (0.5 mm), and spark plug region (0.25 mm)
Peak cell count          ~1.1 million
Time discretization      PISO (Pressure Implicit with Splitting of Operators)
Subgrid-scale model      1-Eq. Dynamic Structure
Number of cores          48

40 consecutive cycles are simulated
SENSITIVITY ANALYSIS
 Uncertainty arises from different sources — errors in:
• the data
• the parameter estimation procedure
• alternative model structures
 These uncertainties are propagated through the model for uncertainty analysis
 Their relative importance is quantified via sensitivity analysis, which helps to:
• Design the experiments
• Obtain more accurate predictions
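The propagation step above can be sketched with a Monte Carlo sample: draw the uncertain input from its assumed distribution, push each draw through the model, and summarize the spread of the output. The Arrhenius-style ignition-delay surrogate and all numbers below are hypothetical, chosen only to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical ignition-delay surrogate: tau = A * exp(Ta / T)
A, Ta = 1.0e-6, 7_000.0            # pre-factor [s] and activation temperature [K]
T = rng.normal(900.0, 15.0, n)     # uncertain in-cylinder temperature [K]
tau = A * np.exp(Ta / T)           # propagate every sample through the model

# Summarize the propagated output uncertainty
mean = tau.mean()
lo, hi = np.percentile(tau, [2.5, 97.5])
print(f"mean = {mean:.3e} s, 95% interval = [{lo:.3e}, {hi:.3e}] s")
```

Repeating this with one input varied at a time (or with Sobol-style sampling) yields the sensitivity rankings used to decide which experiments would most reduce the prediction uncertainty.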