MT10: Give your organization better, faster insights and answers with High Performance Computing (HPC)
HPC is imperative to research, industry and security.
“HPC systems… are vital to the Nation’s interests in science, medicine, engineering, technology, and industry. The NSCI will spur the creation and deployment of computing technology at the leading edge, helping to advance … economic competitiveness, scientific discovery, and national security.” (July 29, 2015)
Now a U.S. top priority:
National Strategic
Computing Initiative (NSCI)
HPC is critical for BIG problems that require faster,
better answers via integrated, scalable solutions.
• Modeling and simulation: traditional parallel computing clusters
• High throughput computing: large ensemble problems, analyzing multiple data sets
• Big data analytics: fast answers and insights for data-intensive problems
• Cloud computing: emerging usage model to simplify HPC
Scalable, end-to-end approach that enables faster insights
and competitive advantage
Lifecycle services • Modular and reference designs • Flexible architectures • HPC domain expertise
• Faster time to discovery
End-to-end services maximize your HPC investment, from financing to configuration, installation, cluster management and recycling.
• Integrated systems
Engineered and packaged solutions are fine-tuned for use cases.
• Standards-based building blocks
Modular architecture grows capacity as needed.
• 31 years of computer industry experience
Dell is a proven HPC partner, offering standard, open solutions from
Dell and strategic partners.
• Optimized portfolio of industry-leading tools
Augments and adapts to customers’ operations.
• Dell HPC engineering and research team
Focus on your business goals.
• Best practices, performance characterizations
Incorporate Dell research into each HPC installation.
Select, deploy, manage and scale your cluster with a single source of support.
Get exactly the help you need.
Rack Integration Services (Dell HPC Merge Center)
15 of the world’s top 500 reported supercomputers were built at this facility in Austin. Five were interconnected onsite.

Deployment Services
• Expand IT capabilities to scale with your business.
• Speed up deployments and improve productivity.
• Optimize performance with flexible, customized services.

Remote Cluster Management Services
Turnkey outsourced system management service that provides secure remote system monitoring, administration and support to help increase your system utilization and uptime.

Dell Financial Services
Dell Financial Services provides flexible payment options so you can get more compute power for your money. By providing the financing, we make it easier for you to upgrade and refresh.
We lead and innovate in HPC, so you can focus
on discoveries and impact
Dell will help more people in research, industry and
government use HPC solutions to make more innovations
and discoveries that improve society and advance human
understanding than any other HPC systems vendor in the
world.
HPC use case:
manufacturing
Challenges
• Improve engineering productivity.
• Reduce design time and costs.
• Enable new breakthrough research.
• Manage third party engineering applications
for a global workforce.
Results
The combination of a well-designed, balanced system with supported software and services is allowing the manufacturer to deliver ground-breaking HPC resources to its engineering community.
HPC use case:
healthcare/life sciences
Challenge
• Achieve remission in “incurable” pediatric
cancer patients by developing customized
treatments fast enough to help patients with
very limited intervention time frames.
Results
• Reduce genomic sequencing time from weeks
to days.
• Deliver individualized therapies in less than a
week.
“We’ve gone from treating every child exactly the same way
to being able to develop individualized therapies. We’re now
able to stop the progression of cancer in 60 percent of our
patients, and today some are cancer-free.”
— Dr. Giselle Sholler, Chair of the NMTRC and Director of the Pediatric
Oncology Research Program at Helen DeVos Children’s Hospital
HPC use case:
financial services
Challenge
• Speed up financial analyses and stock
market trading analyses that use floating
point applications.
• Process huge volumes of transactional
data on a daily basis.
Results
• A powerful cluster based on PowerEdge C6220 servers processes all transactional data and real-time interest rate information. Processor speed is critical.
• The business is expanding the cluster with additional GPU accelerators to speed up modeling and Monte Carlo simulations.
HPC use case:
energy/oil and gas
Challenge
• Locate profitable new deep water oil and gas
fields globally.
Results
GPU-based HPC system enables oil and gas
companies to make intelligent decisions prior to
beginning costly deep sea projects.
• CPU processing is handled via
Schlumberger Omega software with many
different algorithms.
• GPU processing is used for specific algorithms such as Kirchhoff Depth Migration and Reverse Time Migration.
Dedicated to research and development of HPC solutions
Dell HPC Solutions Engineering Lab
Meet real-life, workload-specific challenges through collaboration with the global HPC research community:
• Dell XL Consortium
• Dell | Cambridge HPC Solution Centre
• National Center for Supercomputing Applications (NCSA) — University of Illinois Private Sector Program
• San Diego Supercomputer Center (SDSC)
• University of Pisa
• University of Texas — Texas Advanced Computing Center (TACC)
Gateways to Discovery:
Cyberinfrastructure for the
Long Tail of Science
Shawn Strande
Project Manager & Co-PI; Deputy Director, SDSC
Overview of talk
1. Motivation
2. Key features
3. Architecture
4. Highlights of research results
“Comet is all about providing
high performance
computing to a much larger
research community – what
we call ‘HPC for the 99
percent’ – and serving as a
gateway to discovery.”
– Mike Norman,
SDSC Director
Call to serve the “Long Tail of Science”
NSF’s solicitation 13-528 “High Performance Computing
System Acquisition: Building a More Inclusive Computing
Environment for Science and Engineering”
• Expand the use of high-end resources to a much larger and more diverse community.
• Support the entire spectrum of NSF
communities.
• Promote a more comprehensive and
balanced portfolio.
• Include research communities that are not
users of traditional HPC systems.
Data extracted from NSF’s XDMoD database informed a design that
reflects the way researchers actually use HPC systems.
• 99% of jobs run on NSF’s HPC resources in 2012 used <2,048 cores.
• Those jobs consumed >50% of the total core-hours across NSF resources.
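A minimal sketch of this kind of job-mix analysis, assuming a hypothetical CSV export of job records with cores and wall_hours columns (the real XDMoD export format may differ):

```python
# Sketch: fraction of "long tail" jobs and their core-hour share.
# Assumes a hypothetical export "jobs_2012.csv" with columns cores,wall_hours.
import csv

THRESHOLD = 2048  # core count below which a job counts as "long tail"

small_jobs = total_jobs = 0
small_core_hours = total_core_hours = 0.0

with open("jobs_2012.csv", newline="") as f:
    for row in csv.DictReader(f):
        cores = int(row["cores"])
        core_hours = cores * float(row["wall_hours"])
        total_jobs += 1
        total_core_hours += core_hours
        if cores < THRESHOLD:
            small_jobs += 1
            small_core_hours += core_hours

print(f"Jobs under {THRESHOLD} cores: {100 * small_jobs / total_jobs:.1f}% of jobs, "
      f"{100 * small_core_hours / total_core_hours:.1f}% of core-hours")
```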
A system designed to serve the 99% is significantly different from one designed for the 1%; 99% of these jobs fit within a single rack of Comet.
Comet’s integrated architecture is a platform for
a wide range of computing modalities.
• 128 GB/node, 24-core nodes support shared jobs and reduce the need for runs across racks.
• 99% of jobs run inside a single rack with full bisection bandwidth.
• High-performance and durable storage support compute and data workflows, with replication for critical data.
• Science gateways are supported as a primary use case.
• Compute, GPU, and large-memory nodes support diverse computing needs.
• Virtual clusters give communities control over their software environment.
Comet network architecture: InfiniBand compute, Ethernet storage
• 27 racks of 72 Haswell (HSWL) nodes, each node with 320 GB of local flash; 7x 36-port FDR switches in each rack wired as a full fat-tree, with 4:1 oversubscription between racks.
• Core InfiniBand: 2 x 108-port switches fed through a mid-tier of 36-port FDR switches; 36 GPU nodes and 4 large-memory nodes attach alongside the compute racks.
• 4 x 18-port IB-to-Ethernet bridges link the fabric to storage over 40GbE.
• Performance Storage: 7.7 PB at 200 GB/s (32 storage servers). Durable Storage: 6 PB at 100 GB/s (64 storage servers), behind 2x Arista 40GbE switches.
• Data mover nodes and a Juniper 100 Gbps router provide Research and Education Network access (Internet2).
• Additional support components (not shown in the original diagram for clarity): a 10 GbE Ethernet management network, home file systems, VM image repository, and login, data mover, management, and gateway hosts.
Comet is built for High Performance Computing.
27 Rack-sized building blocks
• Each Rack: 1728 Cores, 9.2TB Memory,
23TB Flash
• Full System: 46,656 Cores, 248TB Memory,
620TB Flash
Modest heterogeneity for diverse
workflows
• 36 GPU nodes with dual NVIDIA K80s
(4 GPUs/node)
• 4 large-memory nodes: 1.5TB Memory,
64 cores
Large Ethernet connectivity to
storage and outside world
• 72 x 40GbE = 2.8 Tbit/s
• 100Gbit/s to CENIC/Internet2
Large, High-Speed Parallel I/O System
• 7.6 PB of Lustre-based Hard Disk
• > 200GB/sec, sustained
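As a quick sanity check on these totals, here is a back-of-the-envelope sketch using the per-node figures quoted elsewhere in the deck (24 cores and 128 GB of memory per node, 72 nodes per rack); the 320 GB-per-node flash figure is read off the rack diagram labels and is an assumption to that extent.

```python
# Back-of-the-envelope check of the Comet capacity figures above.
NODES_PER_RACK, RACKS = 72, 27
CORES_PER_NODE, RAM_GB, FLASH_GB = 24, 128, 320   # per node

nodes = NODES_PER_RACK * RACKS                     # 1,944 nodes
print(f"Cores:  {nodes * CORES_PER_NODE:,}")       # 46,656
print(f"Memory: {nodes * RAM_GB / 1000:.0f} TB")   # ~249 TB (deck rounds to 248 TB)
print(f"Flash:  {nodes * FLASH_GB / 1000:.0f} TB") # ~622 TB (deck rounds to 620 TB)
```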
27 rack-based supercomputers, ~2.0 PF/s
• 4 nodes per 2U chassis: 8x 12-core Haswell at 2.5 GHz with 0.5 TB of memory per chassis.
• Mellanox FDR (56 Gb/s) InfiniBand; 48-port 10GbE + 2 x 40GbE per rack.
• A single rack supports >95% of XSEDE applications, at ~75 TF/s per rack.
• Full bisection InfiniBand interconnect within a rack; 18 uplinks per rack (4:1 oversubscription).
• 10GbE management network.
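For context, a rough peak-FLOPS estimate under standard Haswell assumptions (dual-socket nodes, 16 double-precision FLOP per cycle per core) lands close to the quoted per-rack and system figures; the FLOP-per-cycle value and socket count are assumptions, not stated on the slide, and the quoted ~2.0 PF/s likely also counts the GPU and large-memory nodes.

```python
# Rough CPU-only peak estimate for a Comet compute rack (assumptions noted above).
SOCKETS, CORES, GHZ, FLOP_PER_CYCLE = 2, 12, 2.5, 16
NODES_PER_RACK, RACKS = 72, 27

node_gf = SOCKETS * CORES * GHZ * FLOP_PER_CYCLE   # GFLOP/s per node: 960
rack_tf = node_gf * NODES_PER_RACK / 1000          # TFLOP/s per rack: ~69
print(f"{node_gf:.0f} GF/node, {rack_tf:.1f} TF/rack, "
      f"{rack_tf * RACKS / 1000:.2f} PF CPU-only")
```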
Ethernet connectivity – 2.8 Tbit/s
• 2 Arista 7508 enterprise Ethernet switch-routers (72 x 40GbE).
• Comet nodes attach via InfiniBand: 72 x FDR IB links into Mellanox IB-to-Ethernet bridges.
• 64 x 40GbE to the 7.6 PB, 200 GB/sec parallel file system.
• 100GbE connections to scientists.
Virtualized HPC — user-customized HPC
A physical frontend hosts the virtual frontends and a disk image vault, with a pool of physical compute nodes on the public and private networks. Each virtual cluster gets its own virtual frontend plus virtual compute nodes carved out of the physical compute pool, on its own private network.
High performance virtual cluster characteristics
All nodes have
• Private Ethernet
• InfiniBand
• Local Disk Storage
Virtual Compute Nodes can network boot (PXE) from their virtual frontend.
All Disks retain state
• Keep user configuration between boots.
InfiniBand virtualization
• 8% latency overhead
• Nominal bandwidth overhead
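To put the 8% figure in absolute terms, here is a tiny illustrative calculation; the ~1.1 microsecond native FDR small-message latency is an assumed baseline, not a number from the slides.

```python
# Illustrative only: absolute cost of an 8% latency overhead on an assumed baseline.
NATIVE_US = 1.1                     # assumed native FDR small-message latency (microseconds)
virtualized_us = NATIVE_US * 1.08   # 8% overhead quoted on the slide
print(f"native {NATIVE_US:.2f} us -> virtualized {virtualized_us:.2f} us "
      f"(+{virtualized_us - NATIVE_US:.2f} us)")
```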
Comet – Pathfinder for
Virtualized HPC in XSEDE
Comet serving science and society
• DNA nanostructures
• Seismic research and disaster prevention
• CIPRES — Assembling the Tree of Life
• Social sciences
• Neurosciences and brain research
• Astrophysics
• Turbulent fluid physics
• Alternative energy
• New materials research
• Climate change and environmental sciences
• Molecular science
Comet’s operational policies and software are designed to support long-tail users.
• Optimized for throughput.
• Science gateways reach large communities.
• Virtual clusters (VCs) will support well-formed communities.
Highlights of research using
Comet
Simulations of biological membranes
Wonpil Im (U. Kansas) has
been making extensive use of
Comet to perform molecular
dynamics simulations on
biological membranes to
study their mechanical
properties and the
interactions between lipids
and proteins.
[Figure: glycan headgroup structure of a membrane ganglioside, panels (A) and (B); sugar units labeled Gal1, Neu5Ac, Gal2, GalNAc, Glc, and the ceramide (CER) anchor.]
This work can lead to a better understanding of how amyloid plaques form in the brains of Alzheimer’s patients.
2D compressible turbulence
Alexei Kritsuk (UCSD) has been using Comet to study compressible
turbulence in two dimensions.
This research can provide insights into the structure of the universe.
(Kritsuk and Falkovich, 15th European Turbulence Conf. 2015)
Colloids and self-assembling systems
Sharon Glotzer (University of
Michigan) uses Comet to simulate
colloids of hard particles, including
spheres, spheres cut by planes,
ellipsoids, convex polyhedra,
convex spheropolyhedra, and
general polyhedra.
Glotzer’s work can lead to the design of better materials, including surfactants, liquid crystals and nanoparticles that spontaneously assemble into sheets, tubes, wires or other geometries.
Protein lyophilization (freeze-drying)
Pablo Debenedetti (Princeton)
uses Comet to study
lyophilization (freeze-drying), a standard technique used by the pharmaceutical industry to increase the storage life of labile biochemicals, including therapeutic proteins.
Top left: Trp-cage miniprotein structure. Top right: Mean-squared fluctuation for each residue in
Trp-cage for the hydrated and dehydrated powder system. Bottom left: Lysozyme protein
structure. Bottom right: Water sorption isotherm for lysozyme.
Studying flu at the molecular scale
Rommie Amaro (UCSD) uses Comet to understand how properties of the flu virus affect its infectivity. (Image: Alasdair Steven, NIH)
Brownian dynamics illuminates how glycoprotein stalk height impacts
substrate binding.
Comet is a partnership of
academia, the National Science
Foundation, industry and the
XSEDE user community
Users via gateways outnumber those logging in.
Many other
gateways are taking
off with hundreds
of thousands of
users
(nanoHUB, Galaxy,
Folding@Home,
more).
SEAGrid applications & usage on SDSC Comet, the workhorse
PI: Sudhakar Pamidighantam, Indiana U., www.seagrid.org (examples courtesy of Sudhakar Pamidighantam)
• Vortex shedding with Nek5000
• Jeilani, Y. et al., Phys. Chem. Chem. Phys., 2015
• Badieyan, S. et al., J. Org. Chem., 2015, 80 (4)
611 users, 24,000 jobs, and 11.5M XSEDE SUs used since April 2014; more than 120 publications and 13 dissertations.
Comet is delivering on its promise.
• Users are benefiting from Comet’s rapid
turnaround and performance.
• Comet is becoming the
workhorse system in XSEDE
for science gateways.
• Our partnership with
Indiana University will lead
to high performance
virtual clusters.
“I applaud your recent launch of the ‘Comet’ platform, which acknowledges what most scientific computing really looks like…” — Carl Boettiger, UCB

“In general, Comet has become one of the most reliable and productive clusters that the UltraScan gateway uses.” — Gary Gorbet, University of Texas Health Science Center

“…have you heard about the 1,000 CPU-hour allocations? They’re great! … I never knew how long the queues were because every time I looked at my job it was running. Comet is a great machine.” — Ted Wetherbee, Fond du Lac Tribal and Community College
Closing thoughts
Research organizations
must move faster and
better than ever before.
Integrated, cost-effective, flexible
solutions that scale are critical to
solving BIG problems that involve:
• Complex modeling and
simulation.
• Big data analytics.
• Large ensemble problems and
analyzing multiple data sets.
• Emerging usage models that
simplify HPC access.
Take your next steps to accelerate your high performance computing. Start today: schedule time with a Dell HPC Solutions Specialist.
For more information, visit us online:
• Dell.com/hpc
• HPCatDell.com
Thanks!
