Bullx HPC eXtreme computing cluster references


Designed without compromise for unlimited innovation, Bull's bullx HPC clusters are deployed on several continents, delivering petascale computing power for applications ranging from sports-car design to simulating the whole observable universe.

Published in: Technology

  1. Nov 18th 2013
     Bull Extreme Factory Remote Visualizer: 3D Streaming Technology
  2. Nov 18th 2013
     Readers’ Choice best server product or technology: Intel Xeon Processor
     Editors’ Choice best server product or technology: Intel Xeon Phi Coprocessor
     Readers’ Choice top product or technology to watch: Intel Xeon Phi Coprocessor
  3. Nov 18th 2013
     Readers’ Choice: GENCI CURIE for the DEUS (Dark Energy Universe Simulation) project
  4. Needs
      Increase the capacity of the University of Reims’ ROMEO HPC Center, an NVIDIA CUDA® Research Center
      Develop teaching activities on accelerator technologies
      A system to drive research in mathematics and computer science, physics and engineering sciences, and multiscale molecular modeling
     Solution
      A large GPU-accelerated cluster: 260 NVIDIA Tesla K20X GPU accelerators housed in 130 bullx R421 E3 servers
      Expected performance: 230 Tflops (Linpack); a rough sanity-check sketch follows the transcript
      Free-cooling system based on Bull Cool Cabinet Doors
      A joint scientific and technical collaboration with NVIDIA and Bull
      The new “Romeo” system will be installed this summer
  5. Needs
      Create a world-class supercomputing center that will be made available to the Czech science and industry community
      Find accommodation for the supercomputer while the IT4I supercomputing center is built
     Solution
      The Anselm supercomputer, an 82 Tflops bullx system housed in a leased mobull container:
      180 bullx B510 compute nodes
      23 bullx B515 accelerator blades with NVIDIA M2090 GPUs
      4 bullx B515 accelerator blades with Intel® Xeon Phi™ coprocessors
      Lustre shared file system
      Water-cooled rear doors
      bullx supercomputer suite
  6. Needs
      A system that matches the University’s strong involvement in sustainable development
      A minimum performance of 45 Tflops
     Solution
      A bullx configuration that optimizes power consumption, footprint and cooling, with Direct Liquid Cooling nodes and free cooling:
      136 dual-socket bullx DLC B710 compute nodes
      InfiniBand FDR
      Lustre file system
      bullx supercomputer suite
      PUE of 1.13 (the PUE arithmetic is sketched after the transcript)
      Free-cooling installation
  7. Needs
      A solution focusing on green IT with a very innovative collaboration and research program
     Solution
      A bullx supercomputer based on the bullx DLC series
      Phase 1 (Q1 2013):
       Throughput: 180 bullx B500 blades, IB QDR
       HPC: 270 bullx DLC B710 nodes, 24 bullx B515 accelerator blades, IB FDR
      Phase 2 (Q3 2014):
       HPC: 630 bullx DLC B720 nodes, 4 SMP nodes with 2 TB of memory each
      A total peak performance > 1.6 Petaflops at the end of Phase 2
  8. Needs
      Replace the current 41.8 Tflops vector system with a scalar supercomputer
      Two identical systems: one for research & one for production
     Solution
      A bullx configuration that optimizes power consumption, footprint and cooling, with Direct Liquid Cooling nodes:
      Phase 1 (2013): 2 x 475 Tflops peak, with 2 x 990 dual-socket bullx B710 compute nodes with Intel® Xeon® ‘Ivy Bridge EP’ processors
      Phase 2 (2015): 2 x 2.85 Pflops peak, with 2 x 1,800 dual-socket bullx B710 compute nodes with Intel® Xeon® ‘Broadwell EP’ processors
      Fat-tree InfiniBand FDR
      Lustre file system
      bullx supercomputer suite
  9. Needs
      Replace the 65 Tflops Dutch national supercomputer Huygens
      Support a wide variety of scientific disciplines
      A solution that can easily be extended
      An HPC vendor who can also be a partner
     Solution
      A bullx supercomputer delivered in 3 phases:
      Phase 1: 180 bullx Direct Liquid Cooling B710 nodes (Intel Sandy Bridge-based) + 32 bullx R428 E3 fat nodes
      Phase 2: 360 bullx Direct Liquid Cooling B710 nodes (Intel Ivy Bridge-based)
      Phase 3: 1,080 bullx Direct Liquid Cooling B710 nodes (Intel Haswell-based)
      A total peak performance in excess of 1.3 Petaflops in Phase 3
  10. Needs
       Provide high-level computing resources for the R&D teams at AREVA, Astrium, EDF, INERIS, Safran and CEA
       Meet the requirements of a large variety of research topics
      Solution
       The 200 Tflops supercomputer “Airain”, with a flexible architecture:
       594 bullx B510 compute nodes
       InfiniBand QDR interconnect
       Lustre file system
       bullx supercomputer suite
       Plus an extension used for genomics: 180 “memory-rich” bullx B510 compute nodes (128 GB of RAM)
  11. Needs
       Replacement of the bullx cluster installed in 2007
       Support a diverse community of users, from experienced practitioners to those just starting to consider HPC
      Solution
       A dedicated MPI compute node partition (see the MPI vs. HTC sketch after the transcript):
        128 dual-socket bullx B510 compute nodes with Intel® Xeon® E5-2670
        16 “memory-rich” nodes for codes with large memory requirements
       A dedicated HTC compute node partition:
        72 refurbished bullx B500 blades
       InfiniBand QDR
       Lustre
       bullx supercomputer suite
  12. Needs
       Upgrade the computing capacity dedicated to aerodynamics
      Solution
       A homogeneous cluster of 72 compute nodes
       A few specialized nodes used either as “pure” compute nodes or as hybrid nodes transferring part of the calculations to accelerators
       bullx R424-E/F3 2U servers, each housing 4 compute nodes
       1 NVIDIA 1U system with 4 GPUs
       InfiniBand QDR
       Managed with the bullx supercomputer suite
       “The bullx cluster provides the ease of use and robustness that our engineers are entitled to expect from an everyday tool for their work.”
  13. Atomic Weapons Establishment
      AWE confirms its trust in Bull with the upgrade of its 3 bullx supercomputers
       New blades in the existing infrastructure: simple replacement of the initial blades with new bullx B510 blades featuring the latest Sandy Bridge EP CPUs
       Willow (2x 35 Tflops) -> Whitebeam (2x 156 Tflops)
       Blackthorn (145 Tflops) -> Sycamore (398 Tflops)
       All existing bullx chassis re-used to house the new blades
       Upgrade of the storage systems
       Cluster software upgraded to bullx supercomputer suite 4
  14. Bundesanstalt für Wasserbau
      Needs
       Replace one of their 2 compute clusters, used for:
        2D and 3D modeling of rivers
        3D modeling of flows
        Reliability analyses (Monte Carlo simulations)
      Solution
       126 bullx B510 compute nodes (2x Intel® Xeon® E5-2670)
       Bull Cool Cabinet Doors (water-cooled)
       Full non-blocking InfiniBand QDR interconnect network
       Panasas storage system (110 TB)
       Cluster software: Hpc.manage powered by scVENUS (a solution from science + computing, a Bull Group company)
  15. HPC Midlands Consortium
      Needs
       Make world-class HPC facilities accessible to both academic and industrial researchers, and especially to smaller companies, to facilitate innovation, growth and wealth creation
       Encourage industrially relevant research to benefit the UK economy
      Solution
       A bullx supercomputer with a peak performance of 48 Tflops:
       188 bullx B510 compute nodes (Intel® Xeon® E5-2600)
       Lustre parallel file system (with LSI/NetApp hardware)
       Water-cooled racks
  16. This research center, active in the fields of energy & transport, wanted:
       A 100 Tflops extension to their computing resources
       To provide sustainable technologies to meet the challenges of climate change, energy diversification & water resource management
      Solution
       A bullx supercomputer delivering 130 Tflops peak:
       392 B510 compute nodes (Intel® Xeon® E5-2670)
       New-generation InfiniBand FDR interconnect
       GPFS on LSI storage
  17. Needs
       Create a world-class manufacturing research centre
       Finite-element-based modeling of detailed 3D time-dependent manufacturing processes
      Solution
       72 bullx B510 compute nodes (Intel® Xeon® E5-2670)
       1 bullx S6030 supernode
  18. Needs
       One of the newest public universities in Spain needed a high-density compute cluster:
       For the Physical Chemistry Division
       To design multifunctional nano-structured materials
      Solution
       A complete solution with:
       36 bullx B500 compute blades (Intel® Xeon® 5640)
       Installation, training and 5-year maintenance
  19. This innovative engineering company, specializing in design for the motor racing industry, wanted to:
       Support the use of advanced virtual engineering technologies, developed in-house, for complete simulated vehicle design, development and testing
      Solution
       198 bullx B500 compute blades
       2 memory-rich bullx S6010 compute nodes for pre- and post-meshing
  20. Needs
       “Keep content looking great wherever it’s played”
       An ultra-dense HPC platform optimized for large-scale video processing
      Solution
       TITAN, built on bullx B510 blades: a scalable video-processing platform that enables massively parallel content transcoding into multiple formats at a very high degree of fidelity to the original
  21. This Belgian research center, working for the aeronautics industry, wanted to:
       Double their HPC capacity
       Find an easy way to extend their computer-room capacity
      Solution
       A bullx system delivering 40 Teraflops (bullx B500 compute nodes)
       Installed in a mobull mobile data centre
  22. Banco Bilbao Vizcaya Argentaria needed to reduce run times for its mathematical models, to:
       Manage financial risks better
       Gain a competitive advantage and get the best price for complex financial products
      Solution
       A bullx cluster delivering 41 Teraflops, with:
       80 bullx R424-E2 compute nodes
       2 bullx R423-E2 service nodes
  23. The Dutch meteorological institute (KNMI) was looking for:
       More computing power, to be able to issue early warnings in case of extreme weather and to enhance capabilities for climate research
      Solution
       A system 40 times more powerful than KNMI’s previous system:
       396 bullx B500 compute nodes, equipped with Intel® Xeon® 5600-series processors
       9.5 TB of memory
       Peak performance of 58.2 Tflops
       “The hardware, combined with Bull's expert support, gives us confidence in our cooperation”
  24. 300 Tflops peak (a rough peak estimate from the core counts is sketched after the transcript)
       A massively parallel (MPI) section including 1,350 bullx B500 processing nodes with a total of 16,200 Intel® Xeon® cores
       An SMP (symmetric multiprocessing) section including 11,456 Intel® Xeon® cores, grouped into 181 bullx S6010/S6030 supernodes
       Over 90 Terabytes of memory
  25. Join the Bull User group for eXtreme computing: www.bux-org.com
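
Note on slide 4: the quoted figures can be sanity-checked with a quick back-of-the-envelope calculation. This is only a sketch, and the per-GPU peak figure is an assumption about the Tesla K20X rather than a number from the deck: 260 accelerators at roughly 1.31 Tflops of double-precision peak each give about 340 Tflops of theoretical GPU peak, so an expected Linpack result of 230 Tflops would correspond to an efficiency in the region of 65-70%, which is plausible for a GPU-accelerated cluster of that generation.

     # Back-of-the-envelope check for slide 4 (per-GPU peak is an assumption, not from the deck)
     gpus = 260                     # Tesla K20X accelerators in the ROMEO configuration
     dp_peak_per_gpu_tflops = 1.31  # assumed double-precision peak per K20X
     gpu_peak_tflops = gpus * dp_peak_per_gpu_tflops
     linpack_tflops = 230           # expected Linpack figure quoted on the slide
     print(f"GPU theoretical peak: {gpu_peak_tflops:.0f} Tflops")
     print(f"Implied Linpack efficiency: {linpack_tflops / gpu_peak_tflops:.0%}")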
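
Note on slide 6: the slide quotes a PUE of 1.13. PUE (Power Usage Effectiveness) is the ratio of total facility power to the power consumed by the IT equipment alone, so 1.13 means only about 13% overhead for cooling and power distribution. The sketch below just illustrates the arithmetic; the power values in it are hypothetical examples, not measurements from the site.

     # PUE illustration only; the power values are hypothetical examples
     it_power_kw = 100.0        # power drawn by the compute equipment itself
     facility_power_kw = 113.0  # total facility power, including cooling and distribution
     pue = facility_power_kw / it_power_kw
     overhead = pue - 1.0
     print(f"PUE = {pue:.2f} (about {overhead:.0%} extra power on top of the IT load)")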
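
Note on slide 11: the machine is split into an MPI partition and an HTC partition, i.e. tightly coupled jobs whose processes exchange data while they run versus high-throughput workloads made of many independent tasks. The minimal sketch below (using the mpi4py library; the partial-sum workload and the script name are placeholders of mine) shows the kind of communicating job the MPI partition is meant for, whereas an HTC job would simply run many unrelated copies of a serial program.

     # Minimal MPI-style job sketch (placeholder workload, assuming mpi4py is installed)
     # Run with e.g.:  mpirun -np 16 python partial_sums.py
     from mpi4py import MPI

     comm = MPI.COMM_WORLD
     rank = comm.Get_rank()           # this process's ID within the job
     size = comm.Get_size()           # total number of cooperating processes

     local_sum = sum(range(rank, 1_000_000, size))       # each rank computes its slice
     total = comm.reduce(local_sum, op=MPI.SUM, root=0)   # ranks must communicate

     if rank == 0:
         print(f"sum computed by {size} cooperating ranks: {total}")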
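
Note on slide 24: the 300 Tflops peak is consistent with the quoted core counts once a Westmere-era clock rate and 4 double-precision flops per core per cycle are assumed (both are assumptions of mine, not figures from the deck): 16,200 + 11,456 = 27,656 cores at about 2.7 GHz gives roughly 300 Tflops.

     # Rough peak estimate for slide 24 (clock rate and flops/cycle are assumptions)
     mpi_cores = 16_200          # cores in the bullx B500 MPI section
     smp_cores = 11_456          # cores in the S6010/S6030 SMP section
     flops_per_cycle = 4         # assumed double-precision flops per core per cycle
     clock_ghz = 2.7             # assumed core clock in GHz
     peak_tflops = (mpi_cores + smp_cores) * flops_per_cycle * clock_ghz / 1000
     print(f"Estimated peak: {peak_tflops:.0f} Tflops")   # ~299, close to the quoted 300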