Aurora HPC solutions short

  • Aurora systems are scalable from a single working unit to many computing racks with no performance degradation. High density, power efficiency and noiseless operation are achieved thanks to extensive liquid cooling and advanced packaging, and a wide range of configurations is available. The node card is the main processing unit of every Aurora system: a blade hosting two Intel Xeon 5600 series processors (six cores, up to 3.34 GHz, TDP < 130 W), each connected to up to 24 GB of 1333 MHz DDR3 memory. CPUs are linked to peripherals via the Intel chipset, using Intel QPI at 6.4 GT/s. One node has a computing power of 155 Gflops and a typical power consumption of 350 W. A large, high-end FPGA allows implementation of a point-to-point 3D-Torus network for nearest-neighbour communications. The 3D-Torus interconnect offers low latency (about 1 µs), 60 Gbps bandwidth, and high robustness and reliability thanks to redundant lines. Nodes host one Mellanox ConnectX-2 device, with 40 Gbps bandwidth and <2 µs latency, used to implement a QDR InfiniBand switched network. Reconfigurable computing functions, such as acceleration and GPU-like co-processing, are possible thanks to the available logic resources on the FPGA (up to 700 Gops per device). Aurora nodes are hosted in chassis, each of which also features an IB switch with 20 QDR ports for QSFP cables. All chassis feature two monitoring and control networks for reliable operation; maintenance is also possible using a touch-screen interface on a monitor showing diagnostic data. Aurora racks can contain up to 16 chassis each, and they provide mechanical support for access and maintenance, power distribution, cable routing, and the piping infrastructure for heat removal via a liquid cooling circuit.
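As a back-of-the-envelope check (not from the slides themselves), the quoted 155 Gflops per node is consistent with two six-core Westmere-class CPUs doing 4 double-precision flops per core per cycle; the clock figure below is an assumption for illustration:

```python
# Hedged sketch: peak DP flops of one Aurora node (Xeon 5600 era).
# Assumes 4 DP flops/cycle/core (128-bit SSE: 2 mul + 2 add) and a
# ~3.2 GHz clock; both are illustrative assumptions, not slide figures.
sockets = 2
cores_per_socket = 6
clock_ghz = 3.2            # assumed sustained clock
flops_per_cycle = 4

peak_gflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle
print(f"Peak per node: {peak_gflops:.0f} Gflops")  # ~154, close to the quoted 155
```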
  • Certification and registration: for Intel Cluster Ready solutions to come together, hardware companies must test their systems to be certified as meeting the Intel Cluster Ready specification, with rigorous testing for component interoperability. Registered Intel Cluster Ready applications have been validated on certified Intel Cluster Ready systems. Build once, run on many: registered applications can run on any Intel Cluster Ready certified system, which also means a customer can run multiple registered applications on the same certified cluster without rebuilding the software stack or reconfiguring hardware. Expand your market: Intel Cluster Ready systems make it easier for customers to choose the right system on which to run your application. Because your application can run on multiple Intel Cluster Ready systems, it reaches a wider audience, increasing your penetration into many different markets and adding to your licensing revenue. Intel Cluster Ready is also promoted directly to customers as an easy way to find the right system and application to meet their needs, meaning your company and product get even more exposure.
  • Transcript

    • 1. Eurotech Aurora HPC solutions
    • 2. Eurotech company overview and HPC division
    • 3. Eurotech Introduction: Vision. Computers will be increasingly miniaturised and interconnected. They will merge with the surroundings of everyday life until they become indistinguishable from them, to improve our sensorial and perceptive capabilities.
    • 4. Eurotech Introduction. Eurotech is a listed global company (ETH.MI) that integrates hardware, software, services and expertise to deliver embedded computing platforms, sub-systems and high performance computing to leading OEMs, system integrators and enterprise customers, for successful and efficient deployment of their products and services. The Eurotech HPC division aims to deliver advanced supercomputers and HPC solutions that enable science, technology and business to reach the excellence that will help the development of humankind. Imagine. Build. Succeed.
    • 5. Eurotech Introduction: Group Global Footprint
    • 6. Eurotech Introduction: Some of our Customers
    • 7. Eurotech Value Proposition. (Diagram: products and solutions for Core, Infrastructure and Edge: high performance computing engines for the CORE; connectivity platforms to build and connect the EDGE; components and pervasive devices for REAL WORLD applications.)
    • 8. HPC division highlights.
      • The Eurotech HPC division focuses on designing, manufacturing, delivering and supporting high performance computing solutions.
      • More than 14 years of history delivering supercomputing systems and solutions to industry and academia.
      • First company worldwide to market hot-water-cooled high performance computers; the first hot-water-cooled HPC system on the market was delivered in 2009.
      • R&D capabilities nurtured in house and through collaboration with the best universities and research centres in Europe: INFN, Jülich, Regensburg, DESY…
      • Founding member of the ETP for HPC.
    • 9. Eurotech HPC Principles.
      • High performance density: we want our customers to decrease time to solution and footprint, running their simulations and applications as fast as the latest technology allows.
      • Energy efficiency and green: we aspire to develop green, energy-efficient products that allow our customers to save energy and leverage sustainability.
      • Scalability: we aim to propose solutions that scale as linearly as possible and are set to grow modularly from giga- to petaflops.
      • RAS: we want to enhance design, support readiness, preventive maintenance and quality, to increase the serviceability, availability and reliability of our HPC systems during their lifetime.
      • Cost effectiveness: we concentrate much of our effort on delivering advanced technology at competitive prices, together with a compelling total cost of ownership.
      • Versatility: with our products and solutions, we want to help our customers solve a vast variety of computational problems in the most effective way possible.
    • 10. Aurora HPC solutions. (Solution stack diagram: design and engineering; infrastructure and deployment; support and maintenance; tools, drivers and libraries; user applications; operating systems and middleware; CPU, hybrid and booster architectures.)
    • 11. Aurora solutions highlights: High Performance Density. Aurora solutions use the latest and fastest technologies available: high end solutions capable of delivering unparalleled computational power. Aurora systems are best in class in terms of computing density per unit rack.
    • 12. Aurora solutions highlights: Cooling. Aurora removes component-generated heat using water from 18 °C to 50 °C, with no need for air conditioning, in any climate zone. This allows reaching a data center PUE of 1.05. Direct control of cooling parameters (temperature, flow rate…) maximizes energy efficiency, and thermal energy recovery is possible.
    • 13. Aurora solutions highlights: High reliability. Direct liquid cooling limits hot spots; vibration-less operation; board temperature control; redundancy of all critical components; independent sensor networks; solid state storage; soldered memory; quality controls in production; serviceability.
    • 14. Aurora solutions highlights: Networks. An InfiniBand switched network coexists with an optional FPGA-driven 3D Torus switchless network. Three independent synchronization networks (system, subdomain and local) preserve efficiency at petascale by guaranteeing that the communication and scheduling of all nodes are handled automatically.
    • 15. Aurora solutions highlights: End-to-end solutions. Solution design from initial need to systems operations: HW and SW solution design; cooling infrastructure design and deployment; delivery, installation and testing; software services; maintenance and support.
    • 16. Aurora hardware systems
    • 17. Aurora HW architecture. The chassis: standalone building block. The chassis is the basic block used to form Aurora supercomputers and is the minimum unit on sale. It can be partially or totally filled with nodes, up to 16 nodes per chassis, with a computational peak power ranging from 6.2 to 25 Tflops depending on the processor and accelerator used. Aurora supercomputers can scale from a few Tflops to multiple Petaflops. (Figure labels: root card with the InfiniBand switch, computational nodes, power supply.)
    • 18. CPU-based supercomputers
    • 19. The Aurora HPC 10-10: a cutting-edge, highly powerful supercomputer. Key features:
      • High performance density: 4096 processor cores, up to 100 TFlops in just 1.5 m².
      • Energy efficiency: target datacenter PUE of 1.05, no need for air conditioning, up to 50% less energy.
      • Fast interconnects: InfiniBand QDR/FDR, 3D Torus, ~1 µs latency.
      • Flexible liquid cooling: all components are cooled by water, with temperatures from 18 °C to 52 °C and variable flow rates.
      • Reliability: 2 independent sensor networks, soldered memory, uniform cooling, quality controls.
    • 20. The Aurora HPC 10-10: specifications.
      • Configuration: 2 CPUs per node; 16 nodes per chassis; 16 chassis per rack; 256 nodes, 512 CPUs, 4096 cores per rack.
      • Performance: 100 TFlop/s per rack (peak).
      • CPU: Xeon E5 series.
      • Memory: up to 64 GB per node; ECC DDR3 SDRAM, 1600 MT/s; bandwidth 85.3 GB/s per compute node; soldered on board.
      • Interfaces per node: 1 x InfiniBand QDR/FDR port (40/56 Gb/s); 1 x Gigabit Ethernet port; 6 (3+3) x 3D Torus interconnects (60+60 Gb/s), ~1 µs latency.
      • Interfaces (I/O) per chassis: 20 x InfiniBand QDR/FDR ports; 17 x Gigabit Ethernet ports.
      • Local storage per node: 1 x 1024 GB 2.5" SATA disk; 1 x 256 GB 1.8" microSATA SSD.
      • Operating system: CentOS, RedHat or SUSE.
      • Monitoring: 2 independent sensor networks and power measurement per computing node.
      • Power and cooling: Aurora direct hot water cooling; max power per rack: 100 kW.
      • Software: Bright Cluster Manager, ParTec ParaStation, PBS Professional, Intel Cluster Studio, Lustre parallel file system, Allinea, CAPS.
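As an illustrative sanity check (not from the slides), the 100 TFlop/s rack figure is consistent with 4096 Sandy Bridge cores doing 8 DP flops per cycle at an assumed ~3 GHz clock:

```python
# Hedged sketch: peak DP performance of one Aurora HPC 10-10 rack.
# Assumes 8 DP flops/cycle/core (256-bit AVX on Xeon E5 "Sandy Bridge")
# and a ~3.0 GHz clock; both are assumptions for illustration.
cores_per_rack = 4096      # 256 nodes x 2 CPUs x 8 cores
clock_ghz = 3.0            # assumed
flops_per_cycle = 8        # AVX: 4 mul + 4 add per cycle

peak_tflops = cores_per_rack * clock_ghz * flops_per_cycle / 1000
print(f"Peak per rack: {peak_tflops:.1f} TFlop/s")  # ~98, close to the quoted 100
```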
    • 21. Hybrid supercomputers
    • 22. The Aurora Tigon: unleash the hybrid power. Key features:
      • High performance density: 2048 CPU cores and 256 accelerators, up to 350 TFlops in just 1.5 m².
      • Energy efficiency: Aurora direct cooling targets a datacenter PUE of 1.05, with no need for air conditioning and up to 50% less energy.
      • Power and programmability: a choice of accelerators to suit customer needs; no portability issues with Intel® Xeon Phi.
      • Flexible liquid cooling: all components are cooled by water, with temperatures from 18 °C to 52 °C and variable flow rates.
      • Reliability: 3 independent sensor networks, soldered memory, no moving parts, uniform cooling, quality controls.
    • 23. The Aurora Tigon: specifications.
      • Configuration: 2 CPUs + 2 accelerators per node; 16 nodes per chassis; 16 chassis per rack; 256 CPUs and 256 accelerators per rack.
      • Performance: up to 350 TFlop/s per rack (peak).
      • CPU: Xeon E5 series.
      • Accelerators: 2 x Nvidia Kepler K20 per node, or 2 x Intel Xeon Phi 5120D per node.
      • Memory: up to 64 GB per node; ECC DDR3 SDRAM, 1600 MT/s; bandwidth 85.3 GB/s per compute node; soldered on board.
      • Interfaces per node: 1 x InfiniBand QDR/FDR port (40/56 Gb/s); 1 x Gigabit Ethernet port; 6 (3+3) x 3D Torus interconnects (240+240 Gb/s), ~1 µs latency.
      • Interfaces (I/O) per chassis: 20 x InfiniBand QDR/FDR ports; 17 x Gigabit Ethernet ports.
      • Local storage per node: 1 x 1024 GB 2.5" SATA disk; 1 x 256 GB 1.8" microSATA SSD.
      • Operating system: CentOS, RedHat or SUSE.
      • Power and cooling: Aurora direct hot water cooling; power per rack: 100 kW.
      • Software: Bright Cluster Manager, ParTec ParaStation, PBS Professional, Intel Cluster Studio, Lustre parallel file system, CUDA, Allinea, CAPS.
    • 24. Booster architecture
    • 25. The Booster architecture: running toward exascale. DEEP project: in the DEEP project, the concept of a standard Cluster with an InfiniBand® interconnect and Intel® Xeon® nodes is combined with an innovative, highly scalable Booster built from Intel® Xeon Phi™ co-processors (booster nodes, BN) and the EXTOLL high-performance 3D torus network. Offloading: applying accelerators at the Cluster level, rather than at the node level, means that applications composed of various kernels can run low- to medium-scalability kernels on the Cluster side and offload the highly scalable kernels to the Booster side.
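A conceptual sketch of that cluster-level offload idea (this is not the actual DEEP software stack, which builds on ParaStation MPI; the kernels and the `run_on` dispatcher below are hypothetical illustrations):

```python
# Conceptual sketch of Booster-style offload: kernels are dispatched by
# scalability, not pinned to individual accelerator-equipped nodes.
# All function names here are hypothetical, for illustration only.

def run_on(side, kernel, data):
    """Pretend dispatcher: 'cluster' = Xeon nodes, 'booster' = Xeon Phi nodes."""
    print(f"running {kernel.__name__} on the {side}")
    return kernel(data)

def setup_and_io(data):          # low scalability: irregular, I/O bound
    return data

def dense_stencil_sweep(data):   # high scalability: regular, compute bound
    return [x * 0.5 for x in data]

data = list(range(8))
data = run_on("cluster", setup_and_io, data)         # stays on the Cluster
data = run_on("booster", dense_stencil_sweep, data)  # offloaded to the Booster
```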
    • 26. Mid-range systems
    • 27. The Aurora G-Station: compact and powerful. Key features:
      • Powerful: the Aurora G-Station performs at 6.25 TFlop/s per rack with CPUs only, and at 20.6 TFlop/s per rack with accelerators. Very fast I/O with InfiniBand or Gigabit Ethernet.
      • Compact: remarkable density, housing 32 powerful Xeon processors in a 12U rack (or 17U with the incorporated heat exchanger).
      • Energy efficient: direct hot liquid cooling requires no air conditioning, saving on energy bills.
      • Noiseless: the Aurora G-Station is fan-less, so it is completely silent.
      • Easy: the G-Station has no complicated, messy cabling, because all of the nodes are connected via the backplane.
    • 28. The Aurora G-Station: specifications.
      • Computing power: 5.5 TFlop/s per rack (peak), or 20.6 TFlop/s per rack with accelerators.
      • Processor: up to 32 x Intel Xeon E5 series.
      • Accelerators: 16 x Nvidia Kepler K20, or 16 x Intel Xeon Phi 5115D.
      • Architecture: up to 16 backplane-connected server nodes, or up to 8 backplane-connected server nodes with accelerators.
      • Memory: up to 1 TB RAM (64 GB soldered RAM per node); ECC DDR3 SDRAM, 1866 MT/s.
      • Interconnects: 40 Gbps QDR InfiniBand; 1 GbE; optional 1+1 3D Torus or 3D mesh, BW up to 240+240 Gbps, latency ~1 µs.
      • I/O ports: 20 x InfiniBand + 17 x Gigabit Ethernet.
      • Local storage: up to 16 TB of 2.5" SATA disk, or up to 4 TB of 1.8" microSATA SSD.
      • Storage: up to 200 TB InfiniBand storage.
      • Cooling: Aurora embedded hot liquid cooling (external or embedded heat exchanger configurations).
      • Power: 4-6 kW per rack.
      • Dimensions: H 93 cm x W 60 cm x D 74 cm; with external cooling: H 63 cm x W 60 cm x D 75 cm.
      • Software: Linux operating system; Bright Cluster Manager; OpenMPI, CUDA, CAPS, Allinea and NICE compilers, libraries and tools; PBS Professional job management; Lustre parallel file system; Altair, MSC and Ansys solvers.
    • 29. Storage
    • 30. Local storage. Eurotech Aurora systems can accommodate up to 2 TB of SATA disk or 512 GB of SSD per node. Each Aurora rack can therefore hold up to 512 TB of local storage for scratch space or other uses. Aurora systems are generally coupled with external high performance storage.
    • 31. High performance storage. As part of an overall HPC solution, Eurotech can integrate its own supercomputers with high speed, high performance storage. Aurora mainly integrates InfiniBand for storage, to achieve performance compatible with the compute power it offers; hence, the preferred storage solutions are those with native InfiniBand support.
    • 32. Aurora liquid cooling
    • 33. Liquid cooling for Eurotech: what liquid cooling is for us. Liquid cooling for us is:
      • Hot: using hot water wherever possible.
      • Direct: taken inside the rack, in direct contact with the components.
      • Green: able to use free cooling in any climate zone.
      • Comprehensive: cools every source of heat in the server (including the power supply).
      • Serviceable and safe: allows ease of maintenance and poses no risk to the electronics in the server.
    • 34. Aurora liquid cooling: high level description. The main components of the Aurora cooling system are:
      • The cooling circuit: the pipework that brings the water to the heat exchanger and back to the rack.
      • The heat exchanger for free cooling.
      • Distribution pipes, mounted in the rack.
      • The backplane, which collects the liquid from the distribution pipes and distributes it to the cold plates.
      • Cold plates: aluminum boards intimately coupled with the electronic boards of compute nodes, PCUs and switches.
      • Quick disconnects, mounted on the cold plates for the inflow and outflow of liquid. The quick disconnects are fine pieces of engineering that seal immediately when a board is extracted from the backplane, so no leakage is possible.
    • 35. Aurora hot water cooling. (Diagram labels: switch and root card, liquid distribution bar, backplane, quick disconnects, cold plates, free coolers.)
    • 36. Aurora software
    • 37. Aurora Software Integration.
      • Aurora solutions include software ranging from firmware and OS to 3D torus software and cluster management applications.
      • As part of the solution, Eurotech makes sure that all software necessary to run the customer application is correctly installed.
      • The software installation can be carried out in house or on the customer site.
      • While Eurotech leverages partnerships with commercial software vendors, it also has the in-house expertise to manage open source and licensed software throughout the software lifecycle.
      • The software stack depends on customer requirements; an example is reported in the next slide.
    • 38. Aurora Software Integration services. The Eurotech HPC team integrates drivers, operating systems, cluster managers, filesystems, resource managers, schedulers… to provide a turnkey environment where customers run their applications. Several applications can run on Eurotech Aurora supercomputers, benefiting from a combination of compatibility and high performance, for example: Abaqus, Avizo, ANSYS Workbench, ANSYS Fluent, Altair HyperGraph, Altair HyperMesh, Altair HyperView, Blender, EnSight.
    • 39. Aurora supercomputers are Intel Cluster Ready. (Diagram: applications running on the Intel® Cluster Ready architecture.) Benefits: wider audience, new markets, more revenue.
    • 40. Aurora supercomputers are Intel Cluster Ready: the value of Intel Cluster Ready. Intel® Cluster Ready is a standardized platform program. So what does that mean to me? HPC adoption is everywhere, and affordable, thanks to momentum in hardware and software technology; but clusters are hard to implement. The answer: they become usable for the average non-Linux-savvy buyer if clusters are developed on a standardized platform like Intel® Cluster Ready. The assurance is that the software will run and everything will work, because it has been tested. Wait a minute: isn't that like buying a laptop from Best Buy or a computer store? Yes! You know it will work, because they have tested it.
    • 41. Services
    • 42. Aurora HPC Services.
      • Eurotech can provide services that complement and complete the product offering, over and above the standard warranties.
      • The customer can choose from a range of options, varying from warranty extensions to dedicated on-site support.
      • Eurotech also leverages its long history of custom projects and its experience in solution delivery to seamlessly manage HPC projects.
    • 43. Aurora HPC services. The service offering covers an HPC project lifecycle end to end: from gathering customer needs and design, through deployment, to training and support. Eurotech aims to meet customer requirements, to properly design, assemble and install the Aurora solution, and to guarantee smooth operations throughout the lifetime of the proposed solution. (Process flow: customer requirements, solution design, deployment, customer training, support, maintenance and software updates.)
    • 44. Aurora HPC delivery. Aurora solutions are delivered to customers following a proven methodology. The methodology consists of a blueprint that covers: design; HW production; pre-test; HW and SW installation; acceptance. The blueprint is the structure within which the HPC project is defined.
    • 45. Support and maintenance. As part of the operational support schema, Eurotech provides preventive maintenance. This includes activities like monitoring, software services and spare part management. The maintenance schema aims to provide both the proactive actions that prevent faults and the reactive capabilities to act quickly and reduce the mean time to repair (MTTR). The final goal is to maximize the availability and uptime of the system. (Diagram labels: solution optimization, support, operational support, maintenance, training.)
    • 46. Aurora HPC support to customers and SIs. Eurotech supports the solutions it delivers via a structured approach that allows the customer to benefit from a single point of contact. Structure doesn't prevent customer intimacy: the engineers who designed the hardware or assembled the solution are in close contact with the second line of support and, if necessary, are assigned to follow customer service requests from beginning to end. Eurotech can arrange for an engineer, trained on the implemented HPC solution, to be located on site to administer the system and provide immediate support.
    • 47. Aurora HPC support. No rigid structures: any combination of the options below is feasible.
      • Bronze: office-hours telephone support; part replacement and RMA management; single point of contact; 24x7 web portal access.
      • Silver: extended office-hours support; part replacement and RMA management; engineer travels on site; on-site replacements; software maintenance; single point of contact; 24x7 web portal access.
      • Gold: permanent trained personnel on site; part replacement and RMA management; on-site replacements; software maintenance; single point of contact; 24x7 web portal access.
    • 48. Aurora Software Services.
      • All of the software delivered with Aurora is regularly checked for updates, upgrades, new releases and patches.
      • Depending on the type of update required, Eurotech can either assist with or deliver patches and upgrades on customer premises.
      • Depending on the urgency and the impact of the software update, Eurotech will liaise with the third-party software vendor.
    • 49. Training.
      • Eurotech provides ad-hoc training on the Aurora solutions it designs and assembles for customers.
      • Training covers the Aurora systems (usage, maintenance, software), including the water cooling.
      • Specific software training can be provided by Eurotech or directly by the software vendor's training personnel, depending on the type of software installed.
    • 50. Aurora solution benefits
    • 51. Why Aurora solutions?
      • High performance density: the fastest available technology for high density computational power.
      • Scalability: linear scalability from Gigaflops to Petaflops.
      • RAS and ease of use: reliability, availability, serviceability; ease of installation and use.
      • Compatibility and flexibility: choice of interconnects; x86-based systems; flexibility to work with final customers, with system integrators and as an OEM.
      • Green: Aurora data centers consume 50% less power than data centers based on standard air-cooled technology.
      • Competence: HPC division expertise; end-to-end solution deployment.
    • 52. High Performance Density.
      • Aurora uses the fastest Intel technology available: Intel Xeon E5 (Sandy Bridge), Intel Xeon Phi, Nvidia Kepler.
      • Fast InfiniBand interconnects: water-cooled InfiniBand switches included in the systems.
      • 3D Torus: a high speed, up to 60/240 Gb/s full-duplex 3D torus network based on FPGA. In collaboration with research institutions like INFN, TNW and FBK, Eurotech has developed one of the fastest and most reliable 3D Torus networks on the market.
      • FPGA accelerators: Aurora nodes have an on-board FPGA that can be programmed as an accelerator.
      • High density: Aurora systems can pack 350 TFlop/s into a standard rack.
    • 53. Scalability (see the sketch after this slide).
      • Modular system: Aurora systems can scale from a single chassis with 16 nodes, to multiple chassis in a rack, to a system with multiple racks.
      • 3D torus network: a nearest-neighbour, low latency network; no switches and no bottlenecks facilitate scalability.
      • InfiniBand: fast interconnections allow low latency communication.
      • Synchronization network: a very fast channel and global commands with subdomain manageability; low/high level synchronization.
      • Less cabling: Aurora uses less cabling than competitive solutions. Only rack-to-rack cables are necessary, while all intra-rack interconnections are made via the backplane.
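To illustrate the nearest-neighbour pattern a 3D torus serves, here is a minimal sketch using MPI's Cartesian topology support via mpi4py. This is illustrative only, not Eurotech's torus software; the grid dimensions and payload are assumptions:

```python
# Minimal sketch: nearest-neighbour exchange on a periodic 3D grid, the
# communication pattern a switchless 3D torus serves natively.
# Illustrative only; run e.g. with: mpirun -n 8 python torus_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Let MPI factor the ranks into a 3D grid; periods=True makes it a torus.
dims = MPI.Compute_dims(comm.Get_size(), 3)
cart = comm.Create_cart(dims, periods=[True, True, True], reorder=True)

local = np.full(4, cart.Get_rank(), dtype=np.float64)  # stand-in halo data
for axis in range(3):                      # x, y, z
    for disp in (-1, +1):                  # both neighbours along each axis
        src, dst = cart.Shift(axis, disp)  # wraps around: torus, no edge cases
        recv = np.empty_like(local)
        cart.Sendrecv(local, dest=dst, recvbuf=recv, source=src)
```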
    • 54. Reliability, Availability, Serviceability.
      • Aurora HW has no moving parts: there are no spinning disks and no fans cooling on-board heat-generating components. As a result, there are no vibrations that can harm memory and rotating disks, increasing the longevity of components and, as a consequence, of the whole system.
      • Aurora cooling limits hot spots: improved cooling infrastructure and direct-to-component heat removal give uniform heat production and removal, limiting hot spots and hence another cause of failure; a stable, low temperature environment for all components; liquid cooling eliminates data center hot spots.
      • Monitoring and resilience: independent sensor networks; redundancy of all components, including networks; a choice of proactive support (preventive maintenance).
    • 55. Reliability, Availability, Serviceability.
      • Maintainable, despite liquid cooling: Aurora node cards are hot-swappable thanks to special connectors that instantly seal the water circuit when a card is extracted, with no spillage guaranteed.
      • 3D Torus partitioning: in each partition the network is isolated from the other partitions, so there is no network interference between partitions.
      • Software integration: Eurotech works with the best HPC software providers to offer robustness across the solution stack, and Eurotech's software competences help seamless integration.
      • Eurotech commitment to quality: Eurotech produces its boards in its Japanese plant, following high quality standards, and selects best-in-class suppliers to set up the Aurora solutions.
    • 56. Flexibility and compatibility.
      • Choice of interconnects: Aurora computational units offer both an InfiniBand network and a 3D Torus interconnect that can be used in parallel; the customer can choose the technology that best fits the nature of the computational problem to be solved.
      • x86-based solution: Aurora supercomputers are based on x86 processors and are Intel Cluster Ready certified.
      • A choice of software: Aurora can run a vast variety of software, both open source and commercial.
      • A flexible solution approach: Eurotech HPC can design solutions involving accelerators, storage and software, according to customer requirements.
    • 57. Green and environmental. Everyone nowadays claims to be «green», but are they? The Aurora green proposition:
      • Energy efficiency (achievable datacenter PUE of 1.05).
      • Direct on-component water cooling.
      • High reliability means fewer spare parts and hence less waste.
      • 230 V AC to 10 V DC in 2 steps, for a power conversion efficiency close to 97%.
      • Free cooling (heat exchangers rather than chillers and AHUs).
      • Thermal energy recovery.
      • High density means floor space and real estate savings (tons of CO2 and other pollutants spared).
      • Noiseless operation means better work environments.
      • Compatible with energy optimization software and job schedulers.
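A hedged aside on the 97% figure: conversion stages compound multiplicatively, so two steps at roughly 98.5% each (an assumed split, not a slide figure) land near the quoted overall efficiency:

```python
# Hedged check: two power conversion stages compound multiplicatively.
# Assuming both steps are equally efficient (an assumption for illustration):
per_step = 0.985
print(f"overall efficiency: {per_step ** 2:.3f}")  # ~0.970, i.e. close to 97%
```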
    • 58. Competence.
      • 10+ years of HPC experience: extended experience in top HPC projects; collaboration with best-in-class research centres to develop advanced supercomputer prototypes; experience in delivering large systems ($15M+).
      • Structure and agility: the Eurotech HPC division benefits from the structure of the Eurotech group while keeping the typical agility of a start-up. This means prompt adaptability to customer needs: Eurotech can deliver end-to-end HPC solutions inclusive of supercomputers, storage, software and services, but can also work with system integrators, delivering parts of larger systems, and as an OEM for larger vendors.
      • Financial solidity: unlike the pure HPC players in the market, Eurotech's income statement relies on multiple lines of business, helping the HPC division to smooth revenues and rely on abundant resources.
    • 59. Aurora total cost of ownership
    • 60. Data center TCO drivers.
      • IT CAPEX: initial SW and HW capital expenditures.
      • Space occupancy (footprint): cost of the occupied space and auxiliary infrastructure: rent, opportunity cost, civil, structural and engineering works, permits and taxes.
      • Data center infrastructure: electrical (UPS, generator, cables…); cooling (chillers, AHUs, heat exchangers, pumps…); facilities (fire prevention, plants, security, building management systems).
      • Installation: delivery, installation, test and tuning of IT, electrical and cooling equipment.
      • Energy: cost of energy: IT, cooling, lighting and waste.
      • Maintenance and additional operation costs: warranty extensions, support, software licenses, IT maintenance, electrical and cooling maintenance, facilities maintenance, costs of outages, heating, security.
      • Other (disposal, green…): costs of end of life, carbon footprint, (missed) incentives, fines…
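A minimal sketch of how these drivers roll up into a TCO figure; every number below is an invented placeholder, purely to show the structure, not a Eurotech or industry figure:

```python
# Toy TCO roll-up over the drivers above; all values are invented placeholders.
tco_drivers_eur = {
    "IT CAPEX": 5_000_000,
    "space occupancy": 400_000,
    "data center infrastructure": 1_500_000,
    "installation": 200_000,
    "energy (over lifetime)": 2_500_000,
    "maintenance and operations": 1_200_000,
    "other (disposal, green)": 100_000,
}
total = sum(tco_drivers_eur.values())
for driver, cost in tco_drivers_eur.items():
    print(f"{driver:<30} {cost:>12,} EUR  ({cost / total:5.1%})")
print(f"{'total TCO':<30} {total:>12,} EUR")
```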
    • 61. Main areas of Aurora systems' impact on TCO.
      • Area 1, energy savings: lower costs due to energy cost optimization and better PUE.
      • Area 2, density (FLOPS/m²): Aurora density allows for space, rack, electrical, cooling and network savings.
      • Area 3, reliability, availability and serviceability: Aurora reliability contributes to lower maintenance costs and lower business costs of outages.
      • Area 4, liquid cooling: apart from the energy savings that water cooling implies, it also allows saving on the capital costs of cooling infrastructure.
    • 62. TCO - energy. (Figure: "typical" power breakdown in datacenters; data and image from APC.)
    • 63. TCO - energy. (Figure: power breakdown in an Aurora datacenter, leading to a PUE of 1.05.)
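For reference, PUE is total facility power divided by IT equipment power. A quick hedged comparison of the claimed 1.05 against a conventional air-cooled figure (the 1.8 baseline and the 1 MW load below are assumptions for illustration):

```python
# PUE = total facility power / IT equipment power.
# The 1.8 "typical" baseline is an illustrative assumption; 1.05 is the
# figure claimed for Aurora datacenters in the slides.
it_load_kw = 1000.0                      # hypothetical 1 MW IT load

for pue in (1.8, 1.05):
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw  # cooling, power conversion, lighting
    print(f"PUE {pue}: facility draws {total_kw:.0f} kW, {overhead_kw:.0f} kW overhead")
# PUE 1.8: 1800 kW total, 800 kW overhead; PUE 1.05: 1050 kW, 50 kW overhead.
```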
    • 64. Eurotech Aurora energy saving points. (Diagram: IT equipment: maximize Flops/Watt. Architecture and cooling: reduce cooling, optimize power conversion. Eurotech Aurora data center: reduce house load energy consumption. Data center or ecosystem: reuse thermal energy.)
    • 65. TCO - density. (Diagram: more Flops/m² means lower costs across the board: IT hardware costs, like racks; electrical, cabling and cooling hardware costs; cooling maintenance costs; space occupancy (m²) and volume occupancy (m³) costs; energy costs; raised floor costs; civil, structural and engineering costs.)
    • 66. TCO - reliability. The indirect impact depends on the type of organisation and could range from thousands to millions of € per hour of outage. The impact of low reliability on the business could therefore offset any saving achieved during the purchase and installation of an IT solution!
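A hedged back-of-the-envelope illustration of that point; all figures below are invented for illustration, not Eurotech numbers:

```python
# Illustrative only: how availability translates into annual outage cost.
# All numbers are invented assumptions.
hours_per_year = 8760
cost_per_outage_hour_eur = 50_000        # assumed business impact per hour

for availability in (0.99, 0.999):
    downtime_h = hours_per_year * (1 - availability)
    cost = downtime_h * cost_per_outage_hour_eur
    print(f"{availability:.1%} available: {downtime_h:.0f} h down, ~{cost:,.0f} EUR/yr")
# 99% -> ~88 h, ~4.4M EUR; 99.9% -> ~9 h, ~0.44M EUR: reliability can dwarf
# any purchase-price saving.
```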
    • 67. Impact of downtime
    • 68. TCO - liquid cooling. By adopting liquid cooling technology it is possible to avoid most of the air conditioning used to cool IT equipment. Savings come from avoiding chillers, AHUs, CFD (computational fluid dynamics) airflow studies, raised floors and air conditioning tuning. Liquid cooling infrastructure is generally cheaper, relying on components like piping, pumps and free coolers. If we take a 1 MW installation and consider an air cooling infrastructure cost of $3,000/kW, the total cooling cost would be $3M. The same 1 MW data center would require roughly 20% of that expenditure if hot liquid cooling is implemented.
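The worked arithmetic behind that claim; the 20% ratio is the slide's own estimate, and this just restates it:

```python
# Worked arithmetic for the slide's 1 MW example.
it_load_kw = 1000                 # 1 MW installation
air_cost_per_kw = 3000            # $/kW, air-cooling infrastructure (slide figure)
liquid_fraction = 0.20            # liquid cooling ~20% of that (slide estimate)

air_total = it_load_kw * air_cost_per_kw          # $3,000,000
liquid_total = air_total * liquid_fraction        # $600,000
print(f"air: ${air_total:,}  liquid: ${liquid_total:,}  saved: ${air_total - liquid_total:,}")
```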