IDC #236341: How Nations Are Using HPC (August 2012)
Attached is an IDC write-up on the National use of High-end HPC for both scientific innovation and economic advancement. It also includes a list of the current petascale systems around the world.

INDUSTRY DEVELOPMENTS AND MODELS

How Nations Are Applying High-End Petascale Supercomputers for Innovation and Economic Advancement in 2012

Earl C. Joseph, Ph.D.
Chirag Dekate, Ph.D.
Steve Conway

Global Headquarters: 5 Speen Street, Framingham, MA 01701 USA • P.508.872.8200 • F.508.935.4015 • www.idc.com

IDC OPINION

There is a growing contingent of nations around the world that are investing considerable resources to install supercomputers with the potential to reshape how science and engineering are accomplished. In many cases, the cost of admission for each supercomputer now exceeds $100 million and could soon reach $1 billion. Some countries are laying the groundwork to build "fleets" of these truly large-scale computers to create a sustainable long-term competitive advantage. In a growing number of cases, it is becoming a requirement to show or "prove" the return on investment (ROI) of the supercomputer and/or the entire center. With the current global economic pressures, nations need to better understand the ROI of making these large investments and decide if even larger investments should be made. It's not an easy path, as there are sizable challenges in implementing these truly super supercomputers:

  • Along with the cost of the computers, there are substantial costs for power, cooling, and storage.
  • In most cases, new datacenters have to be built, and often entire new research organizations are created around the new or expanded mission.
  • In many situations, the application software needs to be rewritten or even fundamentally redesigned in order to perform well on these systems.
  • But the most important ingredient is still the research team. The pool of talent that can take advantage of these systems is very limited, and competition to attract them is heating up.

Filing Information: August 2012, IDC #236341, Volume: 1
Technical Computing: Industry Developments and Models
IN THIS STUDY

This study explores the potential impact of the latest generation of large-scale supercomputers that are just now becoming available to researchers around the world. It also provides a list of the current petascale-class supercomputers around the world. These systems, along with others planned to be installed over the next few years, create an opportunity to substantially change how scientific research is accomplished in many fields.

IDC has conducted a number of studies around petascale initiatives, petascale applications, buyer requirements, and what will be required to compete in the future. IDC has also worked with the U.S. Council on Competitiveness on a number of HPC studies, and the council has accurately summarized the situation with the phrase that companies and nations must "outcompute to outcompete."

SITUATION OVERVIEW

Over the past five years, there has been a race around the world to build supercomputers with peak and actual (sustained) performance of 1PFLOPS (a petaflop is a thousand million million, or 10^15, operations every second). The next-generation supercomputers are targeting 1,000 times more performance, called exascale computing (10^18 operations per second), starting in 2018. Currently, systems in the 10–20PFLOPS range are being installed, and 100PFLOPS systems are expected by 2015.

Already, the petascale and near-petascale level of performance is allowing science of unprecedented complexity to be accomplished on a computer:

  • Scientific research beyond what is feasible in a physical laboratory is becoming possible for a larger set of researchers.
  • First principles–based science is becoming possible in more disciplines.
  • Research at the nano and molecular levels is becoming possible.
  • Future research at an atomic level will soon become possible for designing many everyday items.
  • Analysis via simulation instead of live experiments will be possible on a greatly increased scale.

In the United States, the race for petascale supercomputers was started when DARPA launched a multiphase runoff program to design innovative, sustained petascale-class supercomputers that also addressed a growing concern around user productivity. The great unanswered (and poorly addressed) question in the high-end HPC market is: how can a broad set of users actually make use of the new generation of computers that are so highly parallel? Some of the complexities of this problem are:

  • Processor speeds in general are not getting faster, so users have to find ways to use more processors to obtain speedups.
  • The memory bandwidth in and out of each processor core (and the per-core memory size) is declining at an alarming rate and is projected to continue to decline for the foreseeable future.
  • Many user applications were designed and/or written many years ago, often over 20 years ago. They tended to be designed around a single processor with strong memory access capabilities (e.g., vector computers). These codes have been enhanced and improved over the years, but in most cases, they need to be fundamentally redesigned to work at scale on the newest supercomputers.

The current situation is opening the door for new applications that can take advantage of the latest computers. Many current applications have a more difficult time going through this level of transformation because of the lack of talent, the steep costs of redesigns, the need to bring all of the legacy features forward, and an unclear business model. A debate is growing about whether a more open source application model may work in many areas.

The Use of Higher-End Computer Simulation Model–Based Science

Figure 1 shows that the use of supercomputer-class HPC systems was roughly stable (from a revenue perspective) up to 2008; then, starting in 2009, major growth began. IDC defines a supercomputer as a technical server that costs over $500,000. The strongest growth has been for supercomputers priced over $3 million. Supercomputer revenue was running at around $4.5 billion a year in 2011.

Figure 2 shows IDC's forecast supercomputer revenue for 2006–2016. This portion of the HPC market is expected to grow to over $5 billion in a few years and get close to $6 billion by 2016.
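For concreteness, the performance tiers discussed in the Situation Overview each differ by a factor of 1,000. A small illustrative sketch (peak figures quoted from this report; the constant and helper names are ours):

```python
# Performance tiers referenced in this report, in floating-point operations per second.
TERAFLOP = 10**12
PETAFLOP = 10**15  # a thousand million million operations per second
EXAFLOP = 10**18   # the post-2018 exascale target: 1,000x petascale

def as_petaflops(ops_per_second: float) -> float:
    """Express a raw ops/sec rating in petaflops."""
    return ops_per_second / PETAFLOP

# Each tier is 1,000 times the one below it.
assert EXAFLOP // PETAFLOP == PETAFLOP // TERAFLOP == 1000

# Sequoia's 20-petaflop peak rating, expressed as raw operations per second.
sequoia_peak = 20 * PETAFLOP
print(as_petaflops(sequoia_peak))  # 20.0
```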
Note that with system prices growing at the top of the market, yearly revenue will likely show major swings up and down, as singular sales near $1 billion can have major impacts (e.g., two $1 billion sales in a single year would be over a third of the entire market sector).

IDC expects that sales of very large HPC systems, those selling for over $50 million, will likely increase over the next few years, as more countries invest in large simulation capabilities. In some countries, this may reduce spending on more moderate-size systems as they pool resources to invest in a few very large systems. One example is the new K system in Japan, which has a price of over $500 million. There is a strong possibility that multiple HPC systems costing close to $1 billion will be built over the next five years.
FIGURE 1: Worldwide Supercomputer Technical Server Revenue, 2006–2011
Source: IDC, 2012

FIGURE 2: Worldwide Supercomputer Technical Server Revenue, 2006–2016
Source: IDC, 2012
Petascale Supercomputers and Their Planned Uses (as of August 2012)

There are currently 20 known petascale-class supercomputers in the world:

  • The top-ranked 20-petaflop Sequoia system, hosted at LLNL, is used to ensure the safety and reliability of the U.S. nuclear weapons stockpile and to advance research in various domains including astronomy, energy, human genome science, and climate change.
  • The 11.28-petaflop K computer hosted at RIKEN is planned to be used for climate research, disaster prevention, and medical research.
  • The Mira supercomputer hosted at ANL, providing over 10.06 petaflops of compute capability, is devoted entirely to open science applications such as climate studies, engine design, cosmology, and battery research.
  • The 3.18-petaflop SuperMUC supercomputer hosted at Leibniz-Rechenzentrum is scheduled to be used for three key disciplines: astrophysics, engineering and energy, and chemistry and materials research.
  • The Tianhe-1A supercomputer at the National Supercomputing Center in Tianjin, China, providing over 4.70 petaflops of compute capability, has been used to execute record-breaking scientific simulations to further research in solar energy, including a petascale molecular dynamics simulation that utilizes both CPUs and GPUs.
  • At ORNL, the Jaguar supercomputer, which currently provides 2.63 petaflops of computation capability, has been used for a diverse range of simulations, from probing the potential for new energy sources to dissecting the dynamics of climate change to manipulating protein functions. Jaguar has also been used to study complex problems including supernovae, nuclear fusion, photosynthesis research enabling the next generation of ethanol, and complex ecological phenomena.
  • Molecular dynamics, protein simulations, quantum chemistry, engineering and structural simulations, and astrophysics are some of the applications being ported to the 2.09-petaflop FERMI supercomputer hosted at CINECA (Italy).
  • The 1.68-petaflop JUQUEEN supercomputer at Forschungszentrum Jülich is planned to be used for a broad range of applications including computational plasma physics, protein folding, quantum information processing, and pedestrian dynamics.
  • French researchers have performed the first-ever simulation of the entire observable universe, from the Big Bang to the present day, comprising over 500 billion particles, on the 1.67-petaflop CURIE supercomputer operated by Grand Equipement National de Calcul Intensif (GENCI) at the CEA in France.
  • The 2.98-petaflop Nebulae supercomputer hosted at the National Supercomputing Center in Shenzhen, China, has been used for geological simulations including oil exploration, as well as for space exploration and avionics, among other large-scale traditional scientific and engineering simulations.
  • The Pleiades supercomputer at NASA Ames, which provides over 1.73 petaflops of compute capability, is being applied to complex problems like calculating the size, orbit, and location of planets surrounding stars; research and development of next-generation space launch vehicles; astrophysics simulations; and climate change simulations.
  • The 1.52-petaflop Helios supercomputer at the International Fusion Energy Research Center, Japan, is dedicated to advancing fusion energy research as part of the ITER project. It is used to model the behavior of plasma (ultra-high-temperature ionized gas) in intense magnetic fields and to design materials that are subjected to extreme temperatures and pressures.
  • The 1.47-petaflop Blue Joule supercomputer hosted at the Science and Technology Facilities Council in Daresbury, the United Kingdom, is being used to develop a next-generation weather forecasting model for the United Kingdom that would simulate the winds, temperature, and pressure that, in combination with dynamic processes such as cloud formation, allow for the simulation of changing weather conditions.
  • The TSUBAME2.0 supercomputer at the Tokyo Institute of Technology, which provides over 2.29 petaflops of capability, has been used to run a Gordon Bell award-winning simulation involving complex dendritic structures, utilizing over 16,000 CPUs and 4,000 GPUs.
  • The 1.37-petaflop Cielo supercomputer hosted at Los Alamos National Laboratory enables scientists to increase their understanding of complex physics and improve confidence in the predictive capability for stockpile stewardship. Cielo is primarily utilized to perform milestone weapons calculations.
  • The Cray XT5 Hopper supercomputer at Lawrence Berkeley National Lab is capable of 1.29 petaflops and has been used to process over 3.5 terabases of genome sequence data comprising over 5,000 samples from 242 people, requiring over 2 million computer hours to complete the project.
  • The 1.25-petaflop Tera 100 supercomputer hosted at the CEA in France has been used to solve problems in diverse domains including astrophysics, life sciences, plasma accelerators, seismic simulations, molecular dynamics, and defense applications.
  • The SPARC64-based 1.04-petaflop Oakleaf-FX supercomputer in Japan is planned to be used for a wide variety of simulations, including earth sciences, weather modeling, seismology, hydrodynamics, biology, materials science, astrophysics, solid-state physics, and energy simulations.
  • The 1.26-petaflop DiRAC supercomputer, based on the BlueGene/Q architecture and hosted at the University of Edinburgh, has been used for complex astrophysics simulations, including the physics of dark matter, and computational techniques like lattice quantum chromodynamics.
  • IBM's "Roadrunner," the world's first petascale supercomputer, is being used primarily to ensure the safety and reliability of America's nuclear weapons stockpile but also to tackle problems in astronomy, energy, human genome science, and climate change.

In addition, the United States, China, Russia, the European Union (EU), Japan, India, and others are investing in several petascale systems (10–100 petaflops) in preparation to reach the exascale dream.

Petascale Applications in Use Today

Material science and nanotechnology typically rank high on the list of petascale candidate applications because they can never get enough computing power. Weather forecasting and climate science also have insatiable appetites for compute power, as do combustion modeling, astrophysics, particle physics, drug discovery, and newer fields including fusion energy research. Scientists today are performing ab initio calculations with a few thousand atoms, but they would like to increase that to hundreds of thousands of atoms.

Note: Ab initio quantum chemistry methods are computational chemistry methods based on quantum mechanics. The term ab initio indicates that the calculation is from first principles and that no empirical data is used.

Additional intriguing examples include real-time magnetic resonance imaging (MRI) during surgery, cosmology, plasma rocket engines for space exploration, functional brain modeling, design of advanced aircraft and, last but hardly least, severe weather prediction.

Table 1 shows a broad landscape of the current petascale research areas. Many of these are already being planned for running at the exascale level.
TABLE 1: Current Petascale Research Areas

Sector | Application Description | Potential Breakthrough/Value of the Work
Aerospace | Advanced aircraft, spacecraft design | Improved safety and fuel efficiency
Alternative energy | Identify and test newer energy sources | National energy security
Automotive | Full fidelity crash testing | Improved safety, reduced noise and fuel consumption
Automotive | True computational steering | Cars that travel 150,000mi without repairs
Business intelligence | More complex analytics algorithms | Improved business decision making and management
Electrical power grid distribution | Track and distribute power | Better distribution efficiency and response to changing needs
Global climate modeling | High-resolution climate models | Improved response to climate change
Cryptography and signal processing | Process digital code and signals | Improved national defense capabilities
DNA sequence analysis | Full genome analysis and comparison | Person-specific drugs (pharmacogenetics)
Digital brain and heart | High-resolution, functional models | New disease-fighting capabilities
Haptics | Simulate sense of touch | Improved surgical training and planning
Nanotechnology | Model materials at nanoscale | Pioneering new materials for many fields
National-scale economic modeling | Model national economy | Better economic planning and responsiveness
Nuclear weapons stewardship | Test stability, functionality (nonlive) | Ensure weapons safety and preparedness
Oil and gas | Improved seismic, reservoir simulation | Improved discovery and extraction of oil and gas resources
Pandemic diseases, designer diseases | Identify the nature of and response to bioterrorism | Combat new disease strains and plagues
Pharmaceutical | Virtual surgical planning | Greatly reduce health costs while greatly increasing the quality of healthcare
Real-time medical imaging | Process medical images without delay | Improved patient care and satisfaction
Weather forecasting | High-resolution forecast models | Better prediction of severe storms (reducing damage to lives and property)

Source: IDC, 2012
Scientific Breakthroughs Possible with Large-Scale HPC Systems

  • Simulating about 100 mesocircuits using current simulation capabilities. (With larger-scale systems and supporting computational models, researchers can potentially simulate over 1,000 times the current cortical simulators to develop a first simulation matching the scale of a human brain.)
  • High-fidelity combustion simulations, which require extreme-scale computing to predict the behavior of alternative fuels in novel fuel-efficient, clean engines and so facilitate the design of optimal combined engine-fuel systems.
  • Extreme-scale systems that can be used to improve the predictive integrated modeling of nuclear systems:
      - Capability to simulate multicomponent chemical reacting solutions
      - Computations and simulations of 3D plant design
  • Multiscale modeling and simulation of combustion at exascale, which is essential for enabling innovative prediction and design; extreme-scale systems that enable:
      - Design and optimization of diesel/HCCI engines, in geometry, with surrogate large-molecule fuels representative of actual fuels
      - Direct numerical simulation of turbulent jet flames at engine pressures (30–50atm) with isobutanol (50–100 species) for diesel or HCCI engine thermochemical conditions
      - Full-scale molecular dynamics simulations with on-the-fly ab initio force fields to study the combined physical and chemical processes affecting nanoparticle and soot formation
  • Improvement in the scalability of applications to enable multiscale materials design, critical for improving the efficiency and cost of photovoltaics:
      - Extreme-scale computations that help accurately predict the combined effect of multiple interacting structures and their effects upon electronic excitation and the ultimate performance of a photovoltaic cell
  • Modeling the entire U.S. power grid, from every household power meter to every power-generation source, with feedback to all users on how to conserve energy, and then testing new alternative energy solutions against this "full" model of the whole U.S. power grid
  • Modeling the evolution possibilities for DNA over long time frames (e.g., several million years) and investigating the potential impact this could have on human physiology
  • Extreme-scale systems to enable the full quantum simulations that are required for efficient storage of energy in chemical bonds found in catalysts. (Current systems can simulate up to 1,000 atoms at 3nm resolution. Higher-resolution simulations on the order of 6nm with 10,000 atoms are needed to effectively conduct multiscale models.)
  • Modeling the entire world's economy, to first obtain the ability to see problems before they happen, then the ability to pretest solutions to better understand their likely effects, and eventually to find ways to assist the economies and avoid severe downturns
  • The design of automobiles with 150,000mi reliability, with no failures
  • Designing drugs that are directly based on each individual's DNA, via simulation of protein folding, for both disease control and specific drug design
  • Designing solar panels that are at least 50% efficient and are low cost to produce; for example, a 10 x 20ft panel at consumer prices, purchased at consumer home improvement stores and no more difficult to install than a lawn sprinkler system, where a few of these panels can generate more electricity than is needed for a house and two cars
  • Simulating the full airflow around an entire vehicle, like a helicopter, accurately enough to remove the need for any wind tunnel testing
  • Petascale systems and beyond to also help in parallel direct numerical simulation of combustion in a low-temperature, high-pressure environment, shedding light on how jet flames ignite and become stable in diesel environments. (These systems can also accelerate the development of predictive models for clean and efficient combustion of new transportation fuels.)
  • Simulations to help investigate Cooper pairs, the electron pairings that transform materials into superconductors. (Superconducting research can be accelerated to help better understand the differences in temperature at which various materials become superconductors, with the goal of developing superconductors that do not require cooling.)

FUTURE OUTLOOK

HPC and the Balance of Power Between Nations

The competition among nations for economic, scientific, and military leadership and advantage is increasingly being driven by computational capabilities.
Very large-scale HPC systems will tend to push nations that employ them ahead of those that don't. A set of petascale supercomputers, along with the researchers to use them, can reshape how a nation advances itself in both scientific innovation and economic industrial growth. This is why countries/regions like the United States, China, the European Union, Japan, and even Russia are working to install multiple petascale computers with an eye on exascale in the future.
Some immediate advantages of installing a fleet of petascale supercomputers in a country are:

  • It tends to keep the top scientists, engineers, and researchers within your country.
  • It helps attract top scientists and engineers from around the world to your country.
  • It motivates more students within your country to go into scientific and engineering disciplines, building the intellectual base for the future.

The major mid- to longer-term advantages include:

  • Increasing the economic size (GDP) and growth of a nation by having more competitive products and services, faster time to market, and the ability to manufacture products that other nations can't
  • Increasing the capability of homeland defense and military security; nations without this scale of computational power will more often find themselves asking: What happened?

The Need for Better Measurement of the ROI from HPC Investments

In a growing number of cases, it is becoming a requirement to show or "prove" the return on investment of the supercomputer and/or the entire center. With the current global economic pressures, nations need to better understand the ROI of making these large investments and decide if even larger investments should be made. IDC is researching new ways to better measure and report the ROI of HPC for both science and industry. For science, ROI can be measured by focusing on innovation and discovery, while for industry, it is focused on dollars (profits, revenues, or lower costs) and making better products/services. The new IDC HPC innovation award program is one way of collecting a broad set of ROI examples.

ESSENTIAL GUIDANCE

Petascale initiatives will benefit a substantial number of users. Although the initial crop of petascale systems and their antecedents are still only a few in number, many thousands of users will have access to these systems (e.g., through the U.S. Department of Energy's INCITE program and analogous undertakings).
Industrial users will also gain access to these systems for their most advanced research.

Petascale and exascale initiatives will intensify efforts to increase scalable application performance. Sites that are advancing in phases toward petascale/exascale systems, often in direct or quasi competition with other sites for funding, will be highly motivated to demonstrate impressive performance gains that may also result in scientific breakthroughs. To achieve these performance gains, researchers will need to intensify their efforts to boost scalable application performance by taming large-scale, many-core parallelism. These efforts will then benefit the larger HPC community. Over time, they will likely also benefit the computer industry as a whole.
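The scaling challenge named here, taming large-scale, many-core parallelism, is often summarized by Amdahl's law: the serial fraction of a code caps the speedup available from adding cores. A minimal sketch, not from the report, with illustrative numbers:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's law: maximum speedup when serial_fraction of the work
    cannot be parallelized and the remainder scales perfectly."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a 1% serial fraction wastes most of a 100,000-core machine:
print(round(amdahl_speedup(0.01, 1_000), 1))    # 91.0
print(round(amdahl_speedup(0.01, 100_000), 1))  # 99.9
# which is why petascale codes must be fundamentally redesigned, not just recompiled.
```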
Some important applications will likely be left behind. Petascale initiatives will not benefit applications that do not scale well on today's HPC systems, unless these applications are fundamentally rewritten. Rewriting is an especially daunting and costly proposition for some important engineering codes that require a decade or more to certify and incrementally enhance.

Some petascale developments and technologies will likely "trickle down" to the mainstream HPC market, while others may not. Evolutionary improvements, especially in programming languages and file systems, will be most readily accepted by mainstream HPC users but may be inadequate for exploiting the potential of petascale/exascale systems. Conversely, revolutionary improvements (e.g., new programming languages) that greatly facilitate work on petascale systems may be rejected by some HPC users as requiring too painful a change. The larger issue is whether petascale developments will bring the government-driven high-end HPC market closer to the HPC mainstream or push these two segments further apart.
This remains to be seen.

LEARN MORE

Related Research

Related research from IDC's Technical Computing hardware program and media articles written by IDC's Technical Computing team include the following:

  • HPC End-User Site Update: RIKEN Advanced Institute for Computational Science (IDC #233690, March 2012)
  • National Supercomputing Center in Tianjin (IDC #233971, March 2012)
  • Worldwide Technical Computing 2012 Top 10 Predictions (IDC #233355, March 2012)
  • IDC Predictions 2012: High Performance Computing (IDC #WC20120221, February 2012)
  • Worldwide Data Intensive–Focused HPC Server Systems 2011–2015 Forecast (IDC #232572, February 2012)
  • Exploring the Big Data Market for High-Performance Computing (IDC #231572, November 2011)
  • IDC's Worldwide Technical Server Taxonomy, 2011 (IDC #231335, November 2011)
  • High Performance Computing Center Stuttgart (IDC #230329, October 2011)
  • Oak Ridge $97 Million HPC Deal Confirms New Paradigm for the Exascale Era (IDC #lcUS23085111, October 2011)
  • Large-Scale Simulation Using HPC: HPC User Forum, September 2011, San Diego, California (IDC #230694, October 2011)
  • China's Third Petascale Supercomputer Uses Homegrown Processors (IDC #lcUS23116111, October 2011)
  • Exascale Challenges and Opportunities: HPC User Forum, September 2011, San Diego, California (IDC #230642, October 2011)
  • The World's New Fastest Supercomputer: The K Computer at RIKEN in Japan (IDC #229353, July 2011)
  • A Strategic Agenda for European Leadership in Supercomputing: HPC 2020 - IDC Final Report of the HPC Study for the DG Information Society of the European Commission (IDC #SR03S, September 2010)
  • HPC Petascale Programs: Progress at RIKEN and AICS Plans for Petascale Computing (IDC #223625, June 2010)
  • The Shanghai Supercomputer Center: China on the Move (IDC #222287, March 2010)
  • IDC Leads Consortium Awarded Contract to Help Develop Supercomputing Strategy for the European Union (IDC #prUK22194910, February 2010)
  • An Overview of 2009 China TOP100 Release (IDC #221675, January 2010)
  • Massive HPC Systems Could Redefine Scientific Research and Shift the Balance of Power Among Nations (IDC #219948, September 2009)

Synopsis

This IDC study explores the potential impact of the latest generation of large-scale supercomputers that are just now becoming available to researchers in the United States and around the world. These systems, along with the ones planned to be installed over the next few years, create an opportunity to substantially change how scientific research is accomplished in many fields.

According to Earl Joseph, IDC program vice president, High-Performance Computing, "We see the use of large-scale supercomputers becoming a driving factor in the balance of power between nations."
Copyright Notice

This IDC research document was published as part of an IDC continuous intelligence service, providing written research, analyst interactions, telebriefings, and conferences. Visit www.idc.com to learn more about IDC subscription and consulting services. To view a list of IDC offices worldwide, visit www.idc.com/offices. Please contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) or sales@idc.com for information on applying the price of this document toward the purchase of an IDC service or for information on additional copies or Web rights.

Copyright 2012 IDC. Reproduction is forbidden unless authorized. All rights reserved.