This document provides an overview of supercomputers including their common uses, challenges, history and top systems. Some key points:
- Supercomputers are used for highly complex tasks like weather forecasting, climate modeling, and simulating nuclear weapons. They can process vast amounts of data and perform quadrillions of calculations per second.
- Major challenges include cooling systems to manage the large amounts of heat generated and high-speed data transfer between components.
- The US and Japan have historically dominated supercomputing. Early systems included the CDC 6600 (1964) and Cray-1 (1976). Modern systems use thousands of processors networked together.
- The top supercomputers today include China's Tianhe-2, ranked fastest by top500.org.
Table of Contents
Introduction.................................................................................................................................4
1. Some Common Uses of Supercomputers ................................................................................4
1.1 Building brains...............................................................................................................4
1.2 Predicting climate change .............................................................................................. 5
1.3 Forecasting hurricanes................................................................................................... 5
1.4 Testing nuclear weapons................................................................................................ 5
1.5 Mapping the blood stream............................................................................................. 6
1.6 Understanding earthquakes ........................................................................................... 6
1.7 Recreating the Big Bang.................................................................................................6
2. Supercomputer challenges..................................................................................................... 7
3. Operating Systems ................................................................................................................ 7
4. Processing Speeds................................................................................................................. 7
5. Energy usage and heat management...................................................................................... 8
6. History ................................................................................................................................ 9
7. Supercomputing in Pakistan...................................................................................9
8. Introducing the world's first personal supercomputer .............................................................11
9. Manufacturers of Supercomputers.........................................................................................11
10. Architecture and operating systems................................................................................... 12
Top 10 Supercomputers..............................................................................................14
11. Tianhe-2 ......................................................................................................................... 15
National Supercomputing Center in Guangzhou...........................................................................15
11.1 Tianhe-2 Specification...............................................................................15
12. Titan................................................................................................................................18
Oak Ridge National Laboratory.....................................................................................................18
12.1 Titan Specification:.......................................................................................................18
13. Sequoia............................................................................................................................22
Lawrence Livermore National Laboratory......................................................................................22
13.1 Sequoia Specification:...................................................................................................22
14. K computer..................................................................................................................... 24
RIKEN......................................................................................................................................... 24
14.1 K Computer Specification:......................................................................................... 24
15. Mira (Blue Gene/Q)..........................................................................................27
Argonne National Laboratory .......................................................................................................27
15.1 Mira (Blue Gene/Q) Specification:..............................................................27
16. Piz Daint (Cray XC30)....................................................................................... 30
Swiss National Supercomputing Centre........................................................................................ 30
16.1 Piz Daint (Cray XC30) Specification:.......................................................... 30
17. Stampede.........................................................................................................................33
Texas Advanced Computing Center...............................................................................33
17.1 Stampede Specification:..............................................................................................33
18. JUQUEEN .........................................................................................................................35
Forschungszentrum Jülich............................................................................................................35
18.1 JUQUEEN specification .............................................................................................35
19. Vulcan (Blue Gene/Q) ...................................................................................................... 38
Lawrence Livermore National Laboratory..................................................................................... 38
19.1 Vulcan Specification .................................................................................................. 38
20. Cray CS-Storm...............................................................................................40
20.1 Cray CS-Storm Specification: ..................................................................... 40
21. Supercomputer based on Xeon processors:................................................................... 42
22. Supercomputer based on IBM Power BQC Processors.................................................. 43
23. Supercomputer based on Fujitsu SPARC64 VIIIfx Processor ................................................44
24. Supercomputer based on NVIDIA Tesla/Intel Phi processor .................................................. 45
25. Efficiency and Performance chart:................................................................................. 46
References: ............................................................................................................................... 47
Introduction
A supercomputer is the fastest type of computer. Supercomputers are very expensive and
are employed for specialized applications that require large amounts of mathematical
calculations. The chief difference between a supercomputer and a mainframe is that a
supercomputer channels all its power into executing a few programs as fast as possible,
whereas a mainframe uses its power to execute many programs concurrently.
1. Some Common Uses of Supercomputers
Supercomputers are used for highly calculation-intensive tasks such as problems involving
quantum mechanical physics, weather forecasting, climate research, molecular modeling
(computing the structures and properties of chemical compounds, biological
macromolecules, polymers, and crystals), physical simulations (such as simulation of
airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research
into nuclear fusion), cryptanalysis, and many others. Major universities, military agencies
and scientific research laboratories depend on and make use of supercomputers very
heavily.
1.1 Building brains
So how do supercomputers stack up to human
brains? Well, they're really good at computation: It
would take 120 billion people with 120 billion
calculators 50 years to do what the Sequoia
supercomputer will be able to do in a day. But when it
comes to the brain's ability to process information in
parallel by doing many calculations simultaneously,
even supercomputers lag behind. Dawn, a
supercomputer at Lawrence Livermore National
Laboratory, can simulate the brain power of a cat — but 100 to 1,000 times slower than a
real cat brain.
Nonetheless, supercomputers are useful for modeling the nervous system. In 2006,
researchers at the École Polytechnique Fédérale de Lausanne in Switzerland successfully
simulated a 10,000-neuron chunk of a rat brain called a neocortical unit. With enough of
these units, the scientists on this so-called "Blue Brain" project hope to eventually build a
complete model of the human brain.
The brain would not be an artificial intelligence system, but rather a working neural circuit
that researchers could use to understand brain function and test virtual psychiatric
treatments. But Blue Brain could be even better than artificial intelligence, lead researcher
Henry Markram told The Guardian newspaper in 2007: "If we build it right, it should speak."
1.2 Predicting climate change
The challenge of predicting global climate is immense.
There are hundreds of variables, from the reflectivity of
the earth's surface (high for icy spots, low for dark
forests) to the vagaries of ocean currents. Dealing with
these variables requires supercomputing capabilities.
Computer power is so coveted by climate scientists that
the U.S. Department of Energy gives out access to its
most powerful machines as a prize.
The resulting simulations both map out the past and look into the future. Models of the
ancient past can be matched with fossil data to check for reliability, making future
predictions stronger. New variables, such as the effect of cloud cover on climate, can be
explored. One model, created in 2008 at Brookhaven National Laboratory in New York,
mapped the aerosol particles and turbulence of clouds to a resolution of 30 square feet.
These maps will have to become much more detailed before researchers truly understand
how clouds affect climate over time.
1.3 Forecasting hurricanes
With Hurricane Ike bearing down on the Gulf Coast in
2008, forecasters turned to Ranger for clues about
the storm's path. This supercomputer, with its
cowboy moniker and 579 trillion calculations per
second of processing power, resides at the Texas
Advanced Computing Center (TACC) in Austin, Texas.
Using data directly from National Oceanic and
Atmospheric Administration (NOAA) aircraft, Ranger
calculated likely paths for the storm.
According to a TACC report, Ranger improved the five-day hurricane forecast by 15 percent.
Simulations are also useful after a storm. When Hurricane Rita hit Texas in 2005, Los Alamos
National Laboratory in New Mexico lent manpower and computer power to model
vulnerable electrical lines and power stations, helping officials make decisions about
evacuation, power shutoff and repairs.
1.4 Testing nuclear weapons
Since 1992, the United States has banned the testing
of nuclear weapons. But that doesn't mean the nuclear
arsenal is out of date.
The Stockpile Stewardship program uses non-nuclear lab
tests and, yes, computer simulations to ensure that the
country's cache of nuclear weapons are functional and
safe. In 2012, IBM unveiled a new supercomputer, Sequoia, at Lawrence Livermore
National Laboratory in California. According to IBM, Sequoia is a 20-petaflop machine,
meaning it is capable of performing twenty thousand trillion calculations each second.
Sequoia's prime directive is to create better simulations of nuclear explosions and to do
away with real-world nuke testing for good.
1.5 Mapping the blood stream
Think you have a pretty good idea of how your
blood flows? Think again. The total length of all of
the veins, arteries and capillaries in the human body
is between 60,000 and 100,000 miles. To map blood
flow through this complex system in real time,
Brown University professor of applied mathematics
George Karniadakis works with multiple laboratories
and multiple computer clusters.
In a 2009 paper in the journal Philosophical Transactions of the Royal Society, Karniadakis
and his team describe the flow of blood through the brain of a typical person compared with
blood flow in the brain of a person with hydrocephalus, a condition in which cranial fluid
builds up inside the skull. The results could help researchers better understand strokes,
traumatic brain injury and other vascular brain diseases, the authors write.
1.6 Understanding earthquakes
Other supercomputer simulations hit closer to home.
By modeling the three-dimensional structure of the
Earth, researchers can predict how earthquake waves
will travel both locally and globally. It's a problem that
seemed intractable two decades ago, says Princeton
geophysicist Jeroen Tromp. But by using
supercomputers, scientists can solve very complex
equations that mirror real life.
"We can basically say, if this is your best model of what the earth looks like in a 3-D sense,
this is what the waves look like," Tromp said.
By comparing any remaining differences between simulations and real data, Tromp and his
team are perfecting their images of the earth's interior. The resulting techniques can be
used to map the subsurface for oil exploration or carbon sequestration, and can help
researchers understand the processes occurring deep in the Earth's mantle and core.
1.7 Recreating the Big Bang
It takes big computers to look into the biggest question of all: What is the origin of the
universe?
The "Big Bang," or the initial expansion of all energy and
matter in the universe, happened more than 13 billion
years ago in trillion-degree Celsius temperatures, but
supercomputer simulations make it possible to observe
what went on during the universe's birth. Researchers
at the Texas Advanced Computing Center (TACC) at the
University of Texas in Austin have also used supercomputers to simulate the formation of
the first galaxy, while scientists at NASA’s Ames Research Center in Mountain View, Calif.,
have simulated the creation of stars from cosmic dust and gas.
Supercomputer simulations also make it possible for physicists to answer questions about
the unseen universe of today. Invisible dark matter makes up about 25 percent of the
universe, and dark energy makes up more than 70 percent, but physicists know little about
either. Using powerful supercomputers like IBM's Roadrunner at Los Alamos National
Laboratory, researchers can run models that require upward of a thousand trillion
calculations per second.
2. Supercomputer challenges
A supercomputer generates large amounts of heat and therefore must be cooled with
complex cooling systems to ensure that no part of the computer fails. Many of these cooling
systems use liquefied gases, which can get extremely cold.
Another issue is the speed at which information can be transferred or written to a storage
device, as the speed of data transfer will limit the supercomputer's performance.
Information cannot move faster than the speed of light between two parts of a
supercomputer.
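The speed-of-light bound mentioned above is easy to quantify. A minimal sketch (the cable-run distance and clock rate below are assumed figures for illustration, not measurements from any particular system):

```python
# Minimum one-way signal latency across a machine-room-scale system.
SPEED_OF_LIGHT = 3.0e8          # metres per second (approximate)

distance_m = 30.0               # assumed cable run across a large installation
min_latency_s = distance_m / SPEED_OF_LIGHT     # 1e-7 s = 100 nanoseconds

# At an assumed 1 GHz clock, that latency alone spans many cycles:
cycles_at_1ghz = min_latency_s * 1e9            # ~100 clock cycles
```

Even before any switching or protocol overhead, a signal crossing the machine costs on the order of a hundred processor cycles, which is why physical layout matters so much in supercomputer design.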
Supercomputers consume and produce massive amounts of data in a very short period of
time. Much work on external storage bandwidth is needed to ensure that this information
can be transferred quickly and stored/retrieved correctly.
3. Operating Systems
Most supercomputers run on a Linux or Unix operating system, as these operating systems
are extremely flexible, stable, and efficient. Supercomputers typically have multiple
processors and a variety of other technological tricks to ensure that they run smoothly.
4. Processing Speeds
Supercomputer computational power is rated in FLOPS (Floating Point Operations Per
Second). The first commercially available supercomputers reached speeds of 10 to 100
million FLOPS. The next generation of supercomputers is predicted to break the petaflop
level. This would represent computing power more than 1,000 times faster than a teraflop
machine. A relatively old supercomputer such as the Cray C90 (introduced in the early
1990s) has a processing speed of only 8 gigaflops, yet it can solve in 0.002 seconds a
problem that takes a personal computer a few hours. From this, we can see the vast
development in the processing speed of supercomputers.
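As a back-of-the-envelope sketch of this unit arithmetic (the personal-computer speed below is an assumption for illustration, not a figure from this document, so the resulting time differs from the 0.002-second example above):

```python
# FLOPS unit prefixes as powers of ten.
MEGA, GIGA, TERA, PETA = 1e6, 1e9, 1e12, 1e15

# A petaflop machine is 1,000 times faster than a teraflop machine.
speedup_over_teraflop = PETA / TERA          # 1000.0

# Time for an 8-gigaflop Cray C90 to finish a job that keeps a
# hypothetical 100-MFLOPS personal computer busy for three hours.
pc_flops = 100 * MEGA                        # assumed PC speed
job_ops = pc_flops * 3 * 3600                # total floating-point operations
c90_seconds = job_ops / (8 * GIGA)           # 135 seconds
```

The exact ratio depends entirely on the baseline machine assumed, which is why such comparisons vary so widely between sources.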
The site http://www.top500.org/ is dedicated to providing information about the current
500 sites with the fastest supercomputers. Both the list and the content at this site are
updated regularly, providing those interested with a wealth of information about the
developments in supercomputing technology.
5. Energy usage and heat management
A typical supercomputer consumes large amounts of electrical power, almost all of which is
converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts
of electricity. The cost to power and cool the system can be significant: 4 MW at
$0.10/kWh is $400 an hour, or about $3.5 million per year.
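The dollar figures follow directly from the power draw. A quick sketch of that arithmetic, using the $0.10/kWh example rate from the text:

```python
def power_cost(megawatts: float, usd_per_kwh: float = 0.10):
    """Hourly and yearly electricity cost for a given continuous power draw."""
    kilowatts = megawatts * 1000
    per_hour = kilowatts * usd_per_kwh
    per_year = per_hour * 24 * 365
    return per_hour, per_year

per_hour, per_year = power_cost(4.0)  # a 4 MW system, as in the text
print(per_hour)   # about $400 per hour
print(per_year)   # about $3.5 million per year
```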
Heat management is a major issue in complex electronic devices, and affects powerful
computer systems in various ways. The thermal design power and CPU power dissipation
issues in supercomputing surpass those of traditional computer cooling technologies. The
supercomputing awards for green computing reflect this issue.
The packing of thousands of processors together inevitably generates significant heat
density that needs to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert
"cooling waterfall" which was forced through the modules under pressure. However, the
submerged liquid cooling approach was not practical for multi-cabinet systems based on
off-the-shelf processors, and in System X a special cooling system that combined air
conditioning with liquid cooling was developed in conjunction with the Liebert company.
In the Blue Gene system, IBM deliberately used low-power processors to deal with heat
density. The IBM Power 775, released in 2011, has closely packed elements that require
water cooling. The IBM Aquasar system, on the other hand, uses hot water cooling to
achieve energy efficiency, the water being used to heat buildings as well.
The energy efficiency of computer systems is generally measured in terms of "FLOPS per
Watt". In 2008 IBM's Roadrunner operated at 376 MFLOPS/Watt. In November 2010, the
Blue Gene/Q reached 1684 MFLOPS/Watt. In June 2011 the top 2 spots on the Green 500
list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W)
with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.
6. History
The history of supercomputing goes back to the 1960s when a
series of computers at Control Data Corporation (CDC) were
designed by Seymour Cray to use innovative designs and
parallelism to achieve superior computational peak
performance. The CDC 6600, released in 1964, is generally
considered the first supercomputer.
Cray left CDC in 1972 to form his own company. Four years
later, in 1976, Cray delivered the 80 MHz Cray-1, and it
became one of the most successful supercomputers in
history. The Cray-2, released in 1985, was an 8-processor
liquid-cooled computer; Fluorinert was pumped through it as
it operated. It performed at 1.9 gigaflops and was the
world's fastest until 1990.
While the supercomputers of the 1980s used only a few processors, in the 1990s machines
with thousands of processors began to appear both in the United States and in Japan, setting
new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer
used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops
per processor. The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by
using 2048 processors connected via a fast three-dimensional crossbar network. The Intel
Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was
ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected
processors via a high-speed two-dimensional mesh, allowing processes to execute on
separate nodes, communicating via the Message Passing Interface.
1954: The IBM 704, considered by some to be the world's first supercomputer, took up a
whole room and was designed for engineering and scientific calculations.
7. Supercomputing in Pakistan
Supercomputing is a recent area of technology in which Pakistan has made progress, driven
in part by the growth of the information technology age in the country. The fastest
supercomputer currently in use in Pakistan is developed and hosted by the National University
of Sciences and Technology at its modeling and simulation research center. As of November
2012, there are no supercomputers from Pakistan on the Top500 list.
The initial interest of Pakistan in the research and development of supercomputing began
during the early 1980s, at several high-powered institutions of the country. During this time,
senior scientists at the Pakistan Atomic Energy Commission (PAEC) were the first to engage in
research on high performance computing, while calculating and determining exact values
involving fast-neutron calculations. According to one scientist involved in the development of
the supercomputer, a team of the leading scientists at PAEC developed powerful
computerized electronic codes, acquired powerful high performance computers to design this
system, and came up with the first design that was to be manufactured, as part of the atomic
bomb project. However, the most productive and pioneering research was carried out by
physicist M.S. Zubairy at the Institute of Physics of Quaid-e-Azam University. Zubairy
published two important books on quantum computers and high-performance computing
during his career that are presently taught worldwide. In the 1980s and 1990s, scientific
research and mathematical work on supercomputers was also carried out by
mathematician Dr. Tasneem Shah at the Kahuta Research Laboratories while trying to solve
additive problems in computational mathematics and statistical physics using the Monte
Carlo method. During most of the 1990s, supercomputer imports were denied to Pakistan,
as well as India, under an arms embargo, as foreign powers feared that imported
supercomputing technology was dual-use and could be applied to developing nuclear
weapons.
During the Bush administration, in an effort to help US-based companies gain competitive
ground in developing information technology-based markets, the U.S. government eased
regulations that applied to exporting high-performance computers to Pakistan and four other
technologically developing countries. The new regulations allowed these countries to import
supercomputer systems that were capable of processing information at a speed of 190,000
million theoretical operations per second (MTOPS); the previous limit had been 85,000
MTOPS.
The COMSATS Institute of Information Technology (CIIT) has been actively involved in
research in the areas of parallel computing and computer cluster systems. In 2004, CIIT built
a cluster-based supercomputer for research purposes. The project was funded by the Higher
Education Commission of Pakistan. The Linux-based computing cluster, which was tested and
configured for optimization, achieved a performance of 158 GFLOPS. The packaging of the
cluster was locally designed.
National University of Sciences and Technology (NUST) in Islamabad has developed the fastest
supercomputing facility in Pakistan to date. The supercomputer, which operates at the
university's Research Centre for Modeling and Simulation (RCMS), was inaugurated in
September 2012. The supercomputer has parallel computation abilities and a performance
of 132 teraflops (i.e. 132 trillion floating point operations per second), making it the fastest
graphics processing unit (GPU) parallel computing system currently in operation in Pakistan.
It has multi-core processors and graphics co-processors, with an inter-process communication
speed of 40 gigabits per second. According to the available specifications, the cluster consists
of a "66 node supercomputer with 30,992 processor cores, 2 head nodes (16 processor cores),
32 dual quad-core computer nodes (256 processor cores) and 32 Nvidia computing processors.
Each processor has 960 processor cores (30,720 processor cores), QDR InfiniBand
interconnection and 21.6 TB SAN storage."
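The core counts in the quoted specification are internally consistent, which a short check makes visible (all figures are taken from the quote above):

```python
head_cores = 16          # 2 head nodes (16 processor cores)
cpu_cores = 32 * 8       # 32 dual quad-core nodes -> 256 cores
gpu_cores = 32 * 960     # 32 Nvidia computing processors, 960 cores each

print(gpu_cores)                           # 30720, matching the quote
print(head_cores + cpu_cores + gpu_cores)  # 30992, the stated total
```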
8. Introducing the world's first personal supercomputer
The world's first personal supercomputer, which is 250 times faster than the average PC,
has been unveiled. Although at £4,000 it is beyond the reach of most consumers, the
high-performance processor could become invaluable to universities and medical
institutions. The revolutionary Tesla supercomputer, which could prove invaluable to
medical researchers and accelerate the discovery of cancer treatments, was launched
in London.
The desktop workstations are built with innovative NVIDIA graphics processing units (GPUs),
which are capable of handling simultaneous calculations usually relegated to £70,000
supercomputing 'clusters' that take up entire rooms.
'The technology represents a great leap forward in the history of computing,' NVIDIA
spokesman Benjamin Berraondo said.
'It is a change equivalent to the invention of the microchip.'
PhD students at Cambridge and Oxford Universities and MIT in America are already using
GPU-based personal supercomputers for research.
Scientists believe the new systems could help find cures for diseases.
The device lets them run hundreds of thousands of science codes to create a shortlist of
drugs that are most likely to offer potential cures.
'This exceptional speedup has the ability to accelerate the discovery of potentially life-saving
anti-cancer drugs,' said Jack Collins from the Advanced Biomedical Computing Centre in
Maryland.
The new computers make innovative use of the revolutionary graphics processing units,
which NVIDIA claims could bring lightning speeds to the next generation of home computers.
'A traditional processor handles one task at a time in a linear style, but GPUs work on tasks
simultaneously to do things such as get color pixels together on screens to present moving
images,' Mr Berraondo said.
'So while downloading a film onto an iPod would take up to six hours on a traditional system,
a graphics card could bring this down to 20 minutes.'
The supercomputers, made by a number of UK-based companies including Viglen, Armari and
Dell, are currently on sale to universities and to the science and research community.
PC maker Dell said they would soon be mass producing them for the general consumer
market.
9. Manufacturers of Supercomputers
IBM
Aspen Systems
SGI
Cray Research
Compaq
Hewlett-Packard
Thinking Machines
Cray Computer Corporation
Control Data Corporation
10. Architecture and operating systems
As of November 2014, TOP500 supercomputers are mostly based on x86-64 CPUs (the Intel
EM64T and AMD AMD64 instruction set architectures), with a few exceptions (all RISC-based):
39 supercomputers based on the Power Architecture used by IBM POWER microprocessors,
three SPARC-based systems (including two Fujitsu SPARC machines, one of which made the
top spot in 2011 without a GPU and is currently ranked fourth), and one ShenWei-based
system (ranked 11th in 2011, 65th in November 2014) making up the remainder. Prior to the
ascendance of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor
families made up the majority of TOP500 supercomputers, including SPARC, MIPS, PA-RISC
and Alpha.
In recent years heterogeneous computing, mostly using NVIDIA's graphics processing units
(GPUs) as coprocessors, has become a popular way to reach better performance per watt and
higher absolute performance; it is almost required for good performance and to make the top
(or top 10), with some exceptions, such as the aforementioned SPARC computer without any
coprocessors. An x86-based coprocessor, the Xeon Phi, has also been used.
Share of processor architecture families in TOP500 supercomputers, by time trend.
All the fastest supercomputers in the decade since the Earth Simulator supercomputer have
used a Linux-based operating system. As of November 2014, 485 (97%) of the world's fastest
supercomputers use the Linux kernel. The remaining 3% run some Unix variant (AIX for all of
them except one), with one supercomputer running Windows and one with a "mixed"
operating system. Within those 97% are the most powerful supercomputers, including the
top ten.
Since November 2014, the Windows Azure cloud computer is no longer on the list of fastest
supercomputers (its best rank was 165 in 2012), leaving the Shanghai Supercomputer Center's
"Magic Cube" as the only Windows-based supercomputer, running Windows HPC 2008 and
ranked 360 (its best rank was 11 in 2008).
11. Tianhe-2
National Supercomputing Center in Guangzhou, China, 2013
Manufacturer: NUDT
Cores: 3,120,000
Power: 17.6 megawatts
Interconnect: Custom
Operating System: Kylin Linux
11.1 Tianhe-2 Specification
125 cabinets house 16,000 compute nodes, each of which contains two Intel Xeon
(Ivy Bridge) CPUs and three 57-core Intel Xeon Phi accelerator cards.
Each compute node has a total of 88 GB of RAM.
There are a total of 3,120,000 Intel cores and 1.404 petabytes of RAM, making Tianhe-2
by far the largest installation of Intel CPUs in the world.
The system has a theoretical peak performance of 54.9 petaflops (floating point
operations per second; one petaflop is 10^15 operations per second).
Most microprocessors can carry out 4 FLOPs per clock cycle, so a single 2.5 GHz core has
a theoretical peak of 10 billion FLOPS = 10 GFLOPS.
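These figures can be cross-checked. The sketch below assumes 12 cores per Ivy Bridge Xeon (a published spec for Tianhe-2's CPUs, not stated in the text) and reuses the text's 4 FLOPs/cycle example:

```python
# Core count: 16,000 nodes, each with two Xeon CPUs and three 57-core Xeon Phis.
XEON_CORES = 12  # assumption: 12-core Ivy Bridge parts (not given in the text)
cores = 16_000 * (2 * XEON_CORES + 3 * 57)
print(cores)     # 3120000, the stated total

# Per-core example from the text: 4 FLOPs/cycle at 2.5 GHz.
per_core_flops = 4 * 2.5e9
print(per_core_flops)  # 1e10 -> 10 GFLOPS
```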
Storage System:
256 I/O nodes and 64 storage servers, with a total capacity of 12.4 PB
I/O nodes:
Burst I/O bandwidth: 5 GB/s
Cooling System:
Close-coupled chilled water cooling
High cooling capacity, using 80 kW of power
Uses the city cooling system to supply chilled water
Programming languages supported:
OpenCL, OpenMP, CUDA or OpenACC, depending on the programmer's choice.
12. Titan
Oak Ridge National Laboratory, United States, 2012
Manufacturer: Cray
Cores: 299,008 CPU cores
Power: Total power consumption for Titan should be around 9 megawatts under full load and
around 7 megawatts during typical use. The building that Titan is housed in has over 25
megawatts of power delivered to it.
Interconnect: Gemini
Operating System: Cray Linux Environment
12.1 Titan Specification:
200 cabinets in total. Inside each cabinet are Cray XK7 boards, each of which has four
AMD G34 sockets and four PCIe slots.
20+ PF peak performance, or more than 20,000 trillion calculations per second.
299,008 CPU cores in total.
Total CPU memory is 710 terabytes.
18,688 compute nodes.
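The node and core totals above are consistent if each compute node carries one 16-core CPU; the 16-core figure is an assumption from Titan's published Opteron specs, not from the text:

```python
nodes = 18_688
cpu_cores_per_node = 16  # assumption: one 16-core AMD Opteron per node
print(nodes * cpu_cores_per_node)  # 299008, the stated CPU core total
```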
13. Sequoia
Lawrence Livermore National Laboratory
United States, 2013
Manufacturer: IBM
Cores: 1,572,864 processor cores
Power: 7.9 MW
Interconnect: 5-dimensional torus topology
Operating System: Red Hat Enterprise Linux
13.1 Sequoia Specification:
96 racks containing 98,304 compute nodes. The compute nodes are 16-core PowerPC
A2 processor chips with 16 GB of DDR3 memory each.
Sequoia is a Blue Gene/Q design, building on previous Blue Gene designs.
Sequoia is used primarily for nuclear weapons simulation, replacing the Blue Gene/L and
ASC Purple supercomputers at Lawrence Livermore National Laboratory.
Sequoia is also available for scientific purposes such as astronomy, energy, lattice QCD,
study of the human genome, and climate change.
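Sequoia's totals follow from the rack layout. The 1,024 nodes per rack used below is implied by 96 racks holding 98,304 nodes (and matches the standard Blue Gene/Q rack population):

```python
racks = 96
nodes_per_rack = 1024   # implied: 98,304 nodes / 96 racks
cores_per_node = 16     # 16-core PowerPC A2 chips, per the text

nodes = racks * nodes_per_rack
print(nodes)                   # 98304 compute nodes
print(nodes * cores_per_node)  # 1572864 cores, matching the header
```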
14. K computer
RIKEN, Japan, 2011
Manufacturer: Fujitsu
Cores: 640,000
Power: 12.6 MW
Interconnect: six-dimensional torus interconnect
Operating System: Linux kernel
14.1 K Computer Specification:
The K computer comprises over 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors
contained in 864 cabinets, for a total of over 640,000 cores.
Each cabinet contains 96 computing nodes, in addition to 6 I/O nodes. Each computing
node contains a single processor and 16 GB of memory.
The computer's water cooling system is designed to minimize failure rate and power
consumption.
K set a record with a performance of 8.162 petaflops, making it the fastest
supercomputer at the time.
The K computer has one of the most complex water cooling systems in the world.
Although the K computer reported the highest total power consumption of any 2011
TOP500 supercomputer (9.89 MW, the equivalent of almost 10,000 suburban homes), it
is relatively efficient, achieving 824.6 GFLOPS/kW. This is 29.8% more efficient than
China's NUDT TH MPP (ranked #2 in 2011), and 225.8% more efficient than Oak Ridge's
Jaguar (Cray XT5-HE, ranked #3 in 2011).
Target performance was 10 petaflops, but it was recorded at 8.162 petaflops.
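The efficiency figure can be reproduced from the performance and power numbers given above (note GFLOPS/kW and MFLOPS/W are the same unit):

```python
linpack_flops = 8.162e15  # 8.162 petaflops, from the text
power_watts = 9.89e6      # 9.89 MW, from the text

mflops_per_watt = linpack_flops / power_watts / 1e6
print(round(mflops_per_watt, 1))  # ~825, close to the cited 824.6 GFLOPS/kW
```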
Cooling System for the K computer:
The K computer has one of the world's most complex cooling systems, divided into two parts
as follows:
Item Configuration
Processor 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets
Interconnect interconnected in a 6-dimensional torus topology
Memory 16 GB (2 GB/core) per node
Storage 1 PB
Cabinet 864 cabinets, each with 96 computing nodes and 6 I/O nodes
Power 12.6 MW
Cooling Complex water cooling system
15. Mira (Blue Gene/Q)
Argonne National Laboratory
United States, 2013
Manufacturer: IBM
Cores: 786,432
Power: 3.9 MW
Interconnect: five-dimensional torus interconnect
Operating System: CNK (Compute Node Kernel) is the node-level operating system for the
IBM Blue Gene supercomputer. A CNK instance runs on each of the compute nodes. CNK is a
lightweight kernel that supports a single application running for a single user on that node.
For the sake of efficient operation, the design of CNK was kept simple and minimal, and it
was implemented in about 5,000 lines of C++ code.
15.1 Mira (Blue Gene/Q) Specification:
The Blue Gene/Q Compute chip is an 18-core chip; its 64-bit PowerPC A2 processor cores
run at 1.6 GHz.
16 processor cores are used for computing, and a 17th core for operating system assist
functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is a
redundant spare, used to increase manufacturing yield.
Each Mira rack has a total of 1,024 compute nodes, 16,384 user cores and 16 TB of RAM.
The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers
a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip
measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is
mounted on a compute card along with 16 GB of DDR3 DRAM.
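The 204.8 GFLOPS chip peak follows from 16 compute cores at 1.6 GHz, assuming 8 FLOPs per core per cycle (the 4-wide fused multiply-add of Blue Gene/Q's QPX unit, a published figure not stated in the text):

```python
compute_cores = 16
clock_hz = 1.6e9
flops_per_cycle = 8  # assumption: 4-wide SIMD fused multiply-add per core

peak = compute_cores * clock_hz * flops_per_cycle
print(peak / 1e9)    # 204.8 GFLOPS, matching the stated chip peak
```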
Applications:
Item Configuration
Processor The Blue Gene/Q Compute chip is an 18-core chip; its 64-bit PowerPC A2
processor cores run at 1.6 GHz
Interconnect interconnected in a 5-dimensional torus topology
Memory 16 TB RAM
Storage 768 TiB
Cabinet 16 compute drawers hold a total of 512 compute nodes, electrically interconnected
in a 5D torus configuration. Racks have two midplanes, thus 32 compute drawers, for a total
of 1,024 compute nodes.
Power 3.9 MW
Cooling 91% of the cooling is provided by water and 9% is accomplished by air
Cooling System of Mira (Blue Gene/Q):
For the Blue Gene/Q compute rack, approximately 91% of the cooling is provided by water
and 9% of the cooling is accomplished by air. For the air-cooled portion, air is drawn into the
rack from both the front and back, and hot air is exhausted out the top of the rack. For
compute racks, a hot/cold aisle is not required.
16. Piz Daint (Cray XC30)
Swiss National Supercomputing Centre
Switzerland, 2013
Manufacturer: Cray
Cores: 115,984
Power: 2,325 kW
Interconnect: Aries interconnect
Operating System: Cray Linux Environment
16.1 Piz Daint (Cray XC30) Specification:
64-bit Intel Xeon processor E5 family; up to 384 per cabinet.
Peak performance: initially up to 99 TFLOPS per system cabinet.
For added performance, the Piz Daint supercomputer features NVIDIA graphics processing
units; though it has 116,000 cores, it is capable of 6.3 petaflops of performance.
Cray continues to advance its cooling efficiency advantages, integrating a combination of
vertical liquid coil units per compute cabinet and transverse air flow reused through the
system. Fans in blower cabinets can be hot-swapped, and the system yields "room
neutral" air exhaust.
A full line of FC, SAS and IB based disk arrays, with support for FC and SATA disk drives,
forms the data storage system.
The Cray XC30 series architecture implements two processor engines per compute node,
and has four compute nodes per blade. Compute blades stack in eight pairs (16 to a
chassis) and each cabinet can be populated with up to three chassis, culminating in 384
sockets per cabinet.
Item Configuration
Processor 64-bit Intel Xeon processor E5 family; up to 384 per cabinet
Interconnect Aries interconnect
Storage FC, SAS and IB based disk arrays with FC and SATA disk drives
Cabinet two processor engines per compute node, four compute nodes per blade, 16 blades
per chassis, up to three chassis (384 sockets) per cabinet
Cooling vertical liquid coil units per compute cabinet with transverse air flow
18. JUQUEEN
Forschungszentrum Jülich, Germany, 2013
Manufacturer: IBM
Cores: 458,752
Power: 2,301.00 kW
Interconnect: 5D torus interconnect
Operating System: SUSE Linux Enterprise Server
18.1 JUQUEEN Specification
294,912 processor cores, 144 terabytes of memory, 6 petabytes of storage in 72 racks.
Peak performance of about one petaFLOPS.
The initial system consisted of a single rack (1,024 compute nodes) and 180 TB of storage.
90% water cooling (18-25°C, demineralized, closed circuit); 10% air cooling.
Temperature: in: 18°C, out: 27°C.
Simple core, designed for excellent power efficiency and small footprint.
Embedded 64-bit PowerPC-compliant core with an integrated health monitoring system.
2 links (4 GB/s in, 4 GB/s out) feed an I/O PCIe port.
Performance comparison
Hot water and cooled water for cooling purposes:
Here, water flows through the machine and is gradually replaced by cooled water.
19. Vulcan (Blue Gene/Q)
Lawrence Livermore National Laboratory
United States, 2013
Manufacturer: IBM
Cores: 393,216
Power: 1,972.00 kW
Interconnect: 5D torus interconnect
Operating System: CNK (Compute Node Kernel), the node-level operating system for the
IBM Blue Gene supercomputer
19.1 Vulcan Specification
Vulcan is equipped with Power BQC 16-core 1.6 GHz processors.
The Vulcan supercomputer has roughly 400,000 cores that perform at 4.3 petaflops. This
supercomputer is used by the U.S. Department of Energy's National Nuclear Security
Administration at Lawrence Livermore National Laboratory.
Vulcan uses a massively parallel architecture and PowerPC processors, more than 393,000
cores in all.
Vulcan is a smaller version of Sequoia, the 20 petaflop/s system that was ranked the world's
fastest supercomputer in June 2012. The Vulcan supercomputer at LLNL is now available for
collaborative work with industry and research universities to advance science and accelerate
technological innovation through the High Performance Computing Innovation Center.
Cooling System:
The Vulcan installation mirrored the original Sequoia design, including the cooling system and
the piping design. Akima Construction Services (ACS) applied similar processes using up to
12-in. Aquatherm Blue Pipe® (formerly Climatherm), and because the team was under a
three-month turnaround time constraint, Aquatherm's reliability and installation time and
labor savings were key to the project. Similar to the Sequoia installation, the computer
manufacturer had established strict water treatment and water quality requirements for the
cooling system, and Aquatherm's chemical inertness played a key role in meeting that
requirement.
Item Configuration
Processor Power BQC 16-core 1.6 GHz processors; roughly 400,000 cores performing at
4.3 petaflops
Interconnect 5D torus interconnect
Memory 393,216 GB of memory
Storage 14 PB
Cabinet 24 racks
Power 1,972 kW
Cooling Similar to Sequoia; strict water quality requirements
20. Cray CS-Storm
United States, 2014
Manufacturer: Cray Inc.
Cores: 72,800
Power: 1,498.90 kW
Interconnect: InfiniBand FDR
Operating System: Linux
20.1 Cray CS-Storm Specification:
The Cray CS series of cluster supercomputers offers a scalable architecture of high
performance servers, network and software tools that can be fully integrated and managed
as stand-alone systems.
The CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density
multi-GPU server nodes, is designed for massively parallel computing workloads.
Each of these servers integrates eight accelerators and two Intel Xeon processors,
delivering 246 GPU teraflops of compute performance in one 48U rack.
The system can support both single- and double-precision floating-point applications.
Up to eight NVIDIA Tesla K40 GPU accelerators per node.
Optional passive or active chilled cooling rear-door heat exchangers.
A four-cabinet Cray CS-Storm system is capable of delivering more than one petaflop of
peak performance.