WHAT IS A SUPERCOMPUTER?
• A supercomputer is a computer with very high speed and a large memory. This kind of computer can do jobs faster than any other computer of its generation, and is usually thousands of times faster than ordinary personal computers made at the same time.
• Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations.
USES OF SUPERCOMPUTERS:
• Weather forecasting
• Climate research
• Scientific research
• Nuclear energy research
• Probabilistic analysis
• Analysis of geological data
MEASUREMENT OF SPEED:
The speed of supercomputers is measured in floating point operations per second (FLOPS), in units of:
• Kiloflops (KFLOPS)
• Megaflops (MFLOPS)
• Gigaflops (GFLOPS)
• Teraflops (TFLOPS)
• Petaflops (PFLOPS)
Each unit is 1,000 times the one before it, as the table below shows; a small C sketch after the table turns a raw FLOPS figure into these units.
Unit                  FLOPS                        Power form    Example    Key decade
Hundred FLOPS         100 FLOPS                    10^2 FLOPS    ENIAC      ~1940s
KFLOPS (kiloflops)    1,000 FLOPS                  10^3 FLOPS    IBM 704    ~1950s
MFLOPS (megaflops)    1,000,000 FLOPS              10^6 FLOPS    CDC 6600   ~1960s
GFLOPS (gigaflops)    1,000,000,000 FLOPS          10^9 FLOPS    Cray-2     ~1980s
TFLOPS (teraflops)    1,000,000,000,000 FLOPS      10^12 FLOPS   ASCI Red   ~1990s
PFLOPS (petaflops)    1,000,000,000,000,000 FLOPS  10^15 FLOPS   Jaguar     ~2010s
[Figure: performance of supercomputers over time]
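To make the units concrete, here is the promised sketch: a minimal C program (my own illustration, not from any benchmark suite) that converts a raw FLOPS figure into the nearest named unit from the table.

#include <stdio.h>

/* Convert a raw FLOPS figure to the nearest named unit.
   Each step up the table divides by 1,000 (10^3). */
static const char *units[] = { "FLOPS", "KFLOPS", "MFLOPS",
                               "GFLOPS", "TFLOPS", "PFLOPS" };

static void print_flops(double flops) {
    int i = 0;
    while (flops >= 1000.0 && i < 5) {
        flops /= 1000.0;
        i++;
    }
    printf("%.2f %s\n", flops, units[i]);
}

int main(void) {
    print_flops(3.0e6);    /* roughly CDC 6600 class: prints "3.00 MFLOPS" */
    print_flops(33.86e15); /* Tianhe-2's figure: prints "33.86 PFLOPS" */
    return 0;
}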
HISTORY OF SUPERCOMPUTERS:
• The history of supercomputing goes back to the early 1920s in the United States, with the IBM tabulators at Columbia University, and continues through a series of computers at Control Data Corporation (CDC) designed by Seymour Cray, which used innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer.
• While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear, both in the United States and in Japan, setting new records.
• By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed, and they broke through the teraflop computational barrier.
• Progress in the first decade of the 21st century was dramatic: supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.
BEGINNING: 1950s–1960s
• In 1957 a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, MN. Seymour Cray left Sperry a year later to join his colleagues at CDC. In 1960 Cray completed the CDC 1604, one of the first solid-state computers and, at the time, the fastest computer in the world.
• Cray completed the CDC 6600 in 1964. For it, Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor using the planar process.
[Image: the CDC 6600]
The Cray era: mid-1970s and 1980s:
• Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history. The Cray-1 used integrated circuits with two gates per chip and was a vector processor.
• The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines.
• The Cray-2, released in 1985, was a four-processor liquid-cooled computer, totally immersed in a tank of Fluorinert, which bubbled as it operated.
• The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz, with a peak performance of 333 megaflops per processor (see the arithmetic sketched below).
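The per-processor figure quoted for the Y-MP follows from simple arithmetic: peak FLOPS is the clock rate times the number of floating-point results per cycle times the processor count. A minimal sketch, assuming two results per cycle (an add pipe plus a multiply pipe, the assumption that makes the quoted numbers work out):

#include <stdio.h>

/* Peak FLOPS = clock rate x results per cycle x processors.
   Clock and processor count are the Y-MP figures quoted above;
   "2 results per cycle" is an assumption (add + multiply pipes). */
int main(void) {
    double clock_hz   = 166.7e6; /* ~167 MHz, i.e. a 6 ns cycle */
    double per_cycle  = 2.0;     /* assumed: one add + one multiply per cycle */
    double processors = 8.0;

    double per_cpu = clock_hz * per_cycle;   /* ~333 MFLOPS per CPU */
    double machine = per_cpu * processors;   /* ~2.7 GFLOPS total */

    printf("per CPU: %.0f MFLOPS, machine: %.2f GFLOPS\n",
           per_cpu / 1e6, machine / 1e9);
    return 0;
}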
Massive processing: the 1990s
• The Cray-2, which set the frontier of supercomputing in the mid-to-late 1980s, had only 8 processors.
• To keep increasing performance, it made sense to add more and more processors (see the sketch below).
• Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.
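The reasoning behind "add more and more processors" is usually framed with Amdahl's law (my addition; the deck does not name it): if a fraction p of a program can run in parallel, n processors give a speedup of 1 / ((1 - p) + p/n). A small C sketch:

#include <stdio.h>

/* Amdahl's law: speedup on n processors when a fraction p of the
   work parallelizes. Shows both why adding processors pays off and
   why the serial fraction eventually caps the gain. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    int counts[] = { 8, 64, 1024, 65536 };
    for (int i = 0; i < 4; i++)
        printf("p = 0.99, n = %5d -> speedup %6.1fx\n",
               counts[i], amdahl(0.99, counts[i]));
    return 0;
}

Even with 99% of the work parallel, the speedup flattens out below 100x, which is why massively parallel machines also need algorithms with very small serial fractions.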
Petaflop computing in the 21st century:
• Significant progress was made in the first decade of the 21st century, and it was shown that the power of a large number of small processors can be harnessed to achieve high performance.
• In 2004 the Earth Simulator supercomputer, built by NEC at the Japan Agency for Marine-Earth Science and Technology, reached 35.9 teraflops.
• The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century; at one point 27 of the computers on the TOP500 list used that architecture.
BLUE GENE APPROACH:
• The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption, so that a larger number of processors can be used at air-cooled temperatures. It can use over 60,000 processors, with 2,048 processors per rack, and connects them via a three-dimensional torus interconnect (sketched below).
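As a rough illustration of what a three-dimensional torus interconnect means for addressing, the sketch below computes a node's six neighbors on a 3-D grid whose coordinates wrap around at the edges. The grid dimensions here are made up for illustration; they are not Blue Gene's actual geometry.

#include <stdio.h>

/* 3-D torus addressing: each node (x, y, z) links to six neighbors,
   one each in +/-x, +/-y, +/-z, with wrap-around at the grid edges.
   Dimensions below are illustrative, not Blue Gene's real layout. */
enum { DX = 8, DY = 8, DZ = 32 };

static int wrap(int v, int dim) { return (v % dim + dim) % dim; }

static void neighbors(int x, int y, int z) {
    printf("node (%d,%d,%d):\n", x, y, z);
    printf("  +x (%d,%d,%d)  -x (%d,%d,%d)\n",
           wrap(x + 1, DX), y, z, wrap(x - 1, DX), y, z);
    printf("  +y (%d,%d,%d)  -y (%d,%d,%d)\n",
           x, wrap(y + 1, DY), z, x, wrap(y - 1, DY), z);
    printf("  +z (%d,%d,%d)  -z (%d,%d,%d)\n",
           x, y, wrap(z + 1, DZ), x, y, wrap(z - 1, DZ));
}

int main(void) {
    /* A "corner" node: wrap-around gives it the same six links
       as any interior node, which is the point of a torus. */
    neighbors(0, 0, 0);
    return 0;
}

The wrap-around links keep every node's connectivity uniform and hop counts low without the wiring cost of connecting every node to every other.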
EVOLUTION OF SUPERCOMPUTER ARCHITECTURE:

YEAR    ARCHITECTURE
1980    SIMD; single processor
1990    SMP + S2MP; MPP; single processor
2000    MPP; constellations; cluster
2010    Cluster; MPP; hybrid architecture
EVOLUTION OF SUPERCOMPUTER SYSTEM SOFTWARE:
What software do supercomputers run?
• Most supercomputers run fairly ordinary operating systems, much like the ones running on our own PCs.
• The most common supercomputer operating system used to be Unix, but it has now been superseded by Linux.
• Since supercomputers generally work on scientific problems, their application programs are sometimes written in traditional scientific programming languages such as Fortran, as well as popular, more modern languages such as C and C++ (a toy example follows).
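For flavor, here is the toy example: the kind of small numerical kernel such application programs are built from. It is my own illustration in C (trapezoidal-rule integration of sin(x) over [0, pi]), not code from any real supercomputer application.

#include <stdio.h>
#include <math.h>

/* Toy scientific kernel: trapezoidal-rule integration of sin(x)
   over [0, pi]. Real codes parallelize loops like this across
   thousands of processors. Compile with -lm. */
int main(void) {
    const double pi = acos(-1.0);
    const int    n  = 1000000;
    const double h  = pi / n;

    double sum = 0.5 * (sin(0.0) + sin(pi));
    for (int i = 1; i < n; i++)
        sum += sin(i * h);

    printf("integral ~ %.6f (exact value: 2)\n", sum * h);
    return 0;
}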
From the TOP500 list of supercomputers, the top three ranked are as follows:
1. Tianhe-2, 33.86 PFLOPS, Guangzhou, China
2. Cray Titan, 17.59 PFLOPS, Oak Ridge National Laboratory, Tennessee, USA
3. IBM Sequoia, 16.32 PFLOPS, Lawrence Livermore National Laboratory, California, USA
