Overview of Supercomputers
Presented by: Mehmet Demir, 20090694, ENG-102
Supercomputers
The category of computers that includes the fastest, most powerful (and most expensive) machines available at any given time. They are designed to solve complex mathematical equations and computational problems very quickly.
What Are They Used For
- Climate prediction and weather forecasting
What Are They Used For (cont.)
- Computational chemistry
- Crash analysis
- Cryptography
- Nuclear simulation
- Structural analysis
How Do They Differ From a Personal Computer
- Cost: ranges from hundreds of thousands to millions of dollars
- Environment: most require environmentally controlled rooms
- Peripherals: lack sound cards, graphics boards, keyboards, etc.; accessed via a workstation or PC
- Programming language: FORTRAN
History
Seymour Cray (1925–1996):
- Developed the CDC 1604, the first fully transistorized supercomputer (1958)
- CDC 6600 (1965), 9 MFlops
- Founded Cray Research in 1972 (now Cray Inc.)
- CRAY-1 (1976), $8.8 million, 160 MFlops
- CRAY-2 (1985)
- CRAY-3 (1989)
Early Timeline of Supercomputers

Period      Supercomputer        Peak speed                                     Location
1943-1944   Colossus             5,000 characters per second                    Bletchley Park, England
1945-1950   Manchester Mark I    500 instructions per second                    University of Manchester, England
1950-1955   MIT Whirlwind        20 KIPS (CRT memory), 40 KIPS (core)           Massachusetts Institute of Technology, Cambridge, MA
1956-1958   IBM 704              40 KIPS, 12 kiloflops
1958-1959   IBM 709              40 KIPS, 12 kiloflops
1959-1960   IBM 7090             210 kiloflops                                  U.S. Air Force BMEWS (RADC), Rome, NY
1960-1961   LARC                 500 kiloflops (2 CPUs)                         Lawrence Livermore Laboratory, California
1961-1964   IBM 7030 "Stretch"   1.2 MIPS, ~600 kiloflops                       Los Alamos National Laboratory, New Mexico
1965-1969   CDC 6600             10 MIPS, 3 megaflops                           Lawrence Livermore Laboratory, California
1969-1975   CDC 7600             36 megaflops                                   Lawrence Livermore Laboratory, California
1974-1975   CDC Star-100         100 megaflops (vector), ~2 megaflops (scalar)  Lawrence Livermore Laboratory, California
1975-1983   Cray-1               80 megaflops (vector), 72 megaflops (scalar)   Los Alamos National Laboratory, New Mexico (1976)
Where Are They Now
- www.top500.org: list released twice a year
- Scores based on the Linpack benchmark: solving a dense system of linear equations
- Speed measured in floating-point operations per second (FLOPS)
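The Linpack idea above can be sketched in a few lines: time the solve of a dense n x n linear system and divide the standard Linpack operation count by the elapsed time. This is only an illustration of how a FLOPS score is derived, not the official HPL benchmark; NumPy is assumed here, and the function name is my own.

```python
import time
import numpy as np

def linpack_like_gflops(n=2000, seed=0):
    """Estimate sustained GFLOPS by solving a dense n x n linear system,
    the same kind of operation the Linpack benchmark times."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start
    # Conventional Linpack flop count for an n x n solve:
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

print(f"{linpack_like_gflops():.2f} GFLOPS")
```

The reported number depends heavily on the underlying BLAS library, which is exactly why TOP500 scores measure the whole hardware/software stack, not the CPU alone.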
Architectures - SMP
Symmetric Shared-Memory Multiprocessing (SMP)
- Processors share memory
- Common OS
- Programs are divided into subtasks (threads) among all processors (multithreading)
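A minimal sketch of the SMP idea, assuming Python's standard thread pool: the work is split into subtasks (threads) that all read the same shared data in one address space, with no copies sent between workers. (Note that CPython's GIL limits true CPU parallelism for pure-Python code; the structure, not the speedup, is the point here. The function names are my own.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each thread handles one subtask over a slice of the shared data.
    return sum(chunk)

def threaded_sum(data, n_threads=4):
    # Divide the work into one subtask per "processor"; every thread
    # reads the same shared list, as in shared-memory multiprocessing.
    size = max(1, len(data) // n_threads)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(partial_sum, chunks))

print(threaded_sum(list(range(1_000_000))))  # → 499999500000
```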
Architectures – MPP
Massively Parallel Processing (MPP)
- Individual memory for each processor
- Individual OSs
- Messaging interface for communication
- 200+ processors can work on the same application

Example query flow:
1. A large retailer wants to know how many camcorders the company sold in 1998, and sends that query to the MPP system.
2. The query goes out to one of the processors, which acts as the coordinator; it breaks up the query for optimum performance. For example, it could break the query up by month; each "sub-query" then goes to all the processors at the same time.
3. Each sub-query is assigned to a specific processor in the system. To allow this to happen, the database was previously partitioned: for example, a sales tracking database might be broken down by month, with each processor holding one month's worth of sales information.
4. The responses to the sub-queries are returned to a processor to be coordinated; for example, each month's total is added up.
5. The final answer is returned to the user.
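The five steps above can be sketched with separate worker processes standing in for MPP nodes: each worker owns one month's partition (step 3), the coordinator fans sub-queries out (step 2) and adds up the per-month answers (steps 4-5). The sales data and all names here are made up for illustration; real MPP systems use a message-passing interface rather than Python's multiprocessing, which merely mimics the separate-memory structure.

```python
from multiprocessing import Pool

# Hypothetical sales table, pre-partitioned by month (step 3):
# each worker "owns" one month's rows of (month, units_sold).
PARTITIONS = {m: [(m, 10 + m), (m, 5)] for m in range(1, 13)}

def sub_query(month):
    # Step 3: a processor scans only its own partition, in its own memory.
    return sum(units for _, units in PARTITIONS[month])

def coordinator_query():
    # Step 2: fan the sub-queries out to the workers;
    # step 4: coordinate (add up) the per-month responses.
    with Pool(processes=4) as pool:
        monthly_totals = pool.map(sub_query, range(1, 13))
    return sum(monthly_totals)  # step 5: final answer back to the user

if __name__ == "__main__":
    print(coordinator_query())
```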
Architectures – Clustering
- Grid computing
- Many servers connected together
- Relies heavily on network speed
- Easily upgraded by adding more servers
Processor Types
- Vector processing: expensive; e.g., the NEC Earth Simulator
- Scalar processing: used in grid computing; based on off-the-shelf parts (ordinary CPUs)
BlueGene/L
- IBM MPP (massively parallel processing) system
- #1 on the TOP500 as of November 2004
- 32,768 processors (700 MHz)
- 70.72 teraflops (trillions of FLOPS)
- Runs Linux
- Used for DNA research, climate simulation, and financial risk analysis
- Cost: more than $100 million
BlueGene/L System Layout
- 2 processors per node: one handles node communication, the other mathematical calculations
BlueGene/L Compute Card
BlueGene/L Node Board
Some of the Others
- #2: Columbia (NASA, USA) – 51.87 TFlops
- #3: Earth Simulator (Japan) – 35.86 TFlops
- #4: MareNostrum (Spain) – 20.53 TFlops
- #5: Thunder (USA) – 19.94 TFlops