Blue Gene/L: Presentation Transcript
M.S Rama Krishna (06-5A3)
History of supercomputers
Manufacturers / Partners of Blue Gene/L
Why is it named Blue Gene?
Why was it created?
Who are the customers & what does it cost?
Processors / Memory / Scalability
Asynchronous task dispatch subsystem
IBM's Naval Ordnance Research Calculator (NORC, 1954): 15,000 operations per second.
IBM's Blue Gene/L: 360 trillion floating-point operations per second (360 teraflops) in March 2005.
1999 - $100M PROJECT BY IBM
FOR THE US DEPT OF ENERGY (DOE)
- BLUE GENE/L
- BLUE GENE/C (CYCLOPS)
- BLUE GENE/P (PETAFLOPS)
2001 - PARTNERSHIP WITH LAWRENCE LIVERMORE NATIONAL LABORATORY
“ Blue”: The corporate color of IBM
“ Gene”: The intended use of the Blue Gene clusters – Computational biology, specifically, protein folding
To build a new family of supercomputers optimized for bandwidth, scalability, and the ability to handle large amounts of data while consuming a fraction of the power and floor space required by today's fastest systems.
To analyze scientific and biological problems.
64-rack machine to Lawrence Livermore National Laboratory, California.
23 Feb 2004 – 6-rack machine to ASTRON, a leading astronomy organization in the Netherlands, to use Blue Gene/L technology as the basis for a new type of radio telescope capable of looking back billions of years in time.
May/June 2004 – 1-rack system to Argonne National Laboratory, Illinois.
Sept 2004 – 4-rack Blue Gene/L supercomputer to Japan's National Institute of Advanced Industrial Science and Technology (AIST) to investigate the shapes of proteins.
6 Jun 2005 – 4-rack machine to the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland, to simulate the workings of the human brain.
The initial cost was $1.5M per rack.
The current cost is $2M per rack.
March 2005 – IBM began renting the machine at about $10,000 per week for the use of one-eighth of a Blue Gene/L rack.
In computer science, the kernel is the fundamental part of an operating system.
It is the piece of software responsible for giving programs secure access to the machine's hardware.
Since many programs run at once and hardware access is limited, the kernel also decides when, and for how long, a program may use a piece of hardware; this is called multiplexing.
Accessing the hardware directly can be very complex, so kernels usually implement some hardware abstractions to hide complexity and provide a clean and uniform interface to the underlying hardware, which helps application programmers.
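As a concrete, non-BG/L illustration of that uniform interface, the C sketch below reads a file through the kernel's standard POSIX calls; the path is a hypothetical example, and the same calls work regardless of which disk controller or filesystem sits underneath:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* The kernel's abstraction at work: open/read/close look the same
   whether the bytes come from a local disk, a network filesystem,
   or a device node. The application never touches the hardware. */
int main(void)
{
    char buf[64];
    int fd = open("/etc/hostname", O_RDONLY);  /* hypothetical example path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* kernel mediates the I/O */
    if (n > 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }
    close(fd);
    return 0;
}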
65,536 DUAL-PROCESSOR NODES.
700 MHZ POWERPC 440 PROCESSORS.
512 MB of dynamic random access memory (DRAM) per node.
BLUE GENE/L IS JUST THE FIRST STEP…
2 PowerPC processors
L1 and L2 Caches
4MB embedded DRAM
DDR DRAM interface and DMA controller
Network connectivity hardware (torus)
Control / monitoring equipment (JTAG)
65,536 compute nodes
ASIC (Application-Specific Integrated Circuit)
ASIC includes two 32-bit PowerPC 440 processing cores, each with two 64-bit FPUs (Floating-Point Units)
Compute nodes strictly handle computations.
1,024 I/O nodes.
Each I/O node manages communications for a group of 64 compute nodes (65,536 / 64 = 1,024).
5 Network connections
Rmax of 280.6 teraflops.
Rpeak of 360 teraflops (the arithmetic behind this figure follows after this list).
512 MB of memory per compute node, 32 TB across the entire system.
800 TB of disk space
2,500 square feet
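The peak figure follows from the hardware counts above, assuming (as is standard for the PowerPC 440's double FPU) that each FPU retires one fused multiply-add, i.e. 2 flops, per cycle:

700 MHz × 2 flops/FMA × 2 FPUs × 2 cores = 5.6 gigaflops per node
5.6 gigaflops × 65,536 nodes ≈ 367 teraflops, usually quoted as roughly 360 teraflops.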
Front-end nodes are commodity PCs running Linux
I/O nodes run a customized Linux kernel
Compute nodes use an extremely lightweight custom kernel
Service node is a single multiprocessor machine running a custom OS
Single user, dual-threaded
Flat address space, no paging
Physical resources are memory-mapped
Provides standard POSIX functionality (mostly)
Two execution modes:
Coprocessor mode (one core runs the application, the other handles communication)
Virtual node mode (both cores run application processes)
Core Management and Control System (CMCS)
BG/L’s “global” operating system.
MMCS - Midplane Monitoring and Control System
CIOMAN - Control and I/O Manager
DB2 relational database
Communicates over the control (JTAG) network or fast Ethernet.
The torus network connects all 65,536 compute nodes (32 × 32 × 64).
Each node connects to 6 other nodes (see the sketch after this list).
Chosen because it provides high-bandwidth nearest-neighbor connectivity.
A single node consists of a single ASIC plus memory.
Dynamic adaptive routing.
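As a minimal illustration of the wraparound that gives every node exactly six neighbors, here is a hypothetical C sketch using the 32 × 32 × 64 dimensions quoted above:

#include <stdio.h>

/* Compute the six nearest neighbors of a node at (x, y, z) in a
   32 x 32 x 64 torus. Coordinates wrap around, so even a node on
   the "edge" of the grid has exactly six neighbors. */
#define DX 32
#define DY 32
#define DZ 64

static void neighbors(int x, int y, int z)
{
    int d[6][3] = {
        {+1, 0, 0}, {-1, 0, 0},
        {0, +1, 0}, {0, -1, 0},
        {0, 0, +1}, {0, 0, -1},
    };
    for (int i = 0; i < 6; i++) {
        int nx = (x + d[i][0] + DX) % DX;  /* modular wraparound */
        int ny = (y + d[i][1] + DY) % DY;
        int nz = (z + d[i][2] + DZ) % DZ;
        printf("(%d, %d, %d)\n", nx, ny, nz);
    }
}

int main(void)
{
    neighbors(0, 0, 0);  /* corner node still has six neighbors */
    return 0;
}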
The main parallel programming model for BG/L is message passing using MPI (Message Passing Interface) in C, C++, or Fortran; a minimal example follows below.
Supports global-address-space programming models such as Co-Array Fortran (CAF) and Unified Parallel C (UPC).
The I/O and external front-end nodes run Linux, and the compute nodes run a kernel that is inspired by Linux.
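As a minimal sketch of the MPI style mentioned above (not code from BG/L itself), each rank exchanges a value with its ring neighbors, a one-dimensional analogue of the torus's nearest-neighbor traffic:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;        /* neighbor ranks wrap, */
    int left  = (rank - 1 + size) % size; /* like the torus links */
    int sendval = rank, recvval = -1;

    /* Send to the right neighbor while receiving from the left. */
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recvval, left);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, each rank prints the value it received from its left neighbor.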
Less space (half of a tennis court).
Avoids the heat problems most supercomputers face.
Memory Limitation (512 MB/node)
Simple node kernel (does not support forks or threads)
BLUE BRAIN PROJECT, 6 JUNE 2005
IBM and the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, partnered to study and model the behavior of the brain.
It can also find applications in fields like fashion technology and gaming.
Article published in "THE STANDARD", China's business newspaper, dated May 29.
The military hopes such a development will allow pilots to control jets using their minds.
Allow wheelchair users to walk
BG/L shows that a cell architecture for supercomputers is feasible.
Higher performance with much smaller size and power requirements.
In theory, there are no limits to the scalability of a Blue Gene system.
IBM Journal of Research and Development, volume 49, November 2005.