Rama krishna ppts for blue gene/L

    1. M.S. Rama Krishna (06-5A3)
    2. <ul><li>History of supercomputers </li></ul><ul><li>Manufacturers / Partners of Blue Gene/L </li></ul><ul><li>Why is it named Blue Gene? </li></ul><ul><li>Why was it created? </li></ul><ul><li>Who are the customers, and what does it cost? </li></ul><ul><li>Processors / Memory / Scalability </li></ul><ul><li>Stepwise structure </li></ul><ul><li>Hardware architecture </li></ul><ul><li>Interconnection network </li></ul><ul><li>Asynchronous task dispatch subsystem </li></ul><ul><li>Software </li></ul><ul><li>Advantages </li></ul><ul><li>Applications </li></ul>
    3. <ul><li>IBM’s Naval Ordnance Research Calculator. </li></ul><ul><li>IBM's Blue Gene/L. </li></ul>
    4. <ul><li>360 trillion floating-point operations per second (360 TFLOPS) in March 2005. </li></ul><ul><li>15,000 operations per second (the Naval Ordnance Research Calculator, 1954). </li></ul>
    5. <ul><li>1999 - $100M PROJECT BY IBM </li></ul><ul><li>FOR THE US DEPT OF ENERGY (DOE) </li></ul><ul><li>- BLUE GENE/L </li></ul><ul><li>- BLUE GENE/C (CYCLOPS) </li></ul><ul><li>- BLUE GENE/P (PETAFLOPS) </li></ul><ul><li>2001 - PARTNERSHIP WITH LAWRENCE LIVERMORE NATIONAL LABORATORY (FIRST CUSTOMER) </li></ul>
    6. <ul><li>“Blue”: The corporate color of IBM </li></ul><ul><li>“Gene”: The intended use of the Blue Gene clusters – computational biology, specifically protein folding </li></ul>
    7. <ul><li>To build a new family of supercomputers optimized for bandwidth, scalability, and the ability to handle large amounts of data while consuming a fraction of the power and floor space required by today's fastest systems. </li></ul><ul><li>To analyze scientific and biological problems (protein folding). </li></ul>
    8. <ul><li>64 rack machine to Lawrence Livermore National Laboratory, California </li></ul><ul><li>23 Feb 2004 – 6 rack machine to ASTRON, a leading astronomy organization in the Netherlands, to use IBM's Blue Gene/L supercomputer technology as the basis for a new type of radio telescope capable of looking back billions of years in time. </li></ul><ul><li>May/June 2004 – 1 rack system to Argonne National Laboratory, Illinois </li></ul><ul><li>Sept 2004 – 4 rack Blue Gene/L supercomputer to Japan's National Institute of Advanced Industrial Science and Technology (AIST) to investigate the shapes of proteins. </li></ul><ul><li>6 Jun 2005 – 4 rack machine to the Ecole Polytechnique Federale de Lausanne (EPFL), in Lausanne, Switzerland, to simulate the workings of the human brain. </li></ul>
    9. <ul><li>The initial cost was $1.5M/rack </li></ul><ul><li>The current cost is $2M/rack </li></ul><ul><li>March 2005 – IBM started renting the machine at about $10,000 per week for the use of one-eighth of a Blue Gene/L rack. </li></ul>
    10. <ul><li>In computer science, the kernel is the fundamental part of an operating system. </li></ul><ul><li>It is a piece of software responsible for providing secure access to the machine's hardware to various computer programs. </li></ul><ul><li>Since there are many programs, and access to the hardware is limited, the kernel is also responsible for deciding when and for how long a program should be able to make use of a piece of hardware; this is called multiplexing. </li></ul><ul><li>Accessing the hardware directly can be very complex, so kernels usually implement hardware abstractions to hide complexity and provide a clean, uniform interface to the underlying hardware, which helps application programmers. </li></ul>
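The time-multiplexing idea above can be sketched as a toy round-robin scheduler. This is an illustration only, not BG/L code; the function name, task dictionary, and fixed time-slice model are all hypothetical:

```python
from collections import deque

def round_robin(tasks, slice_per_turn):
    """Toy illustration of kernel-style multiplexing: each program gets
    the hardware for a fixed time slice, in turn, until its work is done.
    tasks maps a program name to its remaining units of work."""
    queue = deque(tasks.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)          # this program holds the hardware now
        remaining -= slice_per_turn
        if remaining > 0:              # unfinished programs rejoin the queue
            queue.append((name, remaining))
    return schedule
```

For example, `round_robin({"A": 2, "B": 1}, 1)` interleaves the two programs as `["A", "B", "A"]`: the kernel hands the hardware back and forth rather than letting one program monopolize it.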
    11. <ul><li>PROCESSOR </li></ul><ul><li>65,536 DUAL-PROCESSOR NODES. </li></ul><ul><li>700 MHz POWERPC 440 PROCESSOR. </li></ul><ul><li>MEMORY </li></ul><ul><li>512 MB of dynamic random access memory (DRAM) per node. </li></ul><ul><li>SCALABILITY </li></ul><ul><li>BLUE GENE/L IS JUST THE FIRST STEP… </li></ul>
    12. <ul><li>System-on-a-chip (SoC) </li></ul><ul><li>1 ASIC </li></ul><ul><ul><li>2 PowerPC processors </li></ul></ul><ul><ul><li>L1 and L2 caches </li></ul></ul><ul><ul><li>4 MB embedded DRAM </li></ul></ul><ul><ul><li>DDR DRAM interface and DMA controller </li></ul></ul><ul><ul><li>Network connectivity hardware (torus) </li></ul></ul><ul><ul><li>Control / monitoring equipment (JTAG) </li></ul></ul>
    13. <ul><li>65,536 Compute nodes </li></ul><ul><ul><li>ASIC (Application-Specific Integrated Circuit) </li></ul></ul><ul><ul><li>ASIC includes two 32-bit PowerPC 440 processing cores, each with two 64-bit FPUs (Floating-Point Units) </li></ul></ul><ul><ul><li>Compute nodes strictly handle computations </li></ul></ul><ul><li>1,024 I/O nodes </li></ul><ul><ul><li>Each manages communications for a group of 64 compute nodes. </li></ul></ul><ul><li>5 Network connections </li></ul>
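The 64-to-1 grouping of compute nodes to I/O nodes amounts to a simple block mapping; a minimal sketch (the function name is illustrative, and the assumption that blocks are assigned contiguously is mine, not from the slides):

```python
def io_node_for(compute_node, group_size=64):
    """Map a compute node index to the I/O node serving its block.
    With 65,536 compute nodes in blocks of 64, this yields
    65,536 / 64 = 1,024 I/O nodes, matching the figures above."""
    return compute_node // group_size
```

So compute nodes 0–63 share I/O node 0, nodes 64–127 share I/O node 1, and the last node (65,535) maps to I/O node 1,023.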
    14. <ul><li>Cellular architecture </li></ul><ul><li>Rmax of 280.6 Teraflops </li></ul><ul><li>Rpeak of 360 Teraflops </li></ul><ul><li>512 MB memory per compute node, 32 TB in the entire system. </li></ul><ul><li>800 TB of disk space </li></ul><ul><li>2,500 square feet </li></ul>
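As a back-of-the-envelope check of the peak figure, assuming each PowerPC 440 core's double FPU can retire 4 floating-point operations per cycle (two fused multiply-adds):

```python
clock_hz = 700e6        # PowerPC 440 clock (700 MHz)
flops_per_cycle = 4     # assumed: double FPU, two fused multiply-adds/cycle
cores_per_node = 2      # dual-processor nodes
nodes = 65536

rpeak = clock_hz * flops_per_cycle * cores_per_node * nodes
print(rpeak / 1e12)     # ~367 TFLOPS
```

This gives roughly 367 TFLOPS, in line with the quoted peak of 360 Teraflops (the slide's figure appears to be rounded).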
    15. <ul><li>Front-end nodes are commodity PCs running Linux </li></ul><ul><li>I/O nodes run a customized Linux kernel </li></ul><ul><li>Compute nodes use an extremely lightweight custom kernel </li></ul><ul><li>Service node is a single multiprocessor machine running a custom OS </li></ul>
    16. <ul><li>Single user, dual-threaded </li></ul><ul><li>Flat address space, no paging </li></ul><ul><li>Physical resources are memory-mapped </li></ul><ul><li>Provides standard POSIX functionality (mostly) </li></ul><ul><li>Two execution modes: </li></ul><ul><ul><li>Virtual node mode </li></ul></ul><ul><ul><li>Coprocessor mode </li></ul></ul>
    17. <ul><li>Core Management and Control System (CMCS) </li></ul><ul><li>BG/L’s “global” operating system. </li></ul><ul><li>MMCS - Midplane Monitoring and Control System </li></ul><ul><li>CIOMAN - Control and I/O Manager </li></ul><ul><li>DB2 relational database </li></ul>
    18. <ul><li>3D Torus </li></ul><ul><li>Global tree </li></ul><ul><li>Global interrupts </li></ul><ul><li>Ethernet </li></ul><ul><li>Control (JTAG or Fast Ethernet) </li></ul>
    19. <ul><li>http://hpc.csie.thu.edu.tw/docs/Tutorial.pdf </li></ul>
    20. <ul><li>Primary connection </li></ul><ul><li>The torus network connects all 65,536 compute nodes (32 * 32 * 64). </li></ul><ul><li>Each node connects to 6 other nodes. </li></ul><ul><li>Chosen because it provides high-bandwidth nearest-neighbor connectivity </li></ul><ul><li>A single node consists of a single ASIC and memory. </li></ul><ul><li>Dynamic adaptive routing. </li></ul>
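The 6-neighbor wraparound connectivity of the 3D torus can be sketched as follows (dimensions taken from the 32 * 32 * 64 layout above; the function name is illustrative):

```python
def torus_neighbors(x, y, z, dims=(32, 32, 64)):
    """Return the 6 nearest neighbors of node (x, y, z) in a 3D torus.
    The modulo arithmetic provides the wraparound links that turn a
    plain 3D mesh into a torus, so edge nodes also have 6 neighbors."""
    X, Y, Z = dims
    return [
        ((x + 1) % X, y, z), ((x - 1) % X, y, z),
        (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
        (x, y, (z + 1) % Z), (x, y, (z - 1) % Z),
    ]
```

For a corner node such as (0, 0, 0), the wraparound yields neighbors like (31, 0, 0) and (0, 0, 63), which is what gives the torus its uniform nearest-neighbor bandwidth.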
    21. <ul><li>Midplane Monitoring and Control System </li></ul>
    22. <ul><li>The main parallel programming model for BG/L is message passing using MPI (Message Passing Interface) in C, C++, or FORTRAN. </li></ul><ul><li>Supports global address space programming models such as Co-Array FORTRAN (CAF) and Unified Parallel C (UPC). </li></ul><ul><li>The I/O and external front-end nodes run Linux, and the compute nodes run a kernel that is inspired by Linux. </li></ul>
    23. <ul><li>Scalable </li></ul><ul><li>Less floor space (half a tennis court) </li></ul><ul><li>Avoids the heat problems most supercomputers face </li></ul><ul><li>Speed </li></ul>
    24. <ul><ul><li>Memory limitation (512 MB/node) </li></ul></ul><ul><ul><li>Simple node kernel (does not support fork or threads) </li></ul></ul>
    25. <ul><li>BLUE BRAIN PROJECT, 6 JUNE 2005 </li></ul><ul><li>IBM and the Ecole Polytechnique Fédérale de Lausanne (EPFL), in Switzerland, to study and model the behavior of the brain. </li></ul><ul><li>It can also be applied in fields like fashion technology and gaming. </li></ul><ul><li>PROTEIN FOLDING </li></ul><ul><li>Alzheimer’s disease </li></ul>
    26. <ul><li>Article published in “THE STANDARD”, China’s business newspaper, dated May 29 </li></ul><ul><ul><li>The military hopes such a development will allow pilots to control jets using their minds </li></ul></ul><ul><ul><li>Allow wheelchair users to walk </li></ul></ul>
    27. <ul><li>BG/L shows that a cellular architecture for supercomputers is feasible. </li></ul><ul><li>Higher performance with much smaller size and power requirements. </li></ul><ul><li>In theory, there are no limits to the scalability of a Blue Gene system. </li></ul>
    28. <ul><li>IBM Journal of Research and Development, volume 49, November 2005. </li></ul><ul><li>Google News. </li></ul><ul><li>http://www.linuxworld.com/read/48131.htm </li></ul><ul><li>http://sc-2002.org/paperpdfs/pap.pap207.pdf </li></ul><ul><li>http://www.ipab.org/Presentation/sem04/04-02-2.pdf </li></ul><ul><li>http://www.desy.de/dvsem/WS0405/steinmacherBurow-20050221.pdf </li></ul><ul><li>www.scd.ucar.edu/info/UserForum/presentations/loft.ppt </li></ul><ul><li>REDBOOKS </li></ul>