Rama Krishna PPTs for Blue Gene/L
Rama Krishna PPTs for Blue Gene/L – Presentation Transcript

  • 1. M.S. Rama Krishna (06-5A3)
  • 2.
    • History about supercomputers
    • Manufacturers / Partners of Blue Gene/L
    • Why is it named Blue Gene?
    • Why was it created?
    • Who are the customers, and what is its cost?
    • Processors / Memory / Scalability
    • Stepwise Structure
    • Hardware Architecture
    • Interconnection Network
    • Asynchronous task dispatch subsystem
    • Software
    • Advantages
    • Applications
  • 3.
    • IBM’s Naval Ordnance Research Calculator.
    • IBM's Blue Gene/L.
  • 4.
    • Blue Gene/L: 360,000,000,000,000 floating-point operations per second (360 TFLOPS), as of March 2005.
    • The Naval Ordnance Research Calculator: 15,000 operations per second.
  • 5.  
  • 6.
    • 1999 - $100M PROJECT BY IBM FOR THE US DEPT OF ENERGY (DOE)
      • BLUE GENE/L
      • BLUE GENE/C (CYCLOPS)
      • BLUE GENE/P (PETAFLOPS)
    • 2001 - PARTNERSHIP WITH LAWRENCE LIVERMORE NATIONAL LABORATORY (FIRST CUSTOMER)
  • 7.
    • “Blue”: The corporate color of IBM
    • “Gene”: The intended use of the Blue Gene clusters – computational biology, specifically protein folding
  • 8.
    • To build a new family of supercomputers optimized for bandwidth, scalability, and the ability to handle large amounts of data while consuming a fraction of the power and floor space required by today's fastest systems.
    • To analyze scientific and biological problems (protein folding).
  • 9.
    • 64-rack machine to Lawrence Livermore National Laboratory, California.
    • 23 Feb 2004 – 6-rack machine to ASTRON, a leading astronomy organization in the Netherlands, to use IBM's Blue Gene/L supercomputer technology as the basis for a new type of radio telescope capable of looking back billions of years in time.
    • May/June 2004 – 1-rack system to Argonne National Laboratory, Illinois.
    • Sept 2004 – 4-rack Blue Gene/L supercomputer to Japan's National Institute of Advanced Industrial Science and Technology (AIST) to investigate the shapes of proteins.
    • 6 Jun 2005 – 4-rack machine to the École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland, to simulate the workings of the human brain.
  • 10.
    • The initial cost was $1.5M per rack.
    • The current cost is $2M per rack.
    • March 2005 – IBM started renting the machine for about $10,000 per week to use one-eighth of a Blue Gene/L rack.
  • 11.
    • In computer science, the kernel is the fundamental part of an operating system.
    • It is the piece of software responsible for giving computer programs secure access to the machine's hardware.
    • Since there are many programs and access to the hardware is limited, the kernel also decides when, and for how long, a program may use a piece of hardware; this is called multiplexing.
    • Accessing the hardware directly can be very complex, so kernels usually implement hardware abstractions that hide this complexity and present a clean, uniform interface to application programmers (see the sketch below).
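A small illustration of that uniform-interface idea (generic POSIX C, nothing Blue Gene-specific; the file path is only an example): the same read() call works whether the bytes come from a disk, a pipe, or a network socket, because the kernel hides the device details behind one interface.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];

        /* The kernel abstracts the hardware: open()/read() look the same
           regardless of which device actually holds the data. */
        int fd = open("/etc/hostname", O_RDONLY);   /* example path only */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = read(fd, buf, sizeof buf - 1);  /* the kernel mediates the access */
        if (n < 0) { perror("read"); close(fd); return 1; }

        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
        close(fd);
        return 0;
    }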
  • 12.
    • PROCESSOR
    • 65,536 DUAL-PROCESSOR NODES.
    • 700 MHZ POWERPC 440 PROCESSORS.
    • MEMORY
    • 512 MB of dynamic random access memory (DRAM) per node.
    • SCALABILITY
    • BLUE GENE/L IS JUST THE FIRST STEP………
  • 13.  
  • 14.  
  • 15.  
  • 16.  
  • 17.  
  • 18.
    • System-on-a-chip (SoC)
    • 1 ASIC
      • 2 PowerPC processors
      • L1 and L2 Caches
      • 4MB embedded DRAM
      • DDR DRAM interface and DMA controller
      • Network connectivity hardware (torus)
      • Control / monitoring equip. (JTAG)
  • 19.  
  • 20.
    • 65,536 compute nodes
      • ASIC (Application-Specific Integrated Circuit)
      • ASIC includes two 32-bit PowerPC 440 processing cores, each with two 64-bit FPUs (Floating-Point Units)
      • Compute nodes strictly handle computation
    • 1,024 I/O nodes
      • Each I/O node manages communications for a group of 64 compute nodes (65,536 ÷ 64 = 1,024)
    • 5 Network connections
  • 21.
    • Cellular architecture
    • Rmax of 280.6 Teraflops
    • Rpeak of 360 Teraflops
    • 512 MB memory per compute node, 16 TB in entire system.
    • 800 TB of disk space
    • 2,500 square feet
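The Rpeak figure can be reconstructed from the hardware numbers on slides 12 and 20. A back-of-envelope sketch in C, under the assumption that each PowerPC 440 core's two FPUs each complete one fused multiply-add (2 flops) per cycle, i.e. 4 flops per core per cycle:

    #include <stdio.h>

    int main(void) {
        const double nodes           = 65536;   /* dual-processor compute nodes           */
        const double cores_per_node  = 2;       /* PowerPC 440 cores per ASIC             */
        const double flops_per_cycle = 4;       /* assumed: 2 FPUs x 1 FMA (2 flops) each */
        const double clock_hz        = 700e6;   /* 700 MHz                                */

        double rpeak = nodes * cores_per_node * flops_per_cycle * clock_hz;
        printf("Theoretical peak: %.0f TFLOPS\n", rpeak / 1e12);   /* ~367 TFLOPS */
        return 0;
    }

This works out to roughly 367 TFLOPS, consistent with the approximately 360 TFLOPS quoted above.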
  • 22.  
  • 23.
    • Front-end nodes are commodity PCs running Linux
    • I/O nodes run a customized Linux kernel
    • Compute nodes use an extremely lightweight custom kernel
    • Service node is a single multiprocessor machine running a custom OS
  • 24.
    • Single user, dual-threaded
    • Flat address space, no paging
    • Physical resources are memory-mapped
    • Provides standard POSIX functionality (mostly)
    • Two execution modes (see the sketch below):
      • Virtual node mode – both cores on a node run application processes
      • Coprocessor mode – one core runs the application, the other assists with communication
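One way to see the difference from an application's point of view is to count how many MPI ranks share a physical node: two in virtual node mode, one in coprocessor mode. A minimal sketch using generic MPI (MPI_Comm_split_type is an MPI-3 call, newer than BG/L's own MPI library, so this is illustrative rather than BG/L-specific):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        char name[MPI_MAX_PROCESSOR_NAME];
        int len;
        MPI_Get_processor_name(name, &len);

        /* Group the ranks that share this node; the size of that group is the
           tasks-per-node count (2 in virtual node mode, 1 in coprocessor mode). */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, rank,
                            MPI_INFO_NULL, &node_comm);
        int per_node;
        MPI_Comm_size(node_comm, &per_node);

        if (rank == 0)
            printf("%d ranks total, %d rank(s) on node %s\n", size, per_node, name);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }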
  • 25.
    • Core Management and Control System (CMCS)
    • BG/L’s “global” operating system.
    • MMCS - Midplane Monitoring and Control System
    • CIOMAN - Control and I/O Manager
    • DB2 relational database
  • 26.  
  • 27.
    • 3D Torus
    • Global tree
    • Global interrupts
    • Ethernet
    • Control (JTAG) over Fast Ethernet
  • 28.
    • http://hpc.csie.thu.edu.tw/docs/Tutorial.pdf
  • 29.
    • Primary connection.
    • The torus network connects all 65,536 compute nodes (32 × 32 × 64).
    • Each node connects to 6 neighboring nodes.
    • Chosen because it provides high-bandwidth nearest-neighbor connectivity (see the sketch below).
    • A single node consists of a single ASIC and memory.
    • Dynamic adaptive routing.
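The wrap-around arithmetic is simple: a node at coordinate (x, y, z) in the 32 × 32 × 64 grid links to the nodes one step away in each dimension, with the two ends of each dimension joined together. A minimal sketch in C (the specific node chosen is only illustrative):

    #include <stdio.h>

    #define DX 32
    #define DY 32
    #define DZ 64

    /* Wrap a coordinate so the last node in a dimension links back to the first. */
    static int wrap(int c, int dim) { return (c + dim) % dim; }

    int main(void) {
        int x = 31, y = 0, z = 63;   /* an example corner node */

        /* The six nearest neighbors: one step in +/- x, +/- y, and +/- z. */
        printf("node (%d,%d,%d) connects to:\n", x, y, z);
        printf("  x: (%d,%d,%d) and (%d,%d,%d)\n",
               wrap(x - 1, DX), y, z, wrap(x + 1, DX), y, z);
        printf("  y: (%d,%d,%d) and (%d,%d,%d)\n",
               x, wrap(y - 1, DY), z, x, wrap(y + 1, DY), z);
        printf("  z: (%d,%d,%d) and (%d,%d,%d)\n",
               x, y, wrap(z - 1, DZ), x, y, wrap(z + 1, DZ));
        return 0;
    }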
  • 30.  
  • 31.
    • Midplane Monitoring and Control System (MMCS)
  • 32.  
  • 33.  
  • 34.
    • The main parallel programming model for BG/L is message passing using MPI (Message Passing Interface) in C, C++, or Fortran (see the example below).
    • Supports global address space programming models such as Co-Array Fortran (CAF) and Unified Parallel C (UPC).
    • The I/O and external front-end nodes run Linux, and the compute nodes run a kernel inspired by Linux.
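A minimal example of the message-passing style this model targets (standard MPI in C, nothing BG/L-specific): each rank exchanges a value with its neighbors in a ring, the one-dimensional analogue of the torus's nearest-neighbor communication.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Send to the right-hand neighbor and receive from the left-hand one,
           wrapping around at the ends like a 1-D torus. */
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;
        int sendval = rank, recvval = -1;

        MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                     &recvval, 1, MPI_INT, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d of %d received %d from rank %d\n", rank, size, recvval, left);

        MPI_Finalize();
        return 0;
    }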
  • 35.
    • Scalable
    • Less space (half of a tennis court)
    • Avoids the heat problems most supercomputers face
    • Speed
  • 36.
      • Memory limitation (512 MB per node)
      • Simple node kernel (does not support fork or threads)
  • 37.
    • BLUE BRAIN PROJECT (announced 6 June 2005)
    • IBM and the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, to study and model the behavior of the brain.
    • The work can carry over into fields such as fashion technology and gaming.
    • PROTEIN FOLDING
    • Alzheimer’s disease
  • 38.
    • Article published in “THE STANDARD”, China’s business newspaper, dated May 29:
      • The military hopes such a development will allow pilots to control jets using their minds
      • Allow wheelchair users to walk
  • 39.
    • BG/L shows that a cell architecture for supercomputers is feasible.
    • Higher performance with a much smaller size and power requirements.
    • In theory, no limits to scalability of a BlueGene system.
  • 40.
    • IBM Journal of Research and Development, volume 49, November 2005.
    • Google News.
    • http://www.linuxworld.com/read/48131.htm
    • http://sc-2002.org/paperpdfs/pap.pap207.pdf
    • http://www.ipab.org/Presentation/sem04/04-02-2.pdf
    • http://www.desy.de/dvsem/WS0405/steinmacherBurow-20050221.pdf
    • www.scd.ucar.edu/info/UserForum/presentations/loft.ppt
    • IBM Redbooks