
Nettab 2006 Tutorial 3B part 2

Nettab 2006 presentation about Michelangelo cluster


  1. HPC Infrastructures – Matteo Vit, NETTAB 2006, Santa Margherita di Pula (CA), July 10-13, 2006
  2. Hardware Infrastructure – Agenda: Introduction; Cluster design constraints; The design of Michelangelo; Hardware description; Design diagrams
  3. Hardware Infrastructure – Agenda (section divider: Introduction)
  4. Hardware Infrastructure – Introduction (I): 280 cores (70 blade nodes, each with two dual-core AMD Opterons); 560 GB of RAM (8 GB per node); InfiniBand and Ethernet infrastructure; 26 TB of disk space; diskless nodes
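The headline figures on this slide are internally consistent; a quick back-of-the-envelope check in plain Python, using only the per-node numbers quoted on the slide:

```python
# Aggregate core and memory counts for the cluster, derived from
# the per-node figures on the slide.
nodes = 70            # blade nodes
sockets_per_node = 2  # dual AMD Opteron
cores_per_socket = 2  # dual-core parts
ram_per_node_gb = 8   # GB of RAM per node

total_cores = nodes * sockets_per_node * cores_per_socket
total_ram_gb = nodes * ram_per_node_gb

print(total_cores)   # 280 cores, as stated
print(total_ram_gb)  # 560 GB, as stated
```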
  5. Hardware Infrastructure – Introduction (II): cluster tasks (Beowulf, Grid); different OSs (Linux, Solaris, Windows, other)
  6. Hardware Infrastructure – Agenda (section divider: Cluster design constraints)
  7. Cluster Design – Design constraints: installation (volume, power consumption, heat dissipation, future expansion); heterogeneity (tools and libraries, add-on cards, scientific applications)
  8. Hardware Infrastructure – Agenda (section divider: The design of Michelangelo)
  9. The Design of Michelangelo – Blade nodes (I): compact; redundancy; easy to service; low cable count; low power consumption
  10. The Design of Michelangelo – Blade nodes (II) [photo slide]
  11. The Design of Michelangelo – InfiniBand: performance (bandwidth, latency); industry standard; reliable
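Why both bandwidth and latency matter can be seen with the standard first-order cost model for a message transfer. A small sketch, using the 10 Gb/s InfiniBand figure quoted later in the deck; the 5 µs latency is an illustrative assumption, not a number from the slides:

```python
# First-order model: transfer time = latency + size / bandwidth.
LATENCY_S = 5e-6          # assumed one-way latency (illustrative)
BANDWIDTH_BPS = 10e9 / 8  # 10 Gb/s, expressed in bytes per second

def transfer_time(size_bytes):
    """Estimated time to move size_bytes over the link."""
    return LATENCY_S + size_bytes / BANDWIDTH_BPS

# Small MPI messages are latency-bound, large ones bandwidth-bound:
print(transfer_time(1024))        # ~5.8 us, dominated by latency
print(transfer_time(10 * 2**20))  # ~8.4 ms, dominated by bandwidth
```

This is why a low-latency interconnect pays off for tightly coupled MPI workloads even when raw bandwidth is comparable to Ethernet.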
  12. The Design of Michelangelo – Diskless nodes: reliable; easy and fast maintenance; reconfigurable; easy expansion; single point of administration
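A common way to realize a diskless design of this kind is PXE network boot with an NFS-mounted root filesystem. A minimal configuration sketch, assuming ISC dhcpd and pxelinux on the head node; addresses, paths, and file names are hypothetical, not the actual Michelangelo configuration:

```
# /etc/dhcpd.conf (head node) -- hand each blade a boot loader over TFTP
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.10 10.0.0.90;
  next-server 10.0.0.1;        # TFTP server address (hypothetical)
  filename "pxelinux.0";       # PXE boot loader
}

# pxelinux.cfg/default -- all blades boot one shared kernel and NFS root
LABEL diskless
  KERNEL vmlinuz
  APPEND root=/dev/nfs nfsroot=10.0.0.1:/export/nodeimage ip=dhcp ro
```

With this layout the single exported node image is the "single point of administration" the slide mentions: updating it updates every blade on the next boot.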
  13. The Design of Michelangelo – Storage Area Network (SAN): performance; reliability; industry standard; Quality of Service (QoS); future expansion; easy interface to backup systems
  14. Hardware Infrastructure – Agenda (section divider: Hardware description)
  15. Hardware Description – Blade node (block diagram): Opteron 2xx CPUs; 4x DDR400 sockets; Gigabit NIC; PCI-E extended backplane connector; PCI-X expansion; IDE port; combo I/O port; service processor with IPMI controller; KVM controller; ATI RageXL VGA controller
  16. Hardware Description – Blade chassis: 6+1 redundant 2100 W power supplies; 3+1 system fan trays; AC 100-240 V inlets; two Gigabit LAN bays; one Fast Ethernet bay; one KVM module; one service processor
  17. Hardware Description – APELink board: Field-Programmable Gate Array (FPGA); 6 optional channels at 6.4 Gb/s each; reconfigurable hardware; offload processing
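The per-board numbers on this slide imply a sizeable aggregate figure; a trivial check, assuming (as the slide suggests) that all six optional channels are populated:

```python
# Aggregate APELink bandwidth with all six optional channels fitted.
channels = 6
per_channel_gbps = 6.4  # Gb/s per channel, from the slide

aggregate_gbps = channels * per_channel_gbps
print(aggregate_gbps)   # ~38.4 Gb/s across the board
```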
  18. Hardware Infrastructure – Agenda (section divider: Design diagrams)
  19. Design Diagrams of Michelangelo – Overview: 3 racks; 10 blade chassis; 70 blades; 42x500 GB disk array; 14x400 GB disk array; InfiniBand switch; Fast Ethernet switch; Gigabit Ethernet switch; Fibre Channel switch; KVM-over-IP switch
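The two disk arrays listed here account for the 26 TB quoted in the introduction; a quick raw-capacity check in Python (decimal GB/TB, as storage vendors count):

```python
# Raw capacity of the two disk arrays from the overview slide.
array1_gb = 42 * 500   # 42 x 500 GB drives
array2_gb = 14 * 400   # 14 x 400 GB drives

total_tb = (array1_gb + array2_gb) / 1000.0
print(total_tb)        # 26.6 TB raw, matching the ~26 TB headline figure
```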
  20. Design Diagrams of Michelangelo – Connectivity: KVM over IP; Fibre Channel (GFS); 70+2 nodes on InfiniBand at 10 Gb/s (MPI, plus GFS/NFS)
  21. Design Diagrams of Michelangelo – Photos from installation (I) [photo slide]
  22. Design Diagrams of Michelangelo – Photos from installation (II) [photo slide]
  23. Design Diagrams of Michelangelo – Photos from installation (III) [photo slide]
