
InfiniBand.ppt

  1. InfiniBand (Bart Taylor)
  2. What it is
     "InfiniBand™ Architecture defines a new interconnect technology for servers that changes the way data centers will be built, deployed and managed. By creating a centralized I/O fabric, InfiniBand Architecture enables greater server performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based switched fabric point-to-point architecture." (www.infinibandta.org)
  3. History
     - InfiniBand is the result of a merger of two competing designs for an inexpensive high-speed network.
     - Future I/O and Next Generation I/O were combined to form what we now know as InfiniBand.
     - Future I/O was being developed by Compaq, IBM, and HP.
     - Next Generation I/O was being developed by Intel, Microsoft, and Sun Microsystems.
     - The InfiniBand Trade Association maintains the specification.
  4. The Basic Idea
     - High-speed, low-latency data transport
     - Bidirectional serial bus
     - Switched fabric topology
       - Several devices can communicate at once
     - Data is transferred in packets that together form messages
       - Messages are direct memory access, channel send/receive, or multicast
     - Host Channel Adapters (HCAs) are deployed on PCI cards (see the discovery sketch below)
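The HCAs mentioned on slide 4 are driven from user space through a verbs API. As a minimal sketch, assuming the libibverbs C library (the slides do not name a specific software stack), the following program simply enumerates the HCAs visible on a host and prints their names:

```c
/* Minimal HCA discovery sketch using libibverbs (assumed stack, not named in the deck).
 * Build (hypothetical file name): gcc list_hcas.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    /* Ask the verbs library for every InfiniBand device (HCA) it can see. */
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d HCA(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```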
  5. Main Features
     - Low-latency messaging: < 6 microseconds
     - Highly scalable: tens of thousands of nodes
     - Bandwidth: 3 levels of link performance
       - 2.5 Gbps
       - 10 Gbps
       - 30 Gbps
     - Allows multiple fabrics on a single cable
       - Up to 8 virtual lanes per link
       - No interdependency between different traffic flows
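The three link rates on slide 5 correspond to the 1X, 4X, and 12X link widths at the original 2.5 Gbps-per-lane signalling rate; because the link uses 8b/10b encoding, roughly 80% of the raw rate carries data. A small worked example in C (the encoding overhead is background knowledge, not something stated in the deck):

```c
/* Worked example: raw vs. usable bandwidth for the three link widths on slide 5. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;           /* signalling rate per lane */
    const double encoding_efficiency = 0.8; /* 8b/10b: 8 data bits per 10 line bits */
    const int widths[] = {1, 4, 12};        /* 1X, 4X, 12X links */

    for (int i = 0; i < 3; i++) {
        double raw  = widths[i] * lane_gbps;
        double data = raw * encoding_efficiency;
        printf("%2dX link: %5.1f Gbps signalling, %5.1f Gbps usable data rate\n",
               widths[i], raw, data);
    }
    return 0;
}
```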
  6. Physical Devices
     - Standard copper cabling
       - Max distance of 17 meters
     - Fiber-optic cabling
       - Max distance of 10 kilometers
     - Host Channel Adapters on PCI cards
       - PCI, PCI-X, PCI-Express
     - InfiniBand switches
       - 10 Gbps non-blocking, per port
       - Easily cascadable
  7. Host Channel Adapters
     - Standard PCI
       - 133 MBps
       - PCI 2.2: 533 MBps
     - PCI-X
       - 1066 MBps
       - PCI-X 2: 2133 MBps
     - PCI-Express
       - x1: 5 Gbps
       - x4: 20 Gbps
       - x8: 40 Gbps
       - x16: 80 Gbps
  8. DAFS
     - Direct Access File System
       - Protocol for file storage and access
       - Data transferred as logical files, not physical storage blocks
       - Transferred directly from storage to client
         - Bypasses the CPU and kernel
     - Provides RDMA functionality
     - Uses the Virtual Interface (VI) architecture
       - Developed by Microsoft, Intel, and Compaq in 1996
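RDMA transfers of the kind DAFS relies on require the buffer to be registered (pinned and given access rights) with the HCA before a remote peer can read or write it. A sketch of that registration step with libibverbs, again an assumed API rather than one prescribed by the slides:

```c
/* Sketch: register a buffer so a remote peer may RDMA-read/write it.
 * Assumes an already-opened device context `ctx` (see the discovery sketch above). */
#include <stdlib.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_rdma_buffer(struct ibv_context *ctx,
                                    struct ibv_pd **pd_out,
                                    void **buf_out, size_t len)
{
    struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain for the buffer */
    if (!pd)
        return NULL;

    void *buf = malloc(len);                 /* the memory the remote peer will target */
    if (!buf)
        return NULL;

    /* Pin the buffer and grant local write plus remote read/write access. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    *pd_out  = pd;
    *buf_out = buf;
    return mr;   /* mr->rkey and the buffer address are handed to the remote side */
}
```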
  9. RDMA
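Slide 9's RDMA picture comes down to the initiator posting a work request that names the remote buffer's address and registration key; the HCA then moves the bytes with no CPU or kernel involvement on the target. A hedged sketch of posting one RDMA write on an already-connected queue pair (the names `qp`, `mr`, `remote_addr`, and `remote_rkey` are assumptions; the address/rkey pair would have been exchanged out of band):

```c
/* Sketch: post a single RDMA WRITE on a connected queue pair `qp`.
 * `mr`/`local_buf` come from the registration sketch above. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr, void *local_buf,
                    size_t len, uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;        /* target address on the peer */
    wr.wr.rdma.rkey        = remote_rkey;        /* peer's registration key */

    /* The HCA performs the transfer; the remote CPU and kernel are not involved. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```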
  10. TCP/IP Packet Overhead
  11. Latency Comparison
     - Standard Ethernet, TCP/IP driver
       - 80 to 100 microseconds latency
     - Standard Ethernet, Dell NIC with MPICH over TCP/IP
       - 65 microseconds latency
     - InfiniBand 4X with MPI driver
       - 6 microseconds
     - Myrinet
       - 6 microseconds
     - Quadrics
       - 3 microseconds
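The deck does not show how these figures were obtained; the conventional way to measure small-message latency over MPI is a ping-pong loop, as in the illustrative sketch below (not the benchmark used for the numbers above):

```c
/* Illustrative MPI ping-pong latency test. Run with two ranks: mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double elapsed = MPI_Wtime() - start;
        /* Half the average round-trip time is the conventional one-way latency figure. */
        printf("one-way latency: %.2f microseconds\n",
               elapsed / iters / 2.0 * 1e6);
    }
    MPI_Finalize();
    return 0;
}
```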
  12. Latency Comparison
  13. References
     - InfiniBand Trade Association: www.infinibandta.org
     - OpenIB Alliance: www.openib.org
     - TopSpin: www.topspin.com
     - Wikipedia: www.wikipedia.org
     - O'Reilly: www.oreillynet.com
     - SourceForge: infiniband.sourceforge.net
     - "Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics." Computer and Information Science, Ohio State University. nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf
