    InfiniBand.ppt Presentation Transcript

    • InfiniBand (Bart Taylor)
    • What it is: "InfiniBand™ Architecture defines a new interconnect technology for servers that changes the way data centers will be built, deployed and managed. By creating a centralized I/O fabric, InfiniBand Architecture enables greater server performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based switched fabric point-to-point architecture." --www.infinibandta.org
    • History
      • InfiniBand is the result of a merger of two competing designs for an inexpensive high-speed network.
      • Future I/O and Next Generation I/O were combined to form what we now know as InfiniBand.
      • Future I/O was being developed by Compaq, IBM, and HP.
      • Next Generation I/O was being developed by Intel, Microsoft, and Sun Microsystems.
      • The InfiniBand Trade Association maintains the specification.
    • The Basic Idea
      • High speed, low latency data transport
      • Bidirectional serial bus
      • Switched fabric topology
        • Several devices communicate at once
      • Data transferred in packets that together form messages
        • Messages can be direct memory access (RDMA), channel send/receive, or multicast
      • Host Channel Adapters (HCAs) are deployed on PCI cards (an HCA enumeration sketch follows this list)
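None of the code below appears in the original deck. As a minimal sketch of how a host sees its HCAs, it uses the verbs library from the OpenIB stack (libibverbs; see the OpenIB Alliance reference at the end) to list the adapters present and the state of their ports. Device names and port counts are whatever the local system reports.

```c
/* Sketch: enumerate the host's HCAs and report their ports via libibverbs.
 * Build on a host with the OpenIB/OFED verbs stack: gcc hca_list.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("HCA %s: %d physical port(s)\n",
                   ibv_get_device_name(devs[i]), (int)dev_attr.phys_port_cnt);

            /* Ports are numbered from 1 in the verbs API. */
            for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr) == 0)
                    printf("  port %d: state %d, LID 0x%04x\n",
                           p, (int)port_attr.state, (int)port_attr.lid);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```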
    • Main Features
      • Low Latency Messaging: < 6 microseconds
      • Highly Scalable: Tens of thousands of nodes
      • Bandwidth: 3 levels of link performance (1X, 4X, 12X signaling rates; data rates worked out after this list)
        • 2.5 Gbps
        • 10 Gbps
        • 30 Gbps
      • Allows multiple fabrics on a single cable
        • Up to 8 virtual lanes per link
        • No interdependency between different traffic flows
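The 2.5, 10, and 30 Gbps figures are the signaling rates of 1X, 4X, and 12X links. Original InfiniBand uses 8b/10b encoding, so roughly 80% of the signaling rate carries data; the short calculation below (plain C, not from the slides) works out the resulting data rates.

```c
/* Sketch: effective data rate of the three original InfiniBand link widths.
 * 8b/10b encoding carries 8 data bits in every 10 signal bits (80% efficiency). */
#include <stdio.h>

int main(void)
{
    const int    widths[]  = { 1, 4, 12 };     /* 1X, 4X, 12X links        */
    const double lane_gbps = 2.5;              /* signaling rate per lane  */
    const double encoding  = 8.0 / 10.0;       /* 8b/10b coding efficiency */

    for (int i = 0; i < 3; i++) {
        double signal = widths[i] * lane_gbps; /* 2.5, 10, 30 Gbps */
        double data   = signal * encoding;     /* 2.0,  8, 24 Gbps */
        printf("%2dX link: %5.1f Gbps signaling, %5.1f Gbps data\n",
               widths[i], signal, data);
    }
    return 0;
}
```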
    • Physical Devices
      • Standard copper cabling
        • Max distance of 17 meters
      • Fiber-optic cabling
        • Max distance of 10 kilometers
      • Host Channel Adapters on PCI cards
        • PCI, PCI-X, PCI-Express
      • InfiniBand Switches
        • 10 Gbps non-blocking, per port
        • Easily cascadable
    • Host Channel Adapters
      • Standard PCI
        • 133 MBps
        • PCI 2.2 - 533 MBps
      • PCI-X
        • 1066 MBps
        • PCI-X 2.0 - 2133 MBps
      • PCI-Express
        • x1 5 Gbps
        • x4 20 Gbps
        • x8 40 Gbps
        • x16 80 Gbps (bus bandwidth arithmetic is sketched after this list)
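The parallel-bus figures above are simply bus width times clock rate; the sketch below (not from the slides) reproduces them. The PCI Express numbers on the slide appear to be raw signaling rates with both directions added (2.5 Gbps per lane per direction), so they are noted only in a comment.

```c
/* Sketch: where the parallel PCI / PCI-X bandwidth figures come from.
 * Peak throughput = (bus width in bytes) x (bus clock).
 * The slide's PCI Express figures (x1 = 5 Gbps ... x16 = 80 Gbps) count
 * 2.5 Gbps of raw signaling per lane per direction, both directions added. */
#include <stdio.h>

struct bus { const char *name; int width_bits; double clock_mhz; };

int main(void)
{
    const struct bus buses[] = {
        { "PCI 32-bit / 33 MHz",        32,  33.33 },  /* ~133 MBps  */
        { "PCI 2.2 64-bit / 66 MHz",    64,  66.67 },  /* ~533 MBps  */
        { "PCI-X 64-bit / 133 MHz",     64, 133.33 },  /* ~1066 MBps */
        { "PCI-X 2.0 64-bit / 266 MHz", 64, 266.67 },  /* ~2133 MBps */
    };

    for (int i = 0; i < 4; i++) {
        double mbps = (buses[i].width_bits / 8.0) * buses[i].clock_mhz;
        printf("%-28s %7.0f MBps peak\n", buses[i].name, mbps);
    }
    return 0;
}
```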
    • DAFS
      • Direct Access File System
        • Protocol for file storage and access
        • Data transferred as logical files, not physical storage blocks
        • Transferred directly from storage to client
          • Bypasses the host CPU and OS kernel
      • Provides RDMA functionality
      • Uses the Virtual Interface (VI) architecture
        • Developed by Microsoft, Intel, and Compaq in 1996
    • RDMA
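The RDMA slide itself was presumably a diagram; only the title survives in the transcript. As a hedged illustration of the mechanism, the fragment below uses libibverbs to register a buffer with the first HCA found. Registration pins the memory and yields a local key (lkey) and a remote key (rkey); a peer that knows the buffer address and rkey can then read or write the buffer directly, without a CPU copy on this host. Queue-pair setup and the actual RDMA work request are omitted to keep the sketch short, and nothing here is specific to the presentation.

```c
/* Sketch: registering memory for RDMA with libibverbs.
 * A remote peer holding this buffer's address and rkey can issue
 * RDMA READ/WRITE operations against it with no CPU copy on this host. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);       /* protection domain */
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    size_t len = 4096;
    void *buf = malloc(len);

    /* Pin the buffer and grant local write plus remote read/write access. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* These three values are what the peer needs to target this buffer. */
    printf("addr=%p len=%zu rkey=0x%x (lkey=0x%x)\n",
           buf, len, mr->rkey, mr->lkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```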
    • TCP/IP Packet Overhead
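This slide also appears to have been a figure. As a rough worked example (not from the slides), the snippet below totals the fixed header bytes of TCP over IPv4 over Ethernet for a full-sized frame: the wire overhead is only a few percent, so the cost the slide is pointing at is mostly the per-packet CPU work of the TCP/IP stack (interrupts, checksums, kernel buffer copies) that RDMA avoids.

```c
/* Sketch: fixed header bytes carried by every full-sized TCP/IPv4/Ethernet packet. */
#include <stdio.h>

int main(void)
{
    const int tcp_hdr = 20;    /* TCP header, no options            */
    const int ip_hdr  = 20;    /* IPv4 header, no options           */
    const int eth_hdr = 14;    /* Ethernet destination/source/type  */
    const int eth_fcs = 4;     /* Ethernet frame check sequence     */
    const int mtu     = 1500;  /* standard Ethernet MTU             */

    int payload = mtu - ip_hdr - tcp_hdr;     /* 1460 bytes of data   */
    int on_wire = mtu + eth_hdr + eth_fcs;    /* 1518 bytes per frame */
    double overhead = 100.0 * (on_wire - payload) / on_wire;

    printf("payload %d of %d bytes on the wire: %.1f%% header overhead\n",
           payload, on_wire, overhead);
    return 0;
}
```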
    • Latency Comparison
      • Standard Ethernet TCP/IP Driver
        • 80 to 100 microseconds latency
      • Standard Ethernet Dell NIC with MPICH over TCP/IP
        • 65 microseconds latency
      • InfiniBand 4X with MPI driver
        • 6 microseconds latency
      • Myrinet
        • 6 microseconds latency
      • Quadrics
        • 3 microseconds latency (a minimal ping-pong benchmark of the kind behind such figures is sketched after this list)
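MPI-level latencies like these are usually measured with a ping-pong micro-benchmark: rank 0 sends a small message, rank 1 echoes it back, and half the averaged round-trip time is reported. The sketch below is a generic version in plain MPI; it is not the OSU benchmark cited in the references and is not tied to any particular interconnect.

```c
/* Sketch: ping-pong latency micro-benchmark, run with exactly 2 MPI ranks.
 * Half of the average round-trip time for a small message approximates
 * the one-way MPI latency of the interconnect. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char msg[4] = { 0 };                     /* small 4-byte message */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f microseconds\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```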
    • Latency Comparison
    • References
      • InfiniBand Trade Association - www.infinibandta.org
      • OpenIB Alliance - www.openib.org
      • TopSpin - www.topspin.com
      • Wikipedia - www.wikipedia.org
      • O’Reilly - www.oreillynet.com
      • SourceForge - infiniband.sourceforge.net
      • Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics. Computer and Information Science, Ohio State University - nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf