DIY InfiniBand networking

My InfiniBand + Sandy Bridge experience, presented at InfiniBand Day 03, held on 9th May, 2011.


    1. DIY InfiniBand Networking. InfiniBand Day 03: 9th May, 2011. @syoyo
    2. Local illumination vs. global illumination: global illumination needs roughly x100 CPU power and x10 I/O.
    3. We need bandwidth!
    4. Test setup:
       • 4 Sandy Bridge CPUs
       • 4 two-port IB QDR HCAs
       • Scientific Linux 6
       (Diagram: QDR links from each node to the server)
    5. Why InfiniBand?
       • Faster networking: x10 ~ x40 faster than 1GbE.
       • Cost-effective: ~$100 for an IB 10 Gbps (SDR) HCA.
       • We need at least 1 GB/s transfer.
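The x10 ~ x40 claim above can be sanity-checked from the link signaling rates. A rough sketch (the 8b/10b line-code factor and the DDR entry are my additions, not from the slides):

```python
# Rough bandwidth comparison of InfiniBand 4x link speeds vs 1GbE.
# Signaling rates in Gbit/s; IB SDR/DDR/QDR use 8b/10b encoding,
# so only 80% of the signaling rate carries data.
signaling_gbps = {"1GbE": 1.0, "IB SDR 4x": 10.0, "IB DDR 4x": 20.0, "IB QDR 4x": 40.0}

def data_rate_gbps(link):
    rate = signaling_gbps[link]
    # 1GbE's nominal 1 Gbps is already the data rate; IB needs the 0.8 factor.
    return rate * 0.8 if link.startswith("IB") else rate

for link in signaling_gbps:
    gbps = data_rate_gbps(link)
    print(f"{link}: {gbps:.1f} Gbps data rate = {gbps / 8:.2f} GB/s, "
          f"{signaling_gbps[link] / signaling_gbps['1GbE']:.0f}x 1GbE signaling rate")
```

The x10 ~ x40 figure on the slide compares signaling rates (10 and 40 Gbps vs 1 Gbps); in data-rate terms QDR delivers 32 Gbit/s, i.e. 4 GB/s, comfortably above the 1 GB/s target.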
    6. Why DIY?
       • No vendor yet seems to support the Sandy Bridge (AVX) + IB QDR combination.
       • Kernel 2.6.32 or later (RHEL6 or Ubuntu 10.04+) is required for AVX support.
       • So we put Sandy Bridge + IB together ourselves.
    7. What you need:
       • HW: IB HCA, IB cable, IB switch (optional for a small network)
       • SW: driver suite (OFED, WinOF)
    8. OFED
       • OSS IB driver suite; 1.5.3.1 as of May 2011.
       • IPoIB CM is now the default connection mode: accelerates TCP/IP apps without modification.
    9. Linux + OFED 1.5.3.1: confirmed distros
       • RHEL6 / Scientific Linux 6 (SRPT needs a recent scst svn trunk)
       • Ubuntu 10.04, 10.10
    10. Performance, Functionality & Interoperability
    11. MPI
        • Confirmed: 3.2 GB/s unidirectional bandwidth! Hits the 5 GT/s x8 PCIe bandwidth peak.
        • 1 ~ 2 us latency!
    12. IPoIB (IP over InfiniBand)
        • Accelerates existing TCP/IP apps without modification; still uses more CPU time.
        • 930 MB/s bandwidth confirmed with an IB SDR HCA (in Connected Mode).
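Switching an IPoIB interface to connected mode is typically done by writing `connected` to the interface's sysfs `mode` file and then raising the MTU. A minimal sketch, assuming the standard `/sys/class/net/<ifname>/mode` file exposed by the `ipoib` driver (the helper name and parameterized sysfs root are mine, for illustration):

```python
# Minimal sketch: put an IPoIB interface into connected mode via sysfs.
# Connected mode permits a large MTU (up to 65520) and higher throughput
# than the default datagram mode on older OFED releases.
import pathlib

def set_ipoib_mode(ifname, mode="connected", sysfs_root="/sys/class/net"):
    """Write the IPoIB mode file and return the value read back."""
    mode_file = pathlib.Path(sysfs_root) / ifname / "mode"
    mode_file.write_text(mode + "\n")
    return mode_file.read_text().strip()

# Usage (needs root and a real IPoIB interface, e.g. ib0):
#   set_ipoib_mode("ib0")
#   then raise the MTU: ip link set ib0 mtu 65520
```

Newer OFED releases make connected mode the default, as slide 8 notes, so this step matters mainly on older installs.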
    13. SRP (SCSI over RDMA)
        • 700 ~ 800 MB/s bandwidth confirmed on an IB SDR HCA.
        • iSCSI + IPoIB CM achieves the same performance, though it consumed more CPU time.
    14. WinOF 2.3 <-> OFED 1.5.3.1 interoperability confirmed: IPoIB, SRP (iSCSI)
    15. SRP / iSCSI setup: client: Win7; SRPT target: Linux, IB SDR, tmpfs backing store
    16. FlexBoot
        • Network boot from an IB HCA: faster boot image transfer?
        • We only confirmed that FlexBoot works on a ConnectX HCA.
    17. Conclusion
    18. Pros
        • Fast!
        • Cost-effective!
        • IPoIB CM shows good performance: accelerates TCP/IP apps without modification.
        • Win <-> Lin interop now seems stable in OFED 1.5.3.1.
    19. Cons
        • OFED drivers are hard to install on a newer distribution or newer system configuration.
        • Kernel recompilation is required for SRPT.
        • IPoIB CM seems not yet available on Windows.
    20. ToDo
        • InfiniBand virtualization: e.g. 4 VMs sharing one 40 Gbps IB HCA (10 Gbps per VM).
        • InfiniBand-ready distributed storage systems: GlusterFS, NFS/RDMA.
        • Diskless boot with FlexBoot.
    21. InfiniBand @ HOME: InfiniBand?
    22. mini ITX ...
    23. IB QDR ...
    24. IB (4 )
    25. Many thanks go to:
        • @hiroysato for FlexBoot advice
        • @yitabashi for IB
        • @takahashi9854 for InfiniBand
