InfiniBand in the Lab (vSphere 5.5)
vBrownbag Tech Talk, VMworld 2013 EMEA Barcelona

Erik Bussink
@ErikBussink
www.bussink.ch
www.VSAN.info

Executive Summary
• Fast
• Cheap

Who uses InfiniBand for Storage?
• EMC Isilon
• EMC XtremIO
• PureStorage
• Oracle Exadata
• Nexenta
• Teradata
• Gluster

What InfiniBand Offers
• Lowest layer for a scalable IO interconnect
• High performance
• Low latency
• Reliable switch fabric
• Offers higher layers of functionality:
  • Application clustering
  • Fast inter-process communications
  • Storage area networks

InfiniBand physical layer
• Physical layer based on the 802.3z specification, operating at 2.5Gb/s, the same signalling standard as 10G Ethernet (3.125Gb/s)
• InfiniBand layer 2 switching uses a 16-bit local address (LID), so a subnet is limited to 2^16 (65,536) nodes

Connectors
• For SDR (10Gb/s) and DDR (20Gb/s), use CX4 connectors
• For QDR (40Gb/s) and FDR (56Gb/s), use QSFP connectors

Price comparison on Adapters

And switches too...
• Example: a 24-port DDR (20Gb/s) switch for $499
• The latest firmware (4.2.5) has an embedded Subnet Manager

Subnet Manager
• The Subnet Manager assigns a Local IDentifier (LID) to each port in the IB fabric and builds a routing table based on the assigned LIDs
• Hardware-based Subnet Managers are located in the switches
• A software-based Subnet Manager runs on a host connected to the IB fabric
• OpenSM from the OpenFabrics Alliance works on Windows and Linux
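
For fabrics whose switch has no embedded Subnet Manager, here is a minimal sketch of running OpenSM on a Linux host attached to the fabric (the Debian/Ubuntu package names are assumptions; packaging varies by distribution):

  # Install OpenSM and the InfiniBand diagnostic tools
  apt-get install opensm infiniband-diags

  # Run OpenSM in daemon mode; it sweeps the fabric and assigns a LID to every port
  opensm --daemon

  # Check that the local port went ACTIVE and received a LID
  ibstat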

OpenFabrics Enterprise Distribution (OFED)

Installing InfiniBand on vSphere 5.1

Configure MTU and OpenSM
• esxcli system module parameters set -m=mlx4_core -p=mtu_4k=1
• On old switches with no 4K support, the MTU is 2044 (2048-4) for IPoIB
• vi partitions.conf
• Add the single line "Default=0x7fff,ipoib,mtu=5:ALL=full;"
• copy partitions.conf /scratch/opensm/adapter_1_hca/
• copy partitions.conf /scratch/opensm/adapter_2_hca/
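
A minimal sketch of those steps from the ESXi shell; the adapter_1_hca and adapter_2_hca directory names are the ones shown above and will differ depending on the installed HCA:

  # Enable 4K MTU support in the Mellanox mlx4_core module (takes effect after a reboot)
  esxcli system module parameters set -m mlx4_core -p mtu_4k=1

  # Default partition with full membership; mtu=5 is the InfiniBand code for a 4096-byte MTU
  echo 'Default=0x7fff,ipoib,mtu=5:ALL=full;' > /tmp/partitions.conf

  # Copy the partition file next to each HCA port used by OpenSM
  cp /tmp/partitions.conf /scratch/opensm/adapter_1_hca/
  cp /tmp/partitions.conf /scratch/opensm/adapter_2_hca/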

Installing InfiniBand on vSphere 5.5
• Mellanox drivers are included in vSphere 5.5 and support 40GbE inbox when using ConnectX-3 and SwitchX products
• If you are not using those HCAs, uninstall the inbox Mellanox drivers:
• esxcli software vib remove -n=net-mlx4-en -n=net-mlx4-core
• And reboot
• Then install the Mellanox OFED 1.8.2:
• esxcli software vib install -d /tmp/MLX
• And reboot
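
A sketch of the same driver swap from the ESXi shell; /tmp/MLX is kept as shown above and stands in for the full path of the Mellanox OFED 1.8.2 offline bundle:

  # Remove the inbox Mellanox drivers, then reboot
  esxcli software vib remove -n net-mlx4-en -n net-mlx4-core
  reboot

  # After the reboot, install the Mellanox OFED 1.8.2 offline bundle and reboot again
  esxcli software vib install -d /tmp/MLX
  reboot

  # Verify which Mellanox VIBs are present
  esxcli software vib list | grep -i mlx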

Copy partitions.conf
• Set the MTU for mlx4
• Copy partitions.conf to /scratch/opensm/<HCA_Port>/
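
A sketch of locating the per-port OpenSM directories and dropping the partition file into each one; /tmp/partitions.conf is the file created earlier and the directory names depend on the installed HCA:

  # List the <HCA_Port> directories created under /scratch/opensm/
  ls /scratch/opensm/

  # Copy the partition definition into each port directory
  for port in /scratch/opensm/*/ ; do
      cp /tmp/partitions.conf "$port"
  done
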
InfiniBand IPoIB backbone for VSAN

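With the IPoIB VMkernel interfaces in place, a minimal sketch of tagging one for VSAN traffic (vmk1 is an assumed interface name):

  # Tag the IPoIB-backed VMkernel interface for VSAN traffic
  esxcli vsan network ipv4 add -i vmk1

  # Confirm VSAN now lists the interface
  esxcli vsan network list
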
Configure VMkernel MTU
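A sketch of matching the VMkernel and vSwitch MTU to the IPoIB MTU; vmk1, vSwitch1 and the 4092-byte value are assumptions (older switches without 4K support would use 2044, as noted earlier):

  # Raise the vSwitch MTU first so the VMkernel interface can use it
  esxcli network vswitch standard set -v vSwitch1 -m 4092

  # Set the VMkernel interface MTU
  esxcli network ip interface set -i vmk1 -m 4092

  # Verify the MTU values
  esxcli network ip interface list
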
InfiniBand in the Lab
Fast & Cheap
Erik Bussink
Thanks to Raphael Schitz, William Lam, Vladan Seget, Gregory Roche

@ErikBussink
www.bussink.ch
www.VSAN.info