InfiniBand in the Lab (vSphere 5.5) VMworld 2013 Barcelona vBrownbag Tech Talk

Small presentation given at VMworld 2013 EMEA Barcelona for the vBrownbag Tech Talks about how and why to use InfiniBand in the Lab


Published in: Technology

Transcript

  • 1. InfiniBand in the Lab Erik Bussink @ErikBussink www.bussink.ch www.VSAN.info 1
  • 2. Executive Summary Fast Cheap 2
  • 3. Who uses InfiniBand for Storage? • EMC Isilon • EMC XtremIO • Pure Storage • Oracle Exadata • Nexenta • Teradata • Gluster 3
  • 4. What InfiniBand Offers • Lowest layer for a scalable IO interconnect • High performance • Low latency • Reliable switch fabric • Offers higher layers of functionality • Application clustering • Fast inter-process communication • Storage Area Networks 4
  • 5. InfiniBand physical layer • Physical layer based on the 802.3z specification, operating at 2.5 Gb/s per lane (10G Ethernet uses the same signalling scheme at 3.125 Gb/s per lane) • InfiniBand layer 2 switching uses a 16-bit local identifier (LID), so a subnet is limited to 2^16 nodes 5
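For reference, the LID assignment can be inspected from a Linux host attached to the fabric (a sketch, assuming the OFED infiniband-diags and libibverbs utilities are installed; these tools are not part of the slides):

    ibstat                 # per-port state, rate and the "Base lid" assigned by the Subnet Manager
    ibv_devinfo            # verbose port details, including port_lid and active MTU
    ibhosts                # list the channel adapters discovered on the fabric

Until a Subnet Manager is running, ports stay in the INIT state and keep LID 0.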
  • 7. Connectors • For SDR (10Gb/s) and DDR (20Gb/s) use the CX4 connectors • For QDR (40Gb/s) and FDR (56Gb/s) use QSFP connectors 7
  • 8. Price comparison of adapters 8
  • 9. And switches too... Example: 24 ports of DDR (20 Gb/s) for $499. The latest firmware (4.2.5) includes an embedded Subnet Manager 9
  • 10. Subnet Manager • The Subnet Manager assigns Local IDentifiers (LIDs) to each port in the IB fabric and builds a routing table based on the assigned LIDs • Hardware-based Subnet Managers are located in the switches • A software-based Subnet Manager runs on a host connected to the IB fabric • OpenSM from the OpenFabrics Alliance works on Windows and Linux 10
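If none of the switches has an embedded Subnet Manager, OpenSM can be started on any host attached to the fabric. A minimal sketch, assuming a Linux host with the OpenSM package from OFED installed (<port_GUID> is a placeholder taken from the ibstat output):

    opensm -B                   # start OpenSM as a background daemon on the first active port
    opensm -B -g <port_GUID>    # or bind it to a specific HCA port GUID (placeholder)
    sminfo                      # confirm which Subnet Manager is currently the master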
  • 11. OpenFabrics Enterprise Distribution (OFED) 11
  • 12. Installing InfiniBand on vSphere 5.1 12
  • 13. Configure MTU and OpenSM • esxcli system module parameters set -m=mlx4_core -p=mtu_4k=1 • On old switches (no 4K support) the IPoIB MTU is 2044 (2048 - 4) • vi partitions.conf • Add the single line "Default=0x7fff,ipoib,mtu=5:ALL=full;" • cp partitions.conf /scratch/opensm/adapter_1_hca/ • cp partitions.conf /scratch/opensm/adapter_2_hca/ 13
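Pulling the MTU part together, a sketch of the host-side steps (the mtu_4k parameter is only read when the mlx4_core driver loads, so a reboot is the simplest way to apply it, and it only helps if every switch on the path supports 4K MTU):

    esxcli system module parameters set -m=mlx4_core -p=mtu_4k=1      # enable 4K MTU in the mlx4 driver
    esxcli system module parameters list -m mlx4_core | grep mtu_4k   # verify the parameter is set
    reboot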
  • 15. Installing InfiniBand on vSphere 5.5 • Mellanox drivers are included in vSphere 5.5 and support 40GbE inbox when using ConnectX-3 and SwitchX products. • If you are not using those HCAs, you need to uninstall the inbox Mellanox drivers • esxcli software vib remove -n=net-mlx4-en -n=net-mlx4-core • And reboot • Then install the Mellanox OFED 1.8.2 • esxcli software vib install -d /tmp/MLX • And reboot 15
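As a sketch, the whole driver swap looks like this (the OFED bundle path is a placeholder for wherever the Mellanox OFED 1.8.2 offline bundle was uploaded, and the host should ideally be in maintenance mode before removing VIBs):

    esxcli software vib list | grep mlx                            # check which Mellanox VIBs are inbox
    esxcli software vib remove -n=net-mlx4-en -n=net-mlx4-core     # remove the inbox 40GbE driver
    reboot
    esxcli software vib install -d <path_to_MLNX-OFED-1.8.2.zip>   # placeholder path to the OFED offline bundle
    reboot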
  • 16. Installing InfiniBand on vSphere 5.5 16
  • 17. Copy partitions.conf • Set MTU for mlx4 • Copy partitions.conf to /scratch/opensm/<HCA_Port>/ 17
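A sketch of that step, reusing the per-HCA-port directory names from slide 13 (adapter_1_hca and adapter_2_hca; the names on your host may differ, and mtu=5 is the OpenSM encoding for a 4096-byte IB MTU):

    # default partition key 0x7fff, IPoIB enabled, 4K MTU
    echo 'Default=0x7fff,ipoib,mtu=5:ALL=full;' > /tmp/partitions.conf
    cp /tmp/partitions.conf /scratch/opensm/adapter_1_hca/partitions.conf
    cp /tmp/partitions.conf /scratch/opensm/adapter_2_hca/partitions.conf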
  • 18. InfiniBand IPoIB backbone for VSAN 18
  • 19. Configure VMkernel MTU
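A sketch of the matching vSwitch and VMkernel settings (vSwitch1 and vmk1 are placeholders for the IPoIB vSwitch and VMkernel port; with 4K-capable switches the usable IPoIB MTU is 4092, following the same 4-byte overhead rule as the 2044 value on slide 13, otherwise use 2044):

    esxcli network vswitch standard set -v vSwitch1 -m 4092    # raise the vSwitch MTU (placeholder name)
    esxcli network ip interface set -i vmk1 -m 4092            # raise the VMkernel interface MTU (placeholder name)
    esxcli network ip interface list                           # verify the new MTU
    vmkping -d -s 4064 <remote_IPoIB_IP>                       # test end to end with don't-fragment set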
  • 20. InfiniBand in the Lab Fast & Cheap Erik Bussink Thanks to Raphael Schitz, William Lam, Vladan Seget, Gregory Roche @ErikBussink www.bussink.ch www.VSAN.info 20
