Mutating IP Network Model Ethernet-InfiniBand Interconnect

  1. 05 Feb, 2013 / SAKURA Internet Research Center / Senior Researcher: Naoto MATSUMOTO
  2. HighGig Market Overview (lowest prices; SOURCE: (C) 2013 Colfax International)
     Network adapters (dual-port, lowest price):
       10GbE-NIC $299- / 40GbE-NIC $779- / 40Gbps-HCA $814- / 56Gbps-HCA $1,311-
     Network switches (8/18/36/48/64-port, lowest price):
       40Gbps-IBSW (8-18-36P) $1,923 - $6,250 / 10GbE-SW (48-64P) $8,280 - $11,750 / 56Gbps-IBSW and 40GbE-SW over $16,000-
  3. HighGig Network Benchmark
     RAMDISK-to-RAMDISK transfer of a 10GB data chunk between two servers:
       wget + thttpd: 5.58 - 8.00 Gbit/s / rcopy (libcm package): 13.68 - 18.23 Gbit/s
     Testbed: Core i7-3930K CPU @ 3.20GHz / 32GB DDR3-DIMM / PCI Express 3.0 / Mellanox ConnectX-3 VPI card (10/40/56Gbit/s)
     SOURCE: SAKURA Internet Research Center, 12/2012 rev2, Project THORN.
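     A minimal sketch of the wget + thttpd leg of this test (the mount point,
     file name, and server address 192.168.0.10 are illustrative assumptions,
     not from the deck):
     # mount -t tmpfs -o size=12g tmpfs /mnt/ramdisk     *RAMDISK on both hosts
     # dd if=/dev/zero of=/mnt/ramdisk/10g.bin bs=1M count=10240
     # thttpd -d /mnt/ramdisk -p 80                      *server side: publish the chunk
     # wget -O /mnt/ramdisk/10g.bin http://192.168.0.10/10g.bin   *client side: fetch to RAMDISK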
  4. InfiniBand-Ethernet Interconnect
     Packet generator: VyattaCore 6.5R1 [VM] on VMware ESXi 5.1, driving traffic
     between servers on the 10Gbit/s IP-over-Ethernet network and servers on the
     40/56Gbit/s IP-over-InfiniBand network.
       Peak (RX): 8.23 Gbit/s, 709,288 pps (MTU1500) / 308.99 Mbit/s, 791,004 pps (MTU64)
     Testbed: Core i7-3930K CPU @ 3.20GHz / 32GB DDR3-DIMM / PCI Express 3.0 / Mellanox ConnectX-3 VPI card (10/40/56Gbit/s)
     SOURCE: SAKURA Internet Research Center, 02/2013, Project THORN. Measured with the standard Linux pktgen.
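     The standard Linux pktgen named on the slide is driven through /proc; a
     minimal single-thread configuration (device eth1, destination address/MAC,
     and packet count are illustrative assumptions):
     # modprobe pktgen
     # echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
     # echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
     # echo "count 10000000" > /proc/net/pktgen/eth1     *packets to send
     # echo "pkt_size 64" > /proc/net/pktgen/eth1        *64 or 1500 to match the two runs
     # echo "dst 10.0.0.2" > /proc/net/pktgen/eth1
     # echo "dst_mac 00:02:c9:00:00:01" > /proc/net/pktgen/eth1
     # echo "start" > /proc/net/pktgen/pgctrl            *blocks until the run completes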
  5. Cost-Effective IP Networking Model
     An IPoIB gateway (software or hardware) bridges the existing 10Gbit/s
     IP-over-Ethernet infrastructure to a 40/56Gbit/s IP-over-InfiniBand network,
     keeping flexibility and existing resources while adding high performance at
     lower cost:
       48/64-port 10GbE switch: $8,280 - $11,750 > 8/18/36-port 40G IB switch: $1,923 - $6,250
     SOURCE: (C) 2013 Colfax International / SAKURA Internet Research Center, 02/2013, Project THORN.
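     A software gateway in this model can be a plain Linux router with one
     Ethernet and one IPoIB interface (addresses below are illustrative
     assumptions):
     # ifconfig eth0 192.168.0.1 netmask 255.255.255.0 up   *IP over Ethernet side
     # ifconfig ib0 172.16.0.1 netmask 255.255.255.0 up     *IP over InfiniBand side
     # sysctl -w net.ipv4.ip_forward=1                      *forward between the segments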
  6. (C) Copyright 1996-2010 SAKURA Internet Inc.
  7. InfiniBand Device List
     *Mellanox VPI (Virtual Protocol Interconnect) cards run in either 40/56Gbit/s
      InfiniBand mode or 10/40Gbit/s Ethernet mode.
     Required components: InfiniBand HCA (Host Channel Adapter)
       + InfiniBand 40/56Gbit/s QDR switch
       + QSFP cable (copper/fibre)
       + InfiniBand OFED (OpenFabrics Enterprise Distribution) software
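     Under the Linux mlx4 driver, a VPI port's protocol can be switched through
     sysfs; a sketch assuming the HCA sits at PCI address 0000:01:00.0 ("auto"
     is also accepted):
     # echo eth > /sys/bus/pci/devices/0000:01:00.0/mlx4_port1   *10/40Gbit/s Ethernet mode
     # echo ib > /sys/bus/pci/devices/0000:01:00.0/mlx4_port1    *40/56Gbit/s InfiniBand mode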
  8. IB Networking Example
     Every InfiniBand fabric needs a subnet manager: OpenSM (*the InfiniBand
     subnet manager and administration daemon) runs on one or more servers and
     programs a loop-free forwarding topology across the IB switches.
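     Multiple OpenSM instances can run for redundancy; one is elected master by
     priority. The active SM is visible with sminfo from infiniband-diags
     (the output line below is illustrative):
     # sminfo                                            *query the master subnet manager
     sminfo: sm lid 2 sm guid 0x2c9030010c360, activity count 1024 priority 14 state 3 SMINFO_MASTER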
  9. Install OFED for Linux
     1) Download MLNX_OFED
     # uname -a
     Linux 2.6.32-220.el6.x86_64 #1 SMP Sat Dec 10 17:04:11 CST 2011
     # wget http://mellanox.com/downloads/ofed/MLNX_OFED_LINUX-1.5.3-3.1.0-rhel6.2-x86_64.tgz
     2) Install MLNX_OFED Package
     # yum install tcl tk glibc-devel glibc-devel.i686
     # tar xzvf ./MLNX_OFED_LINUX-1.5.3-3.1.0-rhel6.2-x86_64.tgz
     # ./MLNX_OFED_LINUX-1.5.3-3.1.0-rhel6.2-x86_64/mlnxofedinstall
     # chkconfig opensmd on; opensm -c /etc/opensm/opensm.conf
     # sync; sync; sync; reboot
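     After the reboot, the HCA state and an IPoIB interface can be verified and
     brought up like this (the 172.16.0.10 address is an illustrative assumption):
     # ibstat                                            *port should report State: Active
     # ifconfig ib0 172.16.0.10 netmask 255.255.255.0 up
     # echo connected > /sys/class/net/ib0/mode          *connected mode allows a large MTU
     # ifconfig ib0 mtu 65520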
  10. Install OFED for VMware ESXi 5.1
      1) Enable ESXi Shell & SSH
      Troubleshooting Options > Enable ESXi Shell, Enable SSH, and SSH login.
      # vmware -v
      VMware ESXi 5.1.0 build-799733
      2) Download/Install MLNX_OFED
      # cd /opt
      # wget http://mellanox.com/downloads/Drivers/MLNX-OFED-ESX-1.8.0.0.zip
      # esxcli software vib install -d /opt/MLNX-OFED-ESX-1.8.0.0.zip
      # sync; sync; sync; reboot -f
      # esxcfg-nics -l
      vmnic_ib0 0000:01:00.00 ib_ipoib Up 56252Mbps Full 00:02:c9:34:1c:f1 1500 Mellanox Technologies MT27500 Family [ConnectX-3]
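      Once vmnic_ib0 is listed, it behaves like any other uplink; a sketch of
      attaching it to a standard vSwitch (the vSwitch1 name is an illustrative
      assumption):
      # esxcli network vswitch standard uplink add --uplink-name=vmnic_ib0 --vswitch-name=vSwitch1
      # esxcli network nic list                         *confirm the IPoIB uplink is Up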
  11. Install OFED for Windows
      1) Download MLNX_OFED from the Mellanox Web page.
      2) Install the MLNX_OFED package.
  12. IB Fabric Example
      A 40/56Gbit/s IP-over-InfiniBand network as a high-speed server interconnect
      fabric for mixed PHYSICAL and VIRTUAL hosts: bare-metal Windows and Linux
      servers alongside Windows/Linux VMs on VMware.
  13. IB Diagnostic Commands
      # ibhosts                  *show InfiniBand host nodes in topology
      Ca : 0x0002c9030010c36c ports 2 "hostname HCA-2"
      Ca : 0x0002c9030010c360 ports 2 "hostname HCA-2"
      Ca : 0x0002c9030010c328 ports 2 "sl6 HCA-1"
      # iblinkinfo               *report link info for all links in the fabric
      Switch 0x000b8cffff006f38 MT43132 Mellanox Technologies:
        2 6[ ] ==( 4X 2.5 Gbps Active/ LinkUp)==> 8 1[ ] "sl6 HCA-1" ( )
        2 7[ ] ==( 4X 2.5 Gbps Active/ LinkUp)==> 9 1[ ] "hostname HCA-2" ( )
        2 8[ ] ==( 4X 2.5 Gbps Active/ LinkUp)==> 10 1[ ] "hostname HCA-2" ( )
      # ibtracert 9 10           *trace the InfiniBand path between LIDs 9 and 10
      From ca {0x0002c9030010c360} portnum 1 lid 9-9 "hostname HCA-2"
      [1] -> switch port {0x000b8cffff006f38}[7] lid 2-2 "MT43132 Mellanox Technologies"
      [8] -> ca port {0x0002c9030010c36d}[1] lid 10-10 "hostname HCA-2"
      To ca {0x0002c9030010c36c} portnum 1 lid 10-10 "hostname HCA-2"
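      Beyond topology checks, raw link bandwidth can be verified with the
      perftest tools shipped with OFED (start the server side first; the
      172.16.0.10 address is an illustrative assumption):
      # ib_send_bw                                      *server side: wait for a peer
      # ib_send_bw 172.16.0.10                          *client side: print average bandwidth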
  14. Thanks for your interest. SAKURA Internet Research Center.
