Docker infiniband
Docker + InfiniBand lightning talk at Docker Meetup in Tokyo #1

Transcript

  • 1. Docker + InfiniBand @syoyo Docker Meetup in Tokyo #1 (Feb 12, 2014) Thursday, February 13, 14
  • 2. Why InfiniBand? • Faster networking • 3.2 GB/s on QDR • Low latency • ~3 us • De-facto interconnect standard in HPC environments
  • 3. Crehan Research
  • 4. My contribution (1/2)
  • 5. My contribution (2/2) • OFED userland port for OpenIndiana (Solaris) • https://github.com/syoyo/oi-build/tree/ofed-1.5.4.1/components/open-fabrics
  • 6. Existing virtualization techniques • SR-IOV • HW-dependent; needs driver/firmware support • PCI passthrough • Only one guest can access the device • ZeroVM • NaCl-based application isolation
  • 7. LXC • Lightweight virtualization • Direct access to PCIe devices
  • 8. Existing research • Performance Evaluation of Container-based Virtualization for High Performance Computing Environments • Xavier, M.G. et al., PDP 2013 • http://marceloneves.org/papers/pdp2013containers.pdf
  • 9. Simple, easy, but powerful!
  • 10. [Diagram: Users A, B, and C sharing one OS's CPU and memory (*)] (*) http://commons.wikimedia.org/wiki/Smiley
  • 11. [Diagram: Containers A, B, and C, each with its own CPU and memory view, on a single OS]
  • 12. [Diagram: Users A, B, and C each wanting a different compiler: /opt/gcc-4.4, /opt/gcc-4.5, /opt/gcc-4.6]
  • 13. [Diagram: gcc 4.4, 4.5, and 4.6 isolated in Containers A, B, and C on a single OS]
  • 14. [Diagram: test setup: two CentOS 6.5 hosts with ConnectX-2 HCAs connected through an IB switch; 192.168.11.16 (native) and 192.168.11.8 (host running a CentOS 6.5 container)]
  • 15. • Privileged mode: on • Bind-mount /dev/infiniband: $ sudo docker run -privileged -v /dev/infiniband:/dev/infiniband -i -t centos6-ib /bin/bash
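The invocation on this slide uses the 2014-era single-dash `-privileged` flag and bind-mounts the whole device directory. As a hedged sketch, on later Docker releases the `--device` flag can pass individual device nodes through instead of granting full privileges; the image name `centos6-ib` comes from the slide, while the uverbs/rdma_cm node names are assumptions that depend on the host's OFED setup:

```shell
#!/bin/sh
# Sketch: build a docker command that exposes InfiniBand device nodes
# to a container via --device (available on Docker >= 1.2) instead of
# a privileged bind mount. The device node names below are assumptions;
# check what actually exists under /dev/infiniband on your host.
IMAGE=centos6-ib

DEVICES=""
for dev in /dev/infiniband/uverbs0 /dev/infiniband/rdma_cm; do
    DEVICES="$DEVICES --device=$dev"
done

CMD="sudo docker run$DEVICES -i -t $IMAGE /bin/bash"
echo "$CMD"
```

The sketch only prints the command so it can be inspected before running; on the 2014 client the slide's `-privileged -v /dev/infiniband:/dev/infiniband` form is the one that works.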
  • 16. [image-only slide]
  • 17. LXC container (run against 192.168.11.16):
    # ib_read_bw:  #bytes 65536, 1000 iterations: BW peak 3008.83 MB/sec, BW average 3008.80 MB/sec, MsgRate 0.048141 Mpps
    # ib_write_bw: #bytes 65536, 5000 iterations: BW peak 3066.07 MB/sec, BW average 3066.05 MB/sec, MsgRate 0.049057 Mpps
    # ib_read_lat:  #bytes 2, 1000 iterations: t_min 1.66 usec, t_max 20.13 usec, t_typical 2.65 usec
    # ib_write_lat: #bytes 2, 1000 iterations: t_min 0.80 usec, t_max 7.12 usec, t_typical 0.86 usec
  • 18. Native (run against 192.168.11.16):
    $ ib_read_bw:  #bytes 65536, 1000 iterations: BW peak 3015.43 MB/sec, BW average 3011.54 MB/sec, MsgRate 0.048185 Mpps
    $ ib_write_bw: #bytes 65536, 5000 iterations: BW peak 3065.79 MB/sec, BW average 3065.63 MB/sec, MsgRate 0.049050 Mpps
    $ ib_read_lat:  #bytes 2, 1000 iterations: t_min 1.68 usec, t_max 39.48 usec, t_typical 2.67 usec
    $ ib_write_lat: #bytes 2, 1000 iterations: t_min 0.80 usec, t_max 4.70 usec, t_typical 0.86 usec
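Comparing the average-bandwidth figures from the two result slides quantifies the overhead claim. As a small sketch, this recomputes the container-vs-native deltas from the numbers exactly as reported on the slides:

```shell
#!/bin/sh
# Relative overhead of the container vs. native, from the ib_read_bw /
# ib_write_bw averages (MB/sec) reported on slides 17 and 18.
NATIVE_READ=3011.54;  CONTAINER_READ=3008.80
NATIVE_WRITE=3065.63; CONTAINER_WRITE=3066.05

# Percentage overhead, positive means the container was slower.
READ_OVERHEAD=$(awk "BEGIN { printf \"%.2f\", ($NATIVE_READ - $CONTAINER_READ) / $NATIVE_READ * 100 }")
WRITE_OVERHEAD=$(awk "BEGIN { printf \"%.2f\", ($NATIVE_WRITE - $CONTAINER_WRITE) / $NATIVE_WRITE * 100 }")

echo "read overhead vs native:  ${READ_OVERHEAD}%"
echo "write overhead vs native: ${WRITE_OVERHEAD}%"
```

Both deltas come out as small fractions of a percent (the write case is even marginally negative, i.e. within run-to-run noise), which is what the "no virtualization overhead" summary refers to.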
  • 19. Details • http://qiita.com/syoyo/items/bea48de8d7c6d8c73435 • in Japanese
  • 20. Summary • Docker + InfiniBand works! • No virtualization overhead!
  • 21. Future work (1/3) • Per-container (per-user) access control for the IB device • e.g. IB port0 -> User A, IB port1 -> User B • Security concerns • Other PCIe devices • GPU • PCI-SSD • etc.
  • 22. Future work (2/3) • Virtualized, highly efficient cloud platform for rendering tasks
  • 23. Future work (3/3) • [Diagram: render nodes A through E connected over InfiniBand, coordinated with etcd]
  • 24. Thank you!
