Virtual Network Performance Challenge

Notes for slides
  • The Linux network stack has evolved: it could keep up with the 1G data rate in 2005, multiqueue NICs then allowed work to be distributed across cores, and it can now sustain 10G bandwidth at roughly 1M packets per second per core, with hypervisor support added along the way.
  • Forwarding measures packets per second, throughput measures bytes per second, and latency measures round-trip time.
  • A hardware tester sends back-to-back packets and measures how many arrive, in both directions at once. At 1G line rate that is roughly 1.4 Mpps per direction, about 2.8 Mpps total (the line-rate arithmetic is sketched after these notes).
  • The same test, but each packet makes two trips through the hypervisor (once on ingress, once on egress).
  • QA frame-loss test at 100% offered load, all on the same hardware: Intel(R) Xeon(R) CPU X5560 @ 2.80GHz, 1MB L2 / 8MB L3 cache, with tuned IRQ alignment. The onboard NIC is not multiqueue.
  • Emulated NIC: easy to install, works with other OSes, (mostly) compatible. Virtual NIC: requires a driver, only works with Linux, and the hypervisor ↔ guest versions must be compatible.
  • An emulated NIC pretends to be a real device (E1000, 8139cp, …): guest accesses to its PCI space fault into the hypervisor, and packet data is copied by the hypervisor.
  • With a virtual NIC, the guest keeps a queue of packets in shared memory, which can avoid memory copies (see the ring sketch after these notes).
  • These results come from three different boxes (different CPU, memory, NIC, etc.). Hyper-V: emulated NIC performance is awful. VMware: vmxnet is almost the same as the emulated Realtek. KVM: reaches almost 100%.
  • Hyper-V has almost no offload support; VMware has the most features (on the Enterprise version). VLAN support is important.
  • Explain the VMware LRO bug: LRO aggregates packets, which is good at the final target but violates end-to-end behavior, so a router VM would get terrible performance. The kernel attempts to disable LRO when bridging or forwarding, but a driver bug gets in the way (see the LRO workaround sketch after these notes).
  • The main cost of virtual networking is the hypervisor context switch; for bulk transfer it is better to do more work on each context switch.
  • VM-to-VM throughput on KVM, measured with iperf; note the log scale.
  • IEEE 802.1D: “The Maximum Service Data Unit Size supported by a Bridge between two LANs is the smaller of that supported by the LANs. No attempt is made by a Bridge to relay a frame to a LAN that does not support the size of Service Data Unit conveyed by that frame.” Linux also has per-route MTU (see the MTU sketch after these notes).
  • Keep the data path super light: firewalling (iptables, etc.), IPsec, and connection tracking are all expensive.
  • Macvtap is simpler and fastest; the Linux bridge by default goes through iptables and connection tracking (see the macvtap sketch after these notes).
  • This tip applies when doing SMP (multi-CPU guests).
  • Explain parallel packet processing through the layers: it can be done manually, the scheduler tries to help, and a multiqueue NIC can help (and hurt). See the RPS sketch after these notes.
  • Running a uniprocessor (UP) guest is faster right now.
  • The current KVM VNIC is single-queue, which is a bottleneck.
  • QA testing noticed significant improvements in Xen; KVM hasn't been tested the same way, but similar gains are expected. Future work is summarized in the ongoing-work slide.
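
The "1G = roughly 1.4 Mpps per direction" figure in the notes above comes from minimum-size Ethernet frames plus the per-frame wire overhead. A minimal calculation (the exact theoretical figure is closer to 1.49 Mpps per direction):

```python
# Theoretical packets-per-second at 1 Gbit/s line rate for minimum-size frames.
# Beyond the 64-byte frame itself, each frame occupies extra time on the wire:
# 7-byte preamble + 1-byte start-of-frame delimiter + 12-byte inter-frame gap.

LINE_RATE_BPS = 1_000_000_000    # 1 Gbit/s
FRAME_BYTES = 64                 # minimum Ethernet frame (including CRC)
WIRE_OVERHEAD_BYTES = 20         # preamble + SFD + inter-frame gap

bits_per_frame = (FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8
pps_one_way = LINE_RATE_BPS / bits_per_frame

print(f"one direction : {pps_one_way / 1e6:.2f} Mpps")       # ~1.49 Mpps
print(f"bidirectional : {2 * pps_one_way / 1e6:.2f} Mpps")   # ~2.98 Mpps
```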
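As a rough illustration of the shared-memory queue described in the virtual-NIC note, here is a toy Python model of a paravirtual transmit ring. It is not the virtio ABI, and all names are invented for the sketch; the point is that the guest posts descriptors referencing its own buffers and notifies the hypervisor once per batch, rather than trapping on every emulated register access.

```python
from collections import deque

class SharedRing:
    """Toy model of a paravirtual transmit queue: the guest posts packet
    descriptors into memory both sides can see, and the hypervisor drains
    a whole batch per notification instead of trapping on every packet."""

    def __init__(self):
        self.ring = deque()   # stands in for the shared descriptor ring
        self.kicks = 0        # each "kick" models one guest-to-hypervisor exit

    def guest_send(self, packets):
        self.ring.extend(packets)   # descriptors reference guest buffers (no copy)
        self.kicks += 1             # one notification covers the whole batch

    def hypervisor_poll(self):
        batch = list(self.ring)     # hypervisor consumes everything queued so far
        self.ring.clear()
        return batch

ring = SharedRing()
ring.guest_send([f"pkt{i}" for i in range(64)])
print(len(ring.hypervisor_poll()), "packets handled with", ring.kicks, "exit")
```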
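A possible workaround for the LRO issue in the notes, assuming a Linux system with ethtool available and a driver that honors the setting; the interface name is a placeholder, not taken from the slides.

```python
import subprocess

def disable_lro(ifname: str) -> None:
    """Turn off large-receive-offload on a NIC that will be bridged or routed.
    Coalesced super-frames are fine for a host terminating TCP, but a router
    or bridge must not merge packets it is supposed to forward unchanged."""
    subprocess.run(["ethtool", "-k", ifname], check=True)                # show current offloads
    subprocess.run(["ethtool", "-K", ifname, "lro", "off"], check=True)  # disable LRO

if __name__ == "__main__":
    disable_lro("eth0")   # placeholder interface name
```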
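A sketch of the jumbo-MTU tip using iproute2, assuming root privileges and a NIC/switch path that supports 9000-byte frames; the interface name, prefix, and gateway are placeholders. The per-route MTU keeps large frames inside the virtual network while clamping routes that leave through a standard 1500-byte path.

```python
import subprocess

def set_link_mtu(ifname: str, mtu: int) -> None:
    """Enable jumbo frames on one interface with iproute2."""
    subprocess.run(["ip", "link", "set", "dev", ifname, "mtu", str(mtu)], check=True)

def clamp_route_mtu(prefix: str, gateway: str, mtu: int) -> None:
    """Per-route MTU: keep jumbo frames inside the virtual network while
    limiting traffic that leaves through a standard 1500-byte path."""
    subprocess.run(["ip", "route", "replace", prefix, "via", gateway,
                    "mtu", str(mtu)], check=True)

if __name__ == "__main__":
    set_link_mtu("eth0", 9000)                            # placeholder interface
    clamp_route_mtu("203.0.113.0/24", "192.0.2.1", 1500)  # placeholder addresses
```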
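One way to get the macvtap attachment mentioned in the notes, again via iproute2; "eth0" and "macvtap0" are placeholder names, and the guest would then be pointed at the resulting tap device (for example through libvirt).

```python
import subprocess

def create_macvtap(parent: str, name: str = "macvtap0") -> None:
    """Attach guests almost directly to a physical NIC via macvtap in
    'bridge' mode, bypassing the Linux bridge and the netfilter hooks
    (iptables, connection tracking) it pulls in by default."""
    subprocess.run(["ip", "link", "add", "link", parent, "name", name,
                    "type", "macvtap", "mode", "bridge"], check=True)
    subprocess.run(["ip", "link", "set", name, "up"], check=True)

if __name__ == "__main__":
    create_macvtap("eth0")   # placeholder parent interface
```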
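A sketch of manual packet steering (RPS) for the SMP notes above: writing a CPU bitmask to the per-queue rps_cpus file in sysfs spreads receive processing across cores. The interface name and mask are placeholders; this needs root.

```python
def enable_rps(ifname: str, cpu_mask: int, rx_queue: int = 0) -> None:
    """Steer receive processing for one RX queue onto the CPUs in cpu_mask.
    The mask is written as a hex bitmap, e.g. 0xf spreads work over CPUs 0-3."""
    path = f"/sys/class/net/{ifname}/queues/rx-{rx_queue}/rps_cpus"
    with open(path, "w") as f:   # needs root
        f.write(f"{cpu_mask:x}\n")

if __name__ == "__main__":
    enable_rps("eth0", 0xF)      # placeholder interface, CPUs 0-3
```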
Transcript

    1. Virtualized Networking Performance. Stephen Hemminger, Principal Engineer, shemminger@vyatta.com
    2. Agenda: What is a virtualized network; performance benchmarks; tuning tips
    3. Physical Enterprise Datacenter: border router, firewall, VPN, intrusion prevention, and switch in front of web servers (10.0.0.0/24), apps & storage (10.3.0.0/24), and database (10.4.0.0/24) (diagram)
    4. Virtual Network Architecture (diagram)
    5. Performance Milestones, 2005-2011: 1G line rate, multiqueue NIC, 10G bandwidth, 1M packets/second; Xen 3.0, KVM, Hyper-V drivers (timeline)
    6. Benchmarks: forwarding (RFC 2544, minimum packet), throughput (TCP bulk transfer), latency (request/response)
    7. Router Benchmark (RFC 2544): Spirent tester connected to the router under test (diagram)
    8. Virtualized Router Benchmark: Spirent tester connected to the router under test running as a guest on the hypervisor, with a bridge on each side (diagram)
    9. Router forwarding performance, 1 Gbit/sec bidirectional: frames forwarded vs. packet size for bare metal, VMware ESX, Xen, and KVM (chart)
    10. Emulated vs Virtual NIC
    11. Emulated Network Interface: the guest talks to a fake PCI region; the hypervisor copies the packet buffer (diagram)
    12. Virtual NIC: guest and hypervisor share a packet queue in shared memory (diagram)
    13. Emulated vs Virtual NIC: throughput relative to bare metal for emulated and virtual Tx/Rx on Hyper-V, VMware, and KVM (chart)
    14. VNIC characteristics:
        Feature                Hyper-V   VMware vmxnet3   Xen netfront   KVM virtio-net
        MTU                    1500      9000             65521          65535
        Checksum offload       -         Y                Y              Y
        Segmentation offload   -         Y                Y              Y
        NAPI                   -         Y                Y              Y
        LRO                    -         Y                -              -
        VLAN                   -         Y                -              -
        Multiqueue             -         Y                ?              ?
    15. Offload is not always a good idea: LRO?
    16. Tip #2: Use Jumbo MTU
    17. VM to VM performance: throughput (10^6 bits/sec) vs. MTU in bytes (chart)
    18. MTU vs Bridge
    19. Tip #3: Minimize overhead
    20. Virtual Switch Types: throughput (10^6 bits/sec) to hypervisor, from hypervisor, and VM to VM for NAT, bridged, and tap setups (chart)
    21. Tip #4: Don't Cross the Streams
    22. Control flow ↔ CPU (diagram)
    23. Multithread benchmark: total transactions/sec for UP and SMP guests, comparing baseline, packet steering (RPS), and multiqueue NIC (chart)
    24. Multiple queues, HW or SW: a flow classifier in the VNIC steers packets to per-thread device queues in the guest VM (diagram)
    25. Tip #5: Help out!
    26. Xen Performance Improvements: 1G bidirectional frame loss, packets forwarded vs. packet size for kernels 2.6.31, 2.6.35, and 2.6.37 (chart)
    27. Ongoing work: improved transmit wakeup, copy-less transfer, multiqueue VNIC, flow steering
    28. 5 Ways to Improve Performance: use a virtual (not emulated) network interface; maximize packet size; minimize packet overhead; stay on the same CPU; contribute to future developments
