Linux PV on HVM

Traditionally, Linux has run on Xen either as a pure PV guest or as a virtualization-unaware guest in an HVM domain. Recently, under the name of "PV on HVM", a series of changes has been made to let Linux recognize that it is running on Xen and enable as many PV interfaces as possible even when running in an HVM container. After the basic PV network and disk drivers were enabled, some more interesting optimizations were implemented: in particular, remapping legacy interrupts and MSIs onto event channels. This talk will explain the idea behind the feature, the reason why avoiding interactions with the LAPIC is a good idea, and some implementation details.

  1. Linux PV on HVM
     paravirtualized interfaces in HVM guests
     Stefano Stabellini
  2. Linux as a guest: problems
     Linux PV guests have limitations:
     - difficult ("different") to install
     - some performance issues on 64 bit
     - limited set of virtual hardware
     Linux HVM guests:
     - install the same way as native
     - very slow
  3. Linux PV on HVM: the solution
     - install the same way as native
     - PC-like hardware
     - access to fast paravirtualized devices
     - exploit nested paging
  4. Linux PV on HVM: initial feats
     Initial version in Linux 2.6.36:
     - introduces the Xen platform device driver
     - adds support for HVM hypercalls, xenbus and grant tables
     - enables blkfront, netfront and PV timers
     - adds support for PV suspend/resume
     - adds the vector callback mechanism
  5. Old style event injection
  6. Receiving an interrupt
     do_IRQ
       handle_fasteoi_irq
         handle_irq_event
           xen_evtchn_do_upcall
         ack_apic_level   <- >=3 VMEXITs
  7. The new vector callback
  8. Receiving a vector callback
     xen_evtchn_do_upcall
  9. Linux PV on HVM: newer feats
     Later enhancements (2.6.37+):
     - ballooning
     - PV spinlocks
     - PV IPIs
     - interrupt remapping onto event channels
     - MSI remapping onto event channels
  10. Interrupt remapping
  11. MSI remapping
  12. PV spectrum (from HVM guests through classic, enhanced and hybrid PV on HVM to PV guests)

      Feature               HVM guests   Classic PV on HVM   Enhanced PV on HVM   PV guests
      Boot sequence         emulated     emulated            emulated             paravirtualized
      Memory                hardware     hardware            hardware             paravirtualized
      Interrupts            emulated     emulated            paravirtualized      paravirtualized
      Timers                emulated     emulated            paravirtualized      paravirtualized
      Spinlocks             emulated     emulated            paravirtualized      paravirtualized
      Disk                  emulated     paravirtualized     paravirtualized      paravirtualized
      Network               emulated     paravirtualized     paravirtualized      paravirtualized
      Privileged operations hardware     hardware            hardware             paravirtualized
  13. Benchmarks: the setup
      Hardware setup:
      - Dell PowerEdge R710
      - CPU: dual Intel Xeon E5520 quad core CPUs @ 2.27GHz
      - RAM: 22GB
      Software setup:
      - Xen 4.1, 64 bit
      - Dom0: Linux 2.6.32, 64 bit
      - DomU: Linux 3.0-rc4, 8GB of memory, 8 vcpus
  14. PCI passthrough: benchmark
      PCI passthrough of an Intel Gigabit NIC. Chart: CPU usage in domU and dom0 (the lower the better), with and without interrupt remapping.
  15. Kernbench
      Chart: results as a percentage of native (the lower the better) for PV on HVM, HVM and PV, 32 and 64 bit.
  16. Kernbench
      Chart: the same comparison with KVM 64 bit added.
  17. PBZIP2
      Chart: results as a percentage of native (the lower the better) for PV on HVM and PV, 32 and 64 bit.
  18. PBZIP2
      Chart: the same comparison with KVM 64 bit added.
  19. SPECjbb2005
      Chart: results as a percentage of native (the higher the better) for PV 64 bit and PV on HVM 64 bit.
  20. SPECjbb2005
      Chart: the same comparison with KVM 64 bit added.
  21. Iperf tcp
      Chart: results in gbit/sec (the higher the better) for PV, PV on HVM and HVM, 32 and 64 bit.
  22. Iperf tcp
      Chart: the same comparison with KVM 64 bit added.
  23. Conclusions
      - PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs
      - PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging
  24. Questions?
