Linux PV on HVM
 


Traditionally, Linux has run on Xen either as a pure PV guest or as a virtualization-unaware guest in an HVM domain. Recently, under the name of "PV on HVM", a series of changes has been made to make Linux aware that it is running on Xen and to enable as many PV interfaces as possible even when running in an HVM container. After enabling the basic PV network and disk drivers, some other more interesting optimizations were implemented: in particular, remapping legacy interrupts and MSIs onto event channels. This talk will explain the idea behind the feature, why avoiding interactions with the local APIC is a good idea, and some implementation details.


    Presentation Transcript

    • Linux PV on HVM: paravirtualized interfaces in HVM guests
      Stefano Stabellini
    • Linux as a guest: problems
      Linux PV guests have limitations:
      - difficult ("different") to install
      - some performance issues on 64 bit
      - limited set of virtual hardware
      Linux HVM guests:
      - install the same way as native
      - very slow
    • Linux PV on HVM: the solution
      - install the same way as native
      - PC-like hardware
      - access to fast paravirtualized devices
      - exploit nested paging
    • Linux PV on HVM: initial features
      Initial version in Linux 2.6.36:
      - introduce the Xen platform device driver
      - add support for HVM hypercalls, xenbus and the grant table
      - enable blkfront, netfront and PV timers
      - add support for PV suspend/resume
      - the vector callback mechanism
      (see the Xen detection sketch after the transcript)
    • Old style event injection
    • Receiving an interrupt
      do_IRQ
        handle_fasteoi_irq
          handle_irq_event
            xen_evtchn_do_upcall
          ack_apic_level ← >= 3 VMEXITs
    • The new vector callback
    • Receiving a vector callback
      xen_evtchn_do_upcall
      (see the event-channel upcall sketch after the transcript)
    • Linux PV on HVM: newer features
      Later enhancements (2.6.37+):
      - ballooning
      - PV spinlocks
      - PV IPIs
      - interrupt remapping onto event channels
      - MSI remapping onto event channels
      (see the pirq-to-event-channel sketch after the transcript)
    • Interrupt remapping
    • MSI remapping
    • PV spectrum
      |                       | HVM guests | Classic PV on HVM | Enhanced PV on HVM | Hybrid PV on HVM | PV guests       |
      | Boot sequence         | emulated   | emulated          | emulated           |                  | paravirtualized |
      | Memory                | hardware   | hardware          | hardware           |                  | paravirtualized |
      | Interrupts            | emulated   | emulated          | paravirtualized    |                  | paravirtualized |
      | Timers                | emulated   | emulated          | paravirtualized    |                  | paravirtualized |
      | Spinlocks             | emulated   | emulated          | paravirtualized    |                  | paravirtualized |
      | Disk                  | emulated   | paravirtualized   | paravirtualized    |                  | paravirtualized |
      | Network               | emulated   | paravirtualized   | paravirtualized    |                  | paravirtualized |
      | Privileged operations | hardware   | hardware          | hardware           |                  | paravirtualized |
    • Benchmarks: the setup
      Hardware setup:
      - Dell PowerEdge R710
      - CPU: dual Intel Xeon E5520 quad-core CPUs @ 2.27GHz
      - RAM: 22GB
      Software setup:
      - Xen 4.1, 64 bit
      - Dom0 Linux 2.6.32, 64 bit
      - DomU Linux 3.0-rc4, 8GB of memory, 8 vcpus
    • PCI passthrough: benchmark
      PCI passthrough of an Intel Gigabit NIC
      CPU usage, the lower the better
      (bar chart: CPU usage of domU and dom0, with interrupt remapping vs no interrupt remapping)
    • Kernbench
      Results: percentage of native, the lower the better
      (bar chart: PV on HVM 32 bit, HVM 32 bit, PV 32 bit, PV on HVM 64 bit, HVM 64 bit, PV 64 bit)
    • Kernbench
      Results: percentage of native, the lower the better
      (bar chart: PV on HVM 32 bit, HVM 64 bit, PV 64 bit, PV on HVM 64 bit, KVM 64 bit, HVM 32 bit, PV 32 bit)
    • PBZIP2
      Results: percentage of native, the lower the better
      (bar chart: PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit)
    • PBZIP2
      Results: percentage of native, the lower the better
      (bar chart: KVM 64 bit, PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit)
    • SPECjbb2005
      Results: percentage of native, the higher the better
      (bar chart: PV 64 bit, PV on HVM 64 bit)
    • SPECjbb2005
      Results: percentage of native, the higher the better
      (bar chart: PV 64 bit, PV on HVM 64 bit, KVM 64 bit)
    • Iperf tcp
      Results: gbit/sec, the higher the better
      (bar chart: PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit)
    • Iperf tcp
      Results: gbit/sec, the higher the better
      (bar chart: PV 64 bit, PV on HVM 64 bit, KVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit)
    • Conclusions
      PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs.
      PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging.
    • Questions?
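
Supplementary sketch for the "initial features" slide: before any PV interface can be enabled, the HVM guest has to discover that it is running on Xen at all. The probe below uses the hypervisor CPUID leaves, whose "XenVMMXenVMM" signature and layout are part of the Xen ABI; the helper name and the printf reporting are only illustrative, and the real kernel additionally sets up the hypercall page and registers the callback vector through an HVM parameter.

```c
/* Minimal userspace sketch (x86) of the CPUID probe an HVM guest uses
 * to detect Xen. Only the signature and leaf layout are Xen ABI facts;
 * everything else here is illustrative. */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t xen_cpuid_base(void)
{
    uint32_t eax, ebx, ecx, edx;
    char signature[13];

    /* Xen may expose its leaves at 0x40000000, 0x40000100, ... */
    for (uint32_t base = 0x40000000; base < 0x40010000; base += 0x100) {
        __cpuid(base, eax, ebx, ecx, edx);
        memcpy(signature + 0, &ebx, 4);
        memcpy(signature + 4, &ecx, 4);
        memcpy(signature + 8, &edx, 4);
        signature[12] = '\0';
        /* "XenVMMXenVMM" in EBX/ECX/EDX identifies Xen; EAX reports the
         * highest leaf, which must cover at least base + 2. */
        if (strcmp(signature, "XenVMMXenVMM") == 0 && eax >= base + 2)
            return base;
    }
    return 0;
}

int main(void)
{
    uint32_t eax, ebx, ecx, edx, base = xen_cpuid_base();

    if (!base) {
        puts("no Xen CPUID leaves found: not running as a Xen HVM guest");
        return 1;
    }
    __cpuid(base + 1, eax, ebx, ecx, edx);   /* leaf base+1: Xen version */
    printf("Xen %u.%u detected at CPUID leaf 0x%x\n",
           eax >> 16, eax & 0xffff, base);
    return 0;
}
```

In the kernel, this detection is what gates everything else on the slide: the Xen platform device driver, HVM hypercalls, xenbus, the grant table and the vector callback are only set up once the guest knows it is running on Xen.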
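Supplementary sketch for the "receiving a vector callback" slide: when the callback vector fires, the guest only has to scan the event-channel pending bitmaps in the shared info page; there is no emulated IO-APIC or local APIC to acknowledge, which is where the saved VMEXITs come from. The structures below are trimmed to the fields the loop needs and handle_event_port() is a placeholder for the IRQ-layer dispatch; the real definitions live in Xen's public headers and in drivers/xen/events in Linux.

```c
#include <stdint.h>

/* The 2-level event channel ABI uses BITS_PER_LONG words of BITS_PER_LONG
 * bits each (4096 ports on 64 bit, 1024 on 32 bit). */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Reduced to the fields the scan needs; the full layouts are in Xen's
 * public headers (xen/interface/xen.h). */
struct vcpu_info {
    uint8_t       evtchn_upcall_pending;   /* set by Xen before the callback  */
    uint8_t       evtchn_upcall_mask;
    unsigned long evtchn_pending_sel;      /* which words of evtchn_pending[] */
};

struct shared_info {
    struct vcpu_info vcpu_info[32];
    unsigned long    evtchn_pending[BITS_PER_LONG];
    unsigned long    evtchn_mask[BITS_PER_LONG];
};

/* Placeholder for the per-port dispatch done by the IRQ layer. */
static void handle_event_port(unsigned int port)
{
    (void)port;   /* in Linux this ends up in the handler bound to the port */
}

/* Called directly from the vector callback: no IO-APIC/LAPIC emulation is
 * involved and there is no EOI to ack, hence no extra VMEXITs on this path. */
void evtchn_do_upcall(struct shared_info *s, unsigned int cpu)
{
    struct vcpu_info *v = &s->vcpu_info[cpu];

    v->evtchn_upcall_pending = 0;

    /* Atomically grab and clear the selector of pending bitmap words. */
    unsigned long sel = __atomic_exchange_n(&v->evtchn_pending_sel, 0,
                                            __ATOMIC_ACQ_REL);
    while (sel) {
        unsigned int word = __builtin_ctzl(sel);
        sel &= sel - 1;

        unsigned long pending =
            s->evtchn_pending[word] & ~s->evtchn_mask[word];
        while (pending) {
            unsigned int bit  = __builtin_ctzl(pending);
            unsigned int port = word * BITS_PER_LONG + bit;

            pending &= pending - 1;
            /* Clear the pending bit, then dispatch the event. */
            __atomic_and_fetch(&s->evtchn_pending[word], ~(1UL << bit),
                               __ATOMIC_ACQ_REL);
            handle_event_port(port);
        }
    }
}
```

The point the slides make is that this whole receive path runs inside the guest without touching emulated hardware, so the only exit involved is the injection of the callback vector itself.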
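Supplementary sketch for the interrupt and MSI remapping slides: a legacy interrupt, or an MSI that has already been mapped to a Xen pirq, is bound to an event channel so that it is delivered through the upcall path above instead of the emulated APIC. The hypercall number and argument struct mirror EVTCHNOP_bind_pirq from Xen's public event_channel.h, but event_channel_op() here is a stand-in for the real hypercall and bind_pirq_to_evtchn() is an illustrative wrapper, not the kernel's helper.

```c
#include <stdint.h>
#include <stdio.h>

#define EVTCHNOP_bind_pirq     2
#define BIND_PIRQ__WILL_SHARE  1

typedef uint32_t evtchn_port_t;

/* Mirrors struct evtchn_bind_pirq from Xen's public event_channel.h. */
struct evtchn_bind_pirq {
    uint32_t      pirq;   /* IN: legacy IRQ, or an MSI already mapped to a pirq */
    uint32_t      flags;  /* IN: BIND_PIRQ__WILL_SHARE or 0                     */
    evtchn_port_t port;   /* OUT: event channel the pirq was bound to           */
};

/* Stand-in for HYPERVISOR_event_channel_op(): the real version goes through
 * the hypercall page set up when the guest detected Xen. */
static long event_channel_op(int cmd, void *arg)
{
    (void)cmd;
    (void)arg;
    return -1;   /* placeholder: there is no hypervisor behind this sketch */
}

/* Bind a pirq to an event channel. On success the device's interrupts arrive
 * as events through the upcall path, so no emulated IO-APIC/LAPIC ack (and
 * none of its VMEXITs) is involved anymore. */
static long bind_pirq_to_evtchn(uint32_t pirq, int will_share)
{
    struct evtchn_bind_pirq op = {
        .pirq  = pirq,
        .flags = will_share ? BIND_PIRQ__WILL_SHARE : 0,
    };
    long rc = event_channel_op(EVTCHNOP_bind_pirq, &op);

    return rc ? rc : (long)op.port;
}

int main(void)
{
    long port = bind_pirq_to_evtchn(16 /* example pirq */, 1);

    if (port < 0)
        printf("bind failed (expected outside a Xen guest): %ld\n", port);
    else
        printf("pirq 16 bound to event channel %ld\n", port);
    return 0;
}
```

For MSIs the flow is the same once the MSI has been given a pirq; the PCI passthrough benchmark slide compares exactly this remapped delivery path against delivery through the emulated APIC.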