
XPDDS19: Support of PV Devices in Nested Xen - Jürgen Groß, SUSE


Current support of nested virtualization with Xen is limited to fully emulated devices for the L1 hypervisor (the L0 hypervisor being the one running on the physical machine). To let the L2 dom0 make use of L1 PV devices, several new interfaces are needed.

In this design session I'll present my ideas on how to add support of PV devices for the L2 dom0. There are several possible approaches, which I'd like to discuss.



  1. Support of pv-devices in nested Xen. Jürgen Groß, Virtualization Kernel Developer, SUSE Linux GmbH
  2. Agenda
     • Nested Xen overview
     • Nested Xen with pv-devices
     • Possible solutions
     • Related work
  3. Nested Xen overview
  4. Naming definitions
     • L0-Xen: Xen hypervisor running on real hardware
     • L0-dom0: Dom0 on top of L0-Xen
     • L0-domU: domU on top of L0-Xen
     • L1-Xen: Xen hypervisor running as an HVM guest on top of L0-Xen (L1-Xen is an L0-domU)
     • L1-dom0: Dom0 on top of L1-Xen
     • L2-domU: domU on top of L1-Xen
  5. Today
     • L1-dom0 and L1-domUs can’t access L0 pv-devices, as they have no access to event channels and grants presented to L1-Xen by L0-Xen
     • L1-dom0 only sees the devices emulated by qemu in L0-dom0 (legacy devices); those are available for backing backends for L1-domUs
     • I/O performance in L1-Xen is rather bad
  6. Nested Xen with pv-devices
  7. What we want to achieve
     • L1-dom0 should be able to use pv-devices assigned to L1-Xen by L0-dom0
     • Those pv-devices should be usable in L1-dom0 as backing devices for backends
     • The newly introduced interfaces should allow L1 driver domains to use those devices, too
  8. Needed functionality
     • Access to L0 event channels in L1-dom0 and possibly in L1 driver domains
     • Possibility to grant access to L1-dom0 memory pages to L0-dom0
     • Access to the L0 xenstore from L1-dom0 and possibly from L1 driver domains
  9. Possible solutions
  10. General considerations
     • One passthrough hypercall (similar to multicall, but for passing hypercalls to L0-Xen), or multiple new hypercalls as needed (passthrough event, passthrough grant, …)?
     • Multiplexing of L1 guests (L1-dom0 and possibly L1 driver domains) at L1-Xen level, at L0 level, or via an L1-dom0 driver?
     • Do we want to support even deeper nesting (L2, L3, …)?
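The batched passthrough-hypercall variant could be sketched roughly as below, modelled on Xen's existing multicall. All names here (`pt_entry`, the `HC_*` values, the callback) are illustrative assumptions for this design discussion, not real Xen API; the key point is that L1-Xen must filter which hypercalls it is willing to forward to L0-Xen.

```c
/* Hypothetical sketch of a "passthrough hypercall", modelled on Xen's
 * multicall: L1-Xen batches selected hypercalls issued by an L1 guest and
 * forwards them to L0-Xen.  All names and numbers are illustrative only. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypercall numbers allowed to pass through (values made up). */
enum { HC_EVENT_CHANNEL_OP = 32, HC_GRANT_TABLE_OP = 20, HC_XENSTORE_OP = 99 };

/* One forwarded hypercall, analogous to a multicall entry. */
struct pt_entry {
    uint32_t op;        /* hypercall number to execute in L0-Xen */
    uint64_t args[6];   /* arguments, already translated by L1-Xen */
    int64_t  result;    /* filled in with the L0 return value */
};

/* L1-Xen must filter: only pv-device-related ops may reach L0-Xen. */
static int pt_op_allowed(uint32_t op)
{
    return op == HC_EVENT_CHANNEL_OP || op == HC_GRANT_TABLE_OP ||
           op == HC_XENSTORE_OP;
}

/* Forward a batch; disallowed entries get an error result (-1 here).
 * Returns the number of entries actually forwarded. */
static size_t pt_forward_batch(struct pt_entry *batch, size_t n,
                               int64_t (*l0_call)(uint32_t, const uint64_t *))
{
    size_t done = 0;
    for (size_t i = 0; i < n; i++) {
        if (!pt_op_allowed(batch[i].op)) {
            batch[i].result = -1;
            continue;
        }
        batch[i].result = l0_call(batch[i].op, batch[i].args);
        done++;
    }
    return done;
}

/* Stub standing in for the real upcall into L0-Xen, for illustration. */
static int64_t l0_call_stub(uint32_t op, const uint64_t *args)
{
    (void)args;
    return (int64_t)op;   /* pretend success, echo the op number */
}

/* Demo: one allowed op and one that must not pass through. */
static size_t pt_demo(void)
{
    struct pt_entry batch[2] = {
        { .op = HC_EVENT_CHANNEL_OP },
        { .op = 7 /* some unrelated hypercall: filtered out */ },
    };
    return pt_forward_batch(batch, 2, l0_call_stub);
}
```

The single-hypercall design keeps the L1 interface small; the alternative (separate passthrough-event, passthrough-grant, … hypercalls) trades that for per-op argument checking without a generic dispatcher.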
  11. Event channels
     • Direct mapping of L0 event channels to L1 event channels, or “nested event channels” (all L0 events coming through via one L1 event plus a sub-event)?
     • Support of 2-level events, FIFO, or both?
     • If both: at the same time, or only as alternatives?
     • Or, like pv-shim, route everything to L1-dom0 and then redirect to L1 driver domains?
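The “nested event channel” variant amounts to a demultiplexing table in L1-Xen: every L0 event arrives via one L1 upcall, and L1-Xen looks up which L1 guest bound that L0 port. A minimal sketch, with all names and the fixed table size being assumptions for illustration:

```c
/* Sketch of nested event channel demultiplexing in L1-Xen: all L0 events
 * arrive on a single upcall, and each L0 port is routed to the L1 guest
 * (L1-dom0 or an L1 driver domain) that bound it.  Illustrative only. */
#include <assert.h>
#include <stdint.h>

#define MAX_L0_PORTS 64   /* illustrative fixed size */

struct l1_binding {
    int      valid;
    uint16_t l1_domid;   /* which L1 guest bound this L0 port */
    uint32_t l1_port;    /* the local port to signal in that guest */
};

static struct l1_binding map[MAX_L0_PORTS];

/* Called when an L1 guest asks L1-Xen to bind an L0 port for it. */
static int bind_l0_port(uint32_t l0_port, uint16_t l1_domid, uint32_t l1_port)
{
    if (l0_port >= MAX_L0_PORTS || map[l0_port].valid)
        return -1;   /* out of range or already bound */
    map[l0_port] = (struct l1_binding){ 1, l1_domid, l1_port };
    return 0;
}

/* Upcall handler: L0 signalled l0_port; route it onward.
 * Returns (l1_domid << 32) | l1_port, or -1 if the port is unbound. */
static int64_t demux_l0_event(uint32_t l0_port)
{
    if (l0_port >= MAX_L0_PORTS || !map[l0_port].valid)
        return -1;
    return ((int64_t)map[l0_port].l1_domid << 32) | map[l0_port].l1_port;
}
```

The direct-mapping alternative avoids this table (each L0 port simply is an L1 port) but then consumes L1 port space and makes mixing 2-level and FIFO ABIs harder; the pv-shim-style option moves the whole table into L1-dom0 software instead.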
  12. Grant pages
     • Only L1-dom0 allowed to grant pages?
     • Per-L1-domain grant frames merged at L1-Xen level? Problem: stealing of grant references is possible (an L1 driver domain could put a grant of L1-dom0 into a request to L0)
     • Multiple grant frame arrays presented to L0-Xen? Problem: how to specify the individual grant in e.g. L0-dom0
     • Support of PVH/HVM L1-dom0/L1 driver domains?
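One conceivable way to address both problems on this slide is to encode the owning L1 domain into the grant reference itself, so the grant can be identified from e.g. L0-dom0 and audited against stealing. The bit split and function names below are assumptions for illustration, not a real Xen grant reference format:

```c
/* Sketch: encode the owning L1 domid into a "nested" grant reference, so
 * L0 can tell per-L1-domain grants apart and L1-Xen can audit requests.
 * The 10/22 bit split is an assumption for illustration only. */
#include <assert.h>
#include <stdint.h>

#define DOMID_BITS 10   /* illustrative: up to 1024 L1 domains */
#define REF_BITS   22   /* illustrative: per-domain reference space */

static uint32_t make_nested_gref(uint16_t l1_domid, uint32_t local_ref)
{
    return ((uint32_t)l1_domid << REF_BITS) |
           (local_ref & ((1u << REF_BITS) - 1));
}

static uint16_t gref_owner(uint32_t nested_ref)
{
    return (uint16_t)(nested_ref >> REF_BITS);
}

/* Audit check against the "stolen grant" problem: an L1 driver domain
 * must not place another L1 domain's grant into a request sent to L0. */
static int gref_audit(uint32_t nested_ref, uint16_t requesting_domid)
{
    return gref_owner(nested_ref) == requesting_domid;
}
```

This shrinks the usable per-domain reference space, which is part of the trade-off between one merged grant frame array and multiple arrays presented to L0-Xen.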
  13. Xenstore
     • Multiple xenstores in L1-dom0
     • Multiplexing for L1 driver domains? At which level (L0, L1-Xen, L1-dom0, L1-Xenstore-stubdom)?
     • Merging of the L0 xenstore into the L1 xenstore (“mount”)?
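The “mount” idea boils down to routing xenstore requests by path prefix: paths under a fixed mount point are forwarded (with the prefix stripped) to the L0 xenstore, everything else stays local. A sketch, where the `/l0` mount point and all names are hypothetical:

```c
/* Sketch of "mounting" the L0 xenstore inside the L1 xenstore: the L1
 * xenstore daemon routes requests by path prefix.  The mount point name
 * and all identifiers are assumptions for illustration only. */
#include <assert.h>
#include <string.h>

#define L0_MOUNT "/l0"   /* hypothetical mount point for the L0 xenstore */

enum xs_target { XS_L1, XS_L0 };

/* Decide which xenstore a request path belongs to. */
static enum xs_target xs_route(const char *path)
{
    size_t plen = strlen(L0_MOUNT);
    return (strncmp(path, L0_MOUNT, plen) == 0 &&
            (path[plen] == '/' || path[plen] == '\0')) ? XS_L0 : XS_L1;
}

/* Path to forward: L0-bound paths lose the mount prefix; the bare mount
 * point itself maps to the L0 root. */
static const char *xs_forward_path(const char *path)
{
    size_t plen = strlen(L0_MOUNT);
    if (xs_route(path) == XS_L0)
        return path[plen] ? path + plen : "/";
    return path;
}
```

Such routing could live in L1-dom0, in a xenstore stubdom, or in the xenstore daemon itself; watches would need the reverse translation (prepending the prefix to event paths coming back from L0).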
  14. Related work
  15. Related work
     • Nested VMX/SVM
     • PV-Shim
     • Xenblanket (series by Christopher Clark, OpenXT)
     • Xen HVM guest support in KVM (series by Ankur Arora, Oracle)