
XPDDS18: LCC18: Xen Project: After 15 years, What's Next? - George Dunlap, Citrix


The Xen Hypervisor is 15 years old, but like Linux, it is still undergoing significant upgrades and improvements. This talk will cover recent and upcoming developments in Xen on the x86 architecture, including the newly-released 'PVH' guest virtualization mode, the future of PV mode, QEMU deprivileging, and more. We will cover why these new features are important for a wide range of environments, from cloud to embedded.


  1. Xen on x86, 15 years later: Recent development, future direction
  2. PVH Guests • KConfig • PVShim • PVCalls • VM Introspection / Memaccess • PVH dom0 • QEMU Deprivileging • Panopticon • Sub-page protection • NVDIMM • Posted Interrupts • Large guests (288 vcpus) • PV IOMMU • ACPI Memory Hotplug • Hypervisor Multiplexing
  3. Talk approach • Highlight some key features • Recently finished • In progress • Cool idea: should be possible, but nobody has committed to working on it yet • Highlight how these features work together to create an interesting theme
  4. • PVH (with PVH dom0) • KConfig • … to disable PV • PVshim • Windows in PVH
  5. PVH: Finally here • Full PVH DomU support in Xen 4.10, Linux 4.15 • First version without backwards-compatibility hacks • Experimental PVH Dom0 support in Xen 4.11
  6. PVH: What is it? • Next-generation paravirtualization mode • Takes advantage of hardware virtualization support • No need for emulated BIOS or emulated devices • Lower performance overhead than PV • Lower memory overhead than HVM • More secure than either PV or HVM mode
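To make the mode concrete, here is a minimal sketch of an xl domain configuration that boots a guest in PVH mode; the name, kernel path, and resource sizes are illustrative, and per the previous slide this assumes Xen 4.10+ and a PVH-capable kernel such as Linux 4.15+:

```
# Illustrative xl config for a PVH guest (name, paths, and sizes are examples)
name   = "pvh-example"
type   = "pvh"             # PVH mode; Xen >= 4.10
kernel = "/boot/vmlinuz"   # a PVH-capable kernel, e.g. Linux >= 4.15
memory = 1024
vcpus  = 2
disk   = [ "phy:/dev/vg0/pvh-example,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
```

Note what the config does not need: no device model, no emulated BIOS or VGA, just PV disk and network, which is where the lower memory overhead and smaller attack surface come from.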
  7. • PVH (with PVH dom0) • KConfig • … to disable PV • PVshim • Windows in PVH
  8. KConfig • KConfig for Xen allows… • users to produce smaller / more secure binaries • easier merging of experimental functionality • a KConfig option to disable PV entirely (see the sketch below)
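As a concrete illustration, compiling PV guest support out of the hypervisor comes down to a build-configuration fragment along these lines (a sketch: the `CONFIG_PV`/`CONFIG_HVM` symbol names assume the x86 Kconfig tree, and toggling them may require expert mode in `make -C xen menuconfig`):

```
# Illustrative Xen .config fragment: build a hypervisor with classic
# PV guest support compiled out entirely (symbol names assumed from
# the x86 Kconfig tree; adjust via `make -C xen menuconfig`)
# CONFIG_PV is not set
CONFIG_HVM=y
```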
  9. • PVH • KConfig • … to disable PV • PVshim • Windows in PVH
  10. PVShim • Some older kernels can only run in PV mode • Expect to run in ring 1, ask a hypervisor to perform privileged actions • “Shim”: a build of Xen designed to allow an unmodified PV guest to run in PVH mode • type='pvh' / pvshim=1 [Diagram: a PV-only kernel (ring 1) running on a “shim” Xen hypervisor (ring 0), all inside a PVH guest on the real Xen]
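The `type='pvh' / pvshim=1` bullet corresponds to an xl configuration roughly like the following (a sketch; the kernel path is illustrative, and the `pvshim` option assumes Xen 4.11 or later):

```
# Illustrative xl config: run an unmodified PV-only kernel inside a
# PVH container by booting it on the shim (pvshim assumes Xen >= 4.11)
name   = "legacy-pv-guest"
type   = "pvh"
pvshim = 1                        # insert the shim under the PV kernel
kernel = "/boot/vmlinuz-pv-only"  # illustrative path to a PV-only kernel
memory = 512
vcpus  = 1
disk   = [ "phy:/dev/vg0/legacy,xvda,w" ]
```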
  11. • PVH • KConfig • … to disable PV • PVshim • Windows in PVH (No-PV Hypervisors)
  12. • PVH • KConfig • … to disable PV • PVshim • Windows in PVH
  13. Windows in PVH • Windows EFI should be able to do it • OVMF (virtual EFI implementation) already has • PVH support • Xen PV disk, network support • Only need PV framebuffer support…?
  14. • PVH • KConfig • … to disable PV • PVshim • Windows in PVH (One guest type to rule them all)
  15. Is PV mode obsolete then?
  16. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing
  17. Containers: Passing through “host” OS resources [Diagram: several containers on a Linux host, sharing the host filesystem and host ports]
  18. Containers: Passing through “host” OS resources • Allows file-based difference tracking rather than block-based • Allows easier inspection of container state from host OS • Allows setting up multiple isolated services without needing to mess around with multiple IP addresses
  19. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing
  20. PV 9pfs • Allows dom0 to expose files directly to guests [Diagram: a 9pfs backend in dom0 exporting the host filesystem to 9pfs frontends in the guests]
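In xl terms, exposing a dom0 directory over PV 9pfs is roughly the `p9` device entry below (a sketch: the tag and path are illustrative, and the option names assume the 9pfs support added around Xen 4.9):

```
# Illustrative xl config fragment: export a dom0 directory to the guest
p9 = [ "tag=share0,security_model=none,path=/srv/guest-files" ]
```

Inside the guest, the export would then be mounted with something like `mount -t 9p -o trans=xen share0 /mnt`, assuming a kernel with the Xen 9p transport.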
  21. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing
  22. PVCalls • Pass through specific system calls • socket() • listen() • accept() • read() • write() [Diagram: a PVCalls backend in dom0 opening host ports on behalf of PVCalls frontends in the guests]
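To see what "pass through" means in practice, consider an ordinary C socket server like the sketch below. Nothing in it is Xen-aware; the point of PVCalls is that, on a guest kernel with a PVCalls frontend, these exact calls are forwarded to the backend in dom0, which services them against real host ports (the port number and buffer size are illustrative):

```c
/* Plain POSIX sockets; with PVCalls, the guest kernel forwards these
 * calls to the dom0 backend instead of its own network stack. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);    /* forwarded to dom0 */
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                /* ends up as a host port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 16);                              /* forwarded as well */

    for (;;) {
        int c = accept(s, NULL, NULL);
        char buf[256];
        ssize_t n = read(c, buf, sizeof(buf));  /* read()/write() too */
        if (n > 0)
            write(c, buf, n);
        close(c);
    }
}
```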
  23. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing
  24. rkt Stage 1 • rkt: the “container abstraction” part of CoreOS • Running rkt containers under Xen
  25. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing (Xen as full Container Host)
  26. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing
  27. Hypervisor multiplexing • Xen can run in an HVM guest, even without nested HVM support • PV protocols use xenbus + hypercalls • At the moment, Linux code assumes only one xenbus / hypervisor • Host PV drivers • OR Guest PV drivers • Multiplexing: allow both [Diagram: an HVM guest running L1 Xen with its own L1 dom0; frontends in the guest connect both to a backend in the L1 dom0 and to a backend in the L0 dom0]
  28. • KConfig: No HVM • PV 9pfs • PVCalls • rkt Stage 1 • Hypervisor Multiplexing (Xen as Cloud-ready Container Host)
  29. QEMU Deprivileging • Restricting hypercalls to a single guest • Restricting what QEMU can do within dom0
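On the toolstack side, the dom0-restriction half maps onto an xl knob along the lines of `dm_restrict` (a sketch for an HVM guest; the option assumes the QEMU-deprivileging work that landed around Xen 4.11, where it was still considered experimental):

```
# Illustrative xl config fragment for an HVM guest: ask the toolstack
# to deprivilege the QEMU device-model process running in dom0
type        = "hvm"
dm_restrict = 1
```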
  30. Panopticon / No Secrets • Spectre-style information leaks • You can only leak what you can see • Xen has all of physical memory mapped • But this is not really necessary • Assume that all guests can read hypervisor memory at all times
  31. PVH Guests • KConfig • PVShim • PVCalls • VM Introspection / Memaccess • PVH dom0 • QEMU Deprivileging • Panopticon • Sub-page protection • NVDIMM • Posted Interrupts • Large guests (288 vcpus) • PV IOMMU • ACPI Memory Hotplug • Hypervisor Multiplexing
  32. Questions
