
XPDDS19: The Xen-Blanket for 2019 - Christopher Clark and Kelli Little, Star Lab Corporation



The Open Source Xen-Blanket software was developed by researchers at IBM and Cornell University, as extensions to the Xen hypervisor and its PV drivers, to enable seamless use of Xen PV drivers in guest VMs of nested Xen deployments. It was presented at the EuroSys 2012 conference, with a paper that has been widely cited since, and deployed in Cornell's SuperCloud.

The Xen-Blanket has never been presented to the Xen community, and the software was left unmaintained. However, recent work by Star Lab has modernized its implementation, aiming to encourage its adoption and incorporation into the Xen Project software.

This session will introduce the Xen-Blanket, describing its motivation and features; present the structure of the implementation in the hypervisor and device drivers; outline an example architecture for its deployment; and summarize its current state and plans within the Xen Project.


  1. The Xen Blanket for 2019: Running Xen on Xen in the cloud. Hypervisor interface for nested PV drivers. Christopher Clark & Kelli Little, Star Lab Corp. Xen Summit, July 2019.
  2. Hello! Christopher Clark: Software Engineer, consultant; working with Xen since 2003. Kelli Little: Software Engineer; working on the IARPA VirtUE project leveraging XenBlanket.
  3. Project objective and motivation: virtual machine agility within and across heterogeneous clouds, without requiring cloud provider support. Relocatable VMs running standard operating systems with unmodified kernels and standard drivers for virtual devices.
  4. The Plan: use Xen's Live Migration.
     ○ Add a hypervisor within the cloud provider instances
     ○ Add a network overlay and shared storage to build an overlay cloud
  5. The Plan: more detail
     Run guest VMs with unmodified Xen PV device drivers on a guest Xen hypervisor with its own dom0, all within an HVM guest VM of a host Xen hypervisor (e.g. from a cloud provider). Guests can then be live-migrated between cloud instances that run nested Xen.
     Bonus: Virtual machine introspection of guests can be performed: a VM's internal state can be monitored transparently and validated by software running externally, and kernel debugging is enabled.
     Bonus: Multiple guest VMs can reside within a single cloud instance, which is advantageous for unikernel deployments, where cloud-provider billing granularity is a challenge.
     Constraint: Cloud providers do not offer CPU hardware virtualization (Intel VT-x / AMD SVM) in many virtualized instances. In such an instance, this nested system can only support PV guests: no HVM, no PVH.
  6. Standard Xen System Architecture (diagram): an example deployment configuration. The Xen hypervisor hosts a Linux dom0 containing the toolstack, Xen daemons, bridging and routing, the physical network interface and storage device drivers, and the PV net-back and PV block-back drivers. An Ubuntu guest VM and a CentOS guest VM, both in PV mode, each use PV net-front and PV block-front drivers.
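
     For concreteness, here is a minimal sketch of what a PV guest configuration like the ones in the diagram above might look like, started from dom0 with the standard xl toolstack. All names, paths and sizes are hypothetical, not taken from the talk.

       # ubuntu-guest.cfg -- a hypothetical minimal PV guest configuration;
       # every name, path and size below is illustrative.
       name    = "ubuntu-guest"
       type    = "pv"
       memory  = 1024
       vcpus   = 2
       # A PV-capable guest kernel and initrd stored in dom0's filesystem:
       kernel  = "/var/lib/xen/images/vmlinuz"
       ramdisk = "/var/lib/xen/images/initrd.img"
       # Disk and network are served by the PV block-back and net-back drivers in dom0:
       disk    = [ 'phy:/dev/vg0/ubuntu-guest,xvda,w' ]
       vif     = [ 'bridge=xenbr0' ]
       extra   = "root=/dev/xvda ro console=hvc0"

       user@dom0:~$ sudo xl create ubuntu-guest.cfg
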
  7. Nested Xen System Architecture (diagram): a nested deployment configuration. Xen runs as the L1 guest hypervisor inside a virtual machine in HVM mode (e.g. an instance in a public cloud) on top of Xen as the L0 host hypervisor (the public cloud provider's hypervisor). The Linux dom0 (toolstack, Xen daemons, bridging and routing) uses a PV net-front to connect to cloud provider networks and a PV block-front to access cloud provider storage, and provides PV net-back and PV block-back to the Ubuntu and CentOS guest VMs (PV mode), which use PV net-front and PV block-front drivers.
  8. Existing state of play
     So: what happens when you install Xen on a cloud VM?
     => package installation proceeds ok
     => bootloader configuration is updated to add the new hypervisor
     => ??? oh no!… where are my PV devices??!?

     user@cloud:~$ sudo apt-get install xen-hypervisor xen-tools
     Including Xen overrides from /etc/default/grub.d/xen.cfg
     WARNING: GRUB_DEFAULT changed to boot into Xen by default!
     ...
     user@cloud:~$ sudo reboot
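
     One way to see the symptom after the reboot, sketched with standard Xen and Linux tools; the command set shown here is illustrative and not taken from the talk, and exact paths may vary by distribution:

       user@cloud:~$ sudo xl info               # the freshly installed L1 hypervisor is up, and we are its dom0...
       user@cloud:~$ ls /sys/bus/xen/devices    # ...but the vif/vbd PV frontend devices previously provided by
                                                # the cloud hypervisor are no longer reachable from this kernel
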
  9. Existing state of play: what happened?
     Issue 1: Hypercall authority. Our Linux VM, just converted into the L1 dom0 by installing the L1 hypervisor and Xen packages, does not run with the privilege that it needs to issue hypercalls to the cloud L0 hypervisor. The Linux kernel now runs in CPU ring 1, rather than ring 0 as when it was just a Linux VM. L1 Xen now runs in ring 0, so it must issue hypercalls to the cloud L0 hypervisor on behalf of the L1 dom0.
     Issue 2: Address translation. Introducing L1 Xen adds an extra layer of memory address translation.
  10. The Xen Blanket
      Enables PV device drivers for guests of Xen on Xen. (A blanket is something you throw over something else, to solve a problem you have.)
      ● Presented at the ACM EuroSys conference in 2012 by researchers from IBM and Cornell University:
        http://www1.unine.ch/eurosys2012/program/conference.html
        https://dl.acm.org/citation.cfm?doid=2168836.2168849
      ● Software not submitted to the Xen development community (possibly informally? we do not know). Upshot: the feature was never integrated into Xen.
      ● Deployed in the xcloud and SuperCloud projects at Cornell:
        http://xcloud.cs.cornell.edu
        http://supercloud.cs.cornell.edu/
      ● Code derived from the original efforts is available here:
        https://code.google.com/archive/p/xen-blanket/
        https://github.com/danlythemanly/xen-blanket
  11. The Xen Blanket: Structure
      ● Runs within an HVM cloud provider instance: running Xen itself as an HVM guest
      ● Modifications to the Xen hypervisor to enable unmodified in-guest PV drivers
      ● Modifications to the dom0 kernel: blanket drivers to support PV guest devices
      ● Supports PV guests within the instance
  12. The Xen Blanket: Star Lab development
      We re-implemented the architecture with modern Xen and Linux.
      ● Original Xen versions: 4.1.1; rough Xen 4.2.2; 4.8.2 (modified, got this working)
      ● Original Linux versions: 2.6.18, 3.1.2, and a rough 3.4.53 forward port
      New versions:
      ● Overhaul, full integration with modern Xen, including addition of XSM/Flask support, fixes to Xen's existing Kconfig items, etc.
      ● Xen 4.12 and Xen unstable (as of June 2019) variants
        ○ RFC patch series posted to xen-devel:
          https://lists.xenproject.org/archives/html/xen-devel/2019-06/msg01359.html
      ● Linux 4.15.5
        ○ Source available on GitHub:
          https://github.com/starlab-io/xenblanket-linux
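
      To examine the published code, a minimal sketch; the repository URL is taken from the slide above, and no particular branch or tag is implied:

        user@dev:~$ git clone https://github.com/starlab-io/xenblanket-linux
        user@dev:~$ # the matching hypervisor changes are in the RFC patch series linked above on xen-devel
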
  13. XenBlanket System Architecture (diagram): a nested deployment configuration. Xen with Xen Blanket modifications runs as the L1 hypervisor inside a virtual machine in HVM mode (e.g. an instance in a public cloud) on top of Xen as the L0 host hypervisor (the public cloud provider's hypervisor). The Linux dom0 (toolstack, Xen daemons, bridging and routing) runs a PV net-front and a PV block-front with Xen Blanket modifications to reach the cloud provider's networks and storage, and provides PV net-back and PV block-back to the Ubuntu and CentOS guest VMs (PV mode), which use standard PV net-front and PV block-front drivers.
  14. Nested Xen System Components
      ● New PV frontend device drivers for dom0
        ○ "Blanket drivers" replace the physical device drivers used in a non-nested Xen system
        ○ Enables frontends to use block and network devices of the cloud L0 system
      ● Modified dom0 Linux kernel
        ○ Updated components that interact with Xen to support Blanket drivers
      ● Modified L1 Xen to run within the cloud instance as the L1 hypervisor
        ○ New hypercalls to support the Blanket device drivers
          ■ Existing hypercalls are needed as well, to support PV device drivers in guest VMs
      ● Standard Xen tools running within dom0 to support and manage guests
        ○ Working with VM config files, disk images, networking, etc.
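
      As an illustration of the last point, a brief sketch of routine guest management from dom0 with the standard xl toolstack; the guest name and interface details are hypothetical:

        user@dom0:~$ sudo xl list                   # list the domains running on the L1 hypervisor
        user@dom0:~$ sudo xl console centos-guest   # attach to a guest's console
        user@dom0:~$ sudo xl shutdown centos-guest  # cleanly stop a guest
        user@dom0:~$ ip -br link                    # inspect dom0's bridge and network interfaces
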
  15. The new hypercalls
      With modern Xen, these additional six hypercalls are sufficient to enable use of the Xen Blanket PV driver front-ends for network and disk:
      ● nested_xen_version : version, get_features
      ● nested_memory_op : add_to_physmap for: shared info, grant table
      ● nested_hvm_op : get_param, set_param
      ● nested_grant_table_op : query_size
      ● nested_event_channel_op : alloc_unbound, bind_vcpu, close, send, unmask
      ● nested_sched_op : shutdown
      Implemented as separate hypercalls because each proxies a subset of an existing hypercall's operations to the lower hypervisor.
  16. XSM/Flask integration
      ● New initial sid: nestedxen
      ● Expanded control over xen_version, down to each individual op
      ● Events: the most significant difference between the nested and non-nested cases
        ○ In a non-nested Xen system, the security identifier for an event channel is a compound formed from the security labels of both endpoints.
        ○ In a nested system, the remote endpoint of a Blanket event channel is governed by a different hypervisor, so its XSM/Flask security label is not available.
          ■ Do not use compound security labels for nested event channels.
          ■ New security class: nested_event
  17. Contrast with Xen's PV Shim
      ● Both the nested and non-nested hypercalls are available concurrently
      ● Differences in the sub-ops supported in the nested hypercalls
        ○ A smaller set is preferable: it reduces hypervisor size, complexity and attack surface
      ● XSM/Flask integration
      ● Distrust the correctness of the lower hypervisor: do not disable SMAP
  18. Deployment: The Case Study
      GOAL: Develop a cloud-based user environment focused on security and usability.
      IMPLEMENTATION: Leverage XenBlanket on AWS EC2 instances for DomU management.
  19. Design: Architecture (diagram): two AWS EC2 HVM instances, each running Xen as the L1 hypervisor with Xen Blanket modifications and an Ubuntu dom0, on top of the AWS EC2 L0 hypervisor. EC2 HVM Instance 1 hosts two CentOS guest VMs (PV mode); EC2 HVM Instance 2 hosts one.
  20. Features: Introspection (diagram): an introspection monitor in the Ubuntu dom0, running on L1 Xen with Xen Blanket modifications above the AWS EC2 L0 hypervisor, monitors a CentOS guest VM (PV mode): the DomU's kernel modules, running processes and LSM.
  21. Feature: Migration (diagram): the two-instance deployment from the architecture slide, each EC2 HVM instance running L1 Xen with Xen Blanket modifications and an Ubuntu dom0, with CentOS guest VMs (PV mode) hosted on both instances.
  22. Feature: Migration (diagram, continued): the same two-instance deployment, illustrating a CentOS guest VM (PV mode) migrating between EC2 HVM Instance 1 and EC2 HVM Instance 2.
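
      The migration shown above would be driven from dom0 with the standard xl toolstack, roughly as follows; the guest name and destination hostname are hypothetical, and live migration assumes the two dom0s share (or mirror) the guest's storage and can reach each other over SSH:

        user@dom0-instance1:~$ sudo xl migrate centos-guest dom0-instance2.example.net   # live-migrate over SSH
        user@dom0-instance2:~$ sudo xl list                                              # the guest now runs on Instance 2
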
  23. Demo: Migration times
  24. Demo: Migration in action
  25. Xen Blanket: Next steps
      Collaboration with the Xen and Linux Open Source communities:
      ● RFC patch series for Xen posted to support discussion at this conference
      ● Need to agree on the driver interface for PV drivers in nested systems
      ● Need a set of nesting-aware Xen device drivers in Linux
      Enable Xen as a first-class cloud workload.
  26. Further resources
      ● Open Source material from this project: https://github.com/UTSA-ICS/galahad
      ● IARPA VirtUE: https://www.iarpa.gov/index.php/research-programs/virtue
      ● Xen research: "Live Migration of Virtual Machines", USENIX NSDI 2005.
