During the last few months of 2011 the Xen community started an effort to port Xen to ARMv7 with virtualization extensions, using the Cortex A15 processor as the reference platform. The new Xen port exploits this set of hardware capabilities to run guest VMs as efficiently as possible, while keeping the ARM-specific changes to the hypervisor and the Linux kernel to a minimum.
While developing the new port we took the chance to remove legacy concepts like PV or HVM guests, and to support only a single kind of guest, comparable to "PV on HVM" in the Xen x86 world.
This talk explains the reasons behind this and other design choices made during the early development process, and walks through the main technical challenges we had to solve to accomplish our goal.
Notable examples are the way Linux guests issue hypercalls and receive event channel notifications from Xen.
4.
ARM Servers
coming to market
4GB RAM, 4 cores per node
3 x 6 x 4 x 4 = 288 cores
● single node virtualization
● manageability
5.
Challenges
Another PVOPs infrastructure for ARM in
Linux?
How would the Linux Community react?
6.
ARMv7 virtualization
extensions to the rescue!
"The ARM Architecture virtualization extension
and Large Physical Address Extension (LPAE)
enable the efficient implementation of virtual
machine hypervisors for ARM architecture
compliant processors."
7.
Design goals
● exploit the hardware as much as possible
● one type of guest
○ no PVOPs
○ use PV interfaces for IO
● Rearchitected for the modern age:
○ no QEMU
○ no compat code
○ no shadow pagetables
9.
One type of guest
Like PV guests do:
● support booting from a supplied kernel
● no emulated devices
● use PV interfaces for IO
no need for QEMU
10.
One type of guest
Like HVM guests do:
● no PV MMU calls: exploit HW nested paging
● same entry point on native and on Xen
● use Device Tree to discover Xen presence
● no unnecessary devices in the Device Tree
● simple device emulation can be done in Xen
no need for QEMU
11.
Exploit the hardware
Exploit the hardware virtualization extensions as much as possible:
● hypervisor mode
● MMU: second stage translation
○ no PV MMU calls: no need for PVOPs
○ no shadow pagetables: -10721 lines of code!!
● hypercall: HVC
● generic timer
12.
Generic Interrupt Controller (GIC)
an interrupt controller with virtualization support
● use the GIC to inject hardware interrupts into
dom0
● use the GIC to inject event notifications into
any guest domain with Xen support
○ use PPI 31
○ advertise the IRQ via Device Tree
● No special Xen entry point
● No Xen platform PCI device
13.
The hypercall calling
convention
the hypercall interface:
● hvc instruction
● hypervisor-specific immediate 0xEA1
● hypercall arguments passed in registers
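Put together, a guest hypercall under this convention can be sketched as the ARM assembly fragment below. This is illustrative rather than Xen's actual implementation: it assumes (as in the Linux guest code) that the hypercall number travels in r12, arguments in r0 onwards, and the result comes back in r0.

```
@ Hedged sketch of a Xen/ARM hypercall from a 32-bit guest, assuming
@ the convention on this slide: hypercall number in r12, arguments in
@ r0..r4, and the hypervisor-specific immediate 0xEA1 on HVC.
    mov r12, #HYPERCALL_NUMBER   @ which hypercall (symbolic, illustrative)
    mov r0, #0                   @ arg0
    mov r1, #0                   @ arg1 (unused here)
    hvc #0xEA1                   @ trap to hypervisor mode
    @ on return, r0 holds the hypercall result
```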
14.
a 64 bit "ready" ABI
● a single hypercall ABI for 32 bit guests and
64 bit guests
no compat code in Xen
○ 2600 lines of code lighter
15.
a 64 bit "ready" ABI
make unsigned long and pointers 8-byte sized and 8-byte aligned everywhere
16.
a 64 bit "ready" ABI
Guest
8 bytes unsigned long
8 bytes aligned
---
8 bytes pointer
8 bytes aligned
HVC | r0 | r1 | r2 | r3 | r4 | ... | r12
Xen
17.
a 64 bit "ready" ABI
Guest
8 bytes unsigned long
8 bytes aligned
32 bit register ---
8 bytes pointer
8 bytes aligned
HVC | r0 | r1 | r2 | r3 | r4 | ... | r12
Xen
XEN_GUEST_HANDLE_PARAM XEN_GUEST_HANDLE
18.
a 64 bit "ready" ABI
make unsigned long and pointers 8-byte sized and 8-byte aligned everywhere
Collateral damage: a 1547-line patch to
s/XEN_GUEST_HANDLE/XEN_GUEST_HANDLE_PARAM/
19.
Device Tree
Use Device Tree to describe the virtual platform
hypervisor {
compatible = "xen,xen-4.2", "xen,xen";
reg = <0xb0000000 0x20000>;
interrupts = <1 15 0xf08>;
};
20.
Device Tree
Use Device Tree to describe the virtual platform
version of the Xen ABI
hypervisor {
compatible = "xen,xen-4.2", "xen,xen";
reg = <0xb0000000 0x20000>;
interrupts = <1 15 0xf08>;
}; Grant table
memory area
event notifications IRQ
21.
Design goals:
did we meet them?
● exploit the hardware as much as possible
● one type of guest
○ no PVOPs
○ use PV interfaces for IO
● Rearchitected for the modern age:
○ no QEMU
○ no compat code
○ no shadow pagetables
22.
Design goals:
did we meet them?
● exploit the hardware as much as possible
● one type of guest
○ no PVOPs → nested paging in HW, same entry point as native
○ use PV interfaces for IO → no device emulation, use DT to describe the HW
● Rearchitected for the modern age:
○ no QEMU → no device emulation, use DT to describe the HW
○ no compat code → 64 bit ready ABI
○ no shadow pagetables → nested paging in HW
23.
Status of the Project
● Xen and Dom0 booting
● VM creation and destruction
● PV console, disk and network working
● Xen ARM patches mostly upstream
● Linux Xen ARM patches v3 sent to LKML
24.
Open Questions: ACPI
"Modern PCs are horrible. ACPI is a complete design disaster in
every way. But we're kind of stuck with it."
Linus Torvalds
● ACPI? Really??
● What about Device Tree?
● Do we need an ACPI parser in Xen?
drivers/acpi: 110418 lines of code!
Equivalent to 38% of the Xen (x86 and ARM) code base!!
25.
Open Questions: UEFI
"EFI is this other [...] brain-damage (the first one being ACPI). "
Linus Torvalds
● Xen as Portable Executable
● Grub2 on ARM: multiboot2?
● UEFI runtime services
○ PVOPS? Trap and emulate?
26.
Open Questions:
Client Devices
● lack of a reference architecture
● UEFI Secure Boot
● Windows 8