Static Partitioning with Xen, LinuxRT, and Zephyr: A Concrete End-to-end Example - ELC NA 2022

Stefano Stabellini
Embedded Linux Conference 2022
2 |
Mixed-Criticality and Static Partitioning
• It is common to have a mix of critical and non-critical components
• Real-time vs. non real-time
• Function-critical vs. user interface or cloud interface
• Running them in the same environment puts the critical function at risk
• A failure in the non-critical component causes a critical system crash
• The non-critical component using too much CPU time causes a missed deadline
• Don’t put all your eggs in one basket
• Static partitioning allows you to run them separately in isolation
• Many baskets!
• Each domain has direct hardware access (IOMMU protected)
• Real-time (< 5us latency)
• Strong isolation between the domains
• Real-time isolation
• Failure isolation
• By default, each component has only enough privilege to do what it needs to do
[Diagram: Xen hosting one critical domain and two non-critical domains]
3 |
Static Partitioning
[Diagram: ZCU102 statically partitioned between Linux Dom0, LinuxRT, and Zephyr domains, with devices (GEM0, GEM1, MMC0, MMC1, TTC, SATA, Programmable Logic) assigned across them]
4 |
Xen Features for Static Partitioning
• Xen Dom0less: static domains configuration for Xen
• Statically-defined domains
• Sub-second boot times
• No need to create new VMs at runtime (possible but not required)
• No need for Dom0 (possible but not required)
• Each domain has direct hardware access
• Real-time and Cache Coloring: isolation from cache interference
• Deterministic interrupt latency at 3us
• VM-to-VM communication
• Static shared memory and event channels
• PV drivers for device sharing
• Safety-certifiability improvements underway
• MISRA C
• Deterministic interrupt handling and memory allocations
5 |
Step 1: a simple Xen dom0less static configuration
• Board: Xilinx ZCU102
• Xen and 3 VMs
• Regular Linux Dom0 from Xilinx BSP
• LinuxRT DomU with a minimal busybox rootfs
• Zephyr DomU
• Configuration parameters
• Memory allocation
• CPUs allocation
• Device assignment added later
• Do we need Dom0?
• Useful for system monitoring and domain restarts
• If dom0 is required, the Xen userspace tools (xl) need to be built and added to the dom0 rootfs
[Diagram: Xen running Linux Dom0, LinuxRT, and Zephyr as separate domains]
6 |
Xen Dom0less boot
[Diagram: U-Boot boots Xen; Xen then boots Linux Dom0 on CPU 0, LinuxRT DomU on CPUs 1-2, and Zephyr DomU on CPU 3 in parallel]
7 |
ImageBuilder
• A tool to automatically generate a bootable configuration from a simple text config file
• https://gitlab.com/xen-project/imagebuilder
MEMORY_START="0x0"
MEMORY_END="0x80000000"
DEVICE_TREE="elc22demo1/mpsoc.dtb"
XEN="elc22demo1/xen"
DOM0_KERNEL="elc22demo1/xilinx-linux"
DOM0_RAMDISK="elc22demo1/xen-rootfs.cpio.gz"
DOM0_MEM=1024
NUM_DOMUS=2
DOMU_KERNEL[0]="elc22demo1/linuxrt"
DOMU_RAMDISK[0]="elc22demo1/initrd.cpio"
DOMU_MEM[0]=2048
DOMU_VCPUS[0]=2
DOMU_KERNEL[1]="elc22demo1/zephyr.bin"
DOMU_MEM[1]=128
# bash imagebuilder/scripts/uboot-script-gen -t tftp -d . -c elc22demo/config
8 |
ImageBuilder boot.source
• ImageBuilder generates boot.source and boot.scr to be loaded by U-Boot
• Editing boot.source is possible
• Then call mkimage by hand to regenerate boot.scr
# mkimage -A arm64 -T script -C none -a 0xc00000 -e 0xc00000 -d boot.source boot.scr
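• At the U-Boot prompt, the generated script is then loaded and executed; for example, with the TFTP setup used above (the load address matches the mkimage call; the exact load command depends on your environment):
# tftpb 0xc00000 boot.scr
# source 0xc00000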
9 |
Building Xen with or without cache coloring support
• Building the hypervisor is very simple (it cross-compiles like Linux)
• Cache coloring is available as a menuconfig option
• Upstreaming in progress
• Currently available at https://github.com/Xilinx/xen
• Nothing else is needed unless you plan to use the Xen tools (xl) in dom0
# export CROSS_COMPILE=/path/to/cross-compiler
# export XEN_TARGET_ARCH=arm64
# cd xen.git/xen/
# make menuconfig
Architecture Features → L2 cache coloring
# make
10 |
Building a Linux/LinuxRT kernel for Xen
• Vanilla Linux releases and binaries work
• Using Linux v5.17 for the demo
• The following Kconfig options are recommended:
• CONFIG_XEN
• CONFIG_XEN_BLKDEV_BACKEND
• CONFIG_XEN_NETDEV_BACKEND
• CONFIG_XEN_GNTDEV
• CONFIG_XEN_GRANT_DEV_ALLOC
• CONFIG_XEN_GRANT_DMA_ALLOC
• CONFIG_SERIAL_AMBA_PL011
• CONFIG_SERIAL_AMBA_PL011_CONSOLE
• CONFIG_BRIDGE
• Either CONFIG_XEN or CONFIG_SERIAL_AMBA_PL011 is needed for console output
• Two patches are needed for PV driver support with dom0less
• The patches have already been accepted and will soon be upstream
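• For reference, the same options as a Kconfig fragment (a sketch; merge it into your defconfig, e.g. with scripts/kconfig/merge_config.sh):
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_XEN_GRANT_DMA_ALLOC=y
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_BRIDGE=y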
11 |
Building a Dom0 rootfs
• The Xen userspace tools (xl) need to be installed in the Dom0 rootfs
• Build a Linux rootfs for Dom0 with Xen support using Yocto:
DISTRO_FEATURES:append = " xen"
DISTRO_FEATURES:append = " virtualization"
IMAGE_INSTALL:append = " xen-base zlib-dev"
ASSUME_PROVIDED += " iasl-native"
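• With those lines added to the Yocto configuration (e.g. conf/local.conf), the Dom0 rootfs can be built with whichever image recipe you normally use, for example:
# bitbake core-image-minimal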
12 |
Building Zephyr with Xen support
• Initial Zephyr build environment setup required
• See https://docs.zephyrproject.org/latest/develop/getting_started/index.html
• Use the existing xenvm machine
• Build time Device Tree: boards/arm64/xenvm/xenvm.dts
• Memory mappings: soc/arm64/xenvm/mmu_regions.c
• To run Zephyr as Dom0less guest:
• replace xen_hvc with xen_consoleio_hvc in xenvm.dts
• build Zephyr with Dom0 support
• build Xen with DEBUG enabled
# west build -t menuconfig
-> SoC/CPU/Configuration Selection (Xen virtual machine on aarch64)
-> Xen virtual machine on aarch64
-> Zephyr as Xen Domain 0
-> Device Drivers
-> Serial Drivers
-> Xen hypervisor Dom0 console UART driver
# west build -b xenvm samples/hello_world/
13 |
Step 2: Add Device Assignment
• Device Assignment:
• Direct hardware access
• Protected by the IOMMU
• Example configuration:
• Assigning the Network Device (GEM) to LinuxRT
• Assigning the TTC timer to Zephyr
[Diagram: Xen with Linux Dom0, LinuxRT (GEM0 assigned), and Zephyr (TTC0 assigned)]
14 |
Xen Device Assignment
• Based on “partial device trees”
• They describe the device for the domU
• They configure device assignment
• Everything under the passthrough node is copied to the guest device tree
• Special device assignment configuration properties:
• xen,reg: <guest-address size host-address> tuples mapping the device MMIO regions into the guest
• xen,path: path of the corresponding device node in the host device tree
• xen,force-assign-without-iommu: allow the assignment even if the device is not protected by the IOMMU
• interrupts is used for interrupt remapping
• interrupt-parent must be 0xfde8 (the phandle Xen reserves for the guest interrupt controller)
/dts-v1/;

/ {
    #address-cells = <0x2>;
    #size-cells = <0x2>;

    passthrough {
        compatible = "simple-bus";
        ranges;
        #address-cells = <0x2>;
        #size-cells = <0x2>;

        ethernet@ff0e0000 {
            compatible = "cdns,zynqmp-gem";
            […]
            reg = <0x0 0xff0e0000 0x0 0x1000>;
            interrupt-parent = <0xfde8>;
            interrupts = <0x0 0x3f 0x4 0x0 0x3f 0x4>;
            xen,reg = <0x0 0xff0e0000 0x0 0x1000 0x0 0xff0e0000>;
            xen,path = "/amba/ethernet@ff0e0000";
        };
    };
};
15 |
Xen Device Assignment and ImageBuilder
• Add xen,passthrough; under the corresponding node in the host device tree
• Add the partial device tree binaries to the ImageBuilder config file
• ImageBuilder will load them automatically for you at boot
[…]
NUM_DOMUS=2
DOMU_KERNEL[0]="elc22demo2/linuxrt"
DOMU_RAMDISK[0]="elc22demo2/initrd.cpio"
DOMU_MEM[0]=2048
DOMU_VCPUS[0]=2
DOMU_PASSTHROUGH_DTB[0]="elc22demo2/ethernet@ff0e0000.dtb"
DOMU_KERNEL[1]="elc22demo2/zephyr.bin"
DOMU_PASSTHROUGH_DTB[1]="elc22demo2/timer@ff110000.dtb"
DOMU_MEM[1]=128
16 |
Xilinx Xen Device Tree examples repository
• Repository with device tree passthrough examples: https://github.com/Xilinx/xen-passthrough-device-trees
• Simplified ImageBuilder configuration with PASSTHROUGH_DTS_REPO, DOMU_PASSTHROUGH_PATHS
• Based on file names and example files, no automatic device tree generation yet
• Careful about “axi” vs “amba”, the name needs to match everywhere!
• host DTB, passthrough DTB, xen,path property, ImageBuilder config file
[…]
PASSTHROUGH_DTS_REPO="git@github.com:Xilinx/xen-passthrough-device-trees.git device-trees-2021.2"
NUM_DOMUS=2
DOMU_KERNEL[0]="elc22demo2/linuxrt"
DOMU_RAMDISK[0]="elc22demo2/initrd.cpio"
DOMU_MEM[0]=2048
DOMU_VCPUS[0]=2
DOMU_PASSTHROUGH_PATHS[0]="/axi/ethernet@ff0e0000"
DOMU_KERNEL[1]="elc22demo2/zephyr.bin"
DOMU_MEM[1]=128
DOMU_PASSTHROUGH_PATHS[1]="/axi/timer@ff110000"
17 |
Xen Partial Device Trees Automatic Generation
• System Device Tree: Device Tree based description of a full heterogeneous system
• Lopper
• a Device Tree manipulation tool
• takes a System Device Tree as input and generates multiple Device Trees as output
• Use Lopper to automatically generate Xen passthrough DTBs
• Just need to provide the host DTB and the device path, e.g. /amba/ethernet@ff0e0000
• Attend the presentation “System Device Tree and Lopper: Concrete Examples”, Thursday June 23 at 12PM!
18 |
Make use of a new device in Zephyr
• Add it to the board device tree boards/arm64/xenvm/xenvm.dts
• Map the device memory in soc/arm64/xenvm/mmu_regions.c
MMU_REGION_FLAT_ENTRY("TTC",
DT_REG_ADDR_BY_IDX(DT_INST(0, xlnx_ttcps), 0),
DT_REG_SIZE_BY_IDX(DT_INST(0, xlnx_ttcps), 0),
MT_DEVICE_nGnRnE | MT_P_RW_U_RW | MT_NS),
ttc0: timer@ff110000 {
compatible = "xlnx,ttcps";
status = "okay";
interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL IRQ_DEFAULT_PRIORITY>,
<GIC_SPI 37 IRQ_TYPE_LEVEL IRQ_DEFAULT_PRIORITY>,
<GIC_SPI 38 IRQ_TYPE_LEVEL IRQ_DEFAULT_PRIORITY>;
interrupt-names = "irq_0", "irq_1", "irq_2";
reg = <0x0 0xff110000 0x0 0x1000>;
label = "ttc0";
interrupt-parent = <&gic>;
clock-frequency = <8320000>;
};
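• With the node and mapping in place, the TTC registers can be accessed directly over MMIO. A minimal sketch (the helper name, the 0x18 counter-value offset, and the include paths are illustrative assumptions, not from the slides):
/* Sketch: read the TTC counter via MMIO from Zephyr.
 * Assumes the xlnx,ttcps node and the mmu_regions.c entry shown above;
 * the 0x18 register offset is an assumption for illustration only. */
#include <stdint.h>
#include <zephyr/devicetree.h>
#include <zephyr/sys/sys_io.h>

#define TTC_BASE DT_REG_ADDR_BY_IDX(DT_INST(0, xlnx_ttcps), 0)

static inline uint32_t ttc_read_counter(void)
{
    return sys_read32(TTC_BASE + 0x18); /* counter value register (assumed offset) */
}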
Step 3: Xen Cache Coloring
[Diagram: four cores, each with a private L1 cache, sharing a single L2 cache in front of DDR]
Xen Cache Coloring
• CPU clusters often share the L2 cache
• Interference via the L2 cache affects performance
• App0 running on CPU0 can cause cache entry evictions, which affect App1 running on CPU1
• App1 running on CPU1 could miss a deadline due to App0's behavior
• It can happen between apps running on the same OS and between VMs on the same hypervisor
• Hypervisor solution: cache partitioning, AKA cache coloring
• Each VM gets its own allocation of cache entries
• No shared cache entries between VMs
• Allows real-time apps to run with deterministic IRQ latency
[Diagram: the same four cores with the L2 cache partitioned by color, one partition per VM, in front of DDR]
22 |
Xen Cache Coloring and ImageBuilder
• Cache Coloring on Xen ZCU102
• 16 colors, each color corresponds to 256MB
• Color bitmask is 0xF000
• Configuration:
• Xen → color 0
• Dom0 → colors 1-5
• LinuxRT → colors 6-14
• Zephyr → color 15
# Xen and Dom0: use XEN_CMD in the ImageBuilder config:
XEN_CMD="…xen_colors=0-0 dom0_colors=1-5 way_size=65536"
DOMU_COLORS[0]="6-14"
DOMU_COLORS[1]="15-15"
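• Where those numbers come from (assuming the ZCU102 APU's 1MB, 16-way set-associative L2 cache):
• way_size = 1MB / 16 ways = 65536 bytes
• number of colors = way_size / 4KB page size = 16
• the color index sits in address bits 12-15, hence the 0xF000 bitmask
• with 4GB of DDR, each color maps 4GB / 16 = 256MB of memory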
Step 4: Shared Memory and Event Channels
[Diagram: Linux Dom0 (CPU 0), LinuxRT (CPUs 1-2), and Zephyr (CPU 3) communicating through a shared ring buffer]
24 |
Shared Memory Configuration
• A static shared memory region chosen at build time
• E.g. 0x7fe0_1000
• The address matters for cache coloring: 0x7fe0_1000 means color "1"
• Xen event channels can be used for notifications
• A BSD-licensed xen.h file with Xen hypercall definitions is available
• Can be embedded in Zephyr and other RTOSes
• Makes it easy to add event channel support anywhere
• Static event channel definitions based on device tree available in the next Xen release
• Today event channels are created dynamically
• From the next release they can be configured statically as part of the Xen domain configuration (e.g. via ImageBuilder)
• Static shared memory support also coming in the next Xen release
• Possible to define shared memory without editing the memory node
• Work in progress by ARM
25 |
Shared Memory Configuration
• Carve out an address range from the memory node
• 0x7fe00000 - 0x7ff00000
• Add it as a separate sram node
memory {
    device_type = "memory";
    reg = <0x00 0x00 0x00 0x7fe00000 0x08 0x00 0x00 0x80000000>;
};

sram@7fe00000 {
    compatible = "mmio-sram";
    reg = <0x0 0x7fe00000 0x0 0x100000>;
    xen,passthrough;
};
26 |
Shared Memory Configuration
• Add shared memory to the domU passthrough device tree
passthrough {
    compatible = "simple-bus";
    ranges;
    #address-cells = <0x2>;
    #size-cells = <0x2>;

    /* timer has been tested with a baremetal app only */
    timer@ff110000 {
        […]
    };

    sram@7fe01000 {
        compatible = "mmio-sram";
        reg = <0x0 0x7fe01000 0x0 0x1000>;
        xen,reg = <0x0 0x7fe01000 0x0 0x1000 0x0 0x7fe01000>;
        xen,force-assign-without-iommu = <0x1>;
    };
};
27 |
Make use of the shared memory and event channels in Zephyr
• Map the shared memory
• Setup event channels
MMU_REGION_FLAT_ENTRY("RAM_SHARED",
0x7fe01000,
0x1000,
MT_NORMAL_NC | MT_P_RW_U_RW | MT_NS),
#include "xen.h”
[…]
bind.remote_dom = remote_domid;
bind.remote_port = remote_port;
bind.local_port = 0;
ret = xen_hypercall(EVTCHNOP_bind_interdomain, (unsigned long)&bind, 0, 0, HYPERVISOR_event_channel_op);
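• To notify the other end after writing to the shared page, the same hypercall helper can be reused with EVTCHNOP_send. A sketch, assuming the xen.h above includes the standard EVTCHNOP_send / struct evtchn_send definitions (local_port is filled in by the bind call):
struct evtchn_send send;
send.port = bind.local_port; /* port allocated by EVTCHNOP_bind_interdomain */
ret = xen_hypercall(EVTCHNOP_send, (unsigned long)&send, 0, 0, HYPERVISOR_event_channel_op);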
28 |
Make use of the shared memory and event channels in Linux
• Write a new driver
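• The init function below references a mapped pointer and an interrupt handler; a minimal sketch of those pieces, placed above it in the same file (names are illustrative, error handling omitted):
static void *shared_mem;
static int irq;

/* Runs when the remote domain signals the event channel */
static irqreturn_t evtchn_handler(int irq, void *dev_id)
{
    pr_info("shared_intf: event channel notification received\n");
    xen_irq_lateeoi(irq, 0); /* required with the lateeoi binding used below */
    return IRQ_HANDLED;
}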
static int __init shared_mem_init(void)
{
    char *str = "go";
    struct resource r;
    struct device_node *np = of_find_compatible_node(NULL, NULL, "xen,shared-memory-v1");
    struct evtchn_alloc_unbound alloc_unbound;

    /* Find the shared memory region described in the device tree and map it */
    of_address_to_resource(np, 0, &r);
    shared_mem = phys_to_virt(r.start);

    /* Allocate an unbound event channel for the remote domain to bind to */
    alloc_unbound.dom = DOMID_SELF;
    alloc_unbound.remote_dom = 1; /* domid 1 */
    HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);

    /* Get an IRQ for notifications arriving on the event channel */
    irq = bind_evtchn_to_irqhandler_lateeoi(alloc_unbound.port, evtchn_handler, 0, "shared_intf", NULL);

    /* Write a message, then publish the port number to the other domain */
    memcpy(shared_mem, str, 3);
    mb();
    memcpy(shared_mem + 4, &alloc_unbound.port, sizeof(alloc_unbound.port));
    return 0;
}
29 |
Step 5: Dom0less with PV Drivers
[Diagram: U-Boot boots Xen; Linux Dom0 runs init-dom0less and the NetBack driver, LinuxRT runs the NetFront driver, and Zephyr runs on CPU 3]
30 |
Dom0less and PV Drivers
[Diagram: Xen running Linux Dom0 on CPU 0, LinuxRT DomU on CPUs 1-2, and Zephyr DomU on CPU 3]
31 |
Dom0less and PV Drivers
• Xen PV Drivers are used to share a device across multiple domains
• Share the network across multiple domains
• Share a disk across multiple domains, a partition to each domain
• PV network can also be used for VM-to-VM communication
• With dom0less, PV drivers are instantiated from Dom0 after boot has completed
• Fast boot times: the critical application can start immediately, no need to wait
• PV drivers connect later, once Dom0 has finished booting
• A Dom0 environment with the Xen userspace tools is required to set up the connection
• Use init-dom0less in Dom0 to initialize PV drivers, then hotplug a PV network connection
/usr/lib/xen/bin/init-dom0less
brctl addbr xenbr0; ifconfig xenbr0 192.168.0.1 up
xl network-attach 1
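• Inside the LinuxRT DomU, the new PV interface then appears as a regular network device and can be configured as usual (addresses are just examples):
ifconfig eth0 192.168.0.2 up
ping 192.168.0.1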
Questions?