
libvirt integration and testing for enterprise KVM/ARM - BUD17-213



Session ID: BUD17-213
Session Name: libvirt integration and testing for enterprise KVM/ARM - BUD17-213
Speaker: Drew Jones, Eric Auger
Track: LEG

★ Session Summary ★
This technical discussion highlights ongoing Red Hat activities for the integration of an enterprise virtualization stack and presents a vision of an enterprise guest. The status of features such as live migration, device assignment, PCIe topology, large guest support, and more will be presented. The status will cover the integration up to the libvirt level of the stack, and exhibit that components above libvirt can "just work", even though development, to date, has been predominantly focused on other architectures. As feature parity with x86 is a goal, the remaining gaps with x86 will also be highlighted. Finally, the status of the verification efforts, specifically those involving the Avocado and kvm-unit-tests frameworks, will be presented.

★ Event Details ★
Linaro Connect Budapest 2017 (BUD17)
6-10 March 2017
Corinthia Hotel, Erzsébet krt. 43-49, 1073 Budapest, Hungary

Keywords: LEG, KVM, ARM, testing


  1. libvirt integration and testing for enterprise KVM/ARM
     Drew Jones, Eric Auger
     Linaro Connect Budapest 2017 (BUD17)
  2. Overview
     • Enterprise guest requirements
     • QEMU/KVM enterprise guest management
     • libvirt
       • A one-slide introduction
       • What's new in libvirt for AArch64
     • libvirt and virt tool example uses
       • Describing a PCIe topology
       • Managing PCIe device assignment
       • Managing guest migration
     • Virt stack verification
     • Summary
  3. Enterprise guest
  4. Features of an enterprise guest
     • AArch64 machine features
       • Wide size range: memory, cores, disks, network adapters
       • PCIe device support
       • Firmware-initiated boot
       • Standardized
     • Virtual machine features
       • VirtIO
         • Generic devices
         • VM spec compliant VirtIO
  5. Additional guest requirements
     • No unnecessary host exposure
       • Generic CPU model for vCPUs
     • Dynamic memory sizing
       • Ballooning
       • ACPI hotplug events
     • vCPU hotplug
       • ACPI hotplug events
     • Device attach/detach support
       • SCSI and PCIe hotplug
     • High performance support
       • PCIe device assignment (VFIO)
       • Host pinning: vNUMA and virtual CPU topology
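The pinning and topology requirements above land in the libvirt domain XML. A minimal sketch for an 8-vCPU guest; the cpuset values, core counts, and memory sizes are illustrative, not taken from the talk:

```xml
<!-- Illustrative sketch: pin vCPUs to host CPUs and describe a
     two-cell virtual NUMA topology; all values are made up -->
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <!-- ...one vcpupin element per vCPU... -->
</cputune>
<cpu>
  <topology sockets='1' cores='8' threads='1'/>
  <numa>
    <cell id='0' cpus='0-3' memory='8' unit='GiB'/>
    <cell id='1' cpus='4-7' memory='8' unit='GiB'/>
  </numa>
</cpu>
```

The ballooning item from the same slide is a one-liner in the devices section: <memballoon model='virtio'/>.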
  6. QEMU/KVM guest management
     • QEMU interfaces are QEMU version dependent
     • QMP access requires additional setup
     • QEMU command line grows and grows…

     qemu-system-aarch64 -machine virt-2.9,accel=kvm,gic-version=3 \
       -cpu host,pmu=on -m 16384 -smp 48 \
       -drive file=/usr/share/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
       -drive file=/images/nvram/rhel_VARS.fd,if=pflash,format=raw,unit=1 \
       -display none -serial mon:stdio -no-user-config -nodefaults \
       -device ioh3420,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \
       -device ioh3420,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \
       -device ioh3420,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 \
       -device ioh3420,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 \
       -device virtio-scsi-pci,id=scsi0,bus=pci.3,addr=0x0 \
       -drive file=/images/rhel.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=none \
       -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
       -netdev tap,id=hostnet0,script=no,downscript=no,ifname=tap0,vhost=on \
       -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:62:c8:20,bus=pci.2,addr=0x0 \
       -device virtio-balloon-pci,id=balloon0,bus=pci.1,addr=0x0
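For comparison, here is a trimmed libvirt domain XML that would generate a command line much like the one above. This is a hedged sketch, not the XML from the talk; the guest name, bridge name, and network details are invented for illustration:

```xml
<domain type='kvm'>
  <name>rhel-guest</name>                      <!-- illustrative name -->
  <memory unit='MiB'>16384</memory>
  <vcpu>48</vcpu>
  <os>
    <type arch='aarch64' machine='virt'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <nvram>/images/nvram/rhel_VARS.fd</nvram>
  </os>
  <cpu mode='host-passthrough'/>
  <devices>
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/images/rhel.qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>
    <interface type='bridge'>                  <!-- assumes a host bridge -->
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <memballoon model='virtio'/>
  </devices>
</domain>
```

Note what is absent: the machine version, GIC version, and PCIe root port layout are all filled in by libvirt, which is the point of the slides that follow.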
  7. libvirt
  8. libvirt provides a consistent API
     [Diagram: virsh, virt-manager, virt-install, libguestfs/guestfish, OpenStack, and RHEV/oVirt all use the libvirt API, talking over RPC to the libvirt daemon on the node, which manages the QEMU processes]
  9. libvirt 3.x: new AArch64 support
     • New support
       • Use virtio-pci by default for mach-virt guests
       • PCIe placement
       • virtio-gpu
       • Auto GIC version selection
     • Features for free (works like x86)
       • vPMU enablement
       • PCIe device assignment
       • Migration for GICv3/ITS
  10. PCIe placement
      • PCIe-only (no legacy PCI)
      • Complies with QEMU docs/pcie.txt
        • Root bus (pcie.0) and PCIe root ports at top
        • ioh3420 → generic root port
        • Flat hierarchy preferred (but PCIe switches are available)
      • Root ports or switch downstream ports necessary for hotplug
      [Diagram: pcie.0 root bus with PCIe root ports #0..#N; below one root port, a switch (upstream port with downstream ports), each downstream port with a PCIe device attached]
  11. PCIe placement: virt-manager

      Domain XML:
      <controller type='pci' index='0' model='pcie-root'/>
      <controller type='pci' index='1' model='pcie-root-port'>
        <model name='ioh3420'/>
        <target chassis='1' port='0x8'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
      </controller>

      Generated QEMU arguments:
      -device ioh3420,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1
      -device ioh3420,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1
      -device ioh3420,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2
      -device virtio-scsi-pci,id=scsi0,bus=pci.2,addr=0x0
      -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=27
      -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:8e:3a:57,bus=pci.1,addr=0x0
  12. PCIe device assignment
      • Minimum requirements: kernel 4.11, QEMU 2.8, libvirt 3.x
      • virt-install: --host-device=pci_0005_90_00_0
      • virsh: enumeration/dump, hot-plug/hot-unplug

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='5' bus='144' slot='0' function='0'/>
        </source>
      </hostdev>

      virsh nodedev-list --cap pci
      virsh nodedev-detach pci_0005_90_00_0 --driver vfio
      virsh nodedev-reattach pci_0005_90_00_0
      virsh attach-device guest hostdev.xml
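One detail worth calling out: the nodedev name pci_0005_90_00_0 encodes the PCI address fields in hex, while the <address> element on the slide uses plain decimal (domain='5', bus='144'). A quick sketch of the conversion:

```shell
# Split a libvirt nodedev name (hex fields) and print the decimal
# values used by the <address> element in the hostdev XML.
name="pci_0005_90_00_0"
IFS=_ read -r _ dom bus slot fn <<< "$name"
domain=$((16#$dom)); bus=$((16#$bus)); slot=$((16#$slot)); func=$((16#$fn))
echo "domain=$domain bus=$bus slot=$slot function=$func"
# → domain=5 bus=144 slot=0 function=0
```

As the 0x-prefixed values on the virt-manager slide show, hex with a 0x prefix is also accepted in these attributes.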
  13. Migration
      • GICv3 migration support
        • Minimum requirements: kernel 4.11, libvirt 3.x
        • QEMU: patches posted for 2.9
      • ITS migration support
        • Patches posted
      • Hosts need identical hardware
      • Hosts do not need identical QEMU versions
        • mach-virt is versioned
  14. Migration with libvirt
      • Tested with virsh and virt-manager
      • Used libvirt managed shared storage pool
      • virsh save / virsh restore
      • virsh migrate
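A sketch of what the "libvirt managed shared storage pool" bullet can look like, assuming NFS-backed storage; the host name and paths are invented for illustration:

```xml
<!-- Hypothetical NFS-backed storage pool visible to both hosts -->
<pool type='netfs'>
  <name>shared-images</name>
  <source>
    <host name='nfs.example.com'/>     <!-- illustrative NFS server -->
    <dir path='/export/images'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/shared</path>
  </target>
</pool>
```

Define and start the pool on both hosts (virsh pool-define pool.xml; virsh pool-start shared-images), after which a live move is a single command, e.g. virsh migrate --live guest qemu+ssh://desthost/system.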
  15. Virt stack verification
  16. avocado / avocado-vt
      • avocado is a test framework inspired by autotest
        • Supports plugins
      • avocado-vt is a plugin supporting QEMU/KVM/libvirt tests
        • Can drive QEMU or libvirt / virt-tools
        • Ported to AArch64
          • Teaching it defaults that match AArch64 enterprise guest setups
        • Has thousands of test cases: functional and performance
          • E.g. guest install, boot, migrate
          • Not all tests applicable to a given target
        • Most libvirt based tests just work on AArch64
        • Tests can be external testsuites (kvm-unit-tests)
  17. Summary
  18. Summary
      • QEMU/KVM enterprise guests have complicated setups
      • Many useful tools have been built on libvirt
      • Automated virt stack verification can target libvirt (avocado-vt)
      • Automated virt stack verification can benefit from libvirt (avocado-vt)
      libvirt provides common and stable APIs
      Most libvirt-based tools just work when ported to AArch64
  19. References
      • libvirt and virt-tools
      • avocado-vt and kvm-unit-tests
  20. THANK YOU
  21. kvm-unit-tests
      • Simple guest kernel as a unit test
      • Currently has support and tests for KVM's emulation of
        • gicv2/v3
        • PMU
        • PSCI
      • Exits for QEMU PCI emulation
      • WIP: gicv3 ITS
      • WIP: nested virt
      • WIP: instruction emulation regression tests
      • Gaining contributors
        • Contribute now and get a free baseball cap (no, not really...)