XPDS13: Xen on ARM Update - Stefano Stabellini, Citrix

Many significant improvements have been made to Xen and Linux for the ARM architecture since September 2012, when initial support for Xen on ARM was introduced in the kernel. The number of contributors has increased considerably, as has the number of different companies behind them. Xen on ARM has become a true multivendor project. Today Linux 3.11 can run on Xen on ARM as a DomU or Dom0, 32-bit or 64-bit, with one or more CPUs. Xen 4.3, released in July 2013, is the first hypervisor release to support ARMv7 and ARMv8 platforms. This talk will discuss the current status of the project, the principal technical advancements achieved during the last year of development, and the problems still left unsolved. It will relate the experience of porting Xen to many new ARM SoCs and working with multiple hardware vendors in the ARM ecosystem, within and outside the Linaro Enterprise Group.

  1. Xen on ARM: a success story. Stefano Stabellini - Citrix Xen Project Team
  2. Achievements of one year. A timeline running from 11/11, when part-time Xen ARM hacking started, to 07/13, when Xen 4.3 was released with ARM and ARM64 support; the milestones marked along the way (08/12, 09/12, 11/12, 01/13, 03/13, 06/13) include Xen running on real ARM hardware, the first Xen on ARM talk at Xen Summit 2012, Xen support for ARM upstream in Linux 3.7, Citrix announcing that it will be joining Linaro, Xen support for ARM64 upstream in Linux 3.11, and Xen 64-bit on ARM64, with a "you are here" marker at the end.
  3. A growing community. Xen-devel ARM traffic from August 2012: ● 4685 emails: 360 emails per month! ● 39% of which are not from Citrix
  4. Hardware support. Upstream: ● Versatile Express Cortex A15 ● Arndale board ● ARMv8 FVP. In progress: ● Calxeda “Midway” ● Applied Micro “Mustang” ● Cubieboard2 ● Broadcom Brahma-B15 ● OMAP5
  5. Upstream features. Xen v4.3: ● basic lifecycle operations ● memory ballooning ● scheduler configurations and vcpu pinning. Linux v3.11: ● dom0 and domU support ● 32-bit and 64-bit support ● SMP support ● PV disk, network and console
  6. Coming in Xen 4.4: ● 64-bit guest support ● live-migration ● SWIOTLB
  8. The problem (diagram): Linux translates virtual addresses to physical addresses (stage 1); Xen's stage 2 translation maps physical addresses to the machine addresses seen by the hardware.
  9. The problem: dom0 DMA (diagram): the same two stages, now with a device doing DMA; the device's accesses bypass Xen's stage 2 translation, so the physical addresses dom0 knows are not the machine addresses the device needs.
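The gap the two diagrams describe is easiest to see in code. Below is a small user-space model of the situation, not actual kernel or hypervisor code; the p2m table contents and every identifier are made up for illustration. It shows why dom0 cannot simply hand a device the addresses it considers physical: the device does not go through Xen's stage 2 translation, so it needs real machine addresses.

```c
/* Toy model of the Xen on ARM dom0 DMA problem (illustrative only). */
#include <stdio.h>

#define PAGE_SHIFT 12
#define NR_GUEST_PAGES 4

/* Stage-2 table kept by Xen: guest (pseudo-)physical frame -> machine frame.
 * The CPU walks this transparently, but a DMA master does not,
 * unless an IOMMU is present. Values are invented. */
static const unsigned long p2m[NR_GUEST_PAGES] = { 0x80400, 0x80401, 0x91000, 0x80403 };

/* What the device actually sees on the bus. */
static void device_dma(unsigned long bus_addr)
{
    printf("device DMAs to machine address 0x%lx\n", bus_addr);
}

int main(void)
{
    unsigned long gfn = 2;                        /* dom0's view of the frame */
    unsigned long guest_phys = gfn << PAGE_SHIFT; /* what dom0 calls "physical" */
    unsigned long machine    = p2m[gfn] << PAGE_SHIFT;

    /* Wrong: handing the pseudo-physical address to the device makes it
     * DMA to machine address 0x2000, which may belong to someone else. */
    device_dma(guest_phys);

    /* Right: the device needs the machine address behind the p2m entry. */
    device_dma(machine);
    return 0;
}
```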
  10. The best solution: IOMMU (diagram): the MMU plus Xen's stage 2 translation handle CPU accesses from Linux, while an IOMMU translates the device's DMA requests into machine addresses, so dom0 can hand the device the addresses it already knows.
  11. The workaround: dom0 1:1 mapping (diagram): Xen builds dom0 so that physical address = machine address, so device DMA using dom0's physical addresses reaches the right memory without an IOMMU.
  12. The workaround: dom0 1:1 mapping ● rigid solution ● no ballooning in dom0 ● no page sharing in dom0 ● does not work with foreign grant table mappings
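The appeal and the limits of the 1:1 workaround both fit in a few lines. The sketch below is again a toy user-space model with invented names: with a guaranteed identity p2m, the DMA address of a dom0 page is just its physical address, but a foreign page granted by a guest and mapped into dom0 does not honour that identity, which is the last bullet above.

```c
/* Toy sketch of the dom0 1:1 mapping assumption (illustrative only). */
#include <stdio.h>

#define PAGE_SHIFT 12

/* With the 1:1 workaround Xen keeps gfn == mfn for every page it gave
 * dom0 at boot, so no translation or hypercall is needed at all. */
static unsigned long dma_addr_1to1(unsigned long pfn)
{
    return pfn << PAGE_SHIFT;   /* pseudo-physical == machine */
}

int main(void)
{
    /* Fine: a page from dom0's initial allocation obeys the identity map. */
    printf("local page:   dma addr 0x%lx\n", dma_addr_1to1(0x80400));

    /* Broken: a domU grant mapped at this dom0 pfn is backed by some
     * arbitrary machine frame, so pfn << PAGE_SHIFT is the wrong bus
     * address; ballooning and page sharing break the identity in the
     * same way. */
    printf("foreign page: dma addr 0x%lx (wrong)\n", dma_addr_1to1(0x80500));
    return 0;
}
```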
  13. UNHAPPY
  14. The alternative: SWIOTLB (diagram): Linux's DMA ops layer sits between the driver's virtual/physical addresses and the device, supplying machine addresses for device DMA.
  15. The alternative: SWIOTLB ● use the memory_exchange_and_pin hypercall ○ create a contiguous buffer in machine memory ○ retrieve the machine address of the buffer ● introduce an additional memcpy ● remove the need for the 1:1 workaround
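A rough sketch of the bounce-buffer idea follows. It is a user-space model, not the real swiotlb-xen code: the pool address and all identifiers are invented, and the pool setup is only described in a comment. The point is the data flow stated on the slide: the pool is machine-contiguous and its machine address is known (via the memory_exchange_and_pin hypercall), so copying data into it yields an address the device can use, at the cost of one extra memcpy per transfer.

```c
/* Toy sketch of a swiotlb-style bounce buffer for Xen on ARM
 * (illustrative only, not the real swiotlb-xen implementation). */
#include <stdio.h>
#include <string.h>

#define POOL_SIZE 4096

/* Bounce pool: in the real design this region would be exchanged for
 * machine-contiguous memory at boot using the memory_exchange_and_pin
 * hypercall, which also reports its machine address. Here we pretend. */
static char bounce_pool[POOL_SIZE];
static const unsigned long bounce_pool_machine_addr = 0x91000000; /* invented */

/* "Map" a buffer for DMA: copy it into the machine-contiguous pool and
 * return the pool's machine address for the device to use. */
static unsigned long dma_map_bounced(const void *buf, size_t len)
{
    memcpy(bounce_pool, buf, len);      /* the additional memcpy */
    return bounce_pool_machine_addr;    /* safe to hand to the device */
}

int main(void)
{
    char payload[] = "packet data";
    unsigned long bus = dma_map_bounced(payload, sizeof(payload));
    printf("device can DMA from machine address 0x%lx\n", bus);
    return 0;
}
```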
  16. STILL UNHAPPY
  17. SWIOTLB: the improved version. Pin and unpin hypercalls: ● dynamically retrieve P2M mappings ● pin a mapping for DMA ● remove the additional memcpy. (Diagram: map_page passes the pfn to XENMEM_pin, which pins the mapping and returns the mfn.)
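The map_page flow on this slide, modelled as another toy sketch: XENMEM_pin is the hypercall named on the slide, but the wrapper below and all data are stand-ins. map_page translates the guest pfn to an mfn and pins the mapping so it cannot change while DMA is in flight, and no bounce copy is needed; unmap_page releases the pin. The next slide explains why the bookkeeping this requires turned out to be expensive.

```c
/* Toy sketch of the pin/unpin variant of swiotlb-xen (illustrative only). */
#include <stdio.h>

#define PAGE_SHIFT 12
#define NR_GUEST_PAGES 4

/* Stand-in for the p2m; in reality only Xen knows these mappings. */
static const unsigned long p2m[NR_GUEST_PAGES] = { 0x80400, 0x91001, 0x91002, 0x80403 };
static int pinned[NR_GUEST_PAGES];

/* Stand-in for the XENMEM_pin hypercall: return the mfn behind a gfn and
 * pin the mapping so Xen will not move the page during DMA. */
static unsigned long xenmem_pin(unsigned long gfn)
{
    pinned[gfn] = 1;
    return p2m[gfn];
}

static void xenmem_unpin(unsigned long gfn)
{
    pinned[gfn] = 0;
}

/* map_page: translate and pin, then hand the machine address to the
 * device directly -- no bounce buffer, no extra memcpy. */
static unsigned long map_page(unsigned long gfn)
{
    return xenmem_pin(gfn) << PAGE_SHIFT;
}

static void unmap_page(unsigned long gfn)
{
    xenmem_unpin(gfn);
}

int main(void)
{
    unsigned long bus = map_page(1);
    printf("device DMAs to 0x%lx (pinned=%d)\n", bus, pinned[1]);
    unmap_page(1);
    return 0;
}
```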
  18. SWIOTLB: the “improved” version ● Linux rbtree maintenance is expensive ● too many uncached address translations in Xen ○ guest virtual to machine ○ guest physical to machine. Net effect: CPU utilization increases.
  19. NOT AN IMPROVEMENT
  20. SWIOTLB: the compromise ● keep the dom0 1:1 workaround ○ dom0 without ballooning and page sharing is the default configuration in XenServer x86 today ● use the swiotlb only to handle DMA involving foreign grants ○ we already know the p2m mappings of grants ■ no need for pin and unpin hypercalls ○ can take shortcuts: avoid many tree lookups ○ tree lookups are much faster ○ avoidable with IOMMU support
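The compromise boils down to a fast path and a slow path, sketched below as one more toy user-space model; the foreign-page check and every constant are invented, and the real logic lives in Linux's swiotlb-xen. Pages dom0 owns keep relying on the 1:1 mapping, so their DMA address is simply their physical address, while data in foreign grant mappings is bounced through the machine-contiguous pool, whose machine address is already known, so no pin/unpin hypercalls are required.

```c
/* Toy sketch of the swiotlb-xen compromise on ARM (illustrative only). */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT 12
#define POOL_SIZE  4096

static char bounce_pool[POOL_SIZE];
static const unsigned long bounce_pool_machine_addr = 0x91000000; /* invented */

/* Stand-in check: is this dom0 pfn actually a foreign grant mapping?
 * Dom0 already knows this when it maps the grant, so the real check is
 * cheap and needs no hypercall. The threshold here is made up. */
static bool is_foreign_grant(unsigned long pfn)
{
    return pfn >= 0x80500;
}

static unsigned long dma_map(unsigned long pfn, const void *buf, size_t len)
{
    if (!is_foreign_grant(pfn))
        /* Fast path: dom0's own pages keep the 1:1 mapping. */
        return pfn << PAGE_SHIFT;

    /* Slow path: bounce foreign-grant data through the machine-contiguous
     * pool, avoiding pin/unpin hypercalls and most expensive lookups. */
    memcpy(bounce_pool, buf, len);
    return bounce_pool_machine_addr;
}

int main(void)
{
    char pkt[] = "frame from a domU";
    printf("local page:   0x%lx\n", dma_map(0x80400, NULL, 0));
    printf("foreign page: 0x%lx\n", dma_map(0x80600, pkt, sizeof(pkt)));
    return 0;
}
```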
  21. SWIOTLB: the compromise. Testing platform: ● 1.5GHz quad-core Cortex A15 ● 1 Gbit link. Benchmark results: ● same network throughput as native (line rate) ● < 2% CPU usage increase
  22. THAT’S BETTER
  23. SWIOTLB: where to find it. The patches (swiotlb-xen v8): http://marc.info/?l=linux-kernel&m=138203180707683&w=2 The kernel tree: git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git (branch swiotlb-xen-8)
  24. Xen 4.5+ ● IOMMU support in Xen ● device assignment ● UEFI booting ● ACPI support
  25. DEMO
  26. Questions?
