Virtualization: The Xen Approach

Speaker notes:
  • Shadow-mode drawbacks: at least double the number of faults; extra memory requirements; hiding which machine pages are in use may interfere with page colouring, large TLB entries, etc.
  • Writeable page tables must guard against RaW (read-after-write) hazards.
  • Typically two pages are writeable at a time: one in the other page table.

    1. Virtualization: The Xen Approach
    2. Xen: Paravirtualization (References and Sources)
       • Paul Barham et al., "Xen and the Art of Virtualization," Symposium on Operating Systems Principles (SOSP '03), October 19-22, 2003, Bolton Landing, New York.
       • Presentation by Ian Pratt, University of Cambridge Computer Laboratory, available at http://www.cl.cam.ac.uk/netos/papers/2005-xen-may.ppt
    3. Xen: Structure
       • Employs a paravirtualization strategy
         • Deals with machine architectures that cannot be virtualized
         • Requires modifications to the guest OS
         • Allows optimizations
       • "Domain 0"
         • Has special access to the control interface for platform management
         • Hosts the back-end device drivers
       • Xen VMM
         • Entirely event-driven
         • No internal threads
       [Figure: Xen 3.0 architecture]
    4. MMU Virtualization: Shadow-Mode MMU
       [Diagram: the guest OS reads and writes its own virtual -> physical page tables; the VMM maintains the virtual -> machine shadow tables that the hardware walks, propagating guest updates into the shadows and accessed/dirty bits back to the guest.]
    5. MMU Virtualization: Direct-Mode MMU
       [Diagram: the guest OS reads the virtual -> machine page tables directly; guest writes are interposed on by the Xen VMM. A sketch of the shadow translation step follows.]
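    The two schemes differ in who produces the machine-level page-table entry. A minimal sketch, assuming a flat p2m[] array mapping guest-physical frames to machine frames (p2m, shadow_pte, and PTE_FLAGS_MASK are illustrative names, not Xen internals): in shadow mode the VMM derives each hardware-visible entry from the guest's virtual -> physical entry, whereas in direct mode the guest constructs the virtual -> machine entry itself and Xen only validates it.

        #include <stdint.h>

        typedef uint64_t pte_t;
        #define PTE_FLAGS_MASK 0xFFFULL   /* low-order permission/status bits */

        extern uint64_t p2m[];            /* guest-physical frame -> machine frame */

        /* Shadow mode: called by the VMM when the guest updates one of its
         * own (virtual -> physical) page-table entries; returns the
         * (virtual -> machine) entry installed in the shadow table that
         * the hardware MMU actually walks. */
        pte_t shadow_pte(pte_t guest_pte)
        {
            uint64_t gpfn  = guest_pte >> 12;            /* guest-physical frame */
            uint64_t mfn   = p2m[gpfn];                  /* backing machine frame */
            uint64_t flags = guest_pte & PTE_FLAGS_MASK; /* permissions pass through */
            return (mfn << 12) | flags;
        }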
    6. Queued Update Interface (Xen 1.2)
       [Diagram: the guest reads the virtual -> machine page tables directly; guest writes are queued and submitted to the Xen VMM, which validates them before applying them to the MMU. A batching sketch follows.]
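    The queued update interface amortizes the cost of trapping on every page-table write. The sketch below uses struct mmu_update and the HYPERVISOR_mmu_update hypercall from Xen's public interface (xen/include/public/xen.h); the queue management around the hypercall is an illustrative assumption, not actual guest-kernel code.

        #include <stdint.h>
        #include <xen/xen.h>              /* struct mmu_update, DOMID_SELF */

        #define QUEUE_LEN 128

        static struct mmu_update queue[QUEUE_LEN];
        static unsigned int queued;

        static void flush_pte_updates(void);

        /* Queue one page-table write instead of trapping on every store. */
        static void queue_pte_update(uint64_t pte_machine_addr, uint64_t new_val)
        {
            queue[queued].ptr = pte_machine_addr;  /* machine address of the PTE */
            queue[queued].val = new_val;           /* new virtual -> machine entry */
            if (++queued == QUEUE_LEN)
                flush_pte_updates();
        }

        /* One hypercall lets Xen validate and apply the whole batch. */
        static void flush_pte_updates(void)
        {
            if (queued)
                HYPERVISOR_mmu_update(queue, queued, NULL, DOMID_SELF);
            queued = 0;
        }

    Batching matters because each hypercall is a protection-domain crossing; deferring the flush (e.g., until a TLB flush or context switch) keeps the common path trap-free.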
    7. Writeable Page Tables: 1 - Write Fault
       [Diagram: the first guest write to a page-table page faults into the Xen VMM; guest reads still go directly to the virtual -> machine tables.]
    8. Writeable Page Tables: 2 - Unhook
       [Diagram: Xen unhooks the page-table page from the active page table (marked X) and lets subsequent guest writes hit it directly.]
    9. Writeable Page Tables: 3 - First Use
       [Diagram: the first use of a translation through the unhooked page faults into Xen.]
    10. Writeable Page Tables: 4 - Re-hook
       [Diagram: Xen validates the modified entries and re-hooks the page into the active page table. A state-machine sketch of the whole cycle follows.]
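    Slides 7-10 describe a two-state cycle per page-table page. A state-machine sketch, with hypothetical helper names standing in for Xen-internal operations (unhook_from_parent, validate_entries, etc. are illustrative):

        #include <stdint.h>

        enum pt_state { PT_HOOKED_RO, PT_UNHOOKED_RW };

        struct pt_page {
            enum pt_state state;  /* current state of this page-table page */
            uint64_t      mfn;    /* machine frame holding the page table */
        };

        /* Hypothetical helpers standing in for Xen-internal operations. */
        void unhook_from_parent(struct pt_page *pt);  /* clear parent entry (the "X") */
        void rehook_into_parent(struct pt_page *pt);
        void remap_writable(uint64_t mfn);
        void remap_readonly(uint64_t mfn);
        void validate_entries(uint64_t mfn);          /* check every PTE is safe */

        /* Steps 1-2: the first guest write faults; Xen unhooks the page from
         * the active page table and maps it read-write, so subsequent guest
         * writes proceed with no further traps. */
        void on_write_fault(struct pt_page *pt)
        {
            unhook_from_parent(pt);
            remap_writable(pt->mfn);
            pt->state = PT_UNHOOKED_RW;
        }

        /* Steps 3-4: the first translation through the unhooked table faults;
         * Xen validates the modified entries, remaps the page read-only, and
         * re-hooks it into the active page table. */
        void on_use_fault(struct pt_page *pt)
        {
            validate_entries(pt->mfn);
            remap_readonly(pt->mfn);
            rehook_into_parent(pt);
            pt->state = PT_HOOKED_RO;
        }

    The RaW-hazard caution from the speaker notes is handled by the unhook itself: any use of a not-yet-validated entry faults instead of being silently honoured by the hardware.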
    11. I/O
       • Safe hardware interfaces
         • I/O spaces restrict access to I/O registers
         • Isolated device drivers: each driver is isolated from the VMM in its own "domain" (i.e., VM); domains communicate via device channels
       • Unified interfaces
         • Common interface for a group of similar devices
         • Raw device interface exposed where needed (e.g., for specialized devices like sound/video)
       • Request/response is separated from event notification
       • I/O descriptor rings (a layout sketch follows this slide)
         • Used to communicate I/O requests and responses
         • For bulk data-transfer devices (DMA, network), buffer space is allocated out of band by the guest OS
         • Each descriptor carries a unique identifier to allow out-of-order processing
         • Multiple requests can be added before a hypercall is made to begin processing
         • Event notification can be masked by the guest OS at its convenience
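    A layout sketch of an I/O descriptor ring, in the spirit of xen/include/public/io/ring.h; the exact structure below (field names, separate req/rsp arrays, RING_SIZE) is an illustrative simplification, not Xen's real layout:

        #include <stdint.h>

        #define RING_SIZE 32   /* power of two, so indices wrap with a mask */

        struct io_request {
            uint64_t id;        /* unique id: permits out-of-order responses */
            uint64_t buf_gref;  /* grant reference of the out-of-band buffer */
            uint32_t op;        /* read, write, transmit, ... */
        };

        struct io_response {
            uint64_t id;        /* copied from the matching request */
            int32_t  status;
        };

        /* One page shared between the guest front end and the driver domain. */
        struct io_ring {
            uint32_t req_prod, req_cons;  /* request producer/consumer indices */
            uint32_t rsp_prod, rsp_cons;  /* response producer/consumer indices */
            struct io_request  req[RING_SIZE];
            struct io_response rsp[RING_SIZE];
        };

        /* The guest can queue several requests, then issue a single
         * hypercall/event to kick the back end; response notifications
         * can be masked while the guest polls. */
        void push_request(struct io_ring *r, const struct io_request *rq)
        {
            r->req[r->req_prod & (RING_SIZE - 1)] = *rq;
            __sync_synchronize();  /* publish the slot before the index */
            r->req_prod++;
        }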
    12. Device Channels
       • Connect "front-end" device drivers in the guest OS with the "native" device driver
       • Each channel is an I/O descriptor ring
       • Buffer page(s) are allocated by the guest OS and "granted" to Xen
       • Buffer page(s) are pinned to prevent page-out during the I/O operation
       • Pinning allows zero-copy data transfer (a granting sketch follows)
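    A sketch of how a front end might grant a buffer page to the driver domain, using the Linux guest grant-table call gnttab_grant_foreign_access(); the surrounding function, backend_domid, and the error handling are illustrative assumptions:

        #include <linux/mm.h>          /* alloc_page, page_to_pfn */
        #include <linux/errno.h>
        #include <xen/grant_table.h>   /* gnttab_grant_foreign_access, grant_ref_t */

        int share_io_buffer(domid_t backend_domid, struct page **out_page,
                            grant_ref_t *out_ref)
        {
            struct page *page = alloc_page(GFP_KERNEL); /* allocated by the guest */
            int ref;

            if (!page)
                return -ENOMEM;

            /* Grant the back end access to this frame; the page stays pinned
             * for the duration of the I/O, enabling zero-copy transfer. */
            ref = gnttab_grant_foreign_access(backend_domid,
                                              page_to_pfn(page),
                                              0 /* read-write */);
            if (ref < 0) {
                __free_page(page);
                return ref;
            }

            *out_page = page;
            *out_ref  = ref;
            return 0;  /* the ref is passed to the back end over the device channel */
        }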
    13. System Performance
       • Benchmark suites
         • SPEC INT2000: compute-intensive workload
         • Linux build time: extensive file I/O, scheduling, memory management
         • OSDB-OLTP: transaction-processing workload with extensive synchronous disk I/O
         • SPEC WEB99: web-like workload (file and network traffic)
       • Fair comparison?
       [Chart: relative scores (0.0 to 1.1, normalized) for SPEC INT2000 (score), Linux build time (s), OSDB-OLTP (tup/s), and SPEC WEB99 (score), for each benchmark running on Linux (L), Xen (X), VMware Workstation (V), and UML (U)]
    14. I/O Performance
       • Systems
         • L: Linux
         • IO-S: Xen using I/O-space access
         • IDD: Xen using isolated device drivers
       • Benchmarks
         • Linux build time: file I/O, scheduling, memory management
         • PM: file-system benchmark
         • OSDB-OLTP: transaction-processing workload with extensive synchronous disk I/O
         • httperf: static document retrieval
         • SPEC WEB99: web-like workload (file and network traffic)
