1. Virtual Machines: Background (adapted from Silberschatz)

2. Virtual Machines
   - A virtual machine takes the layered approach to its logical conclusion: it treats the hardware and the operating system kernel as though they were all hardware.
   - A virtual machine provides an interface identical to the underlying bare hardware.
   - For example, the operating system creates the illusion of multiple processes, each executing on its own processor with its own (virtual) memory.

3. Virtual Machines (cont.)
   - The resources of the physical computer are shared to create the virtual machines.
     - CPU scheduling can create the appearance that each user has their own processor.
     - Spooling and a file system can provide virtual card readers and virtual line printers.
     - A normal user time-sharing terminal serves as the virtual machine operator's console.

4. System Models (figure: non-virtual machine vs. virtual machine)

5. Advantages/Disadvantages of Virtual Machines
   - The virtual-machine concept provides complete protection of system resources, since each virtual machine is isolated from all other virtual machines. What might be bad about this?
     - This isolation, however, permits no direct sharing of resources.
   - A virtual-machine system is a perfect vehicle for operating-systems research and development: system development is done on the virtual machine rather than on a physical machine, and so does not disrupt normal system operation.
   - The virtual-machine concept is difficult to implement, due to the effort required to provide an exact duplicate of the underlying machine.

6. Java Virtual Machine
   - Compiled Java programs are platform-neutral bytecodes executed by a Java Virtual Machine (JVM).
   - The JVM consists of:
     - class loader
     - class verifier
     - runtime interpreter
   - Just-In-Time (JIT) compilers increase performance.
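The runtime interpreter named above can be sketched as a dispatch loop over an operand stack. This is a minimal illustration, not the real JVM: the opcode names below are invented, and the actual JVM bytecode set, class loading, and verification are far richer.

```python
# Minimal sketch of a stack-based bytecode interpreter, in the spirit of
# the JVM's runtime interpreter. Opcodes here are invented for illustration.

def interpret(bytecode):
    """Execute a list of (opcode, operand) pairs on an operand stack."""
    stack = []
    for op, arg in bytecode:
        if op == "push":            # push a constant onto the stack
            stack.append(arg)
        elif op == "add":           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":           # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4
program = [("push", 2), ("push", 3), ("add", None),
           ("push", 4), ("mul", None)]
```

A JIT compiler would instead translate such bytecode into native machine code once, trading translation cost for repeated execution speed.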
7. Java Virtual Machine (figure)

8. An Overview of Virtual Machine Architectures (Smith and Nair)

9. Definitions
   - Instruction Set Architecture (ISA)
     - Precise specification of the interface between hardware and software.
   - Application Binary Interface (ABI)
     - Defines how an application can work with a platform at the binary level. (Contrast with an API.)
     - Includes the user ISA, the system call interface, etc.
     - Suppose an ABI is changed. Would a recompile suffice? Would source changes be needed?

10. Virtualization
    - A VMM is also known as a hypervisor.
    - (Figure: an application and OS running directly on hardware via the ISA, versus a guest application and OS running on a virtual ISA provided by a VMM, which itself runs on the host hardware's ISA.)

11. Virtual Machine Uses
    - Emulation
      - One ISA can be used to emulate another.
      - Provides cross-platform portability.
    - Optimization
      - Emulators can optimize as they emulate.
      - Can also optimize same-ISA to same-ISA.
    - Replication
      - A single physical machine can be replicated, providing isolation between the VMs.
    - Composition
      - Two virtual machines can be composed, combining the functionality of each.

12. Process vs. System
    - The meaning of "machine" depends on perspective.
      - To a process, the machine is the system calls, libraries, etc.
        - Already abstract.
      - The entire system also runs on a machine.
        - Includes the ISA, actual devices, etc.
      - Other kinds of machines?
    - As there are two perspectives, there are two kinds of virtual machines: process and system.
      - A process virtual machine can support an individual process.
      - A system virtual machine can run a complete OS plus environment.

13. Process vs. System
    - (Figure: a process VM, e.g. Java VMs running Java programs alongside native apps on x86 Linux, versus a system VM, e.g. a VMM on x86 Linux hosting Windows and W32 apps. Examples?)

14. Process VMs
    - Multiprogramming
      - A process has the illusion of having the whole machine to itself.
    - Emulation
      - Interpreted. (Define.)
      - Translated. (Define.)
      - What are the relative merits?
    - Dynamic optimizers
      - Especially useful with some kind of profile-directed translation.
    - High-level language (HLL) VMs
      - A high-level language is compiled to an intermediate language.
      - The VM then runs the intermediate language.
      - Java is an example: interpreted or translated?
15. System VMs
    - Same ISA
      - "Classic" (Define. Pros/cons?)
        - VMM built directly on top of the hardware.
        - Most efficient, but requires wiping the slate clean.
        - Requires device drivers in the VMM.
      - Hosted (Define. Pros/cons?)
        - VMM built on top of an existing OS.
        - Most convenient.
        - Device drivers are supplied by the host OS; the VMM uses facilities provided by the host OS.
    - Different ISA
      - Whole-system VMs: emulation
        - The ISA is not the same, so everything must be emulated.
      - Co-designed VMs: optimization
        - Hardware designed to support VMs.
        - Provides a clean design for virtualization.
        - Can be significantly more efficient.

16. Virtualization
    - The state of a machine must be maintained.
      - Physical machine: latches, flip-flops, etc.
      - Virtual machine: a combination of physical machine state and state emulated in software using RAM, etc.
    - At certain points in execution, such as a trap, the state of the machine must be "materialized".
      - Not trivial, due to the complex hardware techniques used to provide high performance.
      - This ability to materialize the state is termed "preciseness".
    - Three aspects of virtualization:
      - State: registers and memory.
      - Instructions: may involve emulation.
      - State materialization: when exceptions occur.

17. Process VM Virtualization
    - Multiprogramming
      - State: mapped 1:1.
      - Instructions: native.
      - State materialization: provided by hardware.
    - Dynamic translation
      - State: registers mapped to host registers as available (overflow to memory); memory mapped to host memory.
      - Instructions: emulated.
      - State materialization: provided by VM software.
    - HLL VMs
      - State: mapped to host resources as available.
      - Instructions: emulated, JIT compiled.
      - State materialization: provided by VM software.

18. System VM Virtualization
    - "Classic" VMs
      - State: mapped 1:1, except for privileged registers.
      - Instructions: native, except trapping for privileged instructions.
      - State materialization: provided by hardware.
    - Whole-system VMs
      - State: mapped to available memory, not 1:1.
      - Instructions: emulated.
      - State materialization: provided by VM software.
    - Co-designed VMs
      - State: mapped 1:1.
      - Instructions: block-level translated.
      - State materialization: provided by a hardware/VM software combination.

19. Taxonomy
    - Process
      - Same ISA: multiprogramming, dynamic optimization.
      - Different ISA: dynamic translators, HLL VMs.
    - System
      - Same ISA: "classic" OS VMs (IBM), hosted VMs.
      - Different ISA: whole-system VMs, co-designed VMs.

20. Key Ideas
    - VMs can support an individual process only, or can support a whole OS.
    - A useful taxonomy can be constructed along two axes:
      - process or system
      - same ISA or different ISA

21. Virtualizing I/O Devices on VMware Workstation's Hosted VMM

22. Virtualizing the PC Platform
    - Several hurdles:
      - Non-virtualizable processor
        - Some privileged instructions fail silently. (Why is this a problem? What's the solution?)
      - PC hardware diversity
        - Why is this problematic for a "classic" VM?
      - Pre-existing PC software
        - Must stay compatible.
    - To address these, VMware uses a hosted VM, not a "classic" VM.

23. Two Worlds
    - VMApp runs in the host, using the VMDriver host kernel component to establish the VMM.
    - The CPU is thus executing in either the host world or the virtual world, using VMDriver to switch worlds.
    - World switches are expensive, since both user and system state must be switched.

24. Architecture (figure: host kernel, VMApp, VMDriver, VMNet)

25. Virtualizing the NIC
    - I/O port operations by the guest OS must be intercepted by the VMM.
      - They must then either be processed in the VMM (to maintain the virtual state),
      - or executed in the host world. (When must it do which?)
    - Send operations start as a sequence of operations on virtual I/O ports.
      - Upon finalization of the send, the VMApp issues a host OS syscall to the VMNet driver, which passes the packet on to the real NIC.
      - Finally, a virtual IRQ must be raised to signal completion.
    - Receive operations operate in reverse.
      - The VMApp executes a select() syscall on the possible sources.
      - It reads the packet and forwards it to the VMM, which raises a virtual IRQ.
26. Details
    - Send
      - Guest OS executes an out to an I/O port.
      - Trap to VMDriver.
      - Pass to VMApp.
      - Syscall to VMNet.
      - Pass to the actual NIC driver.
    - Receive
      - Hardware IRQ.
      - The actual NIC delivers to the VMNet driver.
      - The VMNet driver causes VMApp to return from select().
      - VMApp copies the packet to VM memory.
      - VMApp asks the VMM to raise a virtual IRQ.
      - The guest OS performs port operations to read the data.
      - Trap to VMDriver.
      - VMApp returns from ioctl() to raise the IRQ.
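The send path above can be sketched as a chain of layers, each recording the stage it performs. The component names (VMDriver, VMApp, VMNet) follow the paper, but the functions themselves are purely illustrative, not VMware's API.

```python
# Illustrative sketch of the virtualized NIC send path: each layer is a
# function that records its stage in a trace and hands the packet down.

def guest_send(packet, trace):
    trace.append("guest: out to virtual I/O port")
    vmdriver_trap(packet, trace)

def vmdriver_trap(packet, trace):
    trace.append("VMDriver: trap, switch to host world")
    vmapp_forward(packet, trace)

def vmapp_forward(packet, trace):
    trace.append("VMApp: syscall to VMNet driver")
    vmnet_deliver(packet, trace)

def vmnet_deliver(packet, trace):
    trace.append("VMNet: pass to physical NIC driver")

trace = []
guest_send(b"hello", trace)
```

The cost the paper targets is visible here: every packet crosses every layer, including an expensive world switch at the VMDriver step.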
27. Reducing Network Virtualization Overheads
    - Handling I/O ports in the VMM
      - Many accesses don't involve actual I/O.
      - Let the VMM maintain the state, avoiding a world switch.
    - Send combining
      - If the data rate is high, queue up packets and send them in a group.
    - IRQ notification
      - Use a shared memory bitmap rather than requiring VMApp to call select() when an IRQ is received on the host system.
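Send combining amortizes the world switch over a batch. A minimal sketch, with an invented threshold and API (the real trigger is the observed packet rate, not a fixed count):

```python
# Sketch of "send combining": queue outgoing packets and flush them to the
# host in one world switch instead of paying one switch per packet.

class SendCombiner:
    def __init__(self, flush_threshold=3):
        self.queue = []
        self.flush_threshold = flush_threshold
        self.world_switches = 0       # count of (expensive) host transitions

    def send(self, packet):
        self.queue.append(packet)
        if len(self.queue) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.queue:
            self.world_switches += 1  # one switch covers the whole batch
            self.queue.clear()

nic = SendCombiner()
for i in range(9):
    nic.send(f"pkt{i}")
# 9 packets cost 3 world switches instead of 9
```

The trade-off is latency: a queued packet waits until the batch flushes, which is why combining is only engaged when the data rate is already high.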
28. Performance Enhancements
    - Reducing CPU virtualization overhead
      - Find operations on the interrupt controller that have memory semantics and replace them with MOV operations, which do not require intervention by the VMM.
      - Apparently requires dynamic binary translation.
    - Modifying the guest OS
      - Eliminate idle-task page table switching, which is unnecessary since the idle task's pages are mapped in every process page table.
      - Run the idle task with the page table of the last process.
      - What would happen if the idle task had a bug and wrote to some random addresses?

29. Performance Enhancements (cont.)
    - Creating a custom virtual device
      - Virtualizing a real device is somewhat inefficient, since the interface to such devices is optimized for real devices, not virtual ones.
      - Designing a custom virtual device can reduce expensive operations.
      - The disadvantage is that a new device driver must be written in the guest OS for this virtual device.
    - Modifying the host OS
      - The VMNet driver allocates kernel memory (an sk_buff), then copies from VMApp into the sk_buff.
      - The copy can be eliminated by using memory from the VM's physical memory.
    - Bypassing the host OS
      - The VMM uses its own drivers, rather than going through the host OS. (Note that going through the host OS is using a kind of process VM provided by the host OS.)
      - The disadvantage is that you have to write your own VMM driver for every supported real device.

30. Summary
    - The main goal is to develop some understanding of the performance issues of hosted system VMs.

31. Question
    - If the VMM overwrites privileged instructions with a brk instruction, how does it know what instruction used to be there?

32. Xen and the Art of Virtualization (a (bad) play on "Zen and the Art of Motorcycle Maintenance")

33. Motivation
    - Server farm scenario
      - Multiple applications installed on the machines.
      - Different customers.
      - (What's "admission control"?)
    - Current approaches
      - Allow users to install and run apps.
        - Configuration interaction between apps (versions of Java jars, shared libraries, etc.) can lead to compatibility problems requiring time-consuming system administration to solve.
        - The behavior of one app can impact the performance of another; performance isolation is needed.
      - One approach is QoS.
        - Extend the OS to provide QoS to apps.
        - (What's the difference between QoS and real-time? Between QoS and performance isolation?)

34. Use VMs
    - Instead, use multiple VMs, one VM per app.
      - Each app can configure the entire OS exactly as it requires.
      - It is relatively easier to implement algorithms at the VM level to isolate the performance behavior of different apps.
    - Requirements for successful partitioning:
      - Isolation (does VMware provide this?)
      - Accommodating heterogeneity
      - Good performance
    - To avoid the performance penalties of VMs like VMware, use paravirtualization.

35. Design Principles
    - Support for unmodified binaries is essential.
      - Must virtualize all features required by existing ABIs.
    - Support for full multi-application OSs is important (not just process VMs).
      - Complex configurations may involve multiple processes and should be configurable within a single VM.
    - Paravirtualization is necessary to obtain high performance and strong resource isolation.
      - For example, virtualizing page tables can result in many expensive traps.
    - Even on ISAs designed for virtualization, completely hiding the virtualization from the guest OS risks correctness and performance.
      - For example, the VM should know real time (and not just virtual time) to handle things like timeouts.
    - Contrast with the Denali security model.
      - Denali uses separate namespaces.
      - Xen uses a hypervisor.

36. The VM Interface: Memory Management
    - Paging
      - Xen lives in the top 64 MB of every address space, avoiding a TLB flush on hypervisor transitions.
      - Guest OSs update the actual hardware page tables through Xen, which improves performance (but makes them aware of virtualization).
    - Segmentation
      - Guests cannot install fully privileged segment descriptors.

37. The VM Interface: CPU and Device I/O
    - CPU
      - Protection
        - The guest OS must run at lower privilege. Since rings 1-2 are seldom used, the guest OS runs in ring 1.
      - Exceptions
        - Guest OSs must register handlers with Xen. These are generally identical to the originals.
        - Safety is ensured by making sure a handler doesn't execute in ring 0.
      - System calls
        - "Fast" handlers may be registered to avoid going through ring 0; control goes directly from ring 3 to ring 1.
        - Does this change the ABI?
      - Page faults
        - The page fault handler must be modified, since the faulting address is in a privileged register.
        - The technique is for Xen to write the address to a location in the stack frame.
    - Device I/O
      - Network, disk, etc. are all replaced with a special, buffer-based event mechanism.

38. Porting
    - XP directly accessed PTEs; Linux used macros. (Why is this significant?)

39. Control and Management
    - Separation of policy from mechanism.
    - Microkernel-like design:
      - The basic control mechanism is provided by the hypervisor through a control interface.
      - Policies are implemented by a special, distinguished guest OS instance (a domain).
        - Scheduling parameters, physical memory allocations, domain creation/destruction, and creation/deletion of virtual network interfaces and block devices.

40. Architecture (figure)

41. Details (figure)
42. Hypercalls and Events
    - Hypercalls
      - From a domain to Xen.
      - Explicit calls into the hypervisor by the guest OS, used for things like updating the hardware page tables.
    - Events
      - From Xen to a domain.
      - Implemented as a bitmask plus a registered handler.
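The event side can be sketched as a pending bitmask written by the hypervisor and scanned by the guest. The class and bit assignments below are invented for illustration; Xen's real mechanism uses shared-memory bitmaps and upcalls.

```python
# Sketch of Xen-style event delivery: the hypervisor sets a bit in a
# per-domain pending bitmask, and the domain's registered handler runs
# when the domain next dispatches pending events.

class EventChannel:
    def __init__(self):
        self.pending = 0            # shared bitmask, written by "Xen"
        self.handlers = {}          # bit -> guest callback

    def register(self, bit, handler):
        self.handlers[bit] = handler

    def raise_event(self, bit):     # hypervisor side
        self.pending |= 1 << bit

    def dispatch(self):             # guest side: scan and clear pending bits
        fired = []
        for bit, handler in self.handlers.items():
            if self.pending & (1 << bit):
                self.pending &= ~(1 << bit)
                fired.append(handler())
        return fired
```

Because notification is just a bit, the hypervisor can raise many events cheaply and the guest batches the handling, in contrast to delivering one virtual interrupt per event.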
43. Data Transfer
    - The hypervisor's presence adds another layer, so it is imperative to minimize overhead.
    - For resource accountability:
      - Minimize the work needed to demultiplex data.
        - That is, figure out as quickly as possible which domain it goes to.
      - Memory committed to I/O comes from the relevant domains.
        - Minimizes cross-talk.

44. I/O Rings
    - The buffers are separate from the ring. How are the pointers shared? How does reordering work? (NBS.)
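An I/O ring can be sketched as a fixed-size circular array of descriptors with a producer pointer advanced by one side and a consumer pointer advanced by the other; the descriptors refer to separately allocated buffers. Sizes and names here are arbitrary, and the real Xen rings pair request and response producers in shared memory.

```python
# Sketch of a fixed-size descriptor ring shared between a guest and the
# hypervisor. The producer and consumer each advance only their own index,
# so no locking is needed for this single-producer/single-consumer case.

class IORing:
    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size
        self.prod = 0               # producer index (e.g. guest requests)
        self.cons = 0               # consumer index (e.g. Xen)

    def produce(self, desc):
        if self.prod - self.cons == self.size:
            return False            # ring full
        self.slots[self.prod % self.size] = desc
        self.prod += 1
        return True

    def consume(self):
        if self.cons == self.prod:
            return None             # ring empty
        desc = self.slots[self.cons % self.size]
        self.cons += 1
        return desc
```

Keeping buffers outside the ring is what lets responses complete out of order: a descriptor identifies its request, so the consumer need not finish slots in the order they were produced.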
45. CPU Scheduling
    - Borrowed Virtual Time (BVT)
      - Work-conserving
        - Latency vs. throughput.
        - When would you want a non-work-conserving scheduler?
      - Fast dispatch (borrowing)
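The core of BVT can be sketched as follows: each domain accumulates virtual time as it runs, the scheduler always dispatches the runnable domain with the lowest effective virtual time, and a latency-sensitive domain "borrows" by warping its virtual time backwards to get dispatched sooner. The parameters and unit slices below are invented for illustration; real BVT also weights virtual-time advancement and limits warp duration.

```python
# Minimal sketch of Borrowed Virtual Time (BVT) scheduling.

class Domain:
    def __init__(self, name, warp=0):
        self.name = name
        self.vtime = 0      # virtual time accumulated while running
        self.warp = warp    # borrowing: subtracted while warped

    def effective_vtime(self):
        return self.vtime - self.warp

def schedule(domains, slices):
    """Run `slices` rounds; each round charges the chosen domain one unit."""
    order = []
    for _ in range(slices):
        d = min(domains, key=lambda d: d.effective_vtime())
        d.vtime += 1        # charge the domain for its slice
        order.append(d.name)
    return order
```

Because a warped domain is only dispatched earlier, not given extra total time, the scheme stays work-conserving while favoring latency-sensitive domains.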
46. Time and Timers
    - Guest OSs are made aware of real time, virtual time, and wall-clock time.
      - Real time: nanoseconds since boot; can be frequency-locked to an external source.
      - Virtual time: advances only when the guest OS is executing; used for scheduling by the guest OS.
      - Wall-clock time: an offset from real time. (When would it ever be adjusted?)
    - Xen-provided timers are used by the guest OS.
      - This solves one efficiency problem seen with VMware Workstation: a guest running XP causes the host to perform poorly, because the host must constantly deliver timer interrupts to XP for things like smooth transition animations (minimizing a window, etc.). Forcing the guest to use a Xen-provided timer would eliminate the need to virtualize these timer interrupts.

47. Virtual Address Translation
    - Virtual address translation
      - Handled by Xen, with batched updates.
      - Updates must be validated by Xen.
    - A type and reference count are associated with each frame.
      - The type is used to aid validation.
        - For example, a page table frame needs to be validated once, but not on later uses.
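The frame typing idea can be sketched with a small frame table. The types, the validation rule, and the method names below are simplified inventions; Xen's real type system covers page directories, page tables, segment descriptors, and writable pages, with stricter invariants.

```python
# Sketch of Xen-style frame typing: each machine frame carries a type and
# reference count, so a frame validated once as a page table need not be
# fully re-checked on later uses.

class Frame:
    def __init__(self):
        self.ftype = "none"       # e.g. "none", "writable", "page_table"
        self.refcount = 0
        self.validated = False

class FrameTable:
    def __init__(self, nframes):
        self.frames = [Frame() for _ in range(nframes)]
        self.validations = 0      # how many (expensive) full checks ran

    def pin_as_page_table(self, idx):
        f = self.frames[idx]
        if f.ftype == "writable" and f.refcount > 0:
            # a guest-writable frame could hold forged PTEs
            raise ValueError("frame is mapped writable; cannot be a page table")
        if not f.validated:
            self.validations += 1  # expensive: check every entry in the frame
            f.validated = True
        f.ftype = "page_table"
        f.refcount += 1
```

The reference count matters because a frame may be used by several page tables at once; it can only change type again when the count drops to zero.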
48. Physical Memory
    - Physical memory is reserved for each guest OS instance at creation time.
      - Provides strong isolation.
      - But no sharing. What would be the advantage of sharing?
    - The OS may use an additional table to give the illusion of contiguous physical memory.
    - The guest might need to know the hardware layout to optimize placement.

49. Network
    - Virtual interfaces (VIFs)
      - Two I/O rings.
      - Zero-copy.

50. Disks
    - Virtual block devices (VBDs). (Domain0 has direct access.)
    - Disk scheduling
      - The guest doesn't know the real layout.
      - Xen does some reordering.
        - (A bit of a violation of the policy/mechanism separation.)
    - Scheduling is round-robin over batched requests, then elevator.
      - There may also be reorder barriers.
      - (How well does this provide isolation?)

51. Performance

52. Relative Performance
    - Compared native Linux, XenoLinux, VMware Workstation 3.2, and User-Mode Linux (UML).
      - Test results for other systems could not be published.
    - Tests:
      - SPEC INT2000
      - Linux kernel build
        - On native Linux, 7% of CPU time is system time.
      - Open Source Database Benchmark (OSDB), Information Retrieval (IR)
      - OSDB, On-Line Transaction Processing (OLTP)
      - dbench (a file system benchmark)
      - SPEC WEB99 (application-level benchmark for Web servers, run on Apache)

53. Performance (figure)

54. Performance (figure)

55. Operating System Benchmarks
    - What does SMP stand for?
    - Why might SMP be slower?
    - Why are the highlighted results slower?
    - Why is signal handling faster for Xen?

56. Operating System Benchmarks (cont.)
    - Needs a hypercall.
    - Why do more processes need more time?
    - Why is the difference less significant with a bigger working set?

57. Operating System Benchmarks (cont.)
    - mmap and page faults require two transitions. (Why?)

58. Operating System Benchmarks (cont.)
    - Zero-copy.

59. Concurrent VMs
    - Run on a 2-CPU SMP machine.
    - Apache improves only 28% over the uniprocessor case.
    - Xen improves 9% over the uniprocessor case.
    - Why is it slightly better sometimes?

60. PostgreSQL
    - Scores running multiple PostgreSQL instances on native Linux are 25-35% lower, possibly due to SMP scalability issues plus poor use of the block cache.
    - Weights seem to have an effect in the Information Retrieval case, but no effect in the OLTP case, due to the many synchronous writes. Why synchronous writes?

61. Performance Isolation
    - Results are only 4% and 2% below the earlier ones.
      - Does this make sense?
    - (Table: four domains running, respectively, a fork bomb touching 3 GB of virtual memory; dd plus creating many files in large directories; SPEC WEB99; and OSDB-IR.)

62. Scalability of VMs
    - SPEC INT2000.
    - Native Linux identifies the workload as compute-bound and uses a 50 ms time slice. (Why does this matter?)
63. Future Work
    - Universal buffer cache with copy-on-write.
      - How might this be used?
    - Last-chance page cache (LPC)
      - "of non-zero length only when machine memory is undersubscribed."
      - Clean, evicted pages are added to the LPC.
      - If a page faults, check the LPC.
        - (Why only clean pages?)
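The LPC idea can be sketched as a small cache of otherwise-unused machine memory that holds clean pages after eviction, so a later fault may be served without a disk read. Only clean pages qualify because a clean page's contents also exist on disk, so losing a cache entry is always safe; a dirty page must be written out instead. The class, capacity policy, and method names below are invented for illustration.

```python
# Sketch of a "last chance" page cache (LPC): non-empty only when machine
# memory is undersubscribed, holding clean pages evicted by guests.

class LastChancePageCache:
    def __init__(self, spare_frames):
        self.capacity = spare_frames  # spare machine frames available
        self.cache = {}               # page id -> contents

    def evict(self, page_id, contents, dirty):
        """Called on eviction; returns True if the page entered the LPC."""
        if dirty or self.capacity == 0:
            return False              # dirty pages must go to disk instead
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # drop the oldest entry
        self.cache[page_id] = contents
        return True

    def fault(self, page_id):
        """Called on a page fault; a hit avoids a disk read."""
        return self.cache.pop(page_id, None)
```

Because every LPC entry is also on disk, the hypervisor can reclaim these frames at any moment without coordination with the guest.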
64. Key Ideas
    - A virtual ISA (paravirtualization) is better.
      - Better performance.
      - Allows VMs to be isolated from one another; one VM can't cause another to thrash, for instance.
      - Allows up to 100 OS instances.
      - Making the guest OS aware of virtualization improves correctness and performance.
    - Control and management of Xen itself is done from a guest OS, via a special interface.
    - Cherry picking?
      - Generally speaking, people choose tests that show their work in the best light.
      - This may be hard to tell in a complex situation.

65. Microkernels Meet Recursive Virtual Machines (Ford et al.)

66. Decomposition
    - Microkernels decompose functionality horizontally (mainly).
      - Monolithic services are separated horizontally.
      - And moved up one layer.
    - Stackable VMMs decompose functionality vertically.
      - Each layer supplies some functionality.

67. Fluke
    - Uses a nested process architecture.
      - Each process provides a VM to its children, possibly with additional functionality.
      - Different from the usual parent-child relationship in that children are completely contained within, and visible to, the parent.
      - This is necessary for the parent to act as a VM for its children.
    - Two APIs:
      - A low-level kernel API to the microkernel for basic manipulation.
      - High-level protocols to handle:
        - Parent interface
        - Process
        - MemPool
        - FileSystem
      - Nested VMs interact directly with the microkernel for the low-level API, but interact with the parent VM for the high-level protocols.
      - A parent VM uses interposition to add additional functionality; this is how stacking works.

68. Key Ideas
    - Implement a microkernel that allows process virtual machines to be stacked.
    - Each virtual machine is a user-level server.
    - Stacking occurs through process nesting.
    - Pass-through is used to avoid exponential behavior.
    - Mainly interesting for the ideas; performance is relatively poor, but may be improvable.