Virtual machine monitors face several challenges. They must be completely transparent: neither the user applications nor the operating systems running inside the virtual machines should be aware of the VMM's existence, nor should they be aware that they are sharing resources with other virtual machines. The VMM must completely isolate the software stacks of the different virtual machines. The VMM must run protected from all the software running in the virtual machines. And finally, it must present the virtual machines with an interface for accessing the hardware resources.
To deal with the problems described above on a 32-bit Intel architecture without virtualization support, the virtual machine monitor must run in a mode with more privileges than the modes in which the guest operating systems run. The technique used to solve all of these problems is called ring deprivileging: the guest operating systems are made to run at a lower privilege level (ring 1) while applications run at ring 3. This way, every time a guest operating system executes a privileged operation, a fault is generated because it is not running in supervisor mode, and the VMM is responsible for collecting these faults, interpreting them, and doing whatever is necessary according to the type of fault.
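The trap-and-emulate flow that ring deprivileging relies on can be sketched in a few lines. This is a toy simulation, not a real decoder: the opcode set, the vcpu structure, and the virtual interrupt flag are simplified stand-ins.

```c
#include <assert.h>

/* Minimal sketch of ring deprivileging: the guest OS runs at ring 1,
 * so privileged instructions fault, and the VMM emulates them against
 * virtual state. Opcodes and state are hypothetical simplifications. */
enum op { OP_CLI, OP_STI, OP_ADD };   /* CLI/STI are privileged */

struct vcpu {
    int cpl;                 /* current privilege level of the guest */
    int virtual_if;          /* the guest's *virtual* interrupt flag */
};

static int is_privileged(enum op o) { return o == OP_CLI || o == OP_STI; }

/* The VMM's fault handler: emulate the instruction on virtual state. */
static void vmm_emulate(struct vcpu *v, enum op o) {
    if (o == OP_CLI) v->virtual_if = 0;   /* mask only the *virtual* IF */
    if (o == OP_STI) v->virtual_if = 1;
}

/* Executing an instruction in the deprivileged guest. Returns 1 if it
 * trapped to the VMM, 0 if it ran directly on the hardware. */
static int guest_exec(struct vcpu *v, enum op o) {
    if (is_privileged(o) && v->cpl != 0) {  /* guest is at ring 1, not 0 */
        vmm_emulate(v, o);                  /* fault -> VMM emulates */
        return 1;
    }
    return 0;                               /* unprivileged: runs natively */
}
```

For example, a guest CLI executed at ring 1 traps and clears only the virtual interrupt flag, leaving the real interrupt masking under VMM control.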
The implementation of a virtual machine monitor is very complex because of the many problems that arise from the ring deprivileging technique. Here are some of the principal problems that a VMM implementation must deal with: The first hole of ring deprivileging is called ring aliasing. For example, if the PUSH instruction is executed with the segment register CS, the selector value, which includes the current privilege level, is pushed on the stack. Address-space compression refers to the challenges of protecting the portions of the virtual address space used by the VMM while supporting guest accesses to them. In either case, the VMM's integrity could be compromised if the guest can write to those portions; also, if the guest can read those portions, it could detect that it is running in a VM. Guest attempts to access these portions of the address space must generate transitions to the VMM, which can emulate or otherwise support them.
Excessive Faulting (Adverse Impact on Guest System Calls): The IA-32 architecture has two instructions that were created to make fast transitions between privilege levels 3 and 0. These instructions, SYSENTER and SYSEXIT, are used by the OS to execute system calls. SYSENTER transitions from ring 3 to ring 0 and SYSEXIT transitions from ring 0 to ring 3, and this behavior is hardcoded in the instructions. As a result, executions of SYSENTER by a guest application cause transitions to the VMM and not to the guest OS, so the VMM must emulate every guest execution of SYSENTER. Executions of SYSEXIT by a guest OS fault to the VMM, so the VMM must emulate every guest execution of SYSEXIT as well. Non-Trapping Instructions (Non-Faulting Access to Privileged State): The registers presented in the example can only be modified at ring 0, but they can be read at any privilege level. If a VM reads any of these registers, it could be reading invalid information (for instance, information related to another virtual machine). Because the VM can do this at any privilege level, the operation will not fault and the VMM will not be aware of it.
Interrupt Virtualization: Even if it were possible to prevent guest modifications of interrupt masking without intercepting each attempt, challenges would remain when a VMM has a "virtual interrupt" to deliver to a guest. A virtual interrupt should be delivered only when the guest has unmasked interrupts. To deliver virtual interrupts in a timely way, a VMM should intercept some, but not all, attempts by a guest to modify interrupt masking. Doing so can significantly complicate the design of a VMM. Access to Hidden State: IA-32 does not provide a mechanism for saving and restoring the hidden components of a guest context when changing VMs, or for preserving them while the VMM is running.
Frequent Access to Privileged Resources: The TPR controls interrupt prioritization. Accesses to this register require VMM intervention only if they cause the TPR to drop below a value determined by the VMM.
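That rule can be modeled with a toy TPR shadow: the VMM picks a threshold, and only writes that drop the priority below it need VMM intervention. The structure and function names here are illustrative, not a real interface.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of TPR virtualization: guest reads and writes go to a
 * shadow copy, and the VMM intervenes only when a write drops the
 * priority below the VMM-chosen threshold. Names are illustrative. */
struct tpr_virt { uint8_t shadow, threshold; };

/* Returns 1 if the write needs VMM intervention (a VM exit), else 0. */
static int guest_write_tpr(struct tpr_virt *t, uint8_t value) {
    t->shadow = value;
    return value < t->threshold;  /* pending interrupts may now be deliverable */
}

/* Reads are served entirely from the shadow: no VMM intervention. */
static uint8_t guest_read_tpr(const struct tpr_virt *t) {
    return t->shadow;
}
```

This is exactly the behavior VT-x later provides in hardware with the "use TPR shadow" control and TPR threshold in the VMCS.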
Intel introduced a change in its IA-32 architecture, called VT-x, to solve in hardware many of the problems that the VMM previously had to solve in software. The goal was to add a new mode of operation that would allow guest operating systems to run in ring 0, but with fewer privileges than the VMM. The most important change was therefore the addition of new modes of operation and new mechanisms to transition from guest software to the VMM and vice versa.
VT-x augments IA-32 with two new forms of CPU operation: VMX root operation and VMX non-root operation. VMX root operation is intended for use by a VMM, and its behavior is very similar to that of IA-32 without VT-x. VMX non-root operation provides an alternative IA-32 environment controlled by a VMM and designed to support a VM.
VM entries and VM exits are managed by a new data structure called the virtual-machine control structure (VMCS). Processor operation is changed substantially in VMX non-root operation. The most important change is that many instructions and events cause VM exits. Some instructions (e.g. INVD) cause VM exits unconditionally and thus can never be executed in VMX non-root operation. Other instructions (e.g. INVLPG) and all events can be configured to do so conditionally, using VM-execution control fields in the VMCS. The VMCS comprises the following areas:
Guest-state area: processor state is saved into the guest-state area on VM exits and loaded from there on VM entries.
Host-state area: processor state is loaded from the host-state area on VM exits.
VM-execution control fields: these fields control processor behavior in VMX non-root operation. They determine in part the causes of VM exits.
VM-exit control fields: these fields control VM exits.
VM-entry control fields: these fields control VM entries.
VM-exit information fields: these fields receive information on VM exits and describe their cause and nature. They are read-only.
Address-Space Compression: OSs expect to have access to the processor's full virtual-address space (known as the linear-address space in IA-32). A VMM must reserve for itself some portion of the guest's virtual-address space. It could run entirely within the guest's virtual-address space, which allows it easy access to guest data, but the VMM's instructions and data structures would then use a substantial amount of the guest's virtual-address space. Alternatively, the VMM can run in a separate address space, but even in that case the VMM must use a minimal amount of the guest's virtual-address space for the control structures that manage transitions between guest software and the VMM. For IA-32, these structures include the interrupt-descriptor table (IDT) and the global-descriptor table (GDT), which reside in the linear-address space. For the Itanium architecture, the structures include the interruption vector table (IVT), which resides in the virtual-address space. The VMM must prevent guest access to those portions of the guest's virtual-address space that the VMM is using. Otherwise, the VMM's integrity could be compromised (if the guest can write to those portions) or the guest could detect that it is running in a VM (if it can read those portions). Guest attempts to access these portions of the address space must generate transitions to the VMM, which can emulate or otherwise support them. The term address-space compression refers to the challenges of protecting these portions of the virtual-address space and supporting guest accesses to them.
Hardware Support for Address-Space Compression: VT-x and VT-i provide two different techniques for solving address-space compression problems. With VT-x, every transition between guest software and the VMM can change the linear-address space, allowing guest software full use of its own address space. The VMX transitions are managed by the VMCS, which resides in the physical-address space, not the linear-address space.
With VT-i, the VMM has a virtual-address bit that guest software cannot use. A VMM can conceal hardware support for this bit by intercepting guest calls to the PAL procedure that reports the number of implemented virtual-address bits. As a result, the guest will not expect to use this uppermost bit, and hardware will not allow it to do so, thus providing the VMM exclusive use of half of the virtual-address space.
Guest System Calls: The SYSENTER and SYSEXIT instructions are hardcoded to target ring 0: SYSENTER always transitions to ring 0, and SYSEXIT faults if executed outside ring 0.
The VT-x instructions are available in VMX root operation only.
The extensions are architectural enhancements geared to delivering more powerful virtualization solutions.
Flex Migration provides the ability to mask certain bits of the CPUID instruction, which allows the live migration of virtual machines between different generations of Intel processors, starting with the Intel Core architecture processors and their successors. This technology relies on "spoofing" the CPUID instruction by masking certain CPUID bits that indicate that newer instructions are present (e.g. SSE4). Which bits to mask is determined by the VMM; the VMM user can select which processor to define as the "lowest common denominator". The VMM then masks the appropriate bits whenever a CPUID instruction is executed and provides the modified value to the application, so that the application will not use instructions that are not present on other machines, thus making the OS and applications compatible for live migration across generations of Intel CPUs. Note that the ability to mask CPUID bits has been available from Intel since Intel VT was launched, on platforms that have Intel VT enabled. All VMM vendors that support and depend on Intel VT (e.g. Xen, Viridian, etc.) should be able to implement CPUID spoofing today. With the Penryn-based processors, this same capability is being added so that it can be used independently of whether Intel VT is enabled or disabled.
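The masking step itself is simple; a minimal sketch, assuming the VMM has already read the real CPUID leaf 1 ECX value and chosen a deny mask (the bit positions for SSE4.1/SSE4.2 in leaf 1 ECX are 19 and 20):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of FlexMigration-style CPUID "spoofing": the VMM clears
 * feature bits so guests only see features present on every host in
 * the migration pool. Bits are from CPUID leaf 1, ECX. */
#define CPUID1_ECX_SSE41 (1u << 19)
#define CPUID1_ECX_SSE42 (1u << 20)

/* deny_mask is chosen by the VMM from the "lowest common denominator"
 * processor; the function name is made up for this sketch. */
static uint32_t vmm_cpuid1_ecx(uint32_t real_ecx, uint32_t deny_mask) {
    return real_ecx & ~deny_mask;   /* report only the allowed features */
}
```

A guest that checks `vmm_cpuid1_ecx(...) & CPUID1_ECX_SSE41` before using SSE4.1 will then avoid it, and the VM can migrate to an older host safely.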
EPT adds one more level of indirection to memory addresses (in AMD processors this feature is called NPT). The ordinary IA-32 page tables (referenced by control register CR3) translate from linear addresses to guest-physical addresses. A separate set of page tables (the EPT tables) translates from guest-physical addresses to the host-physical addresses that are used to access memory. As a result, guest software can be allowed to modify its own IA-32 page tables and directly handle page faults. This allows a VMM to avoid the VM exits associated with page-table virtualization, which are a major source of virtualization overhead without EPT.
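The two-stage translation can be sketched with toy single-level tables (real hardware walks multi-level 4 KB structures; the arrays and function name here are simplifications):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of two-level translation with EPT/NPT: the guest's own page
 * tables map linear -> guest-physical, and a second set of tables
 * owned by the VMM maps guest-physical -> host-physical. Tables are
 * toy single-level arrays indexed by page number; pages are 4 KB. */
#define NPAGES 16
#define PAGE_SHIFT 12

static uint64_t guest_pt[NPAGES];  /* guest-managed: linear -> guest-phys page */
static uint64_t ept[NPAGES];       /* VMM-managed: guest-phys -> host-phys page */

static uint64_t translate(uint64_t linear) {
    uint64_t off   = linear & 0xFFF;
    uint64_t gpage = guest_pt[linear >> PAGE_SHIFT]; /* stage 1: guest tables */
    uint64_t hpage = ept[gpage];                     /* stage 2: EPT tables   */
    return (hpage << PAGE_SHIFT) | off;
}
```

The key point: the guest may rewrite `guest_pt` freely without VMM involvement, because the VMM's isolation is enforced entirely by the second stage.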
The TLB caches linear-to-physical address translations. Virtual-processor identifiers (VPIDs): this feature allows a VMM to assign a different non-zero VPID to each virtual processor (the zero VPID is reserved for the VMM). The CPU can use VPIDs to tag translations in the TLBs. This feature eliminates the need for TLB flushes on every VM entry and VM exit, and eliminates the adverse impact of those flushes on performance.
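Why tagging removes the need for flushes can be seen in a toy TLB model: a lookup hits only if both the page and the VPID match, so stale entries from another VM simply never hit (the structures here are illustrative, not the hardware layout):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of VPID-tagged TLB entries: switching VMs needs no flush
 * because entries of other VPIDs can never match. VPID 0 is reserved
 * for the VMM itself. */
#define TLB_SIZE 8

struct tlb_entry { uint16_t vpid; uint64_t vpage, ppage; int valid; };
static struct tlb_entry tlb[TLB_SIZE];

static void tlb_fill(int slot, uint16_t vpid, uint64_t vp, uint64_t pp) {
    tlb[slot] = (struct tlb_entry){ vpid, vp, pp, 1 };
}

/* Returns 1 on hit and writes the physical page; 0 on miss. */
static int tlb_lookup(uint16_t vpid, uint64_t vp, uint64_t *pp) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpid == vpid && tlb[i].vpage == vp) {
            *pp = tlb[i].ppage;
            return 1;
        }
    return 0;   /* same vpage under another VPID does not hit */
}
```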
Intel VT-d enables protection by restricting direct memory access (DMA) of the devices to pre-assigned domains or physical memory regions. This is achieved by a hardware capability known as DMA remapping. The VT-d DMA-remapping hardware logic in the chipset sits between the DMA-capable peripheral I/O devices and the computer's physical memory. It is programmed by the computer's system software. In a virtualization environment the system software is the VMM; in a native environment with no virtualization software, the system software is the native OS. DMA remapping translates the address of the incoming DMA request to the correct physical memory address and performs checks for permission to access that physical address, based on the information provided by the system software. Intel VT-d enables system software to create multiple DMA protection domains. Each protection domain is an isolated environment containing a subset of the host physical memory. Depending on the software usage model, a DMA protection domain may represent memory allocated to a virtual machine (VM), the DMA memory allocated by a guest-OS driver running in a VM, or memory that is part of the VMM itself. The VT-d architecture enables system software to assign one or more I/O devices to a protection domain. DMA isolation is achieved by restricting access to a protection domain's physical memory from I/O devices not assigned to it, by using address-translation tables. This provides the necessary isolation to assure separation between each virtual machine's computing resources. When any given I/O device tries to gain access to a certain memory location, the DMA-remapping hardware looks up the address-translation tables for that device's access permission to that specific protection domain. If the device tries to access outside of the range it is permitted to access, the DMA-remapping hardware blocks the access and reports a fault to the system software.
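The device-to-domain check can be sketched as follows. Real hardware uses per-domain translation tables; a single [base, limit) range per domain and the names below are simplifications for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of VT-d DMA protection domains: each device is assigned to a
 * domain, and a DMA request is allowed only if it falls inside that
 * domain's memory region. Addresses and assignments are made up. */
#define NDEVICES 4

struct domain { uint64_t base, limit; };          /* allowed host memory    */
static struct domain dom[2] = {
    { 0x100000, 0x200000 },                       /* domain 0: VM A's pages */
    { 0x200000, 0x300000 },                       /* domain 1: VM B's pages */
};
static int dev_domain[NDEVICES] = { 0, 0, 1, 1 }; /* device -> domain       */

/* Returns 1 if the DMA is permitted, 0 if the hardware blocks it
 * (and would report a fault to the system software). */
static int dma_check(int dev, uint64_t addr) {
    struct domain *d = &dom[dev_domain[dev]];
    return addr >= d->base && addr < d->limit;
}
```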
For proper device isolation in a virtualized system, the interrupt requests generated by I/O devices must be controlled by the VMM. In the existing interrupt architecture for Intel platforms, a device may generate either a legacy interrupt (which is routed through the I/O interrupt controllers) or may directly issue message-signaled interrupts (MSIs). MSIs are issued as DMA write transactions to a pre-defined architectural address range, and the interrupt attributes (such as vector, destination processor, delivery mode, etc.) are encoded in the address and data of the write request. Since the interrupt attributes are encoded in the request issued by the device, the existing interrupt architecture does not offer interrupt isolation across protection domains. The VT-d interrupt-remapping architecture addresses this problem by redefining the interrupt-message format. The new interrupt message continues to be a DMA write request, but the write request itself contains only a "message identifier" and not the actual interrupt attributes. The write request, like any DMA request, specifies the requester-id of the hardware function generating the interrupt. DMA write requests identified as interrupt requests by the hardware are subject to interrupt remapping, and the requester-id of these requests is remapped through a table structure. Each entry in the interrupt-remapping table corresponds to a unique interrupt message identifier from a device and includes all the necessary interrupt attributes (such as destination processor, vector, delivery mode, etc.). The architecture supports remapping interrupt messages from all sources, including I/O interrupt controllers (IOAPICs) and all flavors of MSI and MSI-X interrupts defined in the PCI specifications. (There is no mention of legacy interrupts.) For more info about MSI refer to: http://lwn.net/Articles/44139/
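The remapping lookup amounts to a table keyed by (requester-id, message-id) that yields the real attributes; a sketch with made-up field and function names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of VT-d interrupt remapping: the device's write carries only
 * a message identifier; the real attributes (vector, destination CPU)
 * live in a VMM-programmed table. Field names are illustrative. */
struct irte {
    uint16_t requester;   /* requester-id of the hardware function */
    uint8_t  msg_id;      /* message identifier from the request   */
    uint8_t  vector;      /* real interrupt attributes...          */
    uint8_t  dest_cpu;
    int      valid;
};

#define IRT_SIZE 8
static struct irte irt[IRT_SIZE];

/* Returns 1 and fills vector/dest_cpu if the message remaps; 0 = blocked. */
static int remap_irq(uint16_t requester, uint8_t msg_id,
                     uint8_t *vector, uint8_t *dest_cpu) {
    for (int i = 0; i < IRT_SIZE; i++)
        if (irt[i].valid && irt[i].requester == requester &&
            irt[i].msg_id == msg_id) {
            *vector = irt[i].vector;
            *dest_cpu = irt[i].dest_cpu;
            return 1;
        }
    return 0;   /* unknown message: the request is rejected */
}
```

Because the device supplies only an identifier, it can no longer forge arbitrary vectors or destinations: those come from the VMM's table.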
VT-d architecture defines a multi-level page-table structure for DMA address translation. The multi-level page tables are similar to IA-32 processor page tables, enabling software to manage memory at 4 KB or larger page granularity. Hardware implements the page-walk logic and traverses these structures using the address from the DMA request. The number of page-table levels that must be traversed is specified through the context-entry referencing the root of the page table. The page-directory and page-table entries specify independent read and write permissions, and hardware computes the cumulative read and write permissions encountered in a page walk as the effective permissions for a DMA request. The page-table and page-directory structures are always 4 KB in size, and larger page sizes (2 MB, 1 GB, etc.) are enabled through super-page support.
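The cumulative-permission rule can be shown with a two-level toy walk (real structures are 4 KB tables with many more fields; the types and indices here are simplifications):

```c
#include <assert.h>

/* Sketch of the VT-d page walk: permissions are ANDed across levels,
 * so a DMA write is allowed only if every entry on the path grants
 * write. Two tiny fixed levels stand in for the real structures. */
struct pte { int present, r, w; };

static struct pte level1[2];   /* page-directory-like level */
static struct pte level2[2];   /* page-table-like level     */

/* Walk level1[i1] then level2[i2]; on success writes the effective
 * read/write permissions and returns 0, else returns -1 (fault). */
static int walk(int i1, int i2, int *r, int *w) {
    if (!level1[i1].present) return -1;
    if (!level2[i2].present) return -1;
    *r = level1[i1].r && level2[i2].r;   /* cumulative read permission  */
    *w = level1[i1].w && level2[i2].w;   /* cumulative write permission */
    return 0;
}
```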
Here, describe the current problem: the hypervisor must manage the traffic from the NIC to each of the VMs and vice versa.
The VMDq capability is located in the NIC silicon. This feature is supported in Intel® 82575 Gigabit Ethernet Controller and Intel® 82598 10 Gigabit Ethernet Controller, and requires virtualization software enabling.
SR-IOV is not a feature; it is a specification.
For proper VMM operation, certain registers must be loaded on every VM exit. These include the IA-32 registers that manage operation of the processor, such as the segment registers, CR3, IDTR, and many others. The guest-state area contains fields for these registers so that their values can be saved as part of each VM exit. The guest-state area does not contain fields corresponding to registers that can be saved and loaded by the VMM itself (e.g. general-purpose registers such as AX, BX, CX, DX, SP, BP, SI, and DI). Excluding such registers improves the performance of VM entries and VM exits: software can manage these registers more efficiently because it knows better than the CPU when they need to be saved and loaded.
There are VM-execution control fields that support efficient virtualization of the IA-32 control registers CR0 and CR4. These registers each comprise a set of bits controlling processor operation. A VMM may wish to retain control of some of these bits (e.g. those that control floating-point instructions). The VMCS includes, for each of these registers, a guest/host mask that a VMM can use to indicate which bits it wants to protect. Guest writes can freely modify the unmasked bits, but an attempt to modify a masked bit causes a VM exit. The VMCS also includes, for each of these registers, a read shadow whose value is returned to guest reads of the register.
Interrupt-window exiting: a guest OS may not be interruptible at a given moment (e.g., inside a critical section). Interrupt-window exiting allows the guest OS to run until it has enabled interrupts (via EFLAGS.IF). In a virtualized environment, if a guest OS is handling an NMI and another NMI arrives, the VMM must keep it pending and determine when the guest OS is ready. This was usually handled in a non-standard manner, with different workarounds in different VMMs. Now, the VMM is notified (via a VM exit) that the guest is ready for the pending NMI.
Flex Priority: the TPR is the register that stores the interrupt priority, and it is part of the APIC. The TPR is accessed through ordinary memory operations because the APIC registers are mapped into the address space (the TPR sits at offset 0x80 of the memory-mapped APIC page). But the VMM must control TPR accesses, so accesses to this memory region should cause a VM exit; without further support, every access to the TPR requires a VM exit. The use of a TPR shadow generates VM exits only when the TPR is written and the new value is below the TPR threshold (also included in the VMCS), but not when it is read, greatly reducing the overhead generated by the context switches of every VM exit.
In addition to the controls mentioned, there are VM-execution controls that support flexible VM exiting for a number of privileged instructions.
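The guest/host mask and read shadow for CR0 can be sketched as follows, under the simplifying assumption that the shadow tracks unmasked bits alongside the real register (field and function names are made up):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the VMCS guest/host mask and read shadow for CR0: guest
 * writes to unmasked bits go through; trying to change a masked bit
 * forces a VM exit; guest reads see the shadow for masked bits. */
struct cr0_virt {
    uint32_t real;    /* value actually in CR0                 */
    uint32_t mask;    /* guest/host mask: bits the VMM protects */
    uint32_t shadow;  /* read shadow: what the guest believes  */
};

/* Returns 1 if the write caused a VM exit, 0 if applied directly. */
static int guest_write_cr0(struct cr0_virt *c, uint32_t value) {
    if ((value ^ c->shadow) & c->mask)
        return 1;                       /* tried to change a masked bit */
    c->real   = (c->real   & c->mask) | (value & ~c->mask);
    c->shadow = (c->shadow & c->mask) | (value & ~c->mask);
    return 0;
}

static uint32_t guest_read_cr0(const struct cr0_virt *c) {
    return (c->shadow & c->mask) | (c->real & ~c->mask);
}
```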
Hardware Assisted Virtualization Argentina Software Development Center Software and Solutions Group 21 July 2008
Problems that arise when software is run at a privilege level other than the privilege level for which it was written.
Example: the CS register, which points to the code segment. If the PUSH instruction is executed with the CS register, the contents of that register (which include the current privilege level) are pushed on the stack. A guest OS could easily determine that it is not running at privilege level 0.
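Concretely, the low two bits of the CS selector encode the privilege level, so examining a pushed CS value reveals the ring. A sketch (the selector values below are illustrative, not from a specific OS):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the ring-aliasing leak: the low two bits of a CS selector
 * are the privilege level, and PUSH CS exposes them to the guest. */
static int cpl_from_cs(uint16_t cs) {
    return cs & 0x3;    /* bits 1:0 = RPL, which equals the CPL for CS */
}
```

A guest OS deprivileged to ring 1 would compute 1 here instead of the 0 it expects, exposing the virtualization.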
OSs expect to have access to the processor's full virtual address space (in IA-32, the linear address space).
The VMM could run entirely within the guest's virtual-address space, but the VMM's instructions and data structures would use a substantial amount of the guest's virtual address space.
The VMM could run in a separate address space, but it must use a minimal amount of the guest’s virtual address space for the control structures that manage transitions between guest software and the VMM (IDT and GDT for IA-32)
The VMM must prevent guest access to those portions of the guest’s virtual address space that the VMM is using. Otherwise the VMM’s integrity could be compromised.
Ring deprivileging can interfere with the effectiveness of facilities in the IA-32 architecture that accelerate the delivery and handling of transitions to OS software.
For example: The IA-32 SYSENTER and SYSEXIT instructions support low-latency system calls. SYSENTER always effects a transition to privilege level 0, and SYSEXIT faults if executed outside that ring.
The VMM must emulate every execution of SYSENTER and SYSEXIT, causing serious performance problems.
There are instructions that access privileged state and do not fault when executed with insufficient privilege.
For example, the IA-32 registers GDTR, IDTR, LDTR, and TR contain pointers to data structures that control CPU operation. Software can execute the instructions that read (store) these registers at any privilege level.
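The consequence can be modeled in a couple of lines: an SGDT-like read succeeds at any CPL and returns the real register, so the VMM is never notified and cannot substitute a per-VM value. The addresses below are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the non-faulting-access hole: storing the GDTR succeeds at
 * any CPL, so a deprivileged guest silently reads the VMM's real table
 * base instead of a virtualized one. Values are hypothetical. */
static uint64_t real_gdtr  = 0xFFFFF000DEAD0000ULL; /* VMM's actual GDT base  */
static uint64_t guest_gdtr = 0x00000000C0100000ULL; /* base the guest loaded  */

/* SGDT never faults: it returns the real GDTR regardless of CPL, and
 * the VMM receives no fault it could intercept. */
static uint64_t sgdt(int cpl) { (void)cpl; return real_gdtr; }
```

A guest at ring 1 that compares the stored value against the base it loaded sees a mismatch, detecting the VMM.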
Masking external interrupts to prevent their delivery when the OS is not ready for them is a big challenge for VMM design. The VMM must manage interrupt masking so that one guest OS masking external interrupts does not prevent other guests from receiving their interrupts.
For example: IA-32 uses the interrupt flag (IF) in EFLAGS register to control interrupt masking. A value of 0 indicates that interrupts are masked.
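The VMM's side of this can be sketched as a virtual interrupt queue gated by a virtual IF: interrupts raised while the guest has masked them stay pending and are injected when the guest unmasks. The structures are toy stand-ins (and the queue drains in LIFO order for brevity):

```c
#include <assert.h>

/* Sketch of virtual-interrupt delivery: the VMM queues interrupts for
 * a guest and injects them only when the guest's *virtual* EFLAGS.IF
 * is 1. Interrupt-window exiting gives a VMM exactly this notification. */
#define QLEN 4

struct vguest {
    int vif;              /* guest's virtual interrupt flag   */
    int pending[QLEN];    /* queued virtual interrupt vectors */
    int npending;
    int delivered[QLEN];  /* vectors actually injected        */
    int ndelivered;
};

static void vmm_raise(struct vguest *g, int vector) {
    if (g->npending < QLEN) g->pending[g->npending++] = vector;
}

/* Called when the guest toggles its virtual IF; drains the queue on unmask. */
static void guest_set_if(struct vguest *g, int vif) {
    g->vif = vif;
    while (g->vif && g->npending > 0)
        g->delivered[g->ndelivered++] = g->pending[--g->npending];
}
```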
Access to Hidden State
Some components of the processor state are not represented in any software-accessible register.
For example: IA-32 has hidden descriptor caches for the segment registers. A segment-register load copies a descriptor from the GDT or LDT into this cache, and the cache is not modified if software later writes to the descriptor tables.
Data structure that manages VM entries and VM exits.
VMCS is logically divided into:
Guest-state area
Host-state area
VM-execution control fields
VM-exit control fields
VM-entry control fields
VM-exit information fields
VM entries load processor state from the guest-state area.
VM exits save processor state and the exit reason to the guest-state area, and then load processor state from the host-state area.
[Diagram: VT-x Operations. From IA-32 operation, VMXON enters VMX root operation (the VMM, rings 0-3). VMLAUNCH/VMRESUME transfer control to VMX non-root operation, where VM 1 through VM n each run with their own rings 0-3 and are each managed through their own VMCS (VMCS 1 ... VMCS n). VM exits return control to VMX root operation.]
It can help a lot when you need to switch tasks, or you must allocate a certain amount of CPU power to a task. For telecom and networking applications, it makes virtualization a useful tool and possibly a must have feature. On the other end of the spectrum, it can help for media applications like media PCs and Tivo-type devices. For the business world, it doesn't buy you all that much.
VT-d: Intel® Virtualization Technology for Directed I/O
Provides the capability to ensure improved isolation of I/O resources for greater reliability, security, and availability.
Supports the remapping of I/O DMA transfers and device-generated interrupts.
Provides flexibility to support multiple usage models that may run un-modified, special-purpose, or "virtualization aware" guest OSs.
Intel® I/O Acceleration Technology (Intel® I/OAT) is a suite of features that improves data acceleration across the platform, from I/O and networking devices to the memory and processors, helping to improve system performance.
Intel® QuickData Technology : designed to maximize the throughput of server data traffic across a broader range of configurations and server environments to achieve faster, scalable, and more reliable I/O.
Direct Cache Access (DCA) : Enables the CPU to pre-fetch data avoiding cache misses and improving application response times
MSI-X : Helps in load-balancing I/O network interrupts
Low latency interrupts : Automatically tune interrupt interval times depending on the latency sensitivity of the data
Receive Side Coalescing (RSC) : provides lightweight coalescing of receive packets, which increases the efficiency of the host network stack
In addition to consolidating CPU processes, you also effectively consolidate I/O bandwidth and switch processing capabilities onto the same platform
The overhead of this switching limits your bandwidth, adds CPU overhead, and effectively reduces the benefits of server virtualization. In some cases you may even have created a new problem: an I/O bottleneck.
On the receive path, VMDq provides a hardware ‘sorter' or classifier that essentially does the pre-work for the VMM of directing which end VM the packets should go to. The NIC or LAN silicon is performing a hardware assist for the VMM layer.
The VMCS contains a number of fields that control VMX non-root operation by specifying the instructions and events that cause VM exits.
The VMCS includes controls that support interrupt virtualization:
External-interrupt exiting: if it is set, all external interrupts cause VM exits, and the guest is not able to mask these interrupts.
Interrupt-window exiting: if it is set, a VM exit occurs whenever guest software is ready to receive interrupts.
Use TPR shadow: if it is set, accesses to the APIC's TPR through control register CR8 are handled in a special way: executions of MOV CR8 access a TPR shadow referenced by a pointer in the VMCS. The VMCS also includes a TPR threshold; a VM exit occurs after any instruction that reduces the TPR shadow below the TPR threshold. (Flex Priority)
Exception bitmap: 32 entries, one for each IA-32 exception, specifying which exceptions should cause VM exits and which should not.
I/O bitmaps: one entry for each port in the 16-bit I/O space. An I/O instruction causes a VM exit if it attempts to access a port whose entry is set in the I/O bitmaps.
MSR bitmaps: two entries (read and write) for each model-specific register (MSR) in use. An execution of RDMSR (or WRMSR) causes a VM exit if it attempts to read (or write) an MSR whose read bit (or write bit) is set in the MSR bitmaps.
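The bitmap lookup for the I/O case is a plain bit test over an 8 KB table (65536 ports, one bit each); a minimal sketch with made-up function names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the VMCS I/O bitmap: one bit per port in the 16-bit I/O
 * space; an IN/OUT exits only if its port's bit is set, so the VMM
 * pays for exactly the ports it wants to intercept. */
static uint8_t io_bitmap[65536 / 8];

/* VMM marks a port as intercepted by setting its bit. */
static void intercept_port(uint16_t port) {
    io_bitmap[port / 8] |= (uint8_t)(1u << (port % 8));
}

/* Returns 1 if an access to this port causes a VM exit. */
static int io_causes_exit(uint16_t port) {
    return (io_bitmap[port / 8] >> (port % 8)) & 1;
}
```

The MSR bitmaps work the same way, except with separate read and write bitmaps consulted by RDMSR and WRMSR respectively.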