Hwanju Kim, Jinkyu Jeong, Jaeho Hwang, Joonwon Lee, and Seungryoul Maeng, “Scheduler Support for Video-oriented Multimedia on Client-side Virtualization”, ACM Multimedia Systems (MMSys), Chapel Hill, North Carolina, USA, Feb. 2012.
OS vs. VMM provides an overview of the similarities and differences between operating systems (OS) and virtual machine monitors (VMM). Both multiplex hardware resources, but an OS abstracts the hardware behind higher-level interfaces such as processes and files, whereas a VMM virtualizes it by exporting an interface that looks like the hardware itself. Nested virtualization further complicates resource management by adding additional layers of indirection. Key issues in virtualization include trapping privileged OS operations, scheduling virtual CPUs, managing virtual memory translations, and achieving high-performance I/O.
This document discusses CPU virtualization and scheduling techniques. It covers topics such as deprivileging the operating system, virtualization-unfriendly architectures like x86, hardware-assisted virtualization using VMX mode, and proportional-share scheduling. It also summarizes research on improving VM scheduling by making it task-aware to prioritize I/O-bound tasks and correlate I/O events with tasks to boost their performance while maintaining inter-VM fairness. The document provides historical context on the evolution of virtualization technologies and research challenges in building lightweight and intelligent VMM schedulers.
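The proportional-share scheduling mentioned above can be pictured with a minimal stride-scheduler sketch, in which each vCPU receives CPU time in proportion to a weight. This is an illustration of the general technique, not any particular VMM's scheduler; all names and weights below are made up.

```python
# Minimal stride-scheduler sketch: each vCPU is charged virtual time
# inversely proportional to its weight, and the scheduler always picks
# the vCPU with the least virtual time consumed. Illustrative only.

STRIDE1 = 1 << 20  # large constant used to derive per-client strides

class VCpu:
    def __init__(self, name, weight):
        self.name = name
        self.stride = STRIDE1 // weight  # higher weight -> smaller stride
        self.pass_value = 0              # virtual time consumed so far

def schedule(vcpus, slices):
    """Make `slices` scheduling decisions; return the chosen vCPU names."""
    chosen = []
    for _ in range(slices):
        v = min(vcpus, key=lambda c: c.pass_value)  # least virtual time
        v.pass_value += v.stride                    # charge one time slice
        chosen.append(v.name)
    return chosen

vcpus = [VCpu("vm_a", 3), VCpu("vm_b", 1)]
picks = schedule(vcpus, 8)
# vm_a, with 3x the weight, is picked about 3x as often as vm_b
```

Over a long run, each vCPU's share of slices converges to its weight divided by the total weight, which is the fairness property proportional-share schedulers aim for.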
This document provides an introduction to virtualization including:
1) The benefits of virtualization like efficient resource utilization and strong isolation between virtual machines.
2) A brief history of virtualization from the 1960s mainframe era to modern ubiquitous cloud computing.
3) Popular use cases of virtualization including cloud computing, virtual desktop infrastructure, and mobile virtualization.
4) Basic terminologies that distinguish type-1 and type-2 virtual machine monitors as well as full and para-virtualization methods.
The document discusses virtualization techniques used in KVM. It describes how KVM uses shadow page tables to virtualize memory management. The shadow page tables allow virtual addresses used by a guest OS to be translated to physical addresses on the host machine. Different techniques for implementing shadow page tables are described, including pre-validation of guest page tables and using a virtual translation lookaside buffer to cache translations.
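The shadow-page-table idea described above can be sketched as two mappings composed into a third: the guest maintains guest-virtual to guest-physical mappings, the VMM owns guest-physical to host-physical mappings, and the shadow table is their composition, which the hardware can walk directly. A toy dictionary-based sketch (real implementations operate on multi-level hardware page tables; all addresses here are illustrative):

```python
# Toy shadow-page-table sketch. The guest writes gVA -> gPA entries; the
# VMM traps those writes and pre-validates them into a shadow table of
# gVA -> hPA entries, which is what the MMU would actually walk.

class ShadowMMU:
    def __init__(self, p2m):
        self.p2m = dict(p2m)   # gPA -> hPA, owned by the VMM
        self.guest_pt = {}     # gVA -> gPA, as written by the guest
        self.shadow = {}       # gVA -> hPA, used for hardware translation

    def guest_map(self, gva, gpa):
        """Guest writes a PTE; the VMM traps it and updates the shadow."""
        self.guest_pt[gva] = gpa
        self.shadow[gva] = self.p2m[gpa]  # pre-validate: fill shadow now

    def translate(self, gva):
        """TLB-fill path: consult only the shadow table."""
        if gva not in self.shadow:
            raise KeyError(f"shadow page fault at gVA {gva:#x}")
        return self.shadow[gva]

mmu = ShadowMMU(p2m={0x1000: 0x9000, 0x2000: 0xA000})
mmu.guest_map(0x4000, 0x1000)
mmu.translate(0x4000)  # -> 0x9000
```

The pre-validation variant fills the shadow entry eagerly at the trapped write, as above; a virtual-TLB variant would instead fill shadow entries lazily on the first faulting access.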
This document discusses I/O virtualization and GPU virtualization. It covers:
- Two approaches to I/O virtualization: hosted and device driver approaches. Hosted has lower engineering cost but lower performance.
- Methods to optimize para-virtualized I/O including split-driver models, reducing data copy costs, and hardware supports like IOMMU and SR-IOV.
- Challenges of GPU virtualization including whether to take a low-level virtualization or high-level API remoting approach. API remoting is preferred due to closed and evolving GPU hardware.
- Hardware pass-through of GPUs, which offers high performance but low scalability, and industry solutions for remote desktop.
CPU Scheduling for Virtual Desktop Infrastructure, by Hwanju Kim
This document discusses CPU scheduling techniques for virtual desktop infrastructure (VDI). It proposes a demand-based coordinated scheduling approach for scheduling multithreaded workloads on multiprocessor virtual machines (VMs). The key points are:
1. Coordinated scheduling of sibling virtual CPUs (vCPUs) in a VM is needed to effectively schedule multithreaded workloads, as uncoordinated scheduling can reduce inter-thread communication performance.
2. A coordination space consisting of space (physical CPU assignment) and time (preemption policy) domains is defined to coordinate vCPU scheduling.
3. In the space domain, a load-conscious balance scheduling approach assigns sibling vCPUs across physical CPUs based on each physical CPU's current load.
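The space-domain policy can be illustrated by placing each sibling vCPU of a VM on a distinct physical CPU, preferring the least-loaded ones. This is a deliberate simplification of the load-conscious balance scheduling described in the paper; the load values are made up.

```python
# Simplified load-conscious placement: spread a VM's sibling vCPUs over
# distinct physical CPUs, least loaded first. Loads are illustrative.

def place_siblings(pcpu_loads, num_siblings):
    """Return pCPU indices for the VM's sibling vCPUs, least loaded first."""
    if num_siblings > len(pcpu_loads):
        raise ValueError("more sibling vCPUs than physical CPUs")
    order = sorted(range(len(pcpu_loads)), key=lambda i: pcpu_loads[i])
    return order[:num_siblings]

loads = [0.9, 0.2, 0.5, 0.1]        # current per-pCPU load
placement = place_siblings(loads, 2)
# picks pCPU 3 (load 0.1) and pCPU 1 (load 0.2)
```

Spreading siblings across distinct pCPUs preserves the benefit of coscheduling communicating threads, while the load ordering keeps the balancing from piling vCPUs onto already busy CPUs.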
Demand-Based Coordinated Scheduling for SMP VMs, by Hwanju Kim
Hwanju Kim, Sangwook Kim, Jinkyu Jeong, Joonwon Lee, and Seungryoul Maeng, “Demand-Based Coordinated Scheduling for SMP VMs”, International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Houston, Texas, USA, Mar. 2013.
This document discusses full virtualization techniques. It defines full virtualization as simulating hardware to allow any OS to run unmodified in a virtual machine. It describes the challenges of virtualizing the x86 architecture and how binary translation is used to allow guest OSes to run at a higher privilege level. The document outlines hosted and bare-metal virtualization architectures and their pros and cons. It provides examples of using full virtualization for desktop and server virtualization/cloud computing. It also gives steps to implement hosted full virtualization using Oracle VM VirtualBox on Windows 7.
Live VM migration allows virtual machines to be relocated between physical hosts with little to no downtime. There are two main approaches: pre-copy migration copies memory contents iteratively with little downtime, while post-copy migration copies CPU states first and then memory pages on demand to reduce total migration time. Several research projects use live migration techniques to improve data center efficiency: LiteGreen saves energy by consolidating idle desktop VMs, Jettison uses partial VM migration for quick consolidation, and Kaleidoscope proposes VM state coloring to enable fast micro-elasticity through live cloning of warm VMs.
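The pre-copy approach described above can be sketched as an iterative loop: copy all memory while the VM runs, then keep re-copying the pages dirtied during each round until the dirty set is small enough to stop the VM for a short final copy. Everything below (the page dictionary, the simulated dirty sets, the threshold) is illustrative; real hypervisors track dirtying via hardware dirty logging.

```python
# Pre-copy live migration sketch. `memory` maps page -> data;
# `dirty_rounds` simulates which pages the guest dirties in each round.

def precopy_migrate(memory, dirty_rounds, threshold=2):
    """Return (dest_memory, rounds_used, pages_copied_during_downtime)."""
    dest = dict(memory)        # round 0: full copy while the VM keeps running
    rounds = 1
    final_dirty = set()
    for dirtied in dirty_rounds:
        final_dirty = set(dirtied)
        if len(dirtied) <= threshold:
            break              # dirty set converged: time to stop the VM
        for page in dirtied:   # re-copy pages dirtied in the last round
            dest[page] = memory[page]
        rounds += 1
    for page in final_dirty:   # final stop-and-copy = the downtime window
        dest[page] = memory[page]
    return dest, rounds, len(final_dirty)

mem = {i: f"data{i}" for i in range(8)}
dest, rounds, downtime = precopy_migrate(
    mem, dirty_rounds=[{1, 2, 3, 4}, {2, 3, 4}, {2}])
# dest equals mem; only 1 page is copied with the VM stopped
```

Post-copy inverts this trade-off: it transfers CPU state first, resumes the VM at the destination immediately, and fetches memory pages on demand, so total migration time shrinks at the cost of remote page faults after the switch.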
This document provides an overview of virtualization concepts and VMware vSphere features. It begins with defining key virtualization building blocks like hypervisors, virtual machines, and virtual switches. It then covers ESXi architecture, vCenter functionality, and advanced features like vMotion, HA, and vNetworking. The document aims to give attendees a deep understanding of virtualization and how vSphere addresses various virtualization challenges.
Introduction to virtualization, with a timing analysis of video playback while the virtual machine hosting the server is live migrated.
OS: Ubuntu
Hypervisor: KVM
The document discusses virtualization and its implementation at GHCL Ltd's Sutrapada facility. It defines virtualization as creating virtual versions of operating systems, storage, and network resources. The goals of virtualization are to centralize administration, improve scalability and hardware utilization. Types of virtualization discussed include full, partial, and para virtualization. The document outlines how virtual machines are created, monitored, snapshotted, migrated, and used for failover. It provides an example virtualization implementation at GHCL including resource planning and allocation across three physical servers. Finally, it discusses desktop virtualization and its advantages over traditional desktop computing.
Hyper-V High Availability and Live Migration, by Paulo Freitas
This document provides an overview of a Microsoft Virtual Academy training program on Hyper-V virtualization. The program is split into two halves, with the first half covering topics like Hyper-V infrastructure, networking, storage, and management. The second half focuses on high availability, disaster recovery, and integrating Hyper-V with System Center. It also discusses capabilities like live migration, replication, clustering and improving application availability and redundancy through virtualization.
Introduction to Virtualization, Virsh and Virt-Manager, by walkerchang
Virtualization allows for the abstraction and sharing of computer hardware resources like CPU, memory, storage and network capacity. The document introduces virtualization concepts and the tools KVM, Virsh and Virt-manager. It provides documentation on Virsh commands to manage domains (VMs), interfaces and networks. These include commands to define, start, suspend, resume VMs and interfaces, as well as take and restore VM snapshots to revert states. Managing VMs, interfaces and networks with Virsh commands allows administrators to efficiently share hardware resources across VMs.
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM, by vwchu
With co-presenter Maninder Singh, delivered a presentation about hypervisors and virtualization technology for an independent topic study project for the Operating System Design (EECS 4221) course at York University, Canada in October 2014.
Virtualization, briefly, is the separation of resources or requests for a service from the underlying physical delivery of that service. It is a concept in which access to a single underlying piece of hardware is coordinated so that multiple guest operating systems can share a single piece of hardware, with no guest operating system being aware that it is actually sharing anything at all.
Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.
Building a KVM-based Hypervisor for a Heterogeneous System Architecture Compl..., by Hann Yu-Ju Huang
This document discusses building a KVM-based hypervisor that can virtualize the key features of Heterogeneous System Architecture (HSA) for a compliant system. It describes HSA features like shared virtual memory, I/O page faulting, and user-level queueing. It then outlines the design of virtualizing these features through techniques like VirtIO-KFD for queues, shadow page tables for shared memory, and shadow PPR interrupts for page faults. Evaluation shows the hypervisor approach incurs average performance overhead of 5% for GPU execution compared to native execution.
Hardware support for virtualization originated in the 1970s with goals of running multiple virtual machines on a single physical machine. A key requirement was virtualization allowing equivalent execution of programs in a virtual environment as running natively. The x86 architecture posed challenges to virtualization due to sensitive instructions. Intel Virtualization Technology (VT-x) added hardware support for virtualization on x86 by introducing a new CPU operation mode called VMX non-root, and transitions between it and VMX root mode. This reduced the need for software emulation of sensitive instructions and improved virtualization performance.
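The VMX root/non-root split described above amounts to a loop in the VMM: the guest runs in non-root mode until a sensitive event forces a VM exit, the VMM handles the exit reason in root mode, and then re-enters the guest. The sketch below is schematic; the exit reasons and handlers are illustrative placeholders, not the real VT-x encodings.

```python
# Schematic VT-x style dispatch loop: run the guest (VMX non-root) until
# a VM exit, handle the exit reason in the VMM (VMX root), then re-enter.

def run_guest(pending_exits):
    """Pretend to execute guest code; return the next VM-exit reason."""
    return pending_exits.pop(0) if pending_exits else "HLT"

def vmm_loop(pending_exits):
    handled = []
    while True:
        reason = run_guest(pending_exits)   # VM exit: back to root mode
        if reason == "HLT":                 # guest halted: stop this vCPU
            handled.append("HLT")
            return handled
        if reason == "CPUID":               # emulate the instruction
            handled.append("emulated CPUID")
        elif reason == "IO":                # forward to the device model
            handled.append("emulated IO")
        # VM entry: resume the guest in non-root mode on the next iteration

trace = vmm_loop(["CPUID", "IO"])
# trace == ["emulated CPUID", "emulated IO", "HLT"]
```

The hardware's contribution is that most guest instructions run natively in non-root mode; only the sensitive ones trigger this exit/handle/re-enter round trip, which is why VT-x reduced the need for software emulation.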
Server virtualization concepts allow partitioning of physical servers into multiple virtual servers using virtualization software and hardware techniques. This improves resource utilization by running multiple virtual machines on a single physical server. Server virtualization provides benefits like reduced costs, higher efficiency, lower power consumption, and improved availability compared to running each application on its own physical server. Key components of server virtualization include virtual machines, hypervisors, CPU virtualization using techniques like Intel VT-x or AMD-V, memory virtualization, and I/O virtualization through methods like emulated, paravirtualized or direct I/O. KVM and QEMU are popular open source virtualization solutions, with KVM providing kernel-level virtualization support and QEMU providing device emulation in user space.
Yabusame: postcopy live migration for qemu/kvm, by Isaku Yamahata
Yabusame is a postcopy live migration technique for QEMU/KVM. It was developed by Isaku Yamahata of VALinux Systems Japan K.K. and Takahiro Hirofuchi of AIST. The project aims to improve live migration performance by allowing the guest VM to resume execution at the destination host before memory pages have been fully copied. This is achieved through asynchronous page fault handling during the postcopy phase. Evaluation shows the technique can improve CPU utilization and reduce total migration times compared to traditional precopy approaches. Future work includes upstream integration, support for KSM/THP, multithreading optimizations, and integration with management platforms like libvirt and OpenStack.
This document provides an overview of virtualization using KVM and Xen hypervisors. It defines full and para virtualization approaches and type 1 and type 2 hypervisors. It describes the X86 architecture model and how virtualization abstracts privileged instructions. It then discusses parameters for evaluating hypervisor efficiency and provides descriptions of the open source KVM and Xen hypervisors, comparing their architectures, supported features, and operating systems. Key differences between KVM and Xen are outlined related to hardware support, complexity, paravirtualization, and memory management.
This document discusses different techniques for virtual machine migration. It begins with an introduction to virtualization and how virtual machine migration involves copying a VM from one physical machine to another. There are three main categories of migration techniques: fault tolerant techniques which migrate VMs to prevent failures, load balancing techniques which distribute load across servers, and energy efficient techniques which optimize resource utilization to conserve energy. Live VM migration is described as migrating the entire OS and applications between physical machines without disrupting applications. The document also covers background details on virtual machine migration methods being either hot/live where the VM continues running, or cold/non-live where the VM status is lost during migration.
Implementation levels of virtualization, by Gokulnath S
Virtualization allows multiple virtual machines to run on the same physical machine. It improves resource sharing and utilization. Traditional computers run a single operating system tailored to the hardware, while virtualization allows different guest operating systems to run independently on the same hardware. Virtualization software creates an abstraction layer at different levels - instruction set architecture, hardware, operating system, library, and application levels. Virtual machines at the operating system level have low startup costs and can easily synchronize with the environment, but all virtual machines must use the same or similar guest operating system.
The document discusses the history and usage of virtualization technology, provides an overview of CPU, memory, and I/O virtualization, compares the Xen and KVM virtualization architectures, and describes some Intel work to support virtualization in OpenStack including the Open Attestation service.
Virtualization provides advantages like managed execution, isolation, resource partitioning and portability. However, it can also lead to performance degradation, inefficiency, and new security threats. Virtualization technologies like Xen, VMware and Hyper-V use approaches like paravirtualization and full virtualization to virtualize hardware and provide isolated execution environments while managing the tradeoffs between performance, functionality and security.
Hyper-V is Microsoft's server virtualization technology that is included with Windows Server 2008. It allows multiple virtual machines to run on a single physical machine. Key capabilities of Hyper-V include support for large memory virtual machines up to 64GB, live migration of virtual machines between physical servers, and integration with the Windows hypervisor for security and isolation of virtual machines. System Center Virtual Machine Manager 2008 provides centralized management of virtualized and physical infrastructure across Hyper-V, Virtual Server and VMware environments.
Ready for cloud computing with Hyper-V, by Andik Susilo
Andik Susilo is a Most Valuable Professional (MVP) in data center engineering and cloud consulting. He provides consulting services through http://www.infinyscloud.com/ and http://www.telkomcloud.com. The document discusses cloud computing models including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It also covers the advantages of cloud computing such as pay per use, scalability, security, and reliability. The document provides examples of commercial cloud services and describes Microsoft Hyper-V virtualization capabilities on Windows Server 2008 R2.
Dynamic Data Center for Hosters, by Stefan Simon (posted by Alexey Kovyazin)
The document discusses Dynamic Data Center (DDC), an industry term for a virtualized and automated infrastructure that provides real-time provisioning, high availability, unlimited capacity, and self-healing capabilities. It outlines the goals and technologies covered by DDC for hosting providers, including Hyper-V, System Center components, and guidance for bare metal and virtual server provisioning. The presentation also provides overviews of key Dynamic Data Center technologies like Hyper-V, clustered shared volumes, live migration, and how System Center products like Virtual Machine Manager, Operations Manager, and Data Protection Manager integrate with and support a DDC.
Security Best Practices For Hyper V And Server Virtualizationrsnarayanan
The document summarizes information about Hyper-V virtualization. It provides an overview of Hyper-V architecture, including that the hypervisor partitions the hardware and manages guest partitions through the virtualization stack. It also discusses Hyper-V security, noting that guests are isolated from each other and the root to prevent attacks, and that delegated administration and role-based access control can be used to manage virtual machine access.
z/VM version 6.2 introduced new capabilities for virtualization including Single System Image (SSI) clustering and Live Guest Relocation (LGR). SSI allows up to four z/VM systems to be managed as a single cluster, while LGR allows virtual machines to be moved between systems without disruption. Developing these features required addressing challenges like maintaining system architecture accuracy and flexibility across different hardware. Relocation domains were introduced to control where guests can move and the architecture features exposed. Overall, z/VM 6.2 significantly expanded the possibilities for virtualization on the IBM mainframe.
Hyper-V and SCVMM 2008 provide virtualization capabilities for Microsoft. SCVMM 2008 allows for managing virtual machines across VMware and Hyper-V environments. It provides features like intelligent placement of VMs, conversion of physical to virtual machines, and delegated administration. SCVMM 2008 integrates with other System Center products and uses PowerShell for administration and monitoring of the virtualized environment.
The Next Generation of Microsoft Virtualization With Windows Server 2012Lai Yoong Seng
The document discusses new features in Windows Server 2012 that improve virtualization capabilities. Key features highlighted include increased scalability for Hyper-V hosts and virtual machines, live migration enhancements, storage migration capabilities, high availability options like Hyper-V Replica for disaster recovery, and flexibility in infrastructure deployment. The presentation aims to demonstrate how these features enable private cloud deployments with optimized performance, scalability, and availability.
This document summarizes the key features and benefits of Microsoft's Hyper-V platform. It highlights how Hyper-V offers significant cost savings through lower upfront costs and ongoing costs compared to alternative virtualization platforms. It also improves IT flexibility and agility by enabling features like live migration, cluster shared volumes, hot-add/removal of storage, and CPU compatibility for live migration. These features along with improved performance, scalability, and ability to park/sleep unused CPU cores allow for increased server consolidation and a more efficient use of computing resources.
Linux Foundation Collaboration Summit 13 :10 years of Xen and BeyondThe Linux Foundation
In 2013, the Xen Hypervisor will be 10 years old: when Xen was designed, we anticipated a world, which now is known as cloud computing. Today, Xen powers the largest clouds in production and is the basis for several commercial virtualization products. In this talk we will give on overview of Xen and related projects, cover hot developments in the Xen community and outline what comes next.
The talk is intended for users and developers that are familiar with virtualization: no deep knowledge is required. We will start with an architectural overview and cover topics such as: Xen and Linux, how to secure your cloud using disaggregation, SELinux and XSM/FLASK, the evolution of Paravirtualization, Xen on ARM and common challenges for open source hypervisors. We will explore the potential of Open Mirage for testing hypervisors. The talk will conclude with an outlook to the future of Xen.
Bryan Nairn discusses security considerations for virtualization. He notes that over 40% of virtual machines will be less secure than physical machines by 2014. The document outlines common virtualization security myths and describes the hypervisor architecture. It discusses isolation between virtual machines and the hypervisor's security goals of protecting data confidentiality and integrity. The document also covers common attack vectors and provides potential solutions for securing the host system and virtual machines.
CSA Presentation 26th May Virtualization securityv2vivekbhat
Bryan Nairn discusses security considerations for virtualization. Virtual machines are increasingly common but over 40% will be less secure than physical servers by 2014. Key risks include compromised host machines which could then control VMs, and unpatched guest operating systems. Defenses include hardening host servers, protecting virtual machine files, isolating guest networks, and using access control lists to manage permissions for VMs. Securing the virtualization platform requires attention to both host and guest security.
System Center Virtual Machine Manager 2008 R2aralves
Virtual Machine Manager 2008 R2 is a centralized management solution that allows administrators to deploy, manage, and monitor virtual machines running on Hyper-V, Virtual Server, and VMware ESX servers. It provides features such as intelligent placement of VMs, conversion of physical to virtual machines and virtual to virtual machines, library management, and monitoring with Operations Manager. Version R2 adds additional capabilities such as managing Windows Server 2008 R2 Hyper-V, live migration, and storage improvements including support for multiple VMs per LUN.
Virtualization Manager 5.0 – Now with Hyper-V Support!SolarWinds
For more information on Virtualization Manager, visit: http://www.solarwinds.com/virtualization-manager.aspx
Watch this webcast: http://www.solarwinds.com/resources/webcasts/virtualization-manager-50-now-with-hyperv-support.html
Whether you have a Hyper-V virtual environment, VMware, or both – Virtualization Manager now has you covered. Watch SolarWinds virtualization experts Brian Radovich and Robbie Wright as we discuss the key areas for managing a Hyper-V virtual environment.
• How to manage performance on a shared virtual infrastructure
• Building out a proactive capacity plan
• Tracking and reporting on virtual configurations and drift
• Living in a multi-hypervisor world!
Also during this webcast we demonstrate key technologies from SolarWinds that help to conquer these challenges and ensure success in virtual environments.
1. Use Compare-VM to check for any configuration issues between the source VM and the Hyper-V environment where it will be imported. This will generate a report of any fixes needed.
2. Apply any fixes reported by Compare-VM.
3. Use Import-VM and specify the path to the XML configuration file, along with options like the destination path for virtual hard disks.
4. The VM will be imported and now reside in the new Hyper-V environment.
The document discusses different desktop and application virtualization technologies from Microsoft, Citrix, and VMware. It compares their remote application delivery methods like RDP for Microsoft, ICA for Citrix, and RDP for VMware. It also summarizes key features of products like Remote Desktop Connection, RemoteApp, and Virtual Desktop Infrastructure (VDI).
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2Damir Bersinic
This is the second of a 2-part series delivered at Prairie DevCon in Calgry on March 15. 2012. The sessions provided a quick overview of the new features of Hyper-V in Windows Server "8" Beta and how these compare to VMware vSphere 5.
During Microsoft Techday 2012 (Malaysia), we have presented about operating system virtualization, presentation virtualization and application virtualization.
This presenation gives a quick history on Hyper-V and discusses the arhcitecture of the vurrent release. It then goes into detail on Hyper-V R2, i.e. the build included in Hyper-V Server 2008 R2 and Windows Server 2008 R2. It includes Live Migration, Cluster Shared Volumes, Virtual Machine Queue, SLAT, Core Parking and Native VHD.
Cooperative VM Migration for a virtualized HPC Cluster with VMM-bypass I/O de...Ryousei Takano
1) Cooperative VM migration allows live migration of VMs with VMM-bypass I/O devices like InfiniBand adapters.
2) SymVirt enables coordination between the guest OS and VMM to safely detach and reattach devices during migration.
3) Experiments show SymVirt enables fault-tolerant live migration with minimal overhead for HPC workloads on an InfiniBand cluster. Postcopy migration further reduces downtime during migration.
Scheduler Support for Video-oriented Multimedia on Client-side Virtualization
1. Scheduler Support for Video-oriented Multimedia on Client-side Virtualization
Hwanju Kim1, Jinkyu Jeong1, Jaeho Hwang1, Joonwon Lee2, and Seungryoul Maeng1
Korea Advanced Institute of Science and Technology (KAIST)1
Sungkyunkwan University2
ACM Multimedia Systems (MMSys)
February 22-24, Chapel Hill, North Carolina, USA
2. Virtualization Everywhere
• A new layer between OS and HW = hypervisor
• Benefits: multiple OSes on one machine, high resource utilization, strong isolation, easy management, live maintenance, fast provisioning, …
• Deployed as server-side virtualization (cloud computing, virtual desktop infrastructure) and client-side virtualization
[Diagram: multiple OSes, each in a VM, running on a hypervisor]
3. Client-side Virtualization
• Multiple OS instances on a local device
• Primary use cases
  • Different OSes for application compatibility
  • Consolidating business and personal computing environments on a single device
  • BYOD: Bring Your Own Device
[Diagram: a business VM (managed domain) and a personal VM on one hypervisor]
4. Multimedia on Virtualized Clients
• Multimedia is ubiquitous on any VM: video playback, video conferencing, and 3D games run alongside data processing, compilation, and downloading
[Diagram: business and personal VMs (Windows, Linux) hosting these mixed workloads on hypervisors]
1. Multimedia workloads are dominant on virtualized clients
2. Interactive systems can have concurrently mixed workloads
5. Issues on Multi-layer Scheduling
• A multimedia-agnostic hypervisor invalidates OS policies for multimedia
• OS schedulers give multimedia tasks a larger CPU proportion and timely dispatching: BVT [SOSP'99], SMART [TOCS'03], Rialto [SOSP'97], BEST [MMCN'02], HuC [TOMCCAP'06], Redline [OSDI'08], RSIO [SIGMETRICS'10], Windows MMCSS
• The hypervisor's CPU scheduler sees only virtual CPUs, an additional abstraction: "I'm unaware of any multimedia-specific OS policies in a VM, since I see each VM as a black box."
• Semantic gap!
6. Multimedia-agnostic Hypervisor
• Multimedia QoS degradation
  • Two VMs with equal CPU shares: a multimedia VM (video playback or 3D game) + a competing VM
  • Xen hypervisor, Credit scheduler
[Charts: average FPS of video playback (720p) on VLC media player and of Quake III Arena (demo1) drops sharply under competing workloads in another VM]
7. Possible Solutions to Semantic Gap
• Explicit vs. Implicit
  • Explicit, OS cooperation: + accurate; − requires OS modification; − infeasible without multimedia-friendly OS schedulers
  • Explicit, user involvement: + simple; − inconvenient; − unsuitable for dynamic workloads
  • Implicit, hypervisor-only (workload monitor in the hypervisor): + transparent; − difficult to identify workload demands at the hypervisor
8. Proposed Approach
• Multimedia-aware hypervisor scheduler
  • Transparent scheduler support for multimedia
  • No modifications to upper layer SW (OS & apps)
• "Feedback-driven VM scheduling"
  • A multimedia monitor estimates multimedia QoS (audio, video, CPU), and a multimedia manager issues scheduling commands (e.g., CPU share or priority) to the feedback-driven CPU scheduler
• Challenges
  1. How to estimate multimedia QoS based on a small set of HW events?
  2. How to control the CPU scheduler based on the estimated information?
9. Multimedia QoS Estimation
• What is estimated as multimedia QoS?
  • "Display rate" (i.e., frame rate), as used by the HuC scheduler [TOMCCAP'06]
• How is a display rate captured at the hypervisor?
  • Two types of display:
    1. Memory-mapped display (e.g., video playback): the display interface writes directly to a memory-mapped framebuffer
    2. GPU-accelerated display (e.g., 3D game): the graphics library drives the acceleration unit of the video device
10. Memory-mapped Display (1/2)
• How to estimate a display update rate on the memory-mapped framebuffer
  • Write-protection for the virtual address space mapped to the framebuffer: the hypervisor can inspect any attempt to map memory, so each guest write to a protected page traps into the hypervisor's page fault handler, which updates the display rate
  • Sampling to reduce trap overheads (1/128 pages, by default)
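The sampling trick on this slide can be sketched in a few lines: protect only a fraction of framebuffer pages, count the resulting faults, and scale the count back up. This is a minimal simulation of the idea, not the Xen implementation; the class name and page counts are illustrative.

```python
import random

class SampledFrameRateEstimator:
    """Sketch: write-protect 1/128 of the framebuffer pages, count
    the page faults those writes cause, and scale the fault count
    back up to estimate the total number of framebuffer writes."""

    def __init__(self, num_pages, sample_ratio=1 / 128):
        self.sample_ratio = sample_ratio
        k = max(1, int(num_pages * sample_ratio))
        # Pages the hypervisor would leave write-protected.
        self.protected = set(random.sample(range(num_pages), k))
        self.faults = 0

    def on_guest_write(self, page):
        # Only writes to protected pages trap into the hypervisor.
        if page in self.protected:
            self.faults += 1

    def estimated_writes(self):
        # Scale the sampled fault count to the full page set.
        return self.faults / self.sample_ratio
```

Dividing the number of protected-page faults observed per second by the number of pages a full frame touches would then yield the estimated display rate.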
11. Memory-mapped Display (2/2)
• Accurate estimation
  • Maintaining a display rate per task, since an aggregated display rate does not represent multimedia QoS (e.g., one task at 25 FPS and another at 10 FPS)
  • Tracking guest OS tasks at the hypervisor by inspecting address space switches (Antfarm [USENIX'06])
• Monitoring audio access (RSIO [SIGMETRICS'10])
  • Inspecting audio buffer access with write-protection
• A task with a high display rate and audio access → a multimedia task
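The per-task bookkeeping described above can be sketched as a small monitor: one display-rate entry per guest task, one flag for audio access, and a classifier that combines the two. The FPS threshold here is an assumed cut-off for illustration, not a value from the paper.

```python
class TaskMonitor:
    """Sketch of per-task attribution: the hypervisor keeps one
    display-rate counter per guest task (identified via its address
    space) and flags as multimedia any task that combines a high
    display rate with audio buffer access."""

    FPS_THRESHOLD = 15  # illustrative cut-off, not from the paper

    def __init__(self):
        self.fps = {}        # task id -> observed display rate
        self.audio = set()   # tasks seen writing the audio buffer

    def record_frame_rate(self, task, fps):
        self.fps[task] = fps

    def record_audio_access(self, task):
        self.audio.add(task)

    def multimedia_tasks(self):
        return {t for t, f in self.fps.items()
                if f >= self.FPS_THRESHOLD and t in self.audio}
```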
12. GPU-accelerated Display (1/2)
• Naïve method
  • Inspecting the GPU command buffer with write-protection or polling
  • Too heavy due to the huge amount of GPU commands
• Lightweight method
  • Little overhead, but less accuracy; 3D games are less sensitive to frame rate degradation than video playback
  • GPU interrupt-based estimation: an interrupt is typically used for an application to manage buffer memory
  • Hypothesis: "A GPU interrupt rate is in proportion to a display rate"
13. GPU-accelerated Display (2/2)
• Linear relationship between display rates and GPU interrupt rates
[Charts: GPU interrupts/sec vs. FPS for Quake3 demo1/demo2/demo4 at 320x240, 640x480, and 1024x768 on Intel GMA 950 (Apple MacBook), Nvidia 6150 Go (HP Pavilion tablet), and PowerVR (Samsung Galaxy S)]
• A GPU interrupt rate can be used to estimate a display rate without additional overheads
• An exponentially weighted moving average (EWMA) is used to reduce fluctuation
  • EWMA_t = (1 − w) × EWMA_{t−1} + w × current value
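The EWMA recurrence on this slide is straightforward to implement; a minimal sketch, where the weight w is a free parameter (the paper's chosen value is not stated on the slide):

```python
def ewma(samples, w, initial=0.0):
    """Smooth a stream of measurements with the slide's recurrence:
    EWMA_t = (1 - w) * EWMA_{t-1} + w * current value.
    Returns the smoothed value after each sample."""
    value = initial
    smoothed = []
    for s in samples:
        value = (1 - w) * value + w * s
        smoothed.append(value)
    return smoothed
```

A smaller w damps spikes in the interrupt-derived frame rate more aggressively, at the cost of reacting more slowly to genuine rate changes.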
14. Multimedia Manager
• A feedback-driven CPU allocator
• Base assumption: "Additional CPU share (or higher priority) improves a display rate"
• Desired frame rate (DFR)
  • A currently achievable display rate, multiplied by a tolerable ratio (0.8)
• Allocation policy:
  IF current FPS < previous FPS AND current FPS < DFR THEN
      IF in initial phase THEN
          Increase CPU share (exponential increase)
      ELSE IF no FPS improvement after CPU share increases (3 times) THEN
          Decrease CPU share by half
      ELSE
          Increase CPU share (linear increase)
• Exceptional cases: 1) no relationship between CPU and FPS, 2) FPS saturated below DFR, 3) local CPU contention in a VM
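One step of the allocation policy above can be sketched as a pure function of the scheduler's state. The concrete constants (linear step of 5 shares, cap of 100) and the signature are illustrative assumptions, not values from the paper:

```python
def allocate(share, fps, prev_fps, dfr, stalled, initial_phase,
             linear_step=5, cap=100):
    """One iteration of the feedback loop: grow the VM's CPU share
    while FPS lags the desired frame rate (DFR), and back off by
    half once three consecutive increases bring no FPS improvement
    (CPU is then not the bottleneck, or FPS is saturated).
    Returns (new_share, new_stalled_count)."""
    if fps < prev_fps and fps < dfr:
        if stalled >= 3:
            return max(1, share // 2), 0      # give back wasted share
        if initial_phase:
            return min(cap, share * 2), stalled + 1       # exponential
        return min(cap, share + linear_step), stalled + 1  # linear
    return share, 0  # FPS acceptable or improving: hold steady
```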
15. Priority Boosting
• Responsive dispatching
• Problem
  • The hypervisor does not distinguish the types of events for priority boosting
  • A VM that will handle a multimedia event cannot preempt a currently running VM handling a normal event
• Higher priority for multimedia-related events (e.g., video, audio, one-shot timer)
  • MMBOOST (multimedia events) > IOBOOST (other events) > normal priority (based on remaining CPU shares)
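The three-level boost ordering can be sketched as a simple dispatch rule; the names follow the slide, while the numeric encoding and tie-break on remaining credit are illustrative assumptions:

```python
from enum import IntEnum

class Boost(IntEnum):
    # Larger value = dispatched first (encoding is illustrative).
    NORMAL = 0    # ordinary priority, based on remaining CPU shares
    IOBOOST = 1   # boost for non-multimedia I/O events
    MMBOOST = 2   # boost for multimedia events (video, audio, timer)

def pick_next(vcpus):
    """Pick the runnable vCPU in the highest boost class; within a
    class, prefer the one with the most remaining CPU credit."""
    return max(vcpus, key=lambda v: (v["boost"], v["credit"]))
```

Under this rule a VM woken by a multimedia event preempts a VM handling an ordinary I/O event, which is exactly the inversion the slide's problem statement calls for.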
16. Evaluation
• Experimental environment
  • Intel MacBook with Intel GMA 950
  • Xen 3.4.0 with Ubuntu 8.04
  • Implementation based on the Xen Credit scheduler
• Two-VM scenario
  • One with direct I/O + one with indirect (hosted) I/O
  • Presenting the case of direct I/O in this talk; see the paper for the details of the indirect I/O case
18. Estimation Overhead
• CPU overhead caused by page faults during video playback
  • 0.3~1% with sampling; less than 5% when tracking all pages

  Overhead                     All pages   1/8 pages   1/32 pages   1/128 pages
  Low resolution (640x354)     4.95%       1.10%       0.54%        0.58%
  High resolution (1280x720)   3.91%       1.04%       0.69%        0.33%
20. Performance Improvement
• Frame rates close to the maximum achievable, with the multimedia VM (video playback or 3D game) competing against another VM
[Charts: average FPS of video playback (720p) on VLC media player and of Quake III Arena (demo1) under competing workloads in another VM, comparing the stock Credit scheduler against the Credit scheduler w/ multimedia support]
21. Limitations & Discussion
• Network-streamed multimedia
  • Additional preemption support required for multimedia-related network packets
• Multiple multimedia workloads in a VM
  • The multimedia manager algorithm should be refined to satisfy the QoS of mixed multimedia workloads in the same VM
• Adaptive management for SMP VMs
  • Adaptive vCPU allocation based on hosted multimedia workloads
22. Conclusions
• Demands for a multimedia-aware hypervisor
  • Multimedia is increasingly dominant in virtualized systems
• "Multimedia-friendly hypervisor scheduler"
  • Transparent and lightweight multimedia support on client-side virtualization
• Future directions
  • Multimedia for server-side VDI
  • Multicore extension for SMP VMs
  • Considerations for network-streamed multimedia
26. Task-based Display Rate Management
• Different types of frame rate accounting
  • For Unix-like OSes, a display rate is accounted to the previously scheduled task when a framebuffer write is observed (this heuristic works well because a display interface is interactive and is highly likely to be scheduled promptly on requests)
[Chart: real vs. estimated FPS over 85 seconds for video playback (640x354) plus a directory listing on the screen (ls -alR /), with a CPU-bound VM; error rate = 1.78%]
27. Local CPU Contention in a VM
• Increasing the number of CPU-bound tasks alongside a video playback application (native Linux vs. virtualized Linux w/ our scheme)
  • Our scheme provides sufficient CPU to the VM (IDD) that hosts a multimedia workload even as local CPU contention increases
  • No starvation of another CPU-bound VM (guest)