1) Cooperative VM migration allows live migration of VMs with VMM-bypass I/O devices like InfiniBand adapters.
2) SymVirt enables coordination between the guest OS and VMM to safely detach and reattach devices during migration.
3) Experiments show SymVirt enables fault-tolerant live migration with minimal overhead for HPC workloads on an InfiniBand cluster. Postcopy migration further reduces downtime during migration.
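The detach–migrate–reattach handshake summarized above can be sketched as a small state machine. All names below are illustrative, not the actual SymVirt API:

```python
# Hypothetical sketch of a SymVirt-style coordination sequence: the guest
# quiesces and detaches VMM-bypass devices, the VMM migrates the now
# self-contained VM, and the guest reattaches on the destination host.

class CooperativeMigration:
    def __init__(self, devices):
        self.devices = list(devices)    # VMM-bypass devices (e.g. InfiniBand HCAs)
        self.attached = set(devices)
        self.log = []

    def detach_all(self):
        # Guest-side step: tear down device state the VMM cannot see.
        for dev in self.devices:
            self.attached.discard(dev)
            self.log.append(("detach", dev))

    def migrate(self):
        # VMM-side step: only legal once no bypass device is attached.
        assert not self.attached, "cannot migrate with bypass devices attached"
        self.log.append(("migrate", None))

    def reattach_all(self):
        # Guest-side step on the destination host.
        for dev in self.devices:
            self.attached.add(dev)
            self.log.append(("reattach", dev))

m = CooperativeMigration(["ib0"])
m.detach_all(); m.migrate(); m.reattach_all()
print([op for op, _ in m.log])   # detach, then migrate, then reattach
```

The ordering constraint (no migration while a bypass device is attached) is the crux of the coordination: the VMM alone cannot save or restore device state it has no visibility into.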
Toward a practical “HPC Cloud”: Performance tuning of a virtualized HPC cluster (Ryousei Takano)
1) Performance tuning methods for HPC Cloud include PCI passthrough, NUMA affinity, and reducing VMM noise to improve performance and close the gap with bare metal machines.
2) Evaluation of MPI and HPC applications on a 16-node cluster showed PCI passthrough improved MPI bandwidth close to bare metal, and NUMA affinity improved performance by up to 2%.
3) Parallel efficiency of coarse-grained applications was comparable to bare metal, but fine-grained applications saw up to 22% degradation due to communication overhead and virtualization.
This document evaluates the performance of a virtualized HPC cluster using the HPC Challenge benchmark suite. It investigates three performance tuning techniques: PCI passthrough to bypass virtualization overhead for the network interface card, NUMA affinity to improve memory access performance, and reducing "VMM noise" like unnecessary services on the host OS. The results show these techniques can improve performance of the virtualized cluster to be close to that of a non-virtualized or "bare metal" system, realizing a more practical "true HPC Cloud."
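Of the three techniques, NUMA affinity is the easiest to illustrate: keeping a vCPU thread or MPI rank on the cores of a single NUMA node keeps its memory accesses local. A minimal Python sketch, assuming a contiguous cores-per-node numbering (a real setup would read the topology from /sys/devices/system/node):

```python
# Minimal NUMA affinity sketch. The node-to-CPU mapping is assumed
# (contiguous cores per node); real topologies should be read from sysfs.
import os

def cpus_of_node(node, cores_per_node):
    """Return the CPU ids belonging to a NUMA node, assuming a
    contiguous core numbering scheme."""
    start = node * cores_per_node
    return set(range(start, start + cores_per_node))

def pin_to_node(node, cores_per_node):
    """Pin the current process (e.g. a vCPU thread or MPI rank) to the
    cores of one NUMA node so its memory allocations stay local."""
    cpus = cpus_of_node(node, cores_per_node)
    if hasattr(os, "sched_setaffinity"):      # Linux only
        os.sched_setaffinity(0, cpus)
    return cpus

print(cpus_of_node(1, 8))   # the CPU ids 8..15 belong to node 1
```

The same mapping is what a VMM-level tuning would express through vCPU pinning, so the guest's view of locality matches the host's.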
Excessive interrupts can hurt I/O scalability in Xen. The proposals discuss software interrupt throttling and interrupt-less NAPI to reduce interrupt overhead. They also discuss exposing NUMA information to Xen to improve host I/O NUMA awareness and enabling guest I/O NUMA awareness by constructing _PXM methods and extending device assignment policies.
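The interrupt-throttling idea can be shown with a toy model: notifications arriving within a minimum interval are coalesced into the previous one, trading a little latency for far fewer interrupts. Numbers and names below are illustrative, not the proposed Xen mechanism:

```python
# Toy model of software interrupt throttling: raw interrupt arrivals
# are coalesced so the guest is notified at most once per interval.

def deliver(interrupt_times, min_interval):
    """Return the times at which the guest is actually notified, given
    raw interrupt arrival times and a throttling interval."""
    delivered = []
    last = float("-inf")
    for t in interrupt_times:
        if t - last >= min_interval:
            delivered.append(t)        # notify now
            last = t
        # else: coalesced into the previous notification
    return delivered

# 10 interrupts 0.1s apart, throttled to one notification per 0.5s:
times = [round(i * 0.1, 1) for i in range(10)]
print(deliver(times, 0.5))   # far fewer notifications than interrupts
```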
This document discusses moving backend drivers from the Dom0 domain to a separate HVM driver domain in Xen. Testing showed the HVM driver domain provided better network performance than the PV backend domain, with lower CPU utilization. Issues were discussed around booting the system without physical device drivers in Dom0, requiring the HVM driver domain to run devices and provide networking/storage. Further analysis of EPT page flipping performance was suggested.
The need for immediate responsiveness of VMs in virtualized environments has been on the rise. Several services at SKT also require soft real-time support so that virtual machines can substitute for physical machines while achieving high utilization and adaptability. However, consolidating multiple OSes, together with irregular external events, may cause the hypervisor to infringe on a VM's promptness. As a solution to this problem, we are improving Xen's credit scheduler by introducing RT_PRIORITY, which guarantees that a VM runs at any given point in time as long as it has credits left to burn. This would increase quality of service and make a VM's behavior predictable in a consolidated environment. In addition, we extend our proposal to multi-core environments, and even to a large number of physical machines, by using live migration.
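The RT_PRIORITY idea described above can be sketched as a toy scheduler model: a real-time vCPU preempts others whenever it still has credits to burn. The policy details below are illustrative, not the actual patch:

```python
# Toy model of a credit scheduler with an RT_PRIORITY class: an RT vCPU
# with credits left always wins the pick; once its credits are exhausted
# it falls back into the ordinary credit-based ordering.

class VCPU:
    def __init__(self, name, credits, rt=False):
        self.name, self.credits, self.rt = name, credits, rt

def pick_next(runqueue):
    """An RT_PRIORITY vCPU with credits remaining always runs;
    otherwise pick the vCPU with the most credits."""
    rt = [v for v in runqueue if v.rt and v.credits > 0]
    if rt:
        return rt[0]
    return max(runqueue, key=lambda v: v.credits)

def run_slice(runqueue, cost=10):
    v = pick_next(runqueue)
    v.credits -= cost          # burn credits for the time slice
    return v.name

q = [VCPU("dom0", 100), VCPU("rt-vm", 20, rt=True)]
order = [run_slice(q) for _ in range(4)]
print(order)   # the RT vCPU runs until its credits run out
```

This captures the stated guarantee: the VM's behavior is predictable while credits remain, yet it cannot starve the system indefinitely because the credit pool still bounds it.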
IBM i client partitions concepts and implementation (COMMON Europe)
This document discusses IBM i client partitions hosted by an IBM i host partition. It provides an overview of iVirtualization and new enhancements, and covers HMC-based setup, considerations, and customer examples. The agenda includes an overview of iVirtualization, new enhancements, HMC-based setup, things to consider, and customer examples. Installation of IBM i hosting clients on Power systems can begin with the HMC-based quick install guide. Virtual versus physical hardware resources and an overview of IBM i virtual client partitions are also reviewed.
PowerVM with IBM i and live partition mobility (COMMON Europe)
IBM Power Systems provide virtualization capabilities through PowerVM. PowerVM allows customers to run multiple logical partitions (LPARs) on a single physical server. This improves resource utilization and reduces costs compared to a traditional one partition per server model. Key PowerVM technologies include the hypervisor, dynamic logical partitioning, shared processor pools, virtual I/O server, and live partition mobility. The virtual I/O server (VIOS) allows client partitions like IBM i to leverage virtualized storage, network, and other I/O resources rather than each requiring physical adapters.
Linux Foundation Collaboration Summit 13: 10 years of Xen and Beyond (The Linux Foundation)
In 2013, the Xen hypervisor will be 10 years old: when Xen was designed, we anticipated a world that is now known as cloud computing. Today, Xen powers the largest clouds in production and is the basis for several commercial virtualization products. In this talk we will give an overview of Xen and related projects, cover hot developments in the Xen community, and outline what comes next.
The talk is intended for users and developers who are familiar with virtualization: no deep knowledge is required. We will start with an architectural overview and cover topics such as Xen and Linux, how to secure your cloud using disaggregation, SELinux and XSM/FLASK, the evolution of paravirtualization, Xen on ARM, and common challenges for open source hypervisors. We will explore the potential of Open Mirage for testing hypervisors. The talk will conclude with an outlook on the future of Xen.
Learn about IBM PowerVM Best Practices. This IBM Redbooks publication provides best practices for planning, installing, maintaining, and monitoring the IBM PowerVM Enterprise Edition virtualization features on IBM POWER7 processor technology-based servers.
For more information on Power Systems, visit http://ibm.co/Lx6hfc.
This document provides an overview and agenda for the Common Europe Conference in Vienna, June 2012. It discusses the Virtual Partition Manager (VPM) tool for managing partitions without an external management console. The VPM now supports creation and management of IBM i client partitions. It also covers where to start with installing IBM i hosting clients on Power systems, both with VPM-based and HMC-based setups, and considerations for virtual versus physical hardware resources.
Learn about IBM PowerVM Virtualization Introduction and Configuration. PowerVM is a combination of hardware, firmware, and software that provides CPU, network, and disk virtualization. This publication is also designed to be an introduction guide for system administrators, providing instructions for tasks like configuration and creation of partitions and resources on the HMC, installation and configuration of the Virtual I/O Server, and creation and installation of virtualized partitions. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
Xen has been very successful on servers, and yet there are substantial areas where Xen can evolve further. In this talk Jun will discuss a compelling area where the Xen technologies can be applied to -- Mobile virtualization. Using Android as an example, the talk will explore two types of usage models, 1) Android as a guest, 2) Android as the host, showing the benefits of using the Xen technologies.
This document proposes a method for link virtualization on the Xen virtualization platform using Single Root I/O Virtualization (SR-IOV). It discusses using SR-IOV to minimize overhead by performing encapsulation/decapsulation and packet filtering in hardware. It also describes using MAC-in-UDP tunneling with a virtual network ID to isolate networks and a vARP protocol to map between virtual and physical MAC addresses. The document evaluates the proposed method's ability to guarantee bandwidth isolation and provides performance results for both weight-based and bandwidth-based bandwidth control approaches.
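The MAC-in-UDP encapsulation can be sketched as follows. The 4-byte virtual-network-ID header layout below is an assumption for illustration, not the exact wire format from the proposal (where this work would be done in SR-IOV hardware rather than software):

```python
# Sketch of MAC-in-UDP encapsulation: the original Ethernet frame is
# prefixed with a 32-bit virtual network ID so the receiver can
# demultiplex isolated tenant networks. Header layout is illustrative.
import struct

def encapsulate(vnet_id, frame):
    """Prefix an Ethernet frame with a big-endian 32-bit virtual network
    ID; the result would be carried as a UDP payload."""
    return struct.pack("!I", vnet_id) + frame

def decapsulate(payload):
    """Split a received UDP payload back into (vnet_id, frame)."""
    (vnet_id,) = struct.unpack("!I", payload[:4])
    return vnet_id, payload[4:]

# A minimal broadcast Ethernet frame: dst MAC, src MAC, EtherType, body.
frame = bytes.fromhex("ffffffffffff") + bytes(6) + b"\x08\x00payload"
vid, out = decapsulate(encapsulate(42, frame))
assert (vid, out) == (42, frame)
```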
This document discusses the challenges of graphics virtualization. It provides background on native device initialization, QEMU I/O virtualization, and PCI device pass-through. It then covers graphics pass-through for discrete and integrated graphics, including the current status and future work, such as supporting dual graphics devices and improving driver validation.
Kemari is a virtual machine synchronization technique that allows fault tolerance by keeping a primary and secondary VM identical. It uses DomT, a para-virtualized domain, to efficiently synchronize state between VMs by tapping event channels and only transferring updated memory pages. Evaluation shows the secondary VM can continue transparently and with acceptable performance during network, storage and file I/O workloads when the primary hardware fails.
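The only-transfer-dirty-pages idea can be illustrated with a toy memory model; the dirty-set tracking below shows the concept, not Kemari's implementation:

```python
# Toy version of the Kemari idea: keep a secondary copy of VM memory
# consistent by shipping only the pages dirtied since the last
# synchronization point (e.g. taken at an event-channel tap, before an
# externally visible I/O completes).

class SyncedMemory:
    def __init__(self, pages):
        self.primary = dict(pages)
        self.secondary = dict(pages)
        self.dirty = set()

    def write(self, page, data):
        self.primary[page] = data
        self.dirty.add(page)          # track the page, do not ship yet

    def synchronize(self):
        """Flush only dirty pages to the secondary; returns how many
        pages were transferred."""
        shipped = len(self.dirty)
        for page in self.dirty:
            self.secondary[page] = self.primary[page]
        self.dirty.clear()
        return shipped

mem = SyncedMemory({0: b"a", 1: b"b", 2: b"c"})
mem.write(1, b"B")
print(mem.synchronize())   # only the one dirtied page moves, not the VM
```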
This document discusses evolving configuration tools for Single Root I/O Virtualization (SR-IOV) networking. It describes Mitch Williams' work at Intel to address pain points with SR-IOV, such as randomly assigned MAC addresses. The solution involved kernel and driver changes to allow setting MAC addresses and VLAN tags for each virtual function from the command line. Future needs discussed include 10Gb support, distro updates, migration support, and addressing communication between VMs with emulated and direct-assigned networking.
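The resulting command-line interface is the familiar iproute2 `ip link set ... vf N mac/vlan ...` form. A small helper that just assembles those command lines (actually running them requires root and an SR-IOV-capable NIC):

```python
# Build per-VF configuration commands in the iproute2 syntax that this
# work introduced. The helper only constructs strings; it does not run them.

def vf_config_cmds(dev, vf, mac=None, vlan=None):
    """Return the 'ip link set' commands to assign a MAC address and/or
    VLAN tag to virtual function `vf` of physical device `dev`."""
    cmds = []
    if mac is not None:
        cmds.append(f"ip link set dev {dev} vf {vf} mac {mac}")
    if vlan is not None:
        cmds.append(f"ip link set dev {dev} vf {vf} vlan {vlan}")
    return cmds

for cmd in vf_config_cmds("eth2", 0, mac="52:54:00:12:34:56", vlan=100):
    print(cmd)
```

Fixing the MAC address this way is what removes the "randomly assigned MAC" pain point: the VF keeps a stable identity across guest reboots.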
Sang-bum Suh will give a talk on the current status and future direction of Xen ARM, the first ARM virtualization software based on the Xen architecture.
The document discusses I/O latency issues in Xen-ARM virtualization. It finds that the credit scheduler does not adequately support time-sensitive applications due to the hierarchical scheduling nature and split driver model. Measurements show large worst-case latencies throughout the interrupt path, including preemption, scheduling, and intra-domain latencies. Experiments reveal that not-boosted virtual CPUs and multi-boost situations negatively impact latency. Solutions are needed to minimize I/O latency for mobile and real-time environments in Xen-ARM.
This presentation gives a quick history of Hyper-V and discusses the architecture of the current release. It then goes into detail on Hyper-V R2, i.e. the build included in Hyper-V Server 2008 R2 and Windows Server 2008 R2, covering Live Migration, Cluster Shared Volumes, Virtual Machine Queue, SLAT, Core Parking, and Native VHD.
Presentation: PowerVM virtualization without limits (solarisyougood)
This document discusses IBM PowerVM virtualization capabilities for IBM Power Systems. PowerVM allows for virtualization of workloads through logical partitions (LPARs) and virtual machines (VMs). It provides capabilities like rapid provisioning, scalability, recoverability, and workload consolidation to improve efficiency and reduce costs. PowerVM editions differ in features available like the number of concurrent VMs, types of virtual I/O supported, and advanced functions. The document also discusses the Virtual I/O Server (VIOS) appliance, virtual storage and networking options in PowerVM like virtual SCSI, NPIV, and shared Ethernet adapters.
z/VM version 6.2 introduced new capabilities for virtualization including Single System Image (SSI) clustering and Live Guest Relocation (LGR). SSI allows up to four z/VM systems to be managed as a single cluster, while LGR allows virtual machines to be moved between systems without disruption. Developing these features required addressing challenges like maintaining system architecture accuracy and flexibility across different hardware. Relocation domains were introduced to control where guests can move and the architecture features exposed. Overall, z/VM 6.2 significantly expanded the possibilities for virtualization on the IBM mainframe.
The document provides an introduction to using the Linux on System z terminal server over z/VM IUCV. It discusses how IUCV terminal connections let you comfortably manage Linux instances even in emergencies. It describes how to establish IUCV terminal sessions using programs like iucvconn and iucvtty, how to set up an IUCV terminal environment on target systems using devices like HVC terminals, and how the terminal shell ts-shell can help authorize users and audit terminal sessions.
The document discusses developing embedded systems with multicore processors and virtualization. It provides an overview of symmetric multiprocessing (SMP), asymmetric multiprocessing (AMP), and supervised AMP (sAMP). It then discusses virtualization capabilities and details, including virtualizing memory, cores, devices, interrupts, and enabling communication between boards. The presentation aims to explain how to design next generation embedded systems using these virtualization techniques.
Windows Server 8 Hyper-V networking (Aidan Finn, hypervnu)
This document discusses new networking features in Windows Server 8 Hyper-V including built-in NIC teaming, SMB 2.2 for storage access over the network, network virtualization to isolate tenant networks, and security features like port ACLs and private VLANs. It also covers performance optimizations for network traffic like dynamic VMQ, SR-IOV, and new Quality of Service capabilities for prioritizing applications in a multi-tenant hosting environment.
Traditionally, Linux has run on Xen either as a pure PV guest or as a virtualization-unaware guest in an HVM domain. Recently, under the name "PV on HVM", a series of works has been done to make Linux aware that it is running on Xen and to enable as many PV interfaces as possible even when running in an HVM container. After enabling the basic PV network and disk drivers, some other more interesting optimizations were implemented: in particular, remapping legacy interrupts and MSIs onto event channels. This talk will explain the idea behind the feature, the reason why avoiding interactions with the local APIC is a good idea, and some implementation details.
This is the deck that I used at the January 2012 Hyper-V.nu event in Amsterdam, Netherlands. It focuses on the Build announced details on Windows Server 8 Hyper-V networking.
This document provides a history and overview of Xen virtualization technology. It discusses how Xen originated from university research in 1999 and was released as open source in 2004. It gained widespread adoption by 2005. The document outlines Xen's goals of being the standard open source hypervisor and maintaining performance, stability, and security. It discusses the benefits of virtualization for server consolidation, manageability, deployment, and high availability. Finally, it covers topics like paravirtualization, hardware virtualization, network and device virtualization, security, and future directions like client and mobile virtualization and cloud computing.
This document summarizes a presentation about business continuity solutions hosted by i//:squared. i//:squared provides end-to-end ICT managed services, business continuity services, and business strategic advice. The presentation also covered Veeam software, which develops products for virtual infrastructure management and data protection. Veeam solutions include backup and replication software, as well as monitoring and reporting tools. Finally, the presentation provided an overview of EMC's VNX and VNXe unified storage systems, which include various models with different maximum drive capacities and configurations.
This document summarizes a presentation given at SC11 about monitoring power usage of a Google data center using sensors connected to an embedded Linux board. Data was sent to Google App Engine (GAE) using a REST API and retrieved for display. The system monitored up to 32 power circuits with 1 second resolution and was demonstrated at the NICT booth.
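The reporting path can be sketched as building one JSON report per sampling interval and POSTing it to the REST endpoint. The payload field names and the 32-circuit limit check below are assumptions for illustration:

```python
# Sketch of the sensor-reporting payload: one report per 1-second sample,
# covering up to 32 monitored circuits. Field names are assumed, not the
# actual GAE API used in the demo.
import json

def make_report(timestamp, watts_by_circuit):
    """Build one JSON report; watts_by_circuit maps circuit id -> watts."""
    assert len(watts_by_circuit) <= 32, "monitor supports up to 32 circuits"
    return json.dumps({
        "ts": timestamp,
        "readings": [{"circuit": c, "watts": w}
                     for c, w in sorted(watts_by_circuit.items())],
    })

report = make_report(1321027200, {0: 412.5, 1: 398.0})
# urllib.request could then POST `report` to the collection endpoint.
print(json.loads(report)["readings"][0]["watts"])
```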
This document discusses setting up Plan 9 from Bell Labs as a virtual machine on Mac OS X 10.6.2 using VMware Fusion 3.0.2. It provides steps for configuring the filesystem, installing Plan 9 in a virtual disk, and basic commands for configuring networking and using the Acme text editor within the Plan 9 virtual machine.
Hardware support for virtualization originated in the 1970s, with the goal of running multiple virtual machines on a single physical machine. A key requirement was equivalence: programs should execute in a virtual environment just as they would when running natively. The x86 architecture posed challenges to virtualization due to its sensitive instructions. Intel Virtualization Technology (VT-x) added hardware support for virtualization on x86 by introducing a new CPU operation mode called VMX non-root, along with transitions between it and VMX root mode. This reduced the need for software emulation of sensitive instructions and improved virtualization performance.
The document discusses the history and usage of virtualization technology, provides an overview of CPU, memory, and I/O virtualization, compares the Xen and KVM virtualization architectures, and describes some Intel work to support virtualization in OpenStack including the Open Attestation service.
Windows Server 2012 includes several new and improved networking features for Hyper-V. These features help improve performance and scalability by offloading more processing to the network interface card. New features include improved Receive Side Scaling, Receive Segment Coalescing, Dynamic Virtual Machine Queuing, Single Root I/O Virtualization, and NIC teaming. These features address challenges around availability, reliability, security and reducing complexity for virtualized workloads.
HPC Cloud: Clouds on supercomputers for HPCRyousei Takano
- HPC Cloud is a promising platform that can provide high performance, energy efficiency, scalability, and usability for HPC workloads. It utilizes technologies like VMM-bypass I/O, hybrid live migration, and virtual cluster migration to minimize performance overhead.
- The AIST has integrated these technologies into their HPC Cloud OS and Apache CloudStack to provide bare-metal-comparable I/O performance within a cloud environment. This allows HPC workloads and applications to efficiently utilize cloud infrastructures.
- The HPC Cloud federation concept allows VM images to be easily shared between different cloud systems. This achieves large-scale utilization of computing resources by leveraging supercomputers across
BitVisor is a security-focused virtual machine monitor (VMM) developed in Japan with the goals of encrypting storage and networks and using smart cards for authentication and key management. It uses a para-virtualization approach where most device I/O is passed through directly to the guest operating system, unlike Xen which uses full virtualization and device emulation. This makes BitVisor's VMM smaller and lower overhead than Xen. Experimental results showed BitVisor running Windows and Linux guests with encryption of storage and networking.
This document provides an overview and summary of key concepts around virtualization that will be covered in more depth at a technical deep dive session, including:
- Virtualization capabilities for desktops/laptops and servers including workstation virtualization and server consolidation.
- How virtual machines work and the overhead associated with virtualization.
- Properties of virtualization like partitioning, isolation, and encapsulation.
- Benefits of server virtualization like consolidation, simpler management, and automated resource pooling.
- Comparison of "hosted" and vSphere virtualization architectures.
- Technologies used in virtualization like binary translation, hardware assistance from Intel VT/AMD-V.
- Ability to virtualize CPU intensive applications with
This document discusses different methods for virtualizing I/O in virtual machines. It covers virtual I/O approaches like virtio, PCI passthrough, and SR-IOV. It also explains the role of the VMM/hypervisor in managing I/O between VMs and physical devices using techniques like VT-d, Open vSwitch, and single root I/O virtualization. Finally, it discusses emerging standards for virtual switching like virtual Ethernet bridging.
Security Best Practices For Hyper V And Server Virtualizationrsnarayanan
The document summarizes information about Hyper-V virtualization. It provides an overview of Hyper-V architecture, including that the hypervisor partitions the hardware and manages guest partitions through the virtualization stack. It also discusses Hyper-V security, noting that guests are isolated from each other and the root to prevent attacks, and that delegated administration and role-based access control can be used to manage virtual machine access.
virtualization tutorial at ACM bangalore Compute 2009ACMBangalore
This document summarizes a tutorial on the hardware revolution in server virtualization. It begins with an overview of server virtualization technologies including VMM architectures and the criteria for a processor to be virtualizable. It then discusses the challenges of virtualizing x86 processors due to their architecture. The document outlines software techniques like binary translation and para-virtualization used for CPU, memory, and I/O virtualization. It also reviews hardware techniques enabled by technologies like VT-x, EPT, and SR-IOV. The summary concludes with a brief discussion of future trends in manageability and security relating to server virtualization.
At this year's FOSE 2011 conference, Government Computer News (GCN) awarded Phantom Virtual Tap the Best of FOSE / Best Networking Product for Government award. The Tap delivers unprecedented total visibility into formerly murky traffic passing between VMs on hypervisor stacks. With its ability to tap traffic between virtual servers (VMs) on a physical server, the Phantom Virtual Tap heralds a new era of network compliance, management, and security for virtualized data centers.
Presented by Net Optics' Senior Solutions Engineer, David Pham, this webinar will briefly introduce you to the Phantom Virtual Tap as well as provide insight into some of the security and compliance challenges created by data center virtualiztion. Additionally:
Advantages of gaining visibility into your virtualized network infrastructure
How to eliminate visibility challenges in the virtual network
Provide attendees the opportunity to learn more about this new technology
Nova for Physicalization and Virtualization compute modelsopenstackindia
This document discusses Nova, OpenStack's compute service, and provides an overview of:
1) Different compute models Nova supports including physical servers, virtualized servers using technologies like ESX, Hyper-V, KVM, Xen server, and container-based virtualization using LXC and OpenVZ.
2) Nova uses a driver-based approach to support different hypervisor technologies with drivers for KVM, ESX, Hyper-V, and others.
3) An example multi-hypervisor OpenStack cloud is shown supporting images, controllers, services, and compute hosts running Hyper-V, KVM, and ESXi.
4) Key features like physical bare-metal provisioning are supported across different
Presentation from physical to virtual to cloud emcxKinAnx
The document discusses three paradigm shifts in information technology: 1) From physical to virtual computing as virtualization becomes mainstream, 2) The network becoming the computer through network-centric architectures, 3) Storage evolving from a server-centric to a virtual, flexible model. These shifts are creating an industrialized "cloud computing" platform for intelligent, on-demand delivery of IT services.
This document discusses hardware-assisted virtualization and related security issues. It provides a history of virtualization technologies from 1960 to present day, including full virtualization, para-virtualization, and hardware-assisted virtualization using AMD-V, VT-x, and VT-d. It also summarizes how a VMM is programmed using VMX instructions to initialize and handle VM exits, and explains attacks that have targeted various virtualization methods like binary translation, para-virtualization, and hardware-assisted virtualization.
Virtualization introduces new security risks but also opportunities to enhance security. Key risks include attacks on the hypervisor, virtual environments from within, and virtual machine management interfaces. However, virtualization also allows security software to have deeper control of physical resources like memory and CPU outside of the OS. Technologies like VMsafe aim to provide dedicated security virtual machines that filter network traffic and protect memory and processor operations to address these risks. While promising increased security, VMsafe CPU/Memory also faces performance challenges from VM context switching overhead.
Bryan Nairn discusses security considerations for virtualization. He notes that over 40% of virtual machines will be less secure than physical machines by 2014. The document outlines common virtualization security myths and describes the hypervisor architecture. It discusses isolation between virtual machines and the hypervisor's security goals of protecting data confidentiality and integrity. The document also covers common attack vectors and provides potential solutions for securing the host system and virtual machines.
CSA Presentation 26th May Virtualization securityv2vivekbhat
Bryan Nairn discusses security considerations for virtualization. Virtual machines are increasingly common but over 40% will be less secure than physical servers by 2014. Key risks include compromised host machines which could then control VMs, and unpatched guest operating systems. Defenses include hardening host servers, protecting virtual machine files, isolating guest networks, and using access control lists to manage permissions for VMs. Securing the virtualization platform requires attention to both host and guest security.
Visão geral sobre Citrix XenServer 6 - Ferramentas e LicenciamentoLorscheider Santiago
This document provides an overview of Citrix XenServer, including:
- Why use XenServer over VMware, with XenServer having leadership in the market share and lower costs.
- An overview of XenServer's key features like virtual memory licensing, clusters and pools, live migration, snapshots, and high availability.
- A comparison of XenServer and VMware features around licensing, importing VMs, backup solutions, and more.
- Details on newer versions of XenServer that include integrated disaster recovery, provisioning services, and monitoring solutions.
Similar to Cooperative VM Migration for a virtualized HPC Cluster with VMM-bypass I/O devices (20)
1) The document explores a new concept called error permissive computing that improves computing capabilities and reduces power consumption by allowing and managing hardware errors through system software instead of eliminating errors through general purpose hardware error correction.
2) It describes several approaches for implementing error permissive computing including a software framework called BITFLEX that enables approximate computing, an FPGA-based memory emulator for evaluating new system software mechanisms, and techniques for sparse and topology-aware communication that can accelerate large-scale deep learning and reduce communication costs.
3) The goal is to take a holistic approach across hardware and software layers to perform lightweight error correction at the software level while eliminating general purpose error correction in hardware for improved efficiency.
Opportunities of ML-based data analytics in ABCIRyousei Takano
This document discusses opportunities for using machine learning-based data analytics on the ABCI supercomputer system. It summarizes:
1) An introduction to the ABCI system and how it is being used for AI research.
2) How sensor data from the ABCI system and job logs could be analyzed using machine learning to optimize data center operation and improve resource utilization and scheduling.
3) Two potential use cases - using workload prediction to enable more efficient cooling system control, and applying machine learning to better predict job execution times to improve scheduling.
ABCI: An Open Innovation Platform for Advancing AI Research and DeploymentRyousei Takano
AI Infrastructure for Everyone (Democratization AI) aims to build an AI infrastructure platform that is accessible to everyone from beginners to experts. The platform provides up to 512-node computing resources, ready-to-use software, datasets, and pre-trained models. It also offers services like an easy-to-use web-based IDE for beginners and an AI cloud with on-demand, reserved, and batch processing options. The goal is to accelerate AI research and promote social implementation of AI technologies.
The document discusses the performance of three SPEC CPU2006 benchmarks - 483.xalancbmk, 462.libquantum, and 471.omnetpp - under different last-level cache (LLC) configurations and when subjected to LLC cache interference from a background workload. Key findings include reduced performance for the benchmarks when run with a smaller LLC size or when interfered with by a LLC jammer workload, but maintained performance when QoS techniques were applied to isolate the benchmark workload in the LLC.
The document summarizes four presentations from the USENIX NSDI 2016 conference session on resource sharing:
1. "Ernest: Efficient Performance Prediction for Large-Scale Advanced Analytics" proposes a framework that uses results from small training jobs to efficiently predict performance of data analytics workloads in cloud environments and reduce the number of required training jobs.
2. "Cliffhanger: Scaling Performance Cliffs in Web Memory Caches" presents algorithms to dynamically allocate memory across queues in Memcached to smooth out performance cliffs and potentially save memory usage.
3. "FairRide: Near-Optimal, Fair Cache Sharing" introduces a caching policy that provides isolation guarantees, prevents strategic behavior, and
This document discusses optimizations for TCP/IP networking performance on multicore systems. It describes several inefficiencies in the Linux kernel TCP/IP stack related to shared resources between cores, broken data locality, and per-packet processing overhead. It then introduces mTCP, a user-level TCP/IP stack that addresses these issues through a thread model with pairwise threading, batch packet processing from I/O to applications, and a BSD-like socket API. mTCP achieves a 2.35x performance improvement over the kernel TCP/IP stack on a web server workload.
Flow-centric Computing - A Datacenter Architecture in the Post Moore EraRyousei Takano
1) The document proposes a new "flow-centric computing" data center architecture for the post-Moore era that focuses on data flows.
2) It involves disaggregating server components and reassembling them as "slices" consisting of task-specific processors and storage connected by an optical network to efficiently process data.
3) The authors expect optical networks to enable high-speed communication between processors, replacing general CPUs, and to potentially revolutionize how data is processed in future data centers.
A Look Inside Google’s Data Center NetworksRyousei Takano
1) Google has been developing their own data center network architectures using merchant silicon switches and centralized network control since 2005 to keep up with increasing bandwidth demands.
2) Their network designs have evolved from Firehose and Watchtower to the current Saturn and Jupiter networks, increasing port speeds from 1/10Gbps to 40/100Gbps and aggregate bandwidth from terabits to petabits per second.
3) Their network architectures employ Clos topologies with merchant silicon switches at the top-of-rack, aggregation, and spine layers and centralized control of traffic routing.
- Hardware such as DRAM and NAND flash are facing scaling challenges as density increases, which could impact performance and cost. New non-volatile memory (NVM) technologies may provide opportunities to address these challenges but require software and system architecture changes to realize their full potential. Key considerations include persistence, performance, and programming models.
AIST Super Green Cloud: lessons learned from the operation and the performanc...Ryousei Takano
This document discusses lessons learned from operating the AIST Super Green Cloud (ASGC), a fully virtualized high-performance computing (HPC) cloud system. It summarizes key findings from the first six months of operation, including performance evaluations of SR-IOV virtualization and HPC applications. It also outlines conclusions and future work, such as improving data movement efficiency across hybrid cloud environments.
The document summarizes the author's participation report at the IEEE CloudCom 2014 conference. Some key points include:
- The author attended sessions on virtualization and HPC on cloud.
- Presentations had a strong academic focus and many presenters were Asian.
- Eight papers on HPC on cloud covered topics like reliability, energy efficiency, performance metrics, and applications like Monte Carlo simulations.
Exploring the Performance Impact of Virtualization on an HPC CloudRyousei Takano
The document evaluates the performance impact of virtualization on high-performance computing (HPC) clouds. Experiments were conducted on the AIST Super Green Cloud, a 155-node HPC cluster. Benchmark results show that while PCI passthrough mitigates I/O overhead, virtualization still incurs performance penalties for MPI collectives as node counts increase. Application benchmarks demonstrate overhead is limited to around 5%. The study concludes HPC clouds are promising due to utilization improvements from virtualization, but further optimization of virtual machine placement and pass-through technologies could help reduce overhead.
From Rack scale computers to Warehouse scale computersRyousei Takano
This document discusses the transition from rack-scale computers to warehouse-scale computers through the disaggregation of technologies. It provides examples of rack-scale architectures like Open Compute Project and Intel Rack Scale Architecture. For warehouse-scale computers, it examines HP's The Machine project using application-specific cores, universal memory, and photonics fabric. It also outlines UC Berkeley's FireBox project utilizing 1 terabit/sec optical fibers, many-core systems-on-chip, and non-volatile memory modules connected via high-radix photonic switches.
高性能かつスケールアウト可能なHPCクラウド AIST Super Green CloudRyousei Takano
The document contains configuration instructions for creating a cluster in a cloud computing environment called myCluster. It specifies creating a frontend node and 16 compute nodes using specified templates, compute and disk offerings. It also defines the cluster name, zone, network, and SSH key to use. The cluster can then be started and later destroyed along with a configuration file.
Iris: Inter-cloud Resource Integration System for Elastic Cloud Data CenterRyousei Takano
The document describes Iris, an inter-cloud resource integration system that enables elastic cloud data centers. Iris uses nested virtualization technologies including nested KVM to construct a virtual infrastructure spanning multiple distributed data centers. It provides a new Hardware as a Service (HaaS) model for inter-cloud federation at the infrastructure provider level. The authors demonstrate Apache CloudStack can seamlessly manage resources across emulated inter-cloud environments using Iris.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Cooperative VM Migration for a virtualized HPC Cluster with VMM-bypass I/O devices
1. Cooperative VM Migration for a virtualized HPC Cluster with VMM-bypass I/O devices
Ryousei Takano, Hidemoto Nakada, Takahiro Hirofuchi, Yoshio Tanaka, and Tomohiro Kudoh
Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan
IEEE eScience 2012, Oct. 11, 2012, Chicago
2. Background
• HPC cloud is a promising e-Science platform.
– HPC users begin to take an interest in Cloud computing,
e.g., Amazon EC2 Cluster Compute Instances.
• Virtualization is a key technology.
– Pro: It makes migration of computing elements easy.
• VM migration is useful for achieving fault tolerance, server
consolidation, etc.
– Con: It introduces a large overhead, spoiling I/O
performance.
• VMM-bypass I/O technologies, e.g., PCI passthrough and
SR-IOV, can significantly mitigate the overhead.
VMM-bypass I/O makes it impossible to migrate a VM.
3. Contribution
• Goal:
– To realize VM migration and checkpoint/restart on a
virtualized cluster with VMM-bypass I/O devices.
• E.g., VM migration on an Infiniband cluster
• Contributions:
– We propose cooperative VM migration based on the
Symbiotic Virtualization (SymVirt) mechanism.
– We demonstrate reactive/proactive fault tolerant (FT)
systems.
– We show postcopy migration helps to reduce the
service downtime in the proactive FT system.
4. Agenda
• Background and Motivation
• SymVirt: Symbiotic Virtualization Mechanism
• Experiment
• Related Work
• Conclusion and Future Work
5. Motivating Observation
• Performance evaluation of HPC cloud
– (Para-)virtualized I/O incurs a large overhead.
– PCI passthrough significantly mitigates the overhead.
[Figure: Execution time (s) of the NAS Parallel Benchmarks 3.3.1, class C (BT, CG, EP, FT, LU), 64 processes, on BMM (IB), BMM (10GbE), KVM (IB), and KVM (virtio). KVM (IB) assigns the IB QDR HCA to the VM via a physical driver in the guest; KVM (virtio) uses a para-virtualized guest driver over a 10GbE NIC. BMM: Bare Metal Machine]
6. Para-virtualized and VMM-Bypass I/O Devices
[Figure: Three I/O virtualization approaches: a para-virtualized device (virtio_net), where a guest driver communicates with the physical driver in the VMM through a vSwitch; PCI passthrough, where the guest's physical driver accesses the NIC directly; and SR-IOV, where each VM drives a virtual function of a shared NIC with an embedded switch (VEB).]
Comparison: the para-virtualized device supports device sharing and VM migration but performs poorly; PCI passthrough performs well but supports neither device sharing nor VM migration; SR-IOV adds device sharing but still lacks VM migration. We address this issue: VM migration with VMM-bypass I/O.
7. Problem
VMM-bypass I/O technologies make VM migration
and checkpoint/restart impossible.
1. The VMM does not know when VMM-bypass I/O devices can be
safely detached.
• To migrate without losing in-flight data, packet transmission
to/from the VM must be stopped before detaching.
• For a VMM, it is hard to know the communication status of an
application inside the VM, especially when VMM-bypass I/O
devices are used.
2. VMM cannot migrate the state of VMM-bypass I/O
devices from the source to the destination.
• With InfiniBand: Local IDs (LIDs), Queue Pair Numbers (QPNs), etc.
8. Goal
[Figure: VMs on Cluster 1 (InfiniBand) migrate over Ethernet to Cluster 2 (InfiniBand); the InfiniBand devices are detached before migration and re-attached afterward.]
We need a mechanism combining VM migration with PCI device
hot-plugging.
10. Cooperative VM Migration
[Figure: Existing VM migration (black-box approach; pro: portability): the VMM alone performs global coordination and device setup before migration. Cooperative VM migration (gray-box approach; pro: performance): the guest OS and its application cooperate with the VMM on global coordination and device setup, enabling migration with VMM-bypass I/O.]
11. SymVirt: Symbiotic Virtualization
• We focus on MPI programs.
• We design and implement a symbiotic virtualization (SymVirt)
mechanism.
– It is a cross-layer mechanism between a VMM and an MPI
runtime system.
[Figure: SymVirt mediates the cooperation between the guest OS (application, global coordination, device setup) and the VMM (VMM-bypass I/O, migration).]
13. SymVirt wait and signal calls
• SymVirt provides a simple Guest OS-to-VMM
communication mechanism.
• When the SymVirt coordinator issues a SymVirt wait call, the guest
OS blocks until a SymVirt signal call is issued.
• In the meantime, the SymVirt agent controls the VM via a VMM
monitor interface.
[Figure: Migration timeline. The application confirms quiescence, the SymVirt coordinator issues a wait call (switching from guest OS mode to VMM mode), the SymVirt controller/agent performs detach, migration, and re-attach, confirms link-up, and issues a signal call to resume the guest.]
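The wait/signal handshake can be modeled in ordinary userspace code. Below is a minimal sketch using Python threads, where the "guest" blocks in a hypothetical `wait()` until the "VMM" thread finishes its steps and signals. In the real system the wait is a VMCALL hypercall and the signal is a QEMU/KVM monitor command; every name here is illustrative only.

```python
import threading

class SymVirtChannel:
    """Userspace model of the SymVirt guest-to-VMM handshake.

    In the real system the guest issues a VMCALL (wait) and the
    VMM-side agent issues the signal via the QEMU monitor; here
    both sides are plain threads sharing a condition variable.
    """
    def __init__(self):
        self._cond = threading.Condition()
        self._signaled = False
        self.log = []

    def wait(self):
        # Guest side: block until the VMM signals completion.
        with self._cond:
            self.log.append("guest: wait (enter VMM mode)")
            while not self._signaled:
                self._cond.wait()
            self.log.append("guest: resumed (back to guest mode)")

    def signal(self):
        # VMM side: wake the guest after the migration steps finish.
        with self._cond:
            self.log.append("vmm: signal")
            self._signaled = True
            self._cond.notify_all()

def vmm_side(chan):
    # Hypothetical VMM actions performed while the guest is blocked.
    for step in ("detach", "migrate", "re-attach"):
        chan.log.append(f"vmm: {step}")
    chan.signal()

chan = SymVirtChannel()
guest = threading.Thread(target=chan.wait)
guest.start()
vmm_side(chan)
guest.join()
print(chan.log)
```

The key property, as on the slide, is that the guest makes no progress between wait and signal, so the VMM can safely detach and re-attach devices in that window.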
14. SymVirt: Implementation
• We implemented SymVirt on top of QEMU/KVM
and the Open MPI system.
• User application and the MPI runtime system
can work without any modifications.
• QEMU/KVM is slightly modified for supporting
SymVirt wait and signal calls.
– A SymVirt wait call is implemented by the SymVirt coordinator
using the Intel VT-x VMCALL instruction.
– A SymVirt signal call is implemented as a new QEMU/KVM monitor
command, issued by the SymVirt agent.
15. SymVirt: Implementation (cont’d)
• The SymVirt coordinator relies heavily on the Open MPI
checkpoint/restart (C/R) framework.
– SymVirt's global coordination is the same as the coordination
protocol for MPI programs.
– SymVirt executes VM-level migration or C/R instead of
process-level C/R using the BLCR system.
– SymVirt does not need to handle the LIDs and QPNs that change
after a migration, because Open MPI's BTL modules are
re-constructed and connections are re-established in the
continue or restart phases.
BTL: Point-to-Point Byte Transfer Layer
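The coordination flow can be outlined as a simple sequence. The sketch below is a hypothetical Python outline (all function and step names are ours, not SymVirt's actual API): quiesce the MPI job via the C/R coordination protocol, block in the VMM while devices are detached, the VM is migrated, devices are re-attached, then let Open MPI rebuild its BTL connections.

```python
def cooperative_migration(quiesce, vmm_actions, reconnect):
    """Hypothetical outline of the SymVirt coordinator protocol.

    quiesce:     callable running Open MPI's C/R coordination,
                 which stops in-flight MPI traffic.
    vmm_actions: ordered VMM-side steps run while the guest waits.
    reconnect:   callable rebuilding BTL connections (LIDs and QPNs
                 change after migration, so Open MPI reconnects).
    Returns the ordered list of executed steps.
    """
    trace = []
    trace.append(quiesce())          # 1. drain MPI communication
    trace.append("symvirt_wait")     # 2. guest blocks via hypercall
    for step in vmm_actions:         # 3. VMM: detach, migrate, attach
        trace.append(step)
    trace.append("symvirt_signal")   # 4. VMM wakes the guest
    trace.append(reconnect())        # 5. MPI re-establishes channels
    return trace

trace = cooperative_migration(
    quiesce=lambda: "mpi_quiesce",
    vmm_actions=["pci_detach", "vm_migrate", "pci_reattach"],
    reconnect=lambda: "btl_reconnect",
)
print(trace)
```

The ordering matters: quiescence must complete before the hypercall, and reconnection can only start after the signal, which is exactly why the guest-side MPI runtime and the VMM must cooperate.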
17. SymPFT: Proactive FT system
• A VM-level fault tolerant (FT) system is a use
case of SymVirt.
[Figure: A cloud scheduler allocates VMs onto physical nodes; VM images reside on global storage.]
The user requests a virtualized cluster consisting of 4 nodes (16
CPUs).
18. SymPFT: Proactive FT system
• A VM-level fault tolerant (FT) system is a use
case of SymVirt.
• A VM is migrated from an “unhealthy” node to a “healthy” node
before the node crashes.
[Figure: Upon a failure prediction, the cloud scheduler re-allocates resources and the VM on the failing node is migrated to a healthy node; VM images reside on global storage.]
20. Experiment
• The overhead of SymPFT
– We used 8 VMs on an Infiniband cluster.
– We migrated a VM once during a benchmark
execution.
• Two benchmark programs written in MPI
– memtest: a simple memory intensive benchmark
– NAS Parallel Benchmarks (NPB) version 3.3.1
• Overhead reduction using postcopy migration
21. Experimental setting
We used a 16-node InfiniBand cluster, which is part of the
AIST Green Cloud.
Blade server: Dell PowerEdge M610
– CPU: Intel quad-core Xeon E5540/2.53 GHz ×2
– Chipset: Intel 5520
– Memory: 48 GB DDR3
– InfiniBand: Mellanox ConnectX (MT26428)
Blade switch:
– InfiniBand: Mellanox M3601Q (QDR, 16 ports)
Host machine environment:
– OS: Debian 7.0 (Linux kernel 3.2.18)
– QEMU/KVM: 1.1-rc3
– MPI: Open MPI 1.6
– OFED: 1.5.4.1
– Compiler: gcc/gfortran 4.4.6
VM environment:
– VCPU: 8
– Memory: 20 GB
Only 1 VM runs on each host, and an IB HCA is assigned to the VM
using PCI passthrough.
22. Result: memtest
• The migration time is dependent on the memory footprint.
– The migration throughput is less than 3 Gbps.
• Both hotplug and link-up times are approximately constant.
– The link-up time is not a negligible overhead (cf. Ethernet).
[Figure: Execution time (s) for memory footprints of 2, 4, 8, and 16 GB, broken into migration, hotplug, and link-up components. The migration component grows with the footprint (35.9, 38.7, 44.2, and 53.7 s), while the hotplug (about 11.3–14.6 s) and link-up (about 28.5 s) components remain roughly constant.]
This result is not included in our proceedings paper.
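The "less than 3 Gbps" figure can be sanity-checked with simple arithmetic. The sketch below is ours, using the slide's 16 GB footprint together with its roughly 53.7 s migration time.

```python
def migration_throughput_gbps(footprint_gb, migration_s):
    """Effective precopy throughput: bits moved / migration time."""
    return footprint_gb * 8 / migration_s

# A 16 GB guest migrated in about 53.7 s (slide value) gives an
# effective throughput of roughly 2.4 Gbps, consistent with the
# "less than 3 Gbps" observation.
print(round(migration_throughput_gbps(16, 53.7), 1))
```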
23. Result: NAS Parallel Benchmarks
[Figure: Execution time (s) of NPB BT, CG, FT, and LU for baseline, precopy, and postcopy runs. There is no overhead during normal operations; the precopy migration overhead is roughly proportional to the memory footprint (about +97 to +105 s for BT, CG, and LU, and +299 s for FT).]
Transferred memory size during VM migration: BT 4417 MB, CG
3394 MB, FT 15678 MB, LU 2348 MB.
24. Integration with postcopy migration
• In contrast to precopy migration, postcopy migration
transfers memory pages on demand after the execution
node is switched to the destination.
• Postcopy migration can hide the overhead of the hot-add and
link-up times by overlapping them with the migration.
• We used our postcopy migration implementation for
QEMU/KVM, Yabusame.
[Figure: SymPFT with precopy runs a) hot-del, b) migration, c) hot-add, and d) link-up strictly in sequence; SymPFT with postcopy overlaps b) with c) and d), mitigating the overhead.]
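The benefit of overlapping can be illustrated with a toy timeline model. This is our own simplification with made-up numbers shaped like the memtest results, not the paper's model: precopy serializes hot-del, migration, hot-add, and link-up, while postcopy pays only the longer of the page transfer and the hot-add plus link-up sequence.

```python
def precopy_overhead(hot_del, migrate, hot_add, link_up):
    # Precopy: all phases run strictly in sequence.
    return hot_del + migrate + hot_add + link_up

def postcopy_overhead(hot_del, migrate, hot_add, link_up):
    # Postcopy: execution switches to the destination first, then
    # page transfer runs concurrently with device hot-add and
    # link-up; the visible overhead is bounded by the longer of
    # the two overlapped activities.
    return hot_del + max(migrate, hot_add + link_up)

# Illustrative numbers only (seconds), roughly the shape of the
# memtest results: long migration, non-trivial hot-add and link-up.
args = dict(hot_del=1, migrate=45, hot_add=13, link_up=28)
print(precopy_overhead(**args), postcopy_overhead(**args))
```

Under this toy model the overlapped (postcopy) overhead is always no worse than the sequential (precopy) one, which matches the direction of the measured results.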
25. Result: Effect of postcopy migration
[Figure: Execution time (s) of NPB BT, CG, FT, and LU for baseline, precopy, and postcopy runs; postcopy reduces the migration overhead relative to precopy by roughly 13–53% depending on the benchmark. Transferred memory during migration: BT 4417 MB, CG 3394 MB, FT 15678 MB, LU 2348 MB.]
Postcopy migration can hide the overhead of hotplug and link-up by
overlapping them with migration.
26. Related Work
• Some VM-level reactive and proactive FT systems have
been proposed for HPC systems.
– E.g., VNsnap: distributed snapshots of VMs
• The coordination is executed by snooping the traffic of a software
switch outside the VMs.
– They do not support VMM-bypass I/O devices.
• Mercury: a self-virtualization technique
– An OS can turn virtualization on and off on demand.
– It lacks a coordination mechanism among distributed VMMs.
• SymCall: an upcall mechanism from a VMM to the guest
OS, using a nested VM Exit call
– SymVirt is a simple hypercall mechanism from a guest OS to the
VMM, assuming it works in cooperation with a cloud scheduler.
28. Conclusion
• We have proposed a cooperative VM migration
mechanism that enables us to migrate VMs with VMM-
bypass I/O devices, using a simple Guest OS-to-VMM
communication mechanism, called SymVirt.
• Using the proposed mechanism, we demonstrated a
proactive FT system in a virtualized Infiniband cluster.
• We also confirmed that postcopy migration helps to
reduce the downtime in the proactive FT system.
• SymVirt can be useful not only for fault tolerance but also for
load balancing and server consolidation.
29. Future Work
• Interconnect-transparent migration, called
“Ninja migration”
– We have submitted another conference paper.
• Overhead mitigation of SymVirt
– Very long link-up time problem
– Better integration with postcopy migration
• A generic communication layer supporting
cooperative VM migration
– It is independent of the MPI runtime system.
30. Thanks for your attention!
This work was partly supported by JSPS KAKENHI 24700040
and ARGO GRAPHICS, Inc.