vSphere 5 introduces a new licensing model based on vRAM entitlements rather than physical hardware limits. Each vSphere processor license comes with an entitlement of a certain amount of vRAM that can be pooled across all hosts managed by a vCenter. Powered on VMs count towards consumed vRAM, and the 12-month rolling average of daily consumed vRAM must be lower than the total pooled vRAM entitlement. This new model provides more flexibility than previous versions by removing core and physical RAM limits per host. Tools built into vCenter help track vRAM usage for compliance. The new licensing model applies to new vSphere 5 licenses, while prior versions and existing ELAs continue under their original terms.
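The compliance rule above can be sketched in a few lines: the 12-month rolling average of daily consumed vRAM must stay at or below the total pooled entitlement. This is an illustrative model only; function and field names are assumptions, not VMware's implementation.

```python
# Hypothetical sketch of the vSphere 5 vRAM compliance rule: the 12-month
# rolling average of daily consumed vRAM must not exceed the pooled
# entitlement summed across all processor licenses.

def pooled_entitlement(licenses):
    """Sum vRAM entitlements (GB) across all processor licenses in the pool."""
    return sum(licenses)

def is_compliant(daily_consumed_gb, licenses, window_days=365):
    """daily_consumed_gb: daily consumed-vRAM readings (GB), newest last."""
    window = daily_consumed_gb[-window_days:]
    rolling_avg = sum(window) / len(window)
    return rolling_avg <= pooled_entitlement(licenses)

# Example: three licenses at 32 GB vRAM each -> 96 GB pool.
# Short spikes above the pool are fine as long as the average stays under.
licenses = [32, 32, 32]
usage = [90] * 300 + [120] * 65
print(is_compliant(usage, licenses))   # average ~95.3 GB <= 96 GB
```

Note how the rolling average, rather than peak usage, is what matters: temporary bursts do not break compliance.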
VMware is introducing major upgrades to its entire cloud infrastructure stack, including vSphere 5, which features a new licensing model based on pooled vRAM entitlements rather than physical constraints. Each vSphere license provides a set amount of vRAM that can be pooled across all hosts managed by a vCenter. As long as the total vRAM configured in VMs does not exceed the pooled entitlement amount, additional licenses are not required. The new model provides more flexibility and aligns costs with actual virtual resource usage rather than physical hardware.
This document discusses VMware's new vSphere 5 licensing model. Key points include:
- vSphere 5 introduces a new pooled vRAM licensing model where each processor license contributes to an overall vRAM entitlement pool shared across all hosts/VMs rather than individual per-host entitlements.
- Compliance is determined by whether the 12-month rolling average of consumed vRAM stays below the total pooled vRAM entitlement across all licenses.
- vSphere 5 is packaged into various new editions with different features and price points, including a new vSphere Desktop edition for VDI workloads with unlimited vRAM.
- Existing customers will generally move to the new model upon upgrading to vSphere 5, while prior versions and existing ELAs continue under their original terms.
Excessive interrupts can hurt I/O scalability in Xen. The proposals discuss software interrupt throttling and interrupt-less NAPI to reduce interrupt overhead. They also discuss exposing NUMA information to Xen to improve host I/O NUMA awareness and enabling guest I/O NUMA awareness by constructing _PXM methods and extending device assignment policies.
This document discusses moving backend drivers from the Dom0 domain to a separate HVM driver domain in Xen. Testing showed the HVM driver domain provided better network performance than the PV backend domain, with lower CPU utilization. Issues were discussed around booting the system without physical device drivers in Dom0, requiring the HVM driver domain to run devices and provide networking/storage. Further analysis of EPT page flipping performance was suggested.
The document summarizes Brandon Williams' presentation on performance tuning for Cassandra. It discusses strategies for making writes faster such as using a separate IO device for the commit log. It also covers tuning options for memtables and compaction, noting that compaction can hurt read performance by causing IO contention and that reducing its priority can help.
The need for immediate responsiveness of VMs in virtualized environments is on the rise. Several services at SKT also require soft real-time support so that virtual machines can substitute for physical machines while achieving high utilization and adaptability. However, consolidating multiple OSes, together with irregular external events, can cause the hypervisor to infringe on a VM's promptness. To address this problem, we are improving Xen's credit scheduler by introducing RT_PRIORITY, which guarantees that a VM runs at any given point in time as long as it has credits left to burn. This increases quality of service and makes a VM's behavior predictable in a consolidated environment. In addition, we extend the approach to multi-core environments, and even to large numbers of physical machines, by using live migration.
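The scheduling idea described above can be sketched as a priority pick: an RT-class vCPU is always chosen first, but only while it still holds credits. This is a minimal illustrative model, not Xen source; the class names besides RT_PRIORITY follow the credit scheduler's conventions, and everything else is an assumption.

```python
# Minimal sketch of a credit scheduler extended with an RT_PRIORITY class.
# Lower value = runs first. An RT vCPU keeps its guarantee only while
# it has credits remaining; otherwise it falls back below normal classes.

RT_PRIORITY, BOOST, UNDER, OVER = 0, 1, 2, 3

class VCpu:
    def __init__(self, name, priority, credits):
        self.name, self.priority, self.credits = name, priority, credits

def pick_next(runqueue):
    """Pick the runnable vCPU with the best effective priority."""
    def effective_priority(v):
        if v.priority == RT_PRIORITY and v.credits <= 0:
            return OVER          # RT guarantee lapses when credits run out
        return v.priority
    return min(runqueue, key=lambda v: (effective_priority(v), -v.credits))

rq = [VCpu("web", UNDER, 50), VCpu("rt-vm", RT_PRIORITY, 10)]
print(pick_next(rq).name)        # rt-vm runs first while it has credits
```

Capping the guarantee by remaining credits is what keeps an RT VM from starving the rest of the consolidated host.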
This document discusses best practices for deploying Windows Server 2008 Hyper-V and System Center Virtual Machine Manager 2008. It provides an overview of Hyper-V functionality and deployment strategies. It also covers Virtual Machine Manager architecture, requirements, installation, host and cluster configuration, delegation, and Performance and Resource Optimization capabilities.
The document discusses Oracle Automatic Storage Management (ASM) fault tolerance features including striping, mirroring, rebalancing, and failure groups. It explains that ASM provides fault tolerance through techniques like random striping of data across disks for availability, mirroring of extents for redundancy, and balancing of data across failure groups so that an entire group of disks can fail without data loss.
This document summarizes research into detecting and correcting transient hardware errors. The researchers created lockstep virtual machines that execute identical workloads and compare outputs to detect errors. If outputs mismatch, the VMs replay from the last checkpoint. Checkpoints are taken periodically and compared; if unequal, one VM replays from the previous checkpoint. Initial tests showed small performance overhead from the lockstep execution and input/output checking. Future work involves implementing checkpoint/replay and improving performance and scalability.
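The detect-and-rollback loop described above can be sketched briefly: two replicas run the same workload, outputs are compared, and a mismatch triggers replay from the last checkpoint. All names here are illustrative assumptions, not the researchers' actual implementation.

```python
# Hedged sketch of lockstep error detection: run the workload on two
# replicas, compare outputs, and on mismatch roll both back to the last
# checkpoint and replay (a transient error will not repeat).

import copy

class ReplicaPair:
    def __init__(self, initial_state):
        self.a = copy.deepcopy(initial_state)
        self.b = copy.deepcopy(initial_state)
        self.checkpoint = copy.deepcopy(initial_state)

    def step(self, workload, inject_error_in_b=False):
        out_a = workload(self.a)
        out_b = workload(self.b)
        if inject_error_in_b:
            out_b += 1                      # simulate a transient bit-flip
        if out_a != out_b:                  # outputs disagree: error detected
            self.a = copy.deepcopy(self.checkpoint)
            self.b = copy.deepcopy(self.checkpoint)
            out_a = workload(self.a)        # replay from the last checkpoint
            out_b = workload(self.b)
        self.checkpoint = copy.deepcopy(self.a)   # periodic checkpoint
        return out_a

pair = ReplicaPair({"counter": 0})
inc = lambda st: st.update(counter=st["counter"] + 1) or st["counter"]
print(pair.step(inc))                          # 1
print(pair.step(inc, inject_error_in_b=True))  # still 2 after replay
```

The key assumption is that transient errors are rare and non-repeating, so a single replay restores agreement.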
Linux Foundation Collaboration Summit 13: 10 Years of Xen and Beyond (The Linux Foundation)
In 2013, the Xen Hypervisor will be 10 years old: when Xen was designed, we anticipated a world that is now known as cloud computing. Today, Xen powers the largest clouds in production and is the basis for several commercial virtualization products. In this talk we will give an overview of Xen and related projects, cover hot developments in the Xen community, and outline what comes next.
The talk is intended for users and developers that are familiar with virtualization: no deep knowledge is required. We will start with an architectural overview and cover topics such as: Xen and Linux, how to secure your cloud using disaggregation, SELinux and XSM/FLASK, the evolution of Paravirtualization, Xen on ARM and common challenges for open source hypervisors. We will explore the potential of Open Mirage for testing hypervisors. The talk will conclude with an outlook to the future of Xen.
This document discusses enabling NUMA support for Xen guests. It outlines the importance of NUMA awareness for performance, and describes how to construct the SRAT and SLIT tables to provide NUMA information to guests. It also covers guest NUMA configuration options like memory allocation strategies and considerations for live migration. The current status includes upstream host NUMA APIs and planned rebasing of patches, with next steps involving further performance analysis and supporting I/O and live migration across NUMA nodes.
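The SRAT/SLIT information described above can be pictured as a small data model: the SRAT maps vCPUs and memory ranges to virtual nodes, and the SLIT gives node-to-node access distances. This is an illustrative sketch of the concepts, not the real ACPI table encoding.

```python
# Illustrative model of guest NUMA topology information. By ACPI
# convention, a SLIT distance of 10 means local access; larger values
# mean more remote (and slower) access.

guest_numa = {
    "srat": [
        {"node": 0, "vcpus": [0, 1], "mem_mb": (0, 2048)},
        {"node": 1, "vcpus": [2, 3], "mem_mb": (2048, 4096)},
    ],
    "slit": [[10, 20],
             [20, 10]],
}

def distance(slit, src, dst):
    """Relative access cost from node src to memory on node dst."""
    return slit[src][dst]

print(distance(guest_numa["slit"], 0, 1))   # remote access cost: 20
```

A NUMA-aware guest scheduler would use such distances to keep each vCPU's memory allocations on (or near) its own node.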
Xen in Ubuntu Raring
The document discusses Xen virtualization in Ubuntu Raring. It provides an overview of Xen, including new features in versions 4.2 and 4.3. It addresses integration issues with Qemu and Libvirt in Ubuntu. It also discusses what a great Xen experience in Ubuntu would look like, focusing on easy installation and reliable performance for both Xen hosts and guests. Potential improvements are identified, such as options during installation and switching between Xen and non-Xen modes.
This document summarizes a presentation about business continuity solutions hosted by i//:squared. i//:squared provides end-to-end ICT managed services, business continuity services, and business strategic advice. The presentation also covered Veeam software, which develops products for virtual infrastructure management and data protection. Veeam solutions include backup and replication software, as well as monitoring and reporting tools. Finally, the presentation provided an overview of EMC's VNX and VNXe unified storage systems, which include various models with different maximum drive capacities and configurations.
Kemari is a virtual machine synchronization technique that allows fault tolerance by keeping a primary and secondary VM identical. It uses DomT, a para-virtualized domain, to efficiently synchronize state between VMs by tapping event channels and only transferring updated memory pages. Evaluation shows the secondary VM can continue transparently and with acceptable performance during network, storage and file I/O workloads when the primary hardware fails.
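The dirty-page transfer idea described above can be sketched simply: rather than copying the whole memory image at each sync point, only pages written since the last sync are sent to the secondary. Names and structures here are assumptions for illustration, not Kemari's actual code.

```python
# Illustrative sketch of Kemari-style synchronization: transfer only the
# pages dirtied since the last sync point, then clear the dirty log.

def sync(primary_pages, secondary_pages, dirty):
    """Copy only the dirty pages from primary to secondary; return bytes sent."""
    sent = 0
    for page_no in dirty:
        secondary_pages[page_no] = primary_pages[page_no]
        sent += len(primary_pages[page_no])
    dirty.clear()                    # dirty log resets after each sync point
    return sent

primary = {0: b"A" * 4096, 1: b"B" * 4096, 2: b"C" * 4096}
secondary = dict(primary)            # replicas start identical
primary[2] = b"D" * 4096             # guest writes dirty page 2
dirty = {2}
print(sync(primary, secondary, dirty))   # 4096 bytes sent, not the full 12 KB
print(secondary == primary)              # True
```

Transferring only the delta is what keeps the synchronization overhead acceptable under I/O-heavy workloads.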
Veeam vPower provides virtualization-powered data protection through innovations across multiple versions:
- Version 1 introduced 2-in-1 backup and replication with instant file-level recovery and inline deduplication. Version 3 added direct-to-target backups and synthetic full backups.
- Version 4 added support for vStorage APIs, changed block tracking, and thin-provisioned disks. Version 5 introduced instant VM recovery, recovery verification, and on-demand sandbox capabilities.
- Key vPower technologies include SureBackup recovery verification, instant VM recovery directly from backup files, SmartCDP near-CDP replication, and launching backup files in an on-demand sandbox virtual lab.
This document provides an overview of Oracle VM server virtualization technology. It discusses Oracle VM features such as the ability to run Linux and Windows guests, 64-bit support, live migration, and integrated management. Performance results show Oracle VM introduces minimal overhead. Case studies demonstrate how Oracle VM allows customers to reduce hardware, increase utilization rates, and lower support costs.
Storage Foundation and Veritas Cluster Server can optimize storage management in high availability and disaster recovery environments on AIX through deep analysis of how they can work with PowerVM virtualization technology. Key areas include leveraging features like virtual SCSI, NPIV, virtual Ethernet, dynamic LPAR and more to provide storage management capabilities across physical and virtual environments, live application mobility between systems with non-disruptive migration, and using VCS to manage clusters within PowerVM.
This document discusses enhancing pass through device support with IOMMU. It covers the current status of pass through device support in Xen, areas for further enhancement including hardening the host from device failures, improving functionality by standardizing CFGS emulation, and handling more corner cases such as device reconfiguration and Qemu support for PCIe devices. It calls for community efforts to push these enhancements forward.
This document provides a history and overview of Xen virtualization technology. It discusses how Xen originated from university research in 1999 and was released as open source in 2004. It gained widespread adoption by 2005. The document outlines Xen's goals of being the standard open source hypervisor and maintaining performance, stability, and security. It discusses the benefits of virtualization for server consolidation, manageability, deployment, and high availability. Finally, it covers topics like paravirtualization, hardware virtualization, network and device virtualization, security, and future directions like client and mobile virtualization and cloud computing.
WinConnections Spring, 2011 - 30 Bite-Sized Tips for Best vSphere and Hyper-V... (Concentrated Technology)
The document provides 30 tips for optimizing virtual machine performance. Some key tips include purchasing hardware compatible with virtualization, using paravirtualized drivers for networking and storage, properly allocating CPU and memory resources to VMs, avoiding overuse of snapshots, performing resource-intensive tasks during off-hours, enabling jumbo frames and NTP time synchronization, and leveraging tools like DRS that prioritize faster hosts. Regular optimization and monitoring of VM configurations and underlying hardware is emphasized for maintaining good performance.
Hyper-V vs. vSphere: Understanding the Differences (SolarWinds)
For more information on Virtualization Manager visit: http://www.solarwinds.com/virtualization-manager.aspx
Watch this webcast: http://www.solarwinds.com/resources/webcasts/hyper-v-vs-vsphere-understanding-the-differences.html
Watch this webinar with Scott Lowe, Founder and Managing Consultant at The 1610 Group, and SolarWinds virtualization expert Jonathan Reeve where they discuss “Hyper-V vs. vSphere: Understanding the differences.”
The virtualization market is abuzz with talk of different hypervisors – most prominently VMware ESX® versus Microsoft Hyper-V®, which together own over 90% of the market. Small and medium businesses are already moving quickly toward Hyper-V, and a growing number of larger organizations are beginning to put plans in place to transition some portion of their environment from ESX to Hyper-V.
In this webcast we explore the reasons for these changes and the ecosystems for these two platforms both now and in the future. We also take a look ahead to what is known about Hyper-V 3.0 and why it warrants an even deeper look when evaluating hypervisors for your future virtualization deployments.
Mythbusting goes virtual: What's new in vSphere 5.1 (Eric Sloof)
The document summarizes new features in vSphere 5.1 that address common myths about virtualization limitations. It discusses that vMotion can now occur without shared storage using enhanced vMotion, vSphere management no longer requires Windows with the new web client, vSphere Replication provides site disaster recovery without SRM, the VMFS host limit for linked clones increased from 8 to 32, and distributed switch configurations can now be backed up and restored.
- vSphere 5.0 introduces several new platform enhancements including support for 2TB of host memory, 160 logical CPUs, and 512 VMs per host. ESXi now runs exclusively as the hypervisor.
- Storage features are improved with VMFS-5, which supports volumes over 2TB and faster operations. Storage DRS allows for initial placement and load balancing of VMs across datastores.
- Networking features include support for multiple vMotion NICs for faster migration. The new web client allows remote administration from any browser.
This document summarizes a presentation given at the Xen Summit 2008 in Tokyo about challenges in managing large virtualized environments. The presentation discussed scaling a machine pool from 10 to 1,000 physical machines and how different challenges arise at each level, including hardware compatibility and automation. It also covered different types of virtual machines for servers, desktops, and labs and how to integrate them. Finally, it provided an overview of how Google uses Ganeti to manage its virtualized infrastructure by fully automating resource management across a large cluster of machines with varying hardware over time.
A hypervisor is software that runs multiple virtual systems, called virtual machines, on a single computer, giving the guest running in each virtual machine the impression that it is running on its own dedicated computer. This presentation briefly discusses hypervisors and the different techniques used to implement them.
VMware vCloud® Director™ (vCloud Director) orchestrates the provisioning of software-defined datacenter services to deliver complete virtual datacenters for easy consumption in minutes. Software-defined datacenter services and virtual datacenters fundamentally simplify infrastructure provisioning and enable IT to move at the speed of business.
Numerous enhancements are included in vCloud Director 5.1, making it the best infrastructure-as-a-service (IaaS) solution in the marketplace today. This document highlights some of these key enhancements and is targeted toward users who are familiar with previous vCloud Director releases.
Citrix leverages the open source Xen hypervisor as the core virtualization engine for its XenServer product. While XenServer and open source Xen share the Xen hypervisor, XenServer offers additional tested and polished features designed for production use. XenServer is easier to use than open source Xen due to rigorous testing, optimization, and the inclusion of 75% proprietary code. XenServer provides an enterprise-grade virtualization platform with high availability, disaster recovery, workload visibility, and dynamic provisioning capabilities.
Cloud infrastructure licensing and pricing customer presentation (solarisyourep)
VMware announced major upgrades to its entire cloud infrastructure stack, including vSphere 5, vCloud Director 1.5, vShield 5, and vCenter SRM 5. The key changes in vSphere 5 licensing include moving from a per-processor licensing model with core and memory restrictions, to an unlimited core model with a pooled vRAM entitlement. Each vSphere license provides a set amount of vRAM that can be used across all hosts managed by a vCenter. Compliance is measured by whether the average daily consumed vRAM stays below the total pooled entitlement. The new model aims to provide more flexibility without disruption to existing customers.
This document summarizes VMware's Service Provider Program (VSPP) which allows partners to offer VMware technologies as services. It outlines the various product offerings and associated point values, an example of pricing structures, new vCloud service provider bundles, and the requirements and benefits for different VSPP levels. The premier bundle is targeted at enterprise clouds and includes the full vShield Edge license, while the standard bundle is for SMB clouds and based on vSphere 5.0 Enterprise. Requirements to reach higher levels include having more certified professionals (VCPs) and larger contract values, which provide greater benefits such as marketing development funds.
The document discusses Oracle Automatic Storage Management (ASM) fault tolerance features including striping, mirroring, rebalancing, and failure groups. It explains that ASM provides fault tolerance through techniques like random striping of data across disks for availability, mirroring of extents for redundancy, and balancing of data across failure groups so that an entire group of disks can fail without data loss.
This document summarizes research into detecting and correcting transient hardware errors. The researchers created lockstep virtual machines that execute identical workloads and compare outputs to detect errors. If outputs mismatch, the VMs replay from the last checkpoint. Checkpoints are taken periodically and compared; if unequal, one VM replays from the previous checkpoint. Initial tests showed small performance overhead from the lockstep execution and input/output checking. Future work involves implementing checkpoint/replay and improving performance and scalability.
Linux Foundation Collaboration Summit 13 :10 years of Xen and BeyondThe Linux Foundation
In 2013, the Xen Hypervisor will be 10 years old: when Xen was designed, we anticipated a world, which now is known as cloud computing. Today, Xen powers the largest clouds in production and is the basis for several commercial virtualization products. In this talk we will give on overview of Xen and related projects, cover hot developments in the Xen community and outline what comes next.
The talk is intended for users and developers that are familiar with virtualization: no deep knowledge is required. We will start with an architectural overview and cover topics such as: Xen and Linux, how to secure your cloud using disaggregation, SELinux and XSM/FLASK, the evolution of Paravirtualization, Xen on ARM and common challenges for open source hypervisors. We will explore the potential of Open Mirage for testing hypervisors. The talk will conclude with an outlook to the future of Xen.
This document discusses enabling NUMA support for Xen guests. It outlines the importance of NUMA awareness for performance, and describes how to construct the SRAT and SLIT tables to provide NUMA information to guests. It also covers guest NUMA configuration options like memory allocation strategies and considerations for live migration. The current status includes upstream host NUMA APIs and planned rebasing of patches, with next steps involving further performance analysis and supporting I/O and live migration across NUMA nodes.
Xen in Ubuntu Raring
The document discusses Xen virtualization in Ubuntu Raring. It provides an overview of Xen, including new features in versions 4.2 and 4.3. It addresses integration issues with Qemu and Libvirt in Ubuntu. It also discusses what a great Xen experience in Ubuntu would look like, focusing on easy installation and reliable performance for both Xen hosts and guests. Potential improvements are identified, such as options during installation and switching between Xen and non-Xen modes.
This document summarizes a presentation about business continuity solutions hosted by i//:squared. i//:squared provides end-to-end ICT managed services, business continuity services, and business strategic advice. The presentation also covered Veeam software, which develops products for virtual infrastructure management and data protection. Veeam solutions include backup and replication software, as well as monitoring and reporting tools. Finally, the presentation provided an overview of EMC's VNX and VNXe unified storage systems, which include various models with different maximum drive capacities and configurations.
Kemari is a virtual machine synchronization technique that allows fault tolerance by keeping a primary and secondary VM identical. It uses DomT, a para-virtualized domain, to efficiently synchronize state between VMs by tapping event channels and only transferring updated memory pages. Evaluation shows the secondary VM can continue transparently and with acceptable performance during network, storage and file I/O workloads when the primary hardware fails.
Veeam vPower provides virtualization-powered data protection through innovations across multiple versions:
- Version 1 introduced 2-in-1 backup and replication with instant file-level recovery and inline deduplication. Version 3 added direct-to-target backups and synthetic full backups.
- Version 4 added support for vStorage APIs, changed block tracking, and thin-provisioned disks. Version 5 introduced instant VM recovery, recovery verification, and on-demand sandbox capabilities.
- Key vPower technologies include SureBackup recovery verification, instant VM recovery directly from backup files, SmartCDP near-CDP replication, and launching backup files in an on-demand sandbox virtual lab.
- Veeam vPower
This document provides an overview of Oracle VM server virtualization technology. It discusses Oracle VM features such as the ability to run Linux and Windows guests, 64-bit support, live migration, and integrated management. Performance results show Oracle VM introduces minimal overhead. Case studies demonstrate how Oracle VM allows customers to reduce hardware, increase utilization rates, and lower support costs.
Storage Foundation and Veritas Cluster Server can optimize storage management in high availability and disaster recovery environments on AIX through deep analysis of how they can work with PowerVM virtualization technology. Key areas include leveraging features like virtual SCSI, NPIV, virtual Ethernet, dynamic LPAR and more to provide storage management capabilities across physical and virtual environments, live application mobility between systems with non-disruptive migration, and using VCS to manage clusters within PowerVM.
This document discusses enhancing pass through device support with IOMMU. It covers the current status of pass through device support in Xen, areas for further enhancement including hardening the host from device failures, improving functionality by standardizing CFGS emulation, and handling more corner cases such as device reconfiguration and Qemu support for PCIe devices. It calls for community efforts to push these enhancements forward.
This document provides a history and overview of Xen virtualization technology. It discusses how Xen originated from university research in 1999 and was released as open source in 2004. It gained widespread adoption by 2005. The document outlines Xen's goals of being the standard open source hypervisor and maintaining performance, stability, and security. It discusses the benefits of virtualization for server consolidation, manageability, deployment, and high availability. Finally, it covers topics like paravirtualization, hardware virtualization, network and device virtualization, security, and future directions like client and mobile virtualization and cloud computing.
WinConnections Spring, 2011 - 30 Bite-Sized Tips for Best vSphere and Hyper-V...Concentrated Technology
The document provides 30 tips for optimizing virtual machine performance. Some key tips include purchasing hardware compatible with virtualization, using paravirtualized drivers for networking and storage, properly allocating CPU and memory resources to VMs, avoiding overuse of snapshots, performing resource-intensive tasks during off-hours, enabling jumbo frames and NTP time synchronization, and leveraging tools like DRS that prioritize faster hosts. Regular optimization and monitoring of VM configurations and underlying hardware is emphasized for maintaining good performance.
Hyper-V vs. vSphere: Understanding the DifferencesSolarWinds
For more information on Virtualization Manager visit: http://www.solarwinds.com/virtualization-manager.aspx
Watch this webcast: http://www.solarwinds.com/resources/webcasts/hyper-v-vs-vsphere-understanding-the-differences.html
Watch this webinar with Scott Lowe, Founder and Managing Consultant at The 1610 Group, and SolarWinds virtualization expert Jonathan Reeve where they discuss “Hyper-V vs. vSphere: Understanding the differences.”
The virtualization market is abuzz with talk of different hypervisors – most prominently VMware ESX® versus Microsoft Hyper-V®, who together own over 90% of the market. Small and medium businesses are already moving quickly toward Hyper-V, and a growing number of larger organizations are beginning to put plans in place to transition some portion of their environment from ESX to Hyper-V.
In this webcast we explore the reasons for these changes and the ecosystems for these two platforms both now and in the future. We also take a look ahead to what is known about Hyper-V 3.0 and why it warrants an even deeper look when evaluating hypervisors for your future virtualization deployments.
Cloud infrastructure licensing and pricing customer presentation (solarisyourep)
VMware announced major upgrades to its entire cloud infrastructure stack, including vSphere 5, vCloud Director 1.5, vShield 5, and vCenter SRM 5. The key change in vSphere 5 licensing is the move from a per-processor licensing model with core and memory restrictions to an unlimited-core model with a pooled vRAM entitlement. Each vSphere license provides a set amount of vRAM that can be used across all hosts managed by a vCenter. Compliance is measured by whether the average daily consumed vRAM stays below the total pooled entitlement. The new model aims to provide more flexibility without disruption to existing customers.
2. vSphere 5 Licensing: Evolution Without Disruption

vSphere 4.x vs. vSphere 5:
• Licensing unit: processor (unchanged)
• Cores per processor: restricted → unlimited
• Physical RAM per host: restricted → unlimited
• Pooled vRAM entitlement: not applicable → amount of vRAM pooled across the entire environment
3. What is vRAM?

vRAM is the memory configured to a virtual machine. Assigning a certain amount of vRAM is a required step in the creation of a virtual machine.
4. Key vRAM Concepts

1. Each vSphere 5 processor license comes with a certain amount of vRAM entitlement.
2. Pooled vRAM entitlement: the sum of all processor license entitlements.
3. Consumed vRAM: the sum of vRAM configured into all powered-on VMs.
4. Compliance: the 12-month rolling average of consumed vRAM must stay below the pooled vRAM entitlement.
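The arithmetic behind these concepts is simple enough to sketch in a few lines of Python. Function names and numbers are purely illustrative (VMware's actual metering lives in vCenter); the figures match the worked example on the next slide (4 Enterprise licenses at 64 GB, 20 powered-on VMs at 4 GB each):

```python
def pooled_entitlement(per_license_gb):
    """Pooled vRAM entitlement: sum of all processor license entitlements."""
    return sum(per_license_gb)

def consumed_vram(powered_on_vm_gb):
    """Consumed vRAM: sum of vRAM configured into all powered-on VMs."""
    return sum(powered_on_vm_gb)

def is_compliant(avg_consumed_gb, pool_gb):
    """The 12-month rolling average of consumed vRAM must not exceed the pool."""
    return avg_consumed_gb <= pool_gb

# 4 Enterprise licenses at 64 GB each, 20 VMs configured with 4 GB each
pool = pooled_entitlement([64, 64, 64, 64])   # 256 GB
consumed = consumed_vram([4] * 20)            # 80 GB
print(pool, consumed, is_compliant(consumed, pool))  # 256 80 True
```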
5. Key concepts - Example

• Four licenses of vSphere Enterprise Edition provide a vRAM pool of 256 GB (4 × 64 GB); each vSphere Enterprise Edition license entitles to 64 GB of vRAM.
• The customer creates 20 VMs with 4 GB of vRAM each on Hosts A and B, so consumed vRAM = 80 GB.
• Compliance: the 12-month rolling average of consumed vRAM (80 GB) stays below the pooled vRAM entitlement (256 GB).
7. vSphere 5.0 Licensing Model in More Detail

vSphere 4.1 and prior: per CPU with core and physical memory limits. vSphere 5.0 and later: per CPU with vRAM entitlements.

• Licensing unit: CPU (unchanged)
• SnS unit: CPU (unchanged)
• Cores per processor: restricted by vSphere edition (6 cores for Standard, Enterprise, Essentials, and Essentials Plus; 12 cores for Advanced and Enterprise Plus) → unlimited
• Physical RAM capacity per host: restricted by vSphere edition (256 GB for Standard, Advanced, Enterprise, Essentials, and Essentials Plus; unlimited for Enterprise Plus) → unlimited
• vRAM entitlement per processor: not applicable → entitlement by vSphere edition (32 GB for the Essentials Kit, the Essentials Plus Kit, and Standard; 64 GB for Enterprise; 96 GB for Enterprise Plus)
• Pooling of entitlements: not applicable → yes, vRAM entitlements are pooled among vSphere hosts managed by a vCenter or linked vCenter instances
• Maximum vRAM counted per VM: not applicable → 96 GB; a powered-on VM counts a maximum of 96 GB against the pool regardless of its actual configured amount
• Compliance policies: purchase in advance of use, with a high-watermark policy → purchase in advance of use, with a 12-month rolling average of the daily high watermark
• Monitoring tool: not applicable → built into vCenter Server 5.0
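The 96 GB per-VM cap means that consumed vRAM is computed from capped, not raw, configured values. A minimal sketch (the function name is mine, not a VMware API):

```python
VRAM_CAP_PER_VM_GB = 96  # maximum vRAM counted per powered-on VM

def counted_vram_gb(configured_gb):
    """A powered-on VM counts at most 96 GB against the pool,
    regardless of its actual configured vRAM."""
    return min(configured_gb, VRAM_CAP_PER_VM_GB)

# A VM configured with 512 GB of vRAM draws only 96 GB of entitlement,
# while a 16 GB VM draws its full configured amount.
print(counted_vram_gb(512), counted_vram_gb(16))  # 96 16
```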
8. vSphere 5 Licensing In Action

How does it work?
• Each CPU must have at least one vSphere 5 license assigned; cores and physical RAM do not matter.
• Each processor license managed by a vCenter (or by multiple vCenters in Linked Mode) contributes an amount of vRAM capacity to the total vRAM pool. Example: 4 vSphere Enterprise licenses create a vRAM pool of 256 GB (4 × 64 GB). Each vSphere edition creates a separate pool that must be kept in licensing compliance.
• The vRAM pool is shared among powered-on VMs running on all hosts in a vCenter. Example: 20 VMs with 4 GB of configured vRAM each consume a total of 80 GB (80 GB out of the 256 GB pool). It does not matter how many VMs you run or on which hosts you run them; vMotion, DRS, and HA do not require additional licenses.
• At any point in time, the 12-month rolling average of the daily high watermark of consumed vRAM must be equal to or less than the vRAM pool capacity. Compliance is at the vCenter level, not the host level.
• The vRAM pool can be extended by upgrading all CPUs to a higher-end vSphere edition, adding processor licenses to the same set of CPUs, or adding a new host with new licenses.
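Because compliance tracks a 12-month rolling average of the daily high watermark rather than the instantaneous peak, a short burst above the pool does not by itself break compliance. A hedged sketch of that rule (names and window handling are mine):

```python
from statistics import mean

def rolling_avg_high_watermark(daily_high_watermark_gb, window_days=365):
    """12-month rolling average of the daily high watermark of consumed vRAM."""
    return mean(daily_high_watermark_gb[-window_days:])

def in_compliance(daily_high_watermark_gb, pool_gb):
    # Compliance is evaluated at the vCenter level, not per host
    return rolling_avg_high_watermark(daily_high_watermark_gb) <= pool_gb

# Five days spiking to 300 GB against a 256 GB pool: the year-long
# average (about 83 GB) stays well below the pool capacity.
daily = [80] * 360 + [300] * 5
print(in_compliance(daily, 256))  # True
```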
9. Tools for Tracking vRAM Entitlement vs Usage

Before upgrading to vSphere 5, customers can use a separate free utility that analyzes a VI3 or vSphere 4 environment and determines the vRAM consumed. The tool will be available later in Q3 2011.

After upgrading to vSphere 5:
1. vRAM licensing monitoring and reporting tool built into vCenter 5.
2. Free add-on to vCenter for in-depth historical trending analysis.
10. When Does the vSphere 5 Licensing Model Apply?

For ELA customers:
• Customers with an active ELA will continue to be subject to the terms of their contracts for the duration of those contracts, independent of which vSphere version they deploy.
• ELA customers may contact their VMware sales representatives to update the terms of their ELAs to the new vSphere 5 licensing model.

For customers without ELAs:
• The new model applies only to vSphere 5 licenses; prior versions of vSphere will continue to be governed by their respective licensing models.
• The new vSphere 5 licensing model applies upon acceptance of the vSphere 5 EULA (a necessary condition for upgrading to vSphere 5).
• Customers who purchase vSphere 5 licenses and decide to downgrade to older versions of vSphere will be subject to the EULA terms and licensing model of the vSphere version they downgrade to.
12. vSphere 5 Editions

Editions: Essentials, Essentials Plus, Standard, Enterprise, Enterprise Plus.
• vRAM entitlement per processor: 32 GB (Essentials), 32 GB (Essentials Plus), 32 GB (Standard), 64 GB (Enterprise), 96 GB (Enterprise Plus)
• vCPU: 8-way for all editions except Enterprise Plus (32-way)

Features (availability varies by edition):
• Hypervisor
• vSphere Storage Appliance
• High Availability
• Data Recovery
• vMotion
• Virtual Serial Port Concentrator
• Hot Add
• vShield Zones
• Fault Tolerance
• Storage APIs for Array Integration
• Storage vMotion
• Distributed Resource Scheduler & Distributed Power Management
• Distributed Switch
• I/O Controls (Network and Storage)
• Host Profiles
• Auto Deploy
• Profile-Driven Storage
• Storage DRS

All editions include Thin Provisioning, Update Manager, Storage APIs for Data Protection, Image Profile, and SLES (except Essentials and Essentials Plus).
13. vSphere 5 Acceleration Kits

Kits: Essentials, Essentials Plus, Standard AK, Enterprise AK, Enterprise Plus AK.

Entitlements per CPU license:
• vRAM entitlement: 32 GB (Essentials, 192 GB max), 32 GB (Essentials Plus, 192 GB max), 32 GB (Standard AK, 256 GB per kit), 64 GB (Enterprise AK, 384 GB per kit), 96 GB (Enterprise Plus AK, 576 GB per kit)
• vCPU: 8-way for all kits except Enterprise Plus AK (32-way)

Features (availability varies by kit):
• Hypervisor
• High Availability
• Data Recovery
• vMotion
• Virtual Serial Port Concentrator
• Hot Add
• vShield Zones
• Fault Tolerance
• Storage APIs for Array Integration
• Storage vMotion
• Distributed Resource Scheduler & Distributed Power Management
• Distributed Switch
• I/O Controls (Network and Storage)
• Host Profiles
• Auto Deploy
• Profile-Driven Storage
• Storage DRS

All editions include Thin Provisioning, Update Manager, Storage APIs for Data Protection, Image Profile, and SLES (except Essentials and Essentials Plus).
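Reading the "(192 GB max)" figures as a hard ceiling on the pooled total for a kit (an assumption on my part; the slide gives only the per-license and per-kit numbers), the kit pool works out as:

```python
def kit_pool_gb(num_licenses, per_license_gb, kit_max_gb):
    """Pooled vRAM for a kit edition: per-license entitlements add up,
    but the pool is capped at the kit maximum (assumed interpretation)."""
    return min(num_licenses * per_license_gb, kit_max_gb)

# Essentials: 32 GB per processor license, 192 GB maximum per kit
print(kit_pool_gb(6, 32, 192))  # 192 -- six licenses hit the cap exactly
print(kit_pool_gb(4, 32, 192))  # 128 -- four licenses stay below it
```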
14. Entitlement Paths for Current vSphere 4.x Customers

vSphere 4.x → vSphere 5.0:
• Enterprise Plus → Enterprise Plus
• Enterprise → Enterprise
• Advanced → Enterprise
• Standard → Standard
• Essentials Plus → Essentials Plus
• Essentials → Essentials
15. Upgrade Paths for vSphere Editions and Kits

• Standard → Enterprise or Enterprise Plus
• Enterprise → Enterprise Plus
• Essentials → Essentials Plus, or any one of the Acceleration Kits
• Essentials Plus → any one of the Acceleration Kits
16. VMware vSphere Hypervisor 5

Entry-level free product for single-server virtualization.

• Full-featured hypervisor: based on VMware's next-generation hypervisor architecture, ESXi, it provides the same performance, reliability, and robustness as the ESXi included with paid versions of VMware vSphere.
• Basic virtualization capabilities for a single host: it cannot be centrally managed with vCenter Server, though individual vSphere Hypervisor hosts can be remotely managed with the vSphere Client. Provides only basic server consolidation capabilities.
• Free: entitles to 32 GB of vRAM per server and can be used on servers with up to 32 GB of physical RAM. Can be easily upgraded to paid vSphere editions for central management and advanced capabilities.
18. vSphere Storage Appliance - Shared Storage for Everyone

Shared storage capabilities, without the cost and complexity.

Licensing: per instance (up to 3 nodes), or bundled as vSphere Essentials Plus with vSphere Storage Appliance; the appliance is available at reduced cost when purchased with vSphere Essentials Plus.

1. Install in minutes: five-click simplicity. Easy to use; saves money.
2. High availability without the need for shared storage hardware: survive server failures; no more planned downtime.
3. World-class datacenter capabilities, even for small environments: set-and-forget automation; get more out of your hardware.
20. SRM 5 Editions Lineup

Scalability limits (1):
• SRM 5 Standard: maximum of 75 protected virtual machines
• SRM 5 Enterprise: unlimited (2)

Features (both editions):
• Support for storage-based replication
• Centralized recovery plans
• Non-disruptive testing
• Automated DR failover
• vSphere Replication (new in SRM 5.0)
• Automated failback (new in SRM 5.0)
• Planned migration (new in SRM 5.0)

Notes:
1. Maximum of 75 VMs per site and per SRM instance.
2. Subject to the product's technical scalability limits.

US pricing only; pricing outside the US might vary.
22. Customer Scenario
How do I license a host with vSphere 5?
How much vRAM do I get with my vSphere 5 licenses?
What is the vRAM pool?
How many VMs can I run with my vRAM pool?
How many VMs can I power on a host?
What if my VMs move to a different host with vMotion or DRS?
What is my vRAM pool if I have multiple vCenter Servers?
What is my vRAM pool if I have more than one vSphere edition?
How do I expand my vRAM pool?
How do I license a new host and join it to my vRAM pool?
What are the benefits of the vSphere 5 licensing model?
Will vSphere 5 be more expensive for vSphere 4.x customers?
23. How Many vSphere Licenses Do I Need?
Answer
• Like in vSphere 4.x, each CPU requires at least one license
• vSphere 5 licensing does not impose limits on the number of cores per processor or the amount of physical RAM per server
In this example:
• Licensing Host A with vSphere 5 requires the same number of licenses as with vSphere 4.x
• Licensing Host B with vSphere 5 requires half the licenses of vSphere 4.x (2 vs. 4) because vSphere 5 does not limit the number of cores per processor
Example
Host A and Host B are each licensed with vSphere Enterprise, one license per CPU.
Summary: Hosts 2, CPUs 4, vSphere Licenses 4
Diagram legend: Host A has 2 sockets, 4 cores per CPU, and 48GB RAM; Host B has 2 sockets, 12 cores per CPU, and 64GB RAM. The VM icon represents a powered-off VM with 4GB of configured vRAM; the numbered icon represents a processor license.
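The per-CPU arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration, not a VMware tool; the 6-cores-per-license figure for vSphere 4.x Enterprise is an assumption consistent with the 2-vs-4 example on this slide.

```python
import math

def licenses_v5(sockets: int) -> int:
    # vSphere 5: one processor license per CPU socket,
    # regardless of cores per processor or physical RAM.
    return sockets

def licenses_v4(sockets: int, cores_per_cpu: int, cores_per_license: int = 6) -> int:
    # vSphere 4.x (Enterprise, assumed 6-core limit per license):
    # a many-core CPU may need more than one license.
    return sockets * math.ceil(cores_per_cpu / cores_per_license)

# Host A: 2 sockets, 4 cores per CPU; Host B: 2 sockets, 12 cores per CPU
print(licenses_v4(2, 4), licenses_v5(2))    # Host A: 2 vs. 2
print(licenses_v4(2, 12), licenses_v5(2))   # Host B: 4 vs. 2
```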
24. How Much vRAM Do I Get with My vSphere Licenses?
Answer
Each vSphere 5 processor license includes a vRAM entitlement:
Edition | vRAM per License
Enterprise Plus | 96GB
Enterprise | 64GB
Standard | 32GB
Essentials Plus | 32GB (192GB max per pool)
Essentials | 32GB (192GB max per pool)
Example
Each vSphere Enterprise Edition license entitles you to 64GB of vRAM, so each of the four licenses on Host A and Host B contributes 64GB.
25. What Is the vRAM Pool?
Answer
When managing vSphere hosts with vCenter, vRAM entitlements are pooled.
The vRAM pool capacity is the maximum capacity that can be used with the current set of licenses.
Example
License Host A and Host B with vSphere Enterprise Edition. The 4 licenses of Enterprise provide a vRAM pool of 256GB (4 * 64GB).
Summary: CPUs 4, vSphere Licenses 4, Pooled vRAM 256GB
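The pool calculation can be sketched as a small Python helper. This is illustrative only; the entitlement table comes from the slide above, and the 192GB Essentials pool caps are not modeled.

```python
# vRAM entitlement per processor license, by edition (from the table above)
VRAM_PER_LICENSE_GB = {
    "Enterprise Plus": 96,
    "Enterprise": 64,
    "Standard": 32,
    "Essentials Plus": 32,  # pool capped at 192GB (cap not modeled here)
    "Essentials": 32,       # pool capped at 192GB (cap not modeled here)
}

def vram_pool_gb(edition: str, licenses: int) -> int:
    # Pooled vRAM is the per-license entitlement times the license count.
    return VRAM_PER_LICENSE_GB[edition] * licenses

# 4 Enterprise licenses (Hosts A and B) -> 256GB pool
print(vram_pool_gb("Enterprise", 4))  # 256
```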
26. How Many VMs Can I Run with My vRAM Pool?
Answer
You can run as many VMs as you want, as long as the consumed vRAM capacity is equal to or less than the vRAM pool.
Only powered-on VMs consume vRAM capacity.
Example
The user creates 32 VMs, each with 4GB of configured vRAM, and powers on only 24. The 24 powered-on VMs consume a total of 96GB; the powered-off VMs consume no vRAM capacity.
Summary: CPUs 4, vSphere Licenses 4, Pooled vRAM 256GB, Consumed vRAM 96GB
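A minimal sketch of the compliance check, assuming each VM is represented as a (configured-vRAM, powered-on) pair. This representation is hypothetical, not a vCenter API.

```python
def consumed_vram_gb(vms):
    # Only powered-on VMs count, and each contributes its *configured* vRAM,
    # not the physical RAM it happens to use.
    return sum(vram_gb for vram_gb, powered_on in vms if powered_on)

# 32 VMs with 4GB configured vRAM each, only the first 24 powered on
vms = [(4, i < 24) for i in range(32)]
consumed = consumed_vram_gb(vms)
pool = 256  # 4 Enterprise licenses * 64GB

print(consumed, consumed <= pool)  # 96 True
```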
27. How Many VMs Can I Power On a Host?
Answer
You can power on as many VMs as you want on a host, as long as the total consumed vRAM is less than or equal to the available vRAM pool.
If necessary, you can increase the available vRAM pool capacity by adding more processor licenses to a CPU.
Example
The user deploys 40 VMs, each with 4GB of configured vRAM, placing 4 VMs on Host A and 36 on Host B. The 36 VMs on Host B consume a total of 144GB, while the two Enterprise Edition licenses on Host B contribute only 128GB of vRAM to the pool; this is allowed because total consumption (160GB) is within the 256GB pool.
Summary (Host A / Host B / Pool): vSphere Licenses 2 / 2 / 4; VMs 4 / 36 / 40; Consumed vRAM (GB) 16 / 144 / 160; vRAM Pool (GB) 128 / 128 / 256
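The same numbers, checked at both levels, show why the per-host split does not matter. Illustrative arithmetic only.

```python
# Compliance is checked against the whole pool, not per host:
# a host may consume more vRAM than its own licenses contribute.
host_contribution_gb = {"A": 2 * 64, "B": 2 * 64}  # 2 Enterprise licenses each
host_consumed_gb = {"A": 4 * 4, "B": 36 * 4}       # 4GB configured vRAM per VM

pool = sum(host_contribution_gb.values())  # 256GB
total = sum(host_consumed_gb.values())     # 160GB

print(host_consumed_gb["B"] > host_contribution_gb["B"])  # True: 144 > 128
print(total <= pool)                                      # True: 160 <= 256
```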
28. What if My VMs Move to a Different Host with vMotion or DRS?
Answer
Any VM can run on any host within a vRAM pool. Since vRAM is pooled across all hosts of the same vSphere edition under a vCenter Server, the movement of VMs cannot cause more vRAM to be needed.
Example
VMs on one host can vMotion to another without impacting the consumed or available vRAM capacity. All VMs can even run on a single host, in effect borrowing the vRAM capacity of the other host.
Summary: CPUs 4, vSphere Licenses 4, Pooled vRAM 256GB, Consumed vRAM 128GB
29. What Is My vRAM Pool if I Have Multiple vCenter Servers?
Answer
The vRAM pool can extend across multiple linked vCenter Servers. vCenter Servers (Standard Edition) can be linked together using Linked Mode.
Example
Site 1 and Site 2 each contain a host with two licenses of Enterprise, so each site has 128GB of pooled vRAM capacity in a separate pool. You must link the vCenter Servers to form a single vRAM pool; the resulting capacity is the sum of the two sites’ vRAM capacity. When the vCenter Servers at each site are linked together, one vRAM pool is created with 256GB of pooled vRAM capacity.
Summary (Site 1 / Site 2 / Sites 1 and 2 linked): CPUs 2 / 2 / 4; vSphere Licenses 2 / 2 / 4; Pooled vRAM (GB) 128 / 128 / 256; Consumed vRAM (GB) 64 / 64 / 128
30. What Is My vRAM Pool if I Have More Than One vSphere Edition?
Answer
Each edition of vSphere has a separate vRAM pool. Adding licenses for one edition will not add vRAM to another edition’s vRAM pool.
Example
Host A and Host B are licensed with four licenses of Enterprise; Host X is licensed with two licenses of Enterprise Plus. There are two separate vRAM pools: one for Enterprise with 256GB, another for Enterprise Plus with 192GB.
Summary (Enterprise / Enterprise Plus): CPUs 4 / 2; vSphere Licenses 4 / 2; Pooled vRAM (GB) 256 / 192; Consumed vRAM (GB) 128 / 96
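Grouping entitlements by edition can be sketched as follows. This is a hypothetical helper; only the two editions from the example are included.

```python
from collections import defaultdict

# Each edition forms its own pool; entitlements never mix across editions.
ENTITLEMENT_GB = {"Enterprise": 64, "Enterprise Plus": 96}

def pools_by_edition(licenses):
    pools = defaultdict(int)
    for edition in licenses:
        pools[edition] += ENTITLEMENT_GB[edition]
    return dict(pools)

# Hosts A and B: 4 Enterprise licenses; Host X: 2 Enterprise Plus licenses
licenses = ["Enterprise"] * 4 + ["Enterprise Plus"] * 2
print(pools_by_edition(licenses))  # {'Enterprise': 256, 'Enterprise Plus': 192}
```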
31. I Need More vRAM Capacity. How Do I Expand My vRAM Pool?
Answer
There are two ways you can expand your vRAM pool:
1) Upgrade all licenses to an edition with a higher vRAM entitlement
2) Add more licenses of the current edition
Example
All 256GB of vRAM capacity is consumed. Another 16GB is needed for 4 additional VMs.
Summary: CPUs 4, vSphere Licenses 4, Pooled vRAM 256GB, Consumed vRAM 256GB
32. I Need More vRAM Capacity. How Do I Expand My vRAM Pool?
Answer
Option 1: upgrade all licenses to an edition with a higher vRAM entitlement.
Example
All 256GB of vRAM capacity is consumed and another 16GB is needed for 4 additional VMs. Enterprise Plus is entitled to 96GB of vRAM per license, so upgrading all 4 licenses to Enterprise Plus raises the pooled vRAM capacity to 384GB (4 licenses * 96GB).
Summary: CPUs 4, vSphere Licenses 4, Pooled vRAM (GB) 256 → 384, Consumed vRAM (GB) 256 → 272
33. I Need More vRAM Capacity. How Do I Expand My vRAM Pool?
Answer
Option 2: add more licenses of the current edition.
Example
All 256GB of vRAM capacity is consumed and another 16GB is needed for 4 additional VMs. One additional license of Enterprise increases the vRAM pool by 64GB, yielding a total pooled vRAM capacity of 320GB.
Summary: CPUs 4, vSphere Licenses 4 → 5, Pooled vRAM (GB) 256 → 320, Consumed vRAM (GB) 256 → 272
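The two expansion options from the example can be compared with a few lines of arithmetic. Illustrative only.

```python
import math

need_gb = 272          # 256GB already consumed + 16GB for 4 new VMs
base_pool = 4 * 64     # 4 Enterprise licenses at 64GB each

# Option 1: upgrade all 4 licenses to Enterprise Plus (96GB each)
upgrade_pool = 4 * 96  # 384GB

# Option 2: add just enough Enterprise licenses (64GB each)
extra = math.ceil(max(0, need_gb - base_pool) / 64)
option2_pool = base_pool + extra * 64

print(upgrade_pool, extra, option2_pool)  # 384 1 320
```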
34. How Do I License a New Host and Join It to My vRAM Pool?
Answer
There are two ways to add a host:
1) Add additional licenses of the same edition
2) If you have more licenses than CPUs, you can deploy those licenses to the new host; pooled vRAM capacity will remain unchanged
Example
A new host, Host C, needs to be licensed.
Summary: CPUs 4, vSphere Licenses 5, Pooled vRAM 320GB, Consumed vRAM 144GB
35. How Do I License a New Host and Join It to My vRAM Pool?
Answer
Option 1: add an additional license of the same edition.
Example
A new host, Host C, needs to be licensed. One additional license of Enterprise is added, increasing the pooled vRAM capacity by 64GB to 384GB. As before, VMs can run on any of the three hosts.
Summary: CPUs 4 → 5, vSphere Licenses 5 → 6, Pooled vRAM (GB) 320 → 384, Consumed vRAM (GB) 144
36. How Do I License a New Host and Join It to My vRAM Pool?
Answer
Option 2: redeploy a spare license. If you have more licenses than CPUs, you can deploy those licenses to the new host, and pooled vRAM capacity will remain unchanged.
Example
A new host, Host C, needs to be licensed. No additional vRAM is needed and there are more licenses than CPUs, so a license can be redeployed to Host C. Pooled vRAM capacity remains unchanged at 320GB. As before, the VMs can run on any of the three hosts.
Summary: CPUs 4 → 5, vSphere Licenses 5, Pooled vRAM (GB) 320, Consumed vRAM (GB) 144