This presentation provides an overview of the upcoming ESX Server 3 and VirtualCenter 2 releases and add-on options to be made available with the release. All the material in this presentation has been publicly announced, so no NDA is required. However, until ESX Server 3 and VirtualCenter 2 are generally available, the details of the features described in this presentation are subject to change.
[This slide uses a 3-click animation.] ESX Server 3 and VirtualCenter 2 extend the evolution of virtual infrastructure that began with the first VMware ESX Server release. With each new release of ESX Server and VirtualCenter, VMware has succeeded in enhancing the products in two complementary directions: each release has delivered improved capabilities to manage increasingly demanding workloads in datacenter environments, and each new release has extended support for lower cost server and storage hardware. [Click 1] ESX Server 1, released in 2001, introduced the first x86 virtual machine hypervisor, providing for the first time the performance and scalability needed to use virtual machines in production datacenter settings. Although it was revolutionary, ESX Server 1 was primarily a single server virtualization solution. [Click 2] With ESX Server 2 and VirtualCenter, released in 2003, VMware delivered the first true virtual infrastructure platform capable of managing hundreds of virtual machines running on dozens of host servers. VirtualCenter was the key enhancement with this release as it provided a centralized management console for multiple ESX Server hosts and tools for managing provisioning of virtual machines across those hosts. This release expanded the processing capacity of virtual machines with 2-way Virtual SMP. VMware improved storage integration with support for high-end fibre channel SANs and sharing of virtual machine storage containers between ESX Server hosts. That made possible the truly revolutionary VMotion feature, which allows hot migration of running virtual machines between ESX Server hosts with zero downtime. VMware also introduced the Virtual Infrastructure SDK, allowing customers and partners to access all the capabilities of VirtualCenter from their own scripts and programs.
ESX Server 2 added support for blade servers so users could increase the density of virtual infrastructure server deployments, and it added support for AMD Opteron and dual-core processors, expanding the range of servers supported. [Click 3] Now with ESX Server 3 and VirtualCenter 2, VMware is continuing the evolution of virtual infrastructure with major improvements in virtual machine capabilities and platform cost reductions. ESX Server 3 and VirtualCenter 2 introduce a revolutionary new set of infrastructure-wide services for resource optimization, high availability and data protection, built on the robust VMware platform, that deliver capabilities not possible with physical machines. ESX Server 3 expands the processing and memory capacity of virtual machines so that intensive workloads, previously reserved for native systems, can now benefit from virtualization. VirtualCenter 2 can now scale up to manage large virtual infrastructures of hundreds of VMware servers and thousands of virtual machines, and its management features have been enhanced to meet the requirements of the most sophisticated datacenters. ESX Server 3 and VirtualCenter 2 further increase the footprint of virtual infrastructure by adding support for NAS and iSCSI storage systems, available at significantly lower cost than the fibre channel SANs currently supported with ESX Server. ESX Server 3 has also been certified with lower cost “white box” server configurations based on Intel specifications and available from independent system builders.
There are two key aspects to the VMware ESX Server 3 and VirtualCenter 2 release. The first is our use of those products as a platform for truly innovative technologies. VMware ESX Server has established a record of more than four years of production data center experience. Our customers have come to rely on its proven reliability, performance and scalability. VMware’s continuing improvement of ESX Server and VirtualCenter has created a solid and capable virtual infrastructure foundation that we are now using as a platform for a range of services that enable operations not possible with physical systems. The VMware virtualization layer provides a management interface to servers that enables a new range of software controls. With virtual infrastructure, we can now dynamically manage aspects of servers such as power state, network connections, storage configurations and resource allocations that aren’t available on conventional systems. The first of these innovative virtual infrastructure technologies from VMware was, of course, VMotion. VMotion delivered for the first time the key requirement of next generation computing initiatives: the ability to transparently move running services between physical systems. VMotion has been enthusiastically received by VMware customers who rely on it for 100% uptime of their critical applications even when they are shifted between servers in response to hardware maintenance needs and workload balancing. VMotion is a perfect example of a technology available only with virtual machines. With ESX Server 3 and VirtualCenter 2, VMware is going beyond VMotion with innovative new services for high availability, workload balancing and backup management. These new services, called distributed availability services, distributed resource scheduling and consolidated backup, are available as add-ons to ESX Server 3 and VirtualCenter 2. I’ll describe each of them in more detail later in the presentation.
These new virtual infrastructure services are convincing proof that virtual machines are now better than physical machines in all respects.
The second key aspect to the new ESX Server 3 and VirtualCenter 2 release is the opportunities opened for deploying virtual infrastructure to a much broader range of servers and applications. ESX Server 3 and VirtualCenter 2 support hardware ranging from the largest x86 data center systems with multiple, dual-core processors and high-end fibre channel SAN storage arrays to entry-level white box servers using lower cost NAS and iSCSI storage. ESX Server 3 virtual machines expand processor and memory capacity, making them suitable for the most demanding production workloads, and VirtualCenter 2 can handle virtual infrastructures spanning entire enterprises with hundreds of VMware servers and thousands of virtual machines. We’ve made management simpler. You can now use VirtualCenter 2 to manage all aspects of your virtual infrastructure – ESX Server hosts, virtual machines, provisioning, migration, resource allocations, and so on. We’ve simplified the installation process and our product licensing is now much easier to manage. We’re also introducing a new and improved client interface with ESX Server 3 and VirtualCenter 2. We’ve improved ESX Server 3 and VirtualCenter 2 with features required in production data center operations like fine-grained user access controls and audit trails to record any modifications made to your virtual infrastructure. Taken together, the support for a broader range of x86 servers, the certification with both high-end and low cost storage configurations, the expanded virtual machine capacity, the increased scalability and the improvements that make management both easier and more flexible truly enable virtualization everywhere. With ESX Server 3 and VirtualCenter 2, you can now, for the first time, extend the benefits of virtual infrastructure without compromises across environments ranging from small development and test groups, remote branch offices and disaster recovery sites to the largest production data centers.
Now let’s take a look at some of the new features specific to ESX Server 3. We’ve added support for NAS and iSCSI networked storage. Those technologies let you share storage between servers at lower cost than the fibre channel-based SAN storage we’ve supported up until now with ESX Server. With ESX Server 3, VMware continues to expand the list of certified server hardware. We now support rack, tower and blade servers from all the major vendors, including systems based on the latest dual-core processors from Intel and AMD. Those dual-core systems provide natural workload partitioning and thermal efficiency that make them particularly well-suited to server virtualization. We’re also adding support for newer network and storage adapters with ESX Server 3. To truly deliver “virtualization everywhere”, we need to let you run your most demanding workloads in virtual machines without compromising performance. ESX Server 3 does that by offering a 4-way Virtual SMP option that goes beyond the 2-way Virtual SMP supported previously. 4-way Virtual SMP lets you configure virtual machines with up to 4 virtual processors to handle applications that require SMP platforms. We’ve also expanded the per virtual machine memory limit in ESX Server 3 from 3.6GB to 16GB to support memory-intensive workloads. The result is you can now run almost any production server in a virtual machine without sacrificing performance. The servers you previously reserved for native systems can now be virtualized. We’re making it easier for you to respond to growing virtual machine storage requirements with our hot-add virtual disks feature that allows you to add virtual storage devices to virtual machines with no reboot required. As with all our releases, we’re updating the supported guest operating systems to include the latest Windows and Linux releases. ESX Server 3 most notably adds support for the latest Red Hat Enterprise Linux 4 release as well as support for recent Windows service packs.
A new feature especially important for development and test use is multiple snapshots, which we’ve taken from our very popular Workstation 5 product. Multiple snapshots let you create and manage many point-in-time snapshots of each virtual machine, represented in a tree-structured presentation showing the parent-child relationships of each snapshot. Each new ESX Server release brings performance improvements and ESX Server 3 is no different. We’ve enhanced networking performance to reduce the CPU load generated by network traffic in virtual machines. We’ve targeted specific workloads like Citrix to improve the performance of desktop hosting solutions based on ESX Server 3. Other workloads addressed by ESX Server 3 performance improvements include J2EE servers and web servers in virtual machines. ESX Server 3 brings a revised service console based on a recent Red Hat Enterprise Linux 3 release. The service console is a special virtual machine running a Linux derivative that operates alongside your virtual machines to provide management access to the ESX Server host. (Don’t confuse the service console with the ESX Server hypervisor, which is a thin layer of software for controlling virtual machines that is entirely written by VMware.) The updated RHEL 3-based service console provides the latest Linux security patches and other new capabilities. ESX Server 3 has improved networking flexibility with virtual network switches or “vmnets” that have over 1000 ports, letting you build the most complex networks entirely with virtual machines. NIC teaming policy can be set at the port group level, so a single virtual switch can support multiple NIC teams. Finally, ESX Server 3 will offer a technology preview of 64-bit guest operating system support. Users can create and run virtual machines using the 64-bit versions of Windows Server 2003, Red Hat Enterprise Linux 3 and 4, and SUSE Linux Enterprise Server 9.
VMware will deliver full certified ESX Server support for 64-bit operating systems shortly after the ESX Server 3 release.
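Speaker aside: the parent-child structure that multiple snapshots introduce can be pictured with a small sketch. This is purely illustrative Python, not VMware's actual snapshot implementation; the class and snapshot names are hypothetical.

```python
# Illustrative sketch of a multiple-snapshot tree (hypothetical, not
# VMware's on-disk format): each snapshot records its parent, forming
# the parent-child hierarchy the client displays as a tree.

class Snapshot:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def lineage(self):
        """Return the chain of snapshot names from the root to this node."""
        chain = []
        node = self
        while node:
            chain.append(node.name)
            node = node.parent
        return list(reversed(chain))

# A typical dev/test scenario: a clean install with two divergent branches.
base = Snapshot("clean-install")
sp1 = Snapshot("with-sp1", parent=base)
app_a = Snapshot("app-A-installed", parent=sp1)
app_b = Snapshot("app-B-installed", parent=sp1)

print(app_a.lineage())   # path from the root snapshot down to app-A
```

Reverting to any node in the tree restores that point in time, and sibling branches like the two application snapshots above remain independent.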
Now let’s drill down into some details on how we’ve implemented NAS and iSCSI support in ESX Server 3. We’ve built an NFS client into ESX Server to let you use NAS volumes in place of VMFS-formatted SAN or local SCSI volumes to store virtual disks. NAS can’t offer the same performance as our highly optimized VMFS file system, but it is more than adequate for customers that want lower cost storage options for their virtual infrastructure. iSCSI, if you’re not familiar with it, is block-level storage that uses a TCP/IP transport rather than fibre channel. iSCSI can use the CAT-5 or fiber TCP/IP networking infrastructure you already have in place. ESX Server 3 hosts can now use hardware-based iSCSI with certified network adapters, or you can use the iSCSI software initiator we’ve built into the ESX Server 3 hypervisor. Hardware-based iSCSI offers better performance because TCP/IP processing is offloaded to the network adapter, but software-based iSCSI provides cost savings and flexibility important to some users. We also support booting ESX Server from iSCSI volumes so you can configure completely diskless ESX Server hosts. It’s important to point out that choosing lower cost NAS or iSCSI storage options doesn’t force you to give up important ESX Server features. VMotion live migration of running virtual machines is fully supported when using iSCSI and NAS storage, as are the new distributed resource scheduling and distributed availability services features I’ll tell you about shortly. Of course, the introduction of NAS and iSCSI support changes nothing from the virtual machines’ perspective – they continue to see their storage as locally attached SCSI disks – so no changes are needed to your virtual machines or their software if you choose NAS or iSCSI storage.
We haven’t just piled new features into ESX Server 3 and VirtualCenter 2. We’ve worked hard to make it easier to manage your virtual infrastructure with several simplifications so you can deploy virtualization everywhere and make life easier for your system administrators. Let’s take a look at how the new features in VirtualCenter 2 make that possible. If you’ve worked with ESX Server and VirtualCenter before, you know that there are multiple client interfaces you need to use. There’s a browser-based management user interface or MUI that you need to use when configuring ESX Server storage settings. To work directly with a virtual machine, you use the VMware remote console from a Windows or Linux client, and you use the Windows-based VirtualCenter client for any VirtualCenter tasks. With VirtualCenter 2 and ESX Server 3, VMware simplifies your life with a single client called the Virtual Infrastructure Client that you can use for all tasks. The Virtual Infrastructure Client does it all – it connects to ESX Server hosts, even those not under VirtualCenter management, it is the primary VirtualCenter interface, and it lets you remotely connect to any virtual machine for console access. There is a Windows version of the Virtual Infrastructure Client, and for access from any networked device, the same client is also available in a web browser implementation. The browser version of the client makes providing a user with access to a virtual machine as easy as sending a bookmark URL. We’ve done away with the need to separately connect to VirtualCenter or the ESX Server hosts to perform management tasks with VirtualCenter 2. Now, every ESX Server configuration task from deploying a new ESX Server host to configuring storage to managing the service console can be accomplished centrally through VirtualCenter. Another feature introduced in VMware’s hosted products and making its first appearance with ESX Server 3 and VirtualCenter 2 is remote devices.
You can now remotely mount a local CD, DVD or floppy disk in a virtual machine. That means you don’t need to make a trip to the server room when installing software in a virtual machine. Instead, you just pop the CD into the drive on your Virtual Infrastructure Client system, remotely mount the CD from the virtual machine and you’re ready to go. A brand new feature in VirtualCenter 2 is topology maps which give you a graphical view of your entire virtual infrastructure showing each virtual machine, ESX Server host and storage volumes and the relationships between them all. With the increased scaling offered by VirtualCenter 2, topology maps are an essential feature for understanding and managing an enterprise-wide virtual infrastructure. We’ve made managing your VMware software licenses much easier with a centralized license server for ESX Server 3 and VirtualCenter 2. Now you have one repository for all your VMware processor-based licenses. As you add or remove ESX Servers and licensed components or add processors to them, the necessary licenses are checked out or checked back in to the VMware license server. Centralized licensing simplifies tracking your VMware software usage and makes adding capacity much easier. Another simplification is we now store all the files encapsulating and describing a virtual machine in the same location. We now use the same VMFS-formatted container you use for your virtual disk files to also store the virtual machine configuration files, swap files, non-volatile memory nvram files and state files like snapshots.
We added many of the new features in VirtualCenter 2 after listening to our customers. They made it clear that we had to address a few remaining enterprise requirements to let VirtualCenter comply with their data center policies. We improved management of VirtualCenter user access controls with customizable roles and permissions. Going way beyond the four pre-defined user roles in version 1, VirtualCenter 2 lets you create your own user roles by selecting from an extensive list of permissions to grant to each role. That lets you manage your virtual infrastructure in full compliance with the most detailed data center access control policies. Under pressure from new government regulations like Sarbanes-Oxley, IT administrators need to retain audit trails of every significant change made or operation performed in their data center. VirtualCenter 2 goes beyond our current event logging to include full audit tracking, giving you a detailed record of every action taken in your virtual infrastructure and who performed it. With the increasing adoption of virtual infrastructure in enterprise data centers, our customers were pushing the limits of VirtualCenter scalability. With VirtualCenter 2, we’re staying ahead of our biggest users by increasing the scalability of virtual infrastructure management to let a single VirtualCenter server manage hundreds of VMware server hosts and thousands of virtual machines. We’ve improved VirtualCenter performance: it now starts faster and the user interface is more responsive. A commonly requested feature we added to VirtualCenter 2 is session management. Now, sufficiently privileged administrators can disconnect other users’ VirtualCenter sessions. This is especially useful when a virtual machine needs to be rebooted or reconfigured and unattended user sessions would otherwise prevent that. All these new features we’ve added to VirtualCenter 2 are available from automated scripts and programs through an enhanced VMware Virtual Infrastructure SDK.
Programs written to the VMware SDK can now directly access individual ESX Server hosts, with no need to interface through a VirtualCenter Management Server.
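As a rough illustration of the kind of automation the SDK enables, the sketch below walks a mock inventory shaped like the VirtualCenter hierarchy (datacenters, hosts, virtual machines). It is hypothetical plain Python with made-up names; the real Virtual Infrastructure SDK exposes this hierarchy through web services, not dictionaries.

```python
# Hypothetical mock of a VirtualCenter-style inventory tree. The actual
# Virtual Infrastructure SDK presents a similar datacenter -> host -> VM
# hierarchy to scripts and programs; the data here is invented.

inventory = {
    "Datacenter-East": {
        "esx-host-01": ["web-vm-1", "web-vm-2"],
        "esx-host-02": ["db-vm-1"],
    },
    "Datacenter-West": {
        "esx-host-03": ["test-vm-1"],
    },
}

def list_vms(tree):
    """Flatten the datacenter -> host -> VM hierarchy into (host, vm) pairs."""
    pairs = []
    for datacenter, hosts in tree.items():
        for host, vms in hosts.items():
            for vm in vms:
                pairs.append((host, vm))
    return pairs

for host, vm in list_vms(inventory):
    print(f"{vm} runs on {host}")
```

A real SDK client would perform the same traversal against live inventory objects and could then act on each virtual machine it finds.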
Now let’s turn to some major new technologies provided with ESX Server 3 and VirtualCenter 2. We’ll review the new concept of VMware clusters and some revolutionary new services built on them called distributed resource scheduling and distributed availability services. We’ll discuss resource pools and the resource management controls they provide. Then we’ll cover the VMware consolidated backup feature add-on and the powerful flexibility it gives you in virtual machine backup operations.
Clusters are a new concept in virtual infrastructure management that give you the power of multiple hosts but the simplicity of managing a single entity. Clusters reduce management complexity by combining standalone hosts into a single cluster with pooled resources and inherent high availability. Now, when you power on a virtual machine, you can place it on a cluster rather than a single ESX Server host. That makes all the resources of the cluster available for the virtual machine and it allows VirtualCenter to select the best host for the virtual machine and move the virtual machine within the cluster if conditions change. VMware clusters have inherent high availability because virtual machines now run on the cluster rather than on a standalone ESX Server host. If a VMware host fails, the virtual machines on it can be restarted on other hosts in the cluster. As hosts are added to or removed from clusters, the resources available to the virtual machines on the cluster are dynamically expanded or contracted.
[This slide uses a two-click animation] Let’s take a closer look at the benefits of VMware clusters, first by explaining how they make possible a new add-on feature called distributed resource scheduling. Distributed resource scheduling or DRS works with ESX Server 3 and VirtualCenter 2 to automate the balancing of virtual machine workloads across your virtual infrastructure. You begin by defining a cluster of ESX Server hosts, which creates an aggregated pool of resources that can be used by a collection of virtual machines. When a virtual machine is first started on the cluster, DRS selects the ESX Server host it runs on by automatically identifying a machine with sufficient resources. [Click 1] If conditions on the selected host change – for example, if other virtual machine activity increases to the point that the virtual machine can’t meet its guaranteed resource allocation – DRS will recognize that condition and search for an alternate ESX Server host on the cluster for the virtual machine – one that can honor the resource allocations needed by the virtual machine. [Click 2] DRS will then use VMotion to migrate the virtual machine to the new host automatically and with zero downtime for its users and applications. The result is a continuous balancing of all your server workloads across your virtual infrastructure. With DRS, you can safely and reliably run your x86 servers at over 80% utilization for unmatched efficiencies. DRS works using the ESX Server Local Scheduler and a new Global Scheduler that is part of VirtualCenter 2. The Local Scheduler determines which processors within a host to use for virtual machine execution based on current workloads, and it will relocate virtual machines as often as every few milliseconds if a different host processor offers more capacity. With DRS, we’ve introduced a Global Scheduler that continuously evaluates where best to locate a virtual machine across an entire cluster.
The Global Scheduler will determine which ESX Server will host a newly started virtual machine and it will use DRS to relocate a virtual machine if another ESX Server host offers a more suitable set of resources. DRS is fully configurable so you can set preferences on VMotion policy ranging from unrestricted automated migrations to “advisory-mode” which requires manual approval before any virtual machine is migrated. The DRS algorithms also apply when you power on a virtual machine. You can provision a new VM to a cluster instead of an individual server, and DRS will make an intelligent decision about where to place it when you power it on. We also have affinity and anti-affinity rules for certain use cases – for example, you may choose to keep clustered VMs on physically separate servers at all times for hardware redundancy – that’s a rule you can specify in DRS. Or, you may want to keep two VMs with internal networking always on the same physical host – you can also specify that as a rule. DRS preserves absolute levels of allocated resources when virtual machines are migrated. It is aware that a virtual machine allocated 10% of the CPU resources on an 8-way machine with 3.2GHz processors will need a larger percentage of host resources if migrated to a 2-way machine with slower processors. DRS will respond immediately when a new ESX Server host is added to a cluster, which is a simple drag-and-drop operation with VirtualCenter 2. The new host will expand the resource pool available to the cluster’s virtual machines and DRS will rebalance workloads by shifting virtual machines to the new host as appropriate. DRS will also respond to a host being removed from a cluster by migrating its virtual machines to remaining hosts in the cluster. The end result with DRS is a data center that can run at over 80% utilization levels while safely maintaining guaranteed service levels for all applications. 
With DRS, you get much better ROI on your x86 servers with a minimum of capacity planning effort required.
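The "absolute levels" behavior described on the previous slide can be made concrete with simple arithmetic. This is a simplified model, not the actual DRS algorithm: it assumes host capacity is just processor count times clock speed, and the 2.4GHz figure for the slower destination host is a hypothetical choice for illustration.

```python
# Simplified model of DRS preserving an absolute CPU allocation across a
# migration. Assumption: host capacity = processor count x clock speed.

def host_capacity_mhz(processors, mhz_per_processor):
    return processors * mhz_per_processor

def share_after_migration(share_on_source, source_capacity_mhz, dest_capacity_mhz):
    """Convert a fractional allocation on the source host into the fraction
    of the destination host that preserves the same absolute MHz."""
    absolute_mhz = share_on_source * source_capacity_mhz
    return absolute_mhz / dest_capacity_mhz

source = host_capacity_mhz(8, 3200)   # 8-way, 3.2GHz processors -> 25600 MHz
dest = host_capacity_mhz(2, 2400)     # 2-way, slower 2.4GHz processors -> 4800 MHz

# The slide's example: a VM allocated 10% of the 8-way host.
new_share = share_after_migration(0.10, source, dest)
print(f"10% of the 8-way host becomes {new_share:.0%} of the 2-way host")
```

The same 2560 MHz guarantee that was a small slice of the big host is a much larger slice of the small one, which is exactly why DRS tracks absolute rather than percentage allocations.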
A second major new feature based on ESX Server 3 clusters is the distributed availability services add-on. DAS uses the inherent high availability of clusters to give you a new option in protecting your critical services. With a VMware cluster, the loss of an ESX Server host due to a hardware failure, rather than being a catastrophic event, simply means that the resource pool available to the cluster has been reduced. DAS manages the reassignment and restart of the failed host’s virtual machines on the other ESX Server hosts in the cluster, with the VirtualCenter Global Scheduler making the decisions on where to place the virtual machines to best meet resource guarantees. High availability for applications is usually achieved with failover clustering products like Microsoft Cluster Services or Veritas Cluster Services, but that technology is expensive and difficult to configure and manage. Failover clustering requires expensive operating system upgrades or third-party software, and your applications must be cluster-aware. It is also a resource hog, as standby cluster nodes tie up dedicated hardware even if they are not in active use. DAS gives you high availability with almost no configuration. You simply select the DAS option for a cluster or host, and all its virtual machines will be protected with automatic restarting should a host fail. DAS differs from failover clustering in that there will be some downtime as a virtual machine is restarted, but for the majority of applications, that minimal interruption is acceptable and the expense and complexity of failover clustering is simply not necessary. It’s important to note that the VirtualCenter Management Server is not a single point of failure in a cluster protected by DAS. A shared state model spread across all the ESX Server hosts in the cluster can manage DAS operations in the absence of VirtualCenter.
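To make the restart decision concrete, here is a toy placement sketch. It is purely illustrative and assumes free memory is the only criterion; the real Global Scheduler weighs CPU and memory guarantees together, and all host and VM names here are hypothetical.

```python
# Illustrative failover placement: restart each VM from a failed host on
# the surviving host with the most free memory. A simplified stand-in
# for the Global Scheduler's resource-guarantee logic.

def place_failed_vms(failed_vms, survivors):
    """failed_vms: {vm_name: memory_gb}; survivors: {host: free_memory_gb}.
    Returns {vm_name: host}, or raises if some VM cannot be placed."""
    placement = {}
    free = dict(survivors)
    # Place the largest VMs first to reduce fragmentation of free capacity.
    for vm, mem in sorted(failed_vms.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)          # host with most free memory
        if free[host] < mem:
            raise RuntimeError(f"no surviving host can restart {vm}")
        placement[vm] = host
        free[host] -= mem
    return placement

# Hypothetical failure: three VMs from a dead host, two survivors.
print(place_failed_vms(
    {"mail": 4, "web": 2, "dns": 1},
    {"esx-02": 6, "esx-03": 3},
))
```

If the survivors cannot absorb every VM, the sketch raises an error; in a real cluster, admission control over the pooled resources is what keeps that situation from arising.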
Along with clusters, ESX Server 3 and VirtualCenter 2 add another new object called resource pools that adds flexibility in management of your virtual infrastructure. VMware resource pools are containers for virtual machines that have an associated set of resources. Resource pools are an ideal solution when you want to give users authority to create VMs but constrain their resource usage. For example, a development team that needs to manage their own virtual machines could be provided with a resource pool like the one shown here that allocates a total of 8GHz of CPU capacity and 6GB of memory. The development team could then create and control their own virtual machines, but no matter how many virtual machines were started, their resource consumption could never exceed the size of the pool. In this way, resource pools simplify virtual infrastructure management by eliminating the need to provision virtual machines individually and with carefully pre-configured resource allocations. Resource pool size can be changed dynamically which makes them a great container for enterprise applications that experience fluctuating workloads. For example, a multi-tier SAP installation could be configured as several networked virtual machines in a single resource pool. In anticipation of a period of increased SAP activity, the system administrator could simply allocate more CPU and memory to the SAP resource pool instead of having to individually adjust the resource allocations of each SAP virtual machine.
This illustration shows how resource pools can be used to subdivide the CPU and memory aggregated across a VMware cluster. We’ve created two resource pools in a cluster and are left with some unallocated CPU and memory capacity. You can configure resource pools to allow them to “burst out” during periods of high activity to use the available floating capacity. You can even configure resource pools to be able to use idle resources in adjacent pools on the cluster. If you are familiar with the resource management features of a single ESX Server host, you can see how we have extended the concepts from managing allocations across multiple virtual machines on a single host to managing multiple virtual machines across all the hosts in a cluster.
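The pool-level constraint described above is easy to model. The admission check below is a hypothetical sketch, not VMware's implementation, using the 8GHz/6GB development pool from the earlier example.

```python
# Hypothetical admission check for a resource pool: a new VM powers on
# only if the pool's aggregate CPU and memory limits would not be exceeded.

class ResourcePool:
    def __init__(self, cpu_ghz, mem_gb):
        self.cpu_limit, self.mem_limit = cpu_ghz, mem_gb
        self.cpu_used = self.mem_used = 0.0

    def power_on(self, cpu_ghz, mem_gb):
        """Admit the VM if the pool still has room; return True on success."""
        if self.cpu_used + cpu_ghz > self.cpu_limit:
            return False
        if self.mem_used + mem_gb > self.mem_limit:
            return False
        self.cpu_used += cpu_ghz
        self.mem_used += mem_gb
        return True

# The development team's pool from the slide: 8GHz of CPU, 6GB of memory.
dev_pool = ResourcePool(cpu_ghz=8.0, mem_gb=6.0)
print(dev_pool.power_on(2.0, 2.0))  # fits within the pool
print(dev_pool.power_on(2.0, 2.0))  # still fits
print(dev_pool.power_on(2.0, 3.0))  # memory would reach 7GB > 6GB: rejected
```

However many virtual machines the team creates, aggregate consumption can never exceed the pool, which is the whole point: the administrator sizes the container once instead of provisioning each VM individually.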
The final major new technology we’re delivering with ESX Server 3 is called VMware Consolidated Backup. It’s an example of a service that normally runs in each virtual machine that’s been pushed into the virtual infrastructure layer to provide a number of benefits. VMware Consolidated Backup replaces the backup agents normally installed in each virtual machine with a backup proxy that manages backup activity across many virtual machines. Prior to VMware Consolidated Backup, there were two approaches to backing up virtual machines, each with advantages and disadvantages. You could install a backup agent in each virtual machine and treat a virtual machine like a physical machine. Virtual machines support off-the-shelf backup agents that are used to back up the contents of the VM’s virtual disks. This approach had the advantage of providing file-level visibility of the VM’s files for backup and restore operations. The disadvantages are that you need to license and install a backup agent in every VM and the backup process consumes CPU and network resources on the ESX Server host. The other approach is to use a dedicated backup server or SAN-based backup service to capture entire virtual disk files. With this method, you need only a single backup agent for many VMs, but you lose file-level visibility into your VM storage. You also need to ensure that applications in the VMs are properly quiesced before taking the backups, or you risk being left with an inconsistent file system on a restore. VMware Consolidated Backup eliminates the compromises of those two approaches by letting you centralize the VM backup service while preserving file-level visibility in your backups and restores. It works by taking snapshots of running virtual machines after quiescing the applications in the virtual machines to ensure file system consistency.
The virtual disk snapshots are then mounted as a LUN by a Windows backup proxy server that can use a standard backup agent to process the backup to tape or disk devices. VMware Consolidated Backup is pre-integrated with popular backup utilities and their pre- and post-processing hooks for easy out-of-the-box implementations. VMware Consolidated Backup operates transparently with no need to interrupt virtual machine activity. The backup processing is moved off the ESX Server host, so there’s no impact on the CPU and network resources needed by critical applications in VMs. VMware Consolidated Backup lets you centralize backup processing, reduce backup agent licenses, run backups without system interruptions, isolate backup workloads from your production servers and preserve full file-level visibility with none of the compromises previously imposed.
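The proxy-based flow just described (quiesce, snapshot, mount on the proxy, back up, release) can be outlined as a sequence. The step descriptions below are hypothetical placeholders, not actual VCB commands; the sketch exists only to fix the order of operations in mind.

```python
# Hypothetical outline of the Consolidated Backup sequence. Each step is a
# placeholder for what the real VCB framework and backup agent perform.

def consolidated_backup(vm_name):
    steps = []
    steps.append(f"quiesce applications in {vm_name}")            # flush for a consistent file system
    steps.append(f"take snapshot of {vm_name}'s virtual disks")   # VM keeps running from here on
    steps.append(f"mount {vm_name} snapshot on the backup proxy")
    steps.append(f"run the backup agent against the mounted volume")
    steps.append(f"unmount and release the snapshot for {vm_name}")
    return steps

for step in consolidated_backup("exchange-vm"):
    print(step)
```

The key property is that only the first step touches the running guest; everything after the snapshot happens on the proxy, which is why the production workload sees no CPU or network impact.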
VCB now supports storage attached to the VM by iSCSI or NAS, as well as locally attached storage (released in 3.0.2 but not widely known). This is a cost savings element that makes VCB more attractive in the SMB space. VCB can now run in a VM, eliminating the need for a dedicated backup proxy server running on Windows. This also reduces cost for SMB customers and enables the use of VI features such as VMotion and DRS to maximize the use of existing resources. VMware Converter can be used to restore VCB images (released with 3.0.1 but not widely known): it restores VCB images of VMs to running virtual machines, a simple one-step graphical way to take VMs restored from tape and put them back into VI. There is now full GUI integration with leading partners – use the NetBackup 6.5 example (just released in August), which enables VCB to run without scripting and be configured completely through the NetBackup GUI.
Storage VMotion extends the concept of VMotion to storage arrays, allowing virtual machine disks to be moved from one datastore to another with no disruption or downtime.
HA enhancements now allow HA to protect against OS failures, let HA scale to 32-node clusters, and improve its robustness with more isolation addresses per host to prevent false restarts.
Update Manager enforces greater security and reliability in the datacenter by automating enforcement of patch standards for ESX Server hosts and virtual machines. ESX Server hosts are patched non-disruptively and automatically. Secure patching of offline or suspended virtual machines allows greater levels of compliance than physical environments.
Site Recovery Manager is a new solution from VMware that is enabled by VC 2.5.
Details on availability: Storage VMotion is available as part of VI3 Enterprise (customers with active SnS or SLs for VI3 Enterprise get this as a free upgrade with VC 2.5). Enhanced HA is simply the updated HA in VC 2.5. Update Manager is available as part of all three VI3 editions, starting with VC 2.5; customers with active SnS or SLs will get Update Manager when VC 2.5 is generally available. Site Recovery Manager is not part of VI3; it is an add-on product for VI3 environments. It will require VC 2.5 and is likely to be available early next year.
What is VMware Update Manager? It is an automated patch management solution for VMware ESX hosts as well as Microsoft and Linux virtual machines. Two main benefits compared to traditional patching solutions: 1. Patching of offline/suspended machines is done securely; non-compliant machines are patched in a quarantined state so that the rest of the network is not exposed to them. 2. It can patch and update ESX Server and ESX Server 3i.
MORE DETAIL FOR INTERESTED CUSTOMERS: VMware Update Manager is used to enforce compliance with patch standards in four steps:
1. Get information on the latest patches: VMware Update Manager automatically gathers the latest patch data from VMware as well as application vendors such as Microsoft, Adobe, and Mozilla via the Internet. To enable secure off-network patching, VMware Update Manager has a disconnected patch mode; a separate download service assists Update Manager in gathering patch data used for off-network patches and updates.
2. Set baselines: The information collected by Update Manager is used to set baselines. Baselines contain one or more service packs, patches, and/or updates. The baseline data that Update Manager gathers gives IT administrators granular control in defining patch levels. These baselines can be static baselines defined manually, or dynamic baselines that are set automatically depending on the significance of the patch data from the system vendor.
3. Compare physical hosts and virtual machines against the baselines: VMware Update Manager scans the state of physical VMware ESX Server hosts as well as select Microsoft and Linux guest operating systems and compares it with the baselines set by the administrator. Scans can be initiated on entire datacenters, clusters, resource pools, templates, folders, or individual hosts and virtual machines. They can be run immediately or scheduled as necessary. After a scan is complete, non-compliant machines are flagged for patch updates. 4.
Remediate the selected set of physical or virtual machines. Virtual machine patching: VMware Update Manager supports either manual or scheduled patching of non-compliant machines. If a reboot is required after a manual patch or update, the administrator has the option to reboot immediately or delay the system restart by up to 60 minutes. To reduce the risk of virtual machine patching failures, VMware Update Manager automatically snapshots the virtual machine state prior to applying a patch. Snapshots are stored for a user-defined period so administrators can roll back a patched virtual machine to a known working state if there are any problems. VMware Update Manager also patches offline or suspended virtual machines. When remediating offline or suspended virtual machines, VMware Update Manager disables their NICs during the patching so the network is not exposed to non-compliant virtual machines.
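The baseline/scan/remediate cycle above can be condensed into a short sketch. The data structures here (sets of patch IDs, dictionaries per machine) are toy stand-ins, not the real Update Manager objects:

```python
# Sketch of the Update Manager steps: define a baseline, scan machines
# against it, then remediate the non-compliant ones. Offline or suspended
# VMs have their NIC disabled while they are being brought into compliance.

def scan(machines, baseline):
    """Step 3: flag machines whose installed patches miss the baseline."""
    return [m for m in machines if not baseline <= m["patches"]]

def remediate(machine, baseline):
    """Step 4: snapshot for rollback, quarantine if offline, then patch."""
    if machine["state"] in ("offline", "suspended"):
        machine["nic_enabled"] = False          # quarantine while non-compliant
    machine["snapshot"] = set(machine["patches"])  # rollback point
    machine["patches"] |= baseline
    machine["nic_enabled"] = True               # reconnect once compliant

baseline = {"KB1", "KB2", "KB3"}                # a static, admin-defined baseline
machines = [
    {"name": "vm1", "state": "running",   "patches": {"KB1", "KB2", "KB3"}, "nic_enabled": True},
    {"name": "vm2", "state": "suspended", "patches": {"KB1"},               "nic_enabled": True},
]
for m in scan(machines, baseline):
    remediate(m, baseline)

assert all(baseline <= m["patches"] for m in machines)
```

The pre-patch snapshot is what enables the rollback-to-known-good-state behavior described above; the NIC toggle models the quarantine of offline and suspended VMs.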
DRS-enabled patching brings new levels of automation and resource management to the datacenter. With DRS-enabled patching, system administrators can be almost completely hands-off in applying patches to ESX Server hosts. DRS manages the entire process for entire clusters of physical hosts with no disruption or downtime to any of the virtual machines.
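A rolling host-patching pass of this kind can be modeled in a few lines. This is a toy model, not the VirtualCenter API: each host in turn enters maintenance mode (its VMs are live-migrated to the least-loaded peer), is patched while empty, then rejoins the cluster.

```python
# Toy rolling patch of an ESX cluster: for each host, evacuate its VMs via
# simulated VMotion (no VM ever stops), patch the empty host, move on.

def rolling_patch(cluster):
    for host in cluster:
        peers = [h for h in cluster if h is not host]
        # "enter maintenance mode": VMotion every VM to the least-loaded peer
        for vm in list(host["vms"]):
            target = min(peers, key=lambda h: len(h["vms"]))
            host["vms"].remove(vm)
            target["vms"].append(vm)
        host["patched"] = True   # patch and reboot while the host is empty
        # "exit maintenance mode": host becomes a migration target again

cluster = [{"name": f"esx{i}",
            "vms": [f"vm{i}{j}" for j in range(3)],
            "patched": False} for i in range(3)]
rolling_patch(cluster)

assert all(h["patched"] for h in cluster)
assert sum(len(h["vms"]) for h in cluster) == 9   # no VM lost or stopped
```

In the real product DRS also rebalances the cluster after each host exits maintenance mode; the sketch only shows why no VM ever incurs downtime.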
In a typical 100-VM environment, you save one FTE dedicated to patching and maintaining VMs. This value is delivered in all three VI3 editions at no extra cost!
A variety of solutions available with VMware virtual infrastructure supports management of all types of downtime.
Storage VMotion completes the non-disruptive management of planned downtime. Migrating single VMs and their disks from one array to another is accomplished today by moving entire LUNs with data movers or SAN tools, almost always involving downtime. Now, optimizing VM performance by matching VMs with the right type of storage becomes a trivial problem to solve. HOW DOES IT WORK: VMware Storage VMotion allows a virtual machine’s disks to be relocated to different datastore locations completely transparently while the virtual machine is running, with zero downtime. Storage VMotion takes advantage of core technologies that VMware has developed, such as disk snapshots, REDO logs, parent/child disk relations, and snapshot consolidation. Before moving a virtual machine’s disk file, Storage VMotion moves the “home directory” of the virtual machine to the new location. The “home directory” contains meta-information about the virtual machine: its configuration, swap, and log files. The VM next “self-VMotions” to the new home location. The disk movements follow the home directory migration. First, Storage VMotion creates a “child disk” for each virtual machine disk to be migrated; once the migration operation has started, all disk writes are directed to this child disk. Second, the “parent” (original) virtual disk is copied from the old storage device to the new storage device. Third, the child disk that has been capturing the write operations is “re-parented” to the newly copied parent disk. In the final step, the child disk is consolidated into the new parent disk, and the ESX host is redirected to the new parent disk location. The switchover process of home directory and disk migration, creation of child and parent disks, re-parenting, and consolidation of child disks happens in under two seconds, fast enough to be unnoticeable to the application user.
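The four disk-move steps just described can be traced with a toy model that uses lists of "blocks" in place of real VMDK files and REDO logs. The point to observe is that a write arriving mid-copy lands in the child disk and survives the consolidation:

```python
# Toy model of the Storage VMotion disk move: child disk absorbs writes,
# parent is copied to the new datastore, child is re-parented onto the
# copy and consolidated in. Lists of blocks stand in for virtual disks.

def storage_vmotion_disk(parent_old):
    child = []                      # step 1: child disk created on migration start
    child.append("write-during-copy")  # in-flight write goes to the child, not parent
    parent_new = list(parent_old)   # step 2: copy the parent to the new datastore
    parent_new.extend(child)        # steps 3-4: re-parent the child onto the new
                                    #            parent, then consolidate it in
    return parent_new               # host now runs the VM off the new location

old_disk = ["block0", "block1"]
new_disk = storage_vmotion_disk(old_disk)

assert new_disk == ["block0", "block1", "write-during-copy"]  # no write lost
assert old_disk == ["block0", "block1"]                       # original untouched
```

This is why the operation needs no downtime: the VM never stops writing, and the child/parent split guarantees those writes end up on the destination datastore.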
The traditional process of adding new storage disks and arrays is cumbersome, time-consuming, and disruptive, with storage data migrations consuming IT resources and requiring significant service interruption. (Often these migration activities require extensive coordination between multiple IT groups.) Storage VMotion helps customers embrace new storage choices, take advantage of flexible leasing models, cost-effectively adopt new disk file formats, and retire older, harder-to-manage arrays. Storage VMotion gives IT organizations the ability to non-disruptively move underlying virtual machine disk files from existing storage to any new storage of choice. This operation can be performed within and across storage arrays. It also helps IT organizations conduct storage upgrades and migrations based on usage and priority policies (assuming their arrays support storage tiering).
Use Case: Dynamically optimize storage I/O performance. Customer problem statement: Application performance is a key metric of SLA success. Managing storage LUN allocations to support dynamic virtual machine environments can be a time-consuming process, requiring extensive coordination between application owners, virtual server owners, and storage administrators, and also requiring off-lining of storage, which causes application downtime. Customers sometimes compensate by over-allocating high-I/O LUNs to support an application that may never need them. The two VMs on the left have their datastores (red folders) on a LUN that is seeing increasing I/O activity, leading to a hot spot that will soon cause the application performance on those VMs to suffer or become sluggish. Essentially, this LUN is reaching its limit for supporting high I/O and, if left unattended, will cause application performance issues. In this example, VMware Storage VMotion migrates the affected VMs’ datastores to the “optimized for high I/O” LUN set on the right, which the storage administrator has created on the existing array. Normally, moving the datastore for a running application would involve off-lining the application. Storage VMotion does this online (non-disruptively) in sub-2 seconds (we need not mention this figure), imperceptible to application users. This can be done even across arrays, as shown in previous examples. (Important: this assumes that LUN sets have already been created; Storage VMotion does not create LUNs.)
What does HA do today? It protects against hardware failures by restarting VMs automatically on alternate ESX hosts. With ESX 3.5/VC 2.5, HA scalability improves to 32-node clusters. Cluster configurations are now proactively monitored for abnormal configurations, such as common DNS misconfigurations and the right amount of network redundancy in the service console. VM failure monitoring extends HA protection to OS failures as well; initially, this feature is experimentally supported. It is guest OS agnostic, extending HA protection to OS failures across the board, using heartbeat monitoring through VMware Tools. Users can define parameters for how long before a VM is declared failed, how much time is allowed for a restart, and how many restart attempts are made.
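The VM failure monitoring policy above can be sketched as a small decision function. The parameter names here are illustrative, not the actual HA settings: a VM is declared failed after a heartbeat gap, then restarted a bounded number of times.

```python
# Sketch of heartbeat-based VM failure monitoring: healthy while heartbeats
# are recent, otherwise a bounded sequence of restart attempts.

def ha_vm_monitor(last_heartbeat, now, restart_ok,
                  failure_window=30, max_attempts=3):
    """restart_ok lists the outcome of each restart attempt
    (True once the guest OS comes back and heartbeats resume)."""
    if now - last_heartbeat <= failure_window:
        return "healthy"
    for attempt, ok in enumerate(restart_ok[:max_attempts], start=1):
        if ok:
            return f"recovered-after-{attempt}"
    return "failed"

assert ha_vm_monitor(20, 25, []) == "healthy"                       # recent heartbeat
assert ha_vm_monitor(20, 90, [False, True]) == "recovered-after-2"  # OS hang, 2nd try works
assert ha_vm_monitor(20, 90, [False, False, False]) == "failed"     # gives up at the cap
```

The three tunables map onto the user-definable parameters in the notes: how long before a VM is declared failed, and how many restart attempts are made before giving up.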
Site Recovery Manager provides a central point of management for disaster recovery plans. It is a plug-in to VMware VirtualCenter that manages recovery plans for virtual machines managed by VirtualCenter. It enables pre-programming of disaster recovery plans: map resources between production and recovery sites, and program the steps of the recovery process. It automates key disaster recovery workflows. Simplified setup of recovery plans: create, configure, and manage recovery plans from VMware VirtualCenter, making it possible to manage disaster recovery plans as an integral part of virtual infrastructure. Turn the complex, manual runbooks commonly required in traditional disaster recovery solutions into automated recovery workflows maintained and executed automatically at the push of a button from within Site Recovery Manager. Automated failover: once recovery plans have been created, automate execution of those plans to make recovery dramatically faster by eliminating the slow and unreliable manual processes otherwise required. Easier testing of recovery plans: perform non-disruptive, automated tests of the same recovery plans that will be used in a failover scenario. Expected to be available in H1 2008; it will be a separately licensed component.
This slide illustrates the key components of a Site Recovery Manager deployment. The requirement is for protected VMs to be running on storage that is replicated to a secondary site. VirtualCenter is required to manage both sites. Site Recovery Manager manages the mapping of components (VMs, resource pools, networks, etc.) between the two sites and provides workflow automation for setup, failover, failback, and testing of this DR environment.
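What a recovery plan encodes can be illustrated with a short sketch: resource mappings between the two sites plus an ordered, testable sequence of steps. All names here are hypothetical; Site Recovery Manager's real objects and APIs differ.

```python
# Illustrative recovery plan: site-to-site resource mappings plus ordered
# failover steps that can also be rehearsed non-disruptively in test mode.

mappings = {"prod-network": "dr-network",   # where each production resource
            "prod-pool": "dr-pool"}         # lands at the recovery site

steps = [
    "promote-replicated-storage",   # make replicated LUNs writable at DR site
    "register-vms",                 # attach protected VMs to mapped resources
    "power-on-tier1",               # ordered power-on: critical VMs first
    "power-on-tier2",
]

def run_plan(steps, test=False):
    """Execute the plan, or rehearse the exact same steps in test mode."""
    mode = "test" if test else "failover"
    return [(step, mode) for step in steps]

rehearsal = run_plan(steps, test=True)   # non-disruptive test of the same plan
assert [s for s, _ in rehearsal] == steps
assert all(mode == "test" for _, mode in rehearsal)
```

The key design point mirrored here is that testing runs the identical step list as a real failover, which is what makes the tested plan trustworthy.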
VMware’s main goal with ESX Server 3 and VirtualCenter 2 is to elevate the simplicity and power of our virtual infrastructure to the point that any server in any environment can be easily virtualized. With improved performance and capacity, expanded hardware support, simplified management and high availability, we’re enabling virtualization everywhere so every server you manage can share the benefits of running in a virtual machine. We’ve simplified the installation, licensing and management of ESX Server 3 and VirtualCenter 2 with improved and unified client interfaces. We’ve expanded the capabilities of ESX Server virtual machines with support for 4-way Virtual SMP, 16GB per virtual machine memory limits and we’ve increased the scalability of VirtualCenter to manage virtual infrastructures of hundreds of VMware servers and thousands of virtual machines to meet the needs of the most demanding enterprise data centers. ESX Server 3 and VirtualCenter 2 provide a highly capable platform for new game-changing infrastructure-wide services like distributed availability services, distributed resource scheduling and consolidated backup that let you do things with virtual machines that are simply not possible with physical systems. By building those services in the virtualization layer together with our revolutionary VMotion technology, we can continuously manipulate the location and resource allocations of virtual machines to optimize your data center and provide high availability and data protection with unmatched simplicity and cost effectiveness. ESX Server 3 and VirtualCenter 2 make it clear, in case there were any remaining doubts, that virtual machines really are better than physical machines!
The following screenshots from the new VMware Virtual Infrastructure Client illustrate some of the new features we’ve discussed in this presentation.
Introducing Virtual Infrastructure: ESX Server Version 3 with VirtualCenter Version 2: Dan Sullivan, Territory Manager, New England
VMware Virtual Infrastructure
Industry-standard way of computing
Platform for any OS, hardware, application
Most effective way to manage IT infrastructure
Mainframe-class reliability and availability
The automated… always on… infrastructure
VMware Virtual Infrastructure Evolution
2001: ESX Server 1. First x86 hypervisor.
2003: ESX Server 2 - VirtualCenter 1. Multi-server infrastructure: Virtual SMP, VMotion, SAN integration, Virtual SDK; blade servers.
2006: ESX Server 3 - VirtualCenter 2. Infrastructure-wide services, expanded VM capacity, increased scalability, improved management; expanded storage support, white box server support.
(Chart axes: Improved Capabilities vs. Reduced Platform Cost)
Expands footprint of virtualization from branch offices to biggest data centers
Virtualization Technology Basics System without VMware Software System with VMware Software VMware software insulates the BIOS / Operating System / Applications from the physical hardware, so many systems can share hardware, or be moved to different hardware with no service interruption
Data management techniques can be used for server management
Entire server – OS, apps, data, devices, and state – is now simply a file.
VMotion™
VMotion technology lets you move live, running virtual machines from one host to another while maintaining continuous service availability.
Continuous Optimization | Fast Reconfiguration | Zero-Downtime Maintenance
ESX Server 3 Features Virtualization Everywhere!
NAS and iSCSI storage
Expanded hardware compatibility list
4-way Virtual SMP
64GB guest memory/128GB per Host
Hot-add virtual disks
Red Hat Enterprise Linux 4 guests
Updated Service Console (Red Hat Enterprise Linux 3)
More flexible networking
64-bit guest technology – FULL SUPPORT
Branch Office: NAS/iSCSI storage | Dev & Test: local storage | Data Center: Fibre Channel SAN
Administrative time – 3064 hrs, $153,200 saved annually
Calculated for 100 virtual machines, assuming 75 patches per machine
Offline Machine Patching
Reduces exposure from non-compliant offline/suspended virtual machines
Systems have NICs disabled during patching to reduce risk
Assess patch requirements
Per patch: manual 15 min, automated 6 min; annual savings for 100 VMs: 1125 hrs, $56,250
Per virtual machine: manual 18 min, automated 6 min; annual savings for 100 VMs: 1939 hrs, $96,950
Combined: manual 33 min, automated 12 min; annual savings for 100 VMs: 3064 hrs, $153,200
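The per-patch savings figure can be checked directly against the stated assumptions of 100 virtual machines and 75 patches per machine per year. The $50/hr labor rate below is an inference from the slide's numbers (each dollar figure divides its hour figure by exactly 50), not a rate the slide states.

```python
# Check the per-patch row: 100 VMs x 75 patch events each per year,
# 15 min manual vs 6 min automated per event.
vms = 100
patches_per_vm = 75
minutes_saved = (15 - 6) * vms * patches_per_vm   # 9 min saved per patch event
hours_saved = minutes_saved / 60

assert hours_saved == 1125            # matches "1125 hrs"
assert hours_saved * 50 == 56250      # matches "$56,250" at an inferred $50/hr
```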
Manage All Types of Downtime
(Columns: quick recovery from unplanned outages | prevent planned outages)
Site: Site Recovery Manager
Data: VCB | N/A
Storage: VCB | Storage VMotion
Server: HA | DRS with Maintenance Mode
Component: NIC teaming, multipathing