This is the summary diagram of vSphere and its components.
This is a quick summary of the current and new infrastructure services, which deliver greater capital and operational savings than any other virtualization solution. Infrastructure services are the components of the cloud OS that abstract away the underlying server, storage, and networking hardware; aggregate those resources; and deliver them precisely as needed to applications.
Now let’s dig deeper into the new vCompute features. vCompute services deliver the most efficient way to virtualize individual servers and to maximize utilization across a group of servers, increasing capital and operational savings.
The ESX 4 Service Console includes a 64-bit, 2.6-based Linux kernel that is compatible with Red Hat Enterprise Linux Server release 5.2, CentOS version 5.2, and equivalent Linux systems. The ESX 4 Service Console introduces several enhancements to improve performance and security. It provides native support for 64-bit applications and supports 32-bit applications through compatibility libraries. Additionally, ESX 4 stores the root file system of the Service Console in a VMDK to give other products more flexibility to access and back up console data. By default, during installation the virtual disk is automatically placed on the first virtual machine datastore that you previously configured. You have the option to change several elements of the internal configuration of the virtual disk, including the size of the swap area.

The ESX 4.0 Service Console supports only 64-bit device drivers. All device drivers are vmklinux-based drivers running in (and owned by) the VMkernel. Unlike ESX 3.5, ESX 4 does not require Linux device drivers running in the Service Console. All Service Console network interfaces fully support IPv6. And the ESX 4.0 Service Console supports Address Space Layout Randomization, which helps prevent security attacks by making target addresses difficult to predict.

Additionally, some features are no longer supported. For example, VMware no longer actively supports, updates, or encourages the use of net-snmp in the Service Console. Instead, the SNMP MIB modules are provided by the SNMP agent built into the host agent in ESX/ESXi 4.0. And whereas in previous versions the Service Console was both a deployment environment and a development environment, the ESX 4.0 Service Console is a deployment environment only, so it no longer includes Linux development packages and libraries.
The Summary tab for clusters has been enhanced to display useful information about the configuration and operation of the cluster. The VMware DRS section provides a link to charts showing CPU and memory utilization per host. CPU utilization is displayed on a per-virtual machine basis. Information for each virtual machine is shown as a colored box, which symbolizes the percentage of entitled resources (as computed by DRS) that have been delivered to it. If the virtual machine is receiving its entitlement, this box should be green. If it is not green for an extended time, you might want to investigate what is causing this shortfall (for example, unapplied recommendations). If you hold the pointer over the box for a virtual machine, its utilization information (consumed versus entitlement) appears. You can toggle the display between percentage and megahertz by clicking the appropriate button. Memory utilization is also displayed on a per-virtual machine basis. For memory utilization, the virtual machine boxes are not color-coded because the relationship between consumed memory and entitlement is often not easily categorized. As with CPU utilization, you can toggle the display, in this case between percentage and megabytes, by clicking the appropriate button.
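As a rough illustration of the coloring logic described above, consider this sketch. The function names and the 95 percent threshold are assumptions made for the example, not vCenter's actual rule:

```python
def utilization_color(consumed_mhz: float, entitlement_mhz: float,
                      threshold: float = 0.95) -> str:
    """Return 'green' when a VM receives (nearly) its DRS entitlement."""
    if entitlement_mhz == 0:
        return "green"  # nothing is owed to this VM
    ratio = consumed_mhz / entitlement_mhz
    return "green" if ratio >= threshold else "amber"

def as_percent(consumed_mhz: float, entitlement_mhz: float) -> float:
    """The percentage view of the toggle: consumed as a share of entitlement."""
    return 100.0 * consumed_mhz / entitlement_mhz if entitlement_mhz else 0.0
```

A box staying amber, in this model, simply means consumed has fallen well below entitlement for the sampling period.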
The DRS Recommendations tab has been renamed to DRS because, in addition to recommendations, it now includes a page for faults that have occurred in applying recommendations and a page that displays the history of DRS actions. On the Recommendations page, you can view and edit cluster properties. Additionally, the DRS Recommendations section displays the current set of recommendations generated for optimizing resource utilization in the cluster through either migrations or power management. Only manual recommendations awaiting user confirmation appear on this list. To refresh the recommendations, click Run DRS now. Note that this command appears on all three DRS pages. To apply all recommendations, click Apply Recommendations. To apply a subset of the recommendations, select the Override suggested DRS recommendations check box. This activates the Apply check boxes next to each recommendation. Select the check box next to each desired recommendation and click Apply Recommendations. The Faults page of the DRS tab displays faults that prevented the recommendation of a DRS action (in manual mode) or the application of a DRS recommendation (in automatic mode). For each fault, the page displays the timestamp of when the fault occurred, a description of the condition that prevented the recommendation from being made or applied, and the target of the intended action. When a problem is selected, more detailed information about its associated faults is displayed in the Problem Details box below. You can use the Problem or Target contains text box to customize the display of problems. Select the search criteria (Time, Problem, Target) from the drop-down box next to the text box and enter a relevant text string. Only problems that match that string are displayed. For additional information about faults and other potential issues encountered when using DRS, click View DRS Troubleshooting Guide.
The History page of the DRS tab displays recent actions that have been taken as a result of DRS recommendations. For each action, the page shows details of the action taken and a timestamp of when the action was taken. Like the Faults page, the display of recent actions can be customized using the DRS Actions or Time contains text box.
vCenter 4 includes a new scheduled task to change the resource settings of a resource pool or virtual machine. You can configure the task to change the shares, reservation, and limit for CPU, memory, or both so that you can accommodate business priorities that change throughout the year. For example, at the end of each quarter you could give financial applications higher priority than internal applications. Or, if you were a retail organization, you could double the resource reservations for the online store virtual machines during the month of December.
vSphere fully supports the VMware Distributed Power Management feature introduced with experimental support in VI3.5. When enabled, DPM continuously monitors resource requirements and power consumption across a DRS cluster. When the cluster needs fewer resources, it consolidates workloads and puts unused hosts in standby mode to reduce power consumption. When the resource requirements of workloads increase, DPM brings powered-down hosts back online to ensure service levels are met. DPM allows you to cut power and cooling costs in the datacenter during periods of low utilization, and to automate the management of energy efficiency in the datacenter. With this release, DPM supports three wake protocols to bring hosts out of standby mode: Intelligent Platform Management Interface (IPMI), HP Integrated Lights-Out (iLO), and Wake-on-LAN. For each of these wake protocols, you must perform specific configuration or testing steps on each host before you can enable DPM for the cluster.
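Of the three wake protocols, Wake-on-LAN is simple enough to sketch: the sleeping host's NIC listens for a "magic packet" consisting of six 0xFF bytes followed by the target MAC address repeated sixteen times, typically sent as a UDP broadcast. This is an illustrative stand-alone script; DPM performs the equivalent from within the cluster, not via a script like this:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the packet as a UDP broadcast (port 9 is the conventional choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

This is also why Wake-on-LAN must be enabled and tested on each host's NIC before DPM can rely on it.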
vStorage services deliver abstraction from the type of storage and the most efficient use of storage in virtual environments. Let’s look more closely at the new vStorage features in vSphere 4.
The VMkernel storage stack has been restructured to include a modular Pluggable Storage Architecture (or PSA) that provides superior multipathing support. The PSA coordinates the operation of both a Native Multipathing Module and custom software plugin modules developed using the vStorage APIs for Multipathing. VMware provides a default multipathing module with ESX called the Native Multipathing Module (or NMP). The NMP associates a set of physical paths with a specific storage device, or LUN. The NMP supports all storage arrays listed on the VMware storage hardware compatibility list and provides a path selection algorithm based on the array type. The NMP has two sub-plugins for failover and load balancing: Storage Array Type Plugins (known as SATPs) and Path Selection Plugins (known as PSPs). Storage Array Type Plugins handle path failover for a given storage array. SATPs run in conjunction with the NMP and are responsible for array-specific operations. ESX offers an SATP for every type of array that VMware supports, generic SATPs for non-specified storage arrays, and a local SATP for direct-attached storage. Each SATP accommodates the special characteristics of a certain class of storage arrays and can perform the array-specific operations required to detect path state and to activate an inactive path. As a result, the NMP module can work with multiple storage arrays without having to be aware of the storage device specifics. After the NMP determines which SATP to call for a specific storage array and associates the SATP with the physical paths from that array, the SATP monitors the health of each physical path and reports changes in path state to the NMP. The SATP also performs the array-specific actions necessary for storage failover. For example, for active/passive devices, it can activate passive paths. Path Selection Plugins run in conjunction with the NMP and are responsible for choosing the physical path for I/O requests.
The NMP assigns a default PSP for every logical device based on the SATP associated with the physical paths for that device. You can override the default PSP. The NMP supports the following PSPs: Most Recently Used selects the path it used most recently. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. Fixed uses the designated preferred path, if one has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path, and it automatically reverts to the preferred path as soon as that path becomes available again. And Round Robin uses an automatic path selection scheme, rotating through all available paths and enabling load balancing across them.
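The behavioral difference between two of these policies can be modeled in a few lines of Python. This is a conceptual sketch only, nothing like the VMkernel implementation, and the path names are made up:

```python
import itertools

class RoundRobinPSP:
    """Round Robin: rotate I/O across all available paths for load balancing."""
    def __init__(self, paths):
        self._cycle = itertools.cycle(paths)

    def select_path(self):
        return next(self._cycle)

class MostRecentlyUsedPSP:
    """MRU: keep using the current path until it fails, then switch to an
    alternative and stay there (no automatic failback, unlike Fixed)."""
    def __init__(self, paths):
        self.current = list(paths)[0]

    def select_path(self, available):
        if self.current not in available:
            self.current = available[0]  # fail over and stick with the new path
        return self.current
```

Fixed differs from MRU only in the last step: when the preferred path returns, Fixed reverts to it, whereas MRU stays on the failover path.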
The new PSA framework provides for installing third-party multipathing plugins that can replace or supplement the vStorage native components. Third-party multipathing plugins (referred to as MPPs) are developed by software or storage hardware vendors and integrate with the PSA, improving critical aspects of path management and adding support for new arrays that may be currently unsupported by ESX, as well as new path selection policies. Third-party plugins fall into one of three categories. Third-party MPPs can provide entirely new fault-tolerance and performance behavior. They run in parallel with the VMware NMP and, for certain specified arrays, replace the behavior of the NMP, taking control over the path failover and load-balancing operations. Third-party SATPs are generally developed by third-party hardware manufacturers who, having expert knowledge of their storage devices, can optimize the plugins so that they accommodate the specific characteristics of the storage arrays and support new array lines. You need to install third-party SATPs when the behavior of your array does not match the behavior of any existing SATP that the PSA offers. When installed, the third-party SATPs are coordinated by the NMP and can run alongside, and be used simultaneously with, the VMware SATPs. Third-party PSPs can provide more complex I/O load balancing algorithms. Generally, these plugins are developed by third-party software companies and can help you achieve higher throughput across multiple paths. When installed, the third-party PSPs are coordinated by the NMP and can run alongside, and be used simultaneously with, the VMware PSPs. When the host boots up or performs a rescan, the PSA discovers all physical paths to the storage devices available to the host. Based on a set of claim rules defined in the /etc/vmware/esx.conf file, the PSA determines which multipathing module should claim the paths to a particular device and become responsible for managing that device.
For the paths managed by the NMP module, another set of rules is applied to select SATPs and PSPs. Using these rules, the NMP assigns an appropriate SATP to monitor physical paths and associates a default PSP with these paths.
A significant update of the iSCSI stack has dramatically improved the performance of both the hardware-optimized iSCSI HBA that ESX leverages and the iSCSI initiator that runs at the ESX layer. With the iSCSI initiator in particular, improvements have been made to significantly reduce the associated CPU overhead. Additionally, the new iSCSI stack no longer requires a Service Console connection to communicate with an iSCSI target. And, the new iSCSI stack offers new iSCSI initiator functionality. Global configuration settings made from the General tab propagate down to all targets. However, you can override these settings on a per target basis, so that you can configure unique parameters for each array discovered by the initiator.
vStorage iSCSI initiators support authentication via the Challenge Handshake Authentication Protocol (CHAP). In previous releases, ESX and ESXi supported only unidirectional CHAP authentication, in which the target authenticates the initiator. This release adds an additional level of security with the ability to configure bidirectional, or mutual, CHAP authentication, so that the initiator can also authenticate the target. The advanced iSCSI settings control such parameters as header and data digest, FirstBurstLength, MaxOutstandingR2T, and so forth. Generally, you do not need to change these settings because hosts work fine with the assigned predefined values. However, if you want to fine-tune iSCSI performance for a host, you may change the default settings at the initiator level, or for each target. One notable setting is the ability to check data integrity. To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction methods known as header digests and data digests. These digests pertain to, respectively, the header and the SCSI data being transferred between iSCSI initiators and targets, in both directions. Header and data digests check data integrity beyond the integrity checks that other networking layers, such as TCP and Ethernet, provide. Header and data digests are disabled by default because they require additional processing on both the initiator and the target and can affect throughput performance and CPU overhead. However, should the need arise, you now have the ability to enable this functionality. As with all VMkernel advanced settings, you should not change any of these settings unless you are working with the VMware support team or otherwise have thorough information about the values to provide. For more details, see the “iSCSI SAN Configuration Guide.”
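The CHAP exchange itself is defined in RFC 1994: the verifier sends a random challenge, and the peer answers with an MD5 digest computed over a one-octet identifier, the shared secret, and the challenge. A minimal sketch of that exchange (illustrative only; the ESX iSCSI initiator handles this internally):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def verify(identifier: int, secret: bytes, challenge: bytes,
           response: bytes) -> bool:
    """The verifier recomputes the digest and compares it to the response."""
    return chap_response(identifier, secret, challenge) == response

# Unidirectional CHAP: the target challenges and verifies the initiator.
challenge = os.urandom(16)
resp = chap_response(1, b"initiator-secret", challenge)
assert verify(1, b"initiator-secret", challenge, resp)
# Mutual CHAP adds the reverse exchange with a second, distinct secret,
# so the initiator can verify the target as well.
```

Because the secret never crosses the wire, each side proves knowledge of it without revealing it.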
When you create a virtual machine, a certain amount of storage space on a datastore is provisioned, or allocated, to your virtual disk files. By default, vSphere offers a traditional storage provisioning method by which you estimate in advance how much storage a virtual machine will need for its entire lifecycle, provision a fixed amount of storage space to its virtual disk, and have the entire provisioned space committed to the virtual disk during its creation. This type of virtual disk, which immediately occupies the entire provisioned space, is called a thick disk. Creating virtual disks in thick format can lead to underutilization of datastore capacity because large amounts of storage space, pre-allocated to individual virtual machines, might remain unused. For example, say you have a 500 GB volume allocated to an application with only 100 GB of actual data; the other 400 GB has no data stored on it. That unused capacity is still dedicated to the application, and no other application can use it. The unused 400 GB is wasted storage, which also means wasted money. And even though all of the storage capacity may eventually be used, it could take years to do so. To avoid over-allocating storage space and to minimize stranded storage, vSphere 4 supports storage over-commitment in the form of thin-provisioned virtual disks. When a disk is thin-provisioned, the virtual machine thinks it has access to a large amount of storage, but the actual physical footprint is much smaller. Disks in thin format look just like disks in thick format in terms of logical size, but the VMFS version 3 driver manages them differently in terms of physical size. The VMFS 3 driver allocates physical space for thin-provisioned disks on first write and expands the disk on demand if and when the guest operating system needs it. This capability allows you to provision more total disk space on a datastore than its actual capacity.
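Allocate-on-first-write can be modeled with a toy class that mirrors the 500 GB / 100 GB example above. This is a hypothetical illustration using block counts, not the VMFS 3 driver's actual on-disk logic:

```python
class ThinDisk:
    """Toy model of a thin-provisioned disk: the logical size is fixed at
    creation, but physical blocks are allocated only on first write."""
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.allocated = set()  # block numbers that have physical backing

    def write(self, block: int):
        if not 0 <= block < self.logical_blocks:
            raise IndexError("write past end of logical disk")
        self.allocated.add(block)  # allocate on first write; a no-op after

    @property
    def physical_blocks(self) -> int:
        return len(self.allocated)
```

A guest that has written to only a fifth of its blocks occupies only a fifth of the provisioned space, which is exactly what makes over-commitment of the datastore possible.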
If the VMFS volume is full and a thin disk needs to allocate more space for itself, the virtual machine prompts you to provide more space on the underlying VMFS datastore. vSphere also provides alarms and reports that specifically track allocation versus current usage of storage capacity so you can optimize the allocation of storage for your virtual environment.
You can choose to deploy a thin-provisioned disk when you create a virtual machine, clone to a template, clone a virtual machine, or migrate a virtual machine. Thin disks require VMFS-3 or newer datastores, so if the selected datastore is an earlier version, vSphere creates the disk in thick format.
vSphere 4 adds a new option to dynamically increase the capacity of a running VMFS datastore when virtual machines require more space. Rather than adding a second extent, you can grow an existing extent so that it fills the available adjacent capacity without disrupting running virtual machines. The newly available space appears as a larger VMFS volume along with an associated grow event in vCenter. Before growing a VMFS volume, you must use an array management utility to grow the LUN backing the datastore. Only extents with free space immediately after them are expandable.
vSphere 4 introduces a new set of APIs to replace the existing backup framework previously provided by VMware Consolidated Backup (VCB). The vStorage APIs for Data Protection allow VMware partners to develop virtual machine backup applications that integrate seamlessly with vSphere 4 and later. Although VCB will continue to be provided and supported with vSphere 4, new features will be available only via these APIs, and VCB will eventually be retired. This diagram shows the backup architecture using the vStorage APIs for Data Protection. Notice that the VMware Consolidated Backup software is no longer a component of the architecture. Instead, backup application developers integrate the VCB functionality directly into the backup application via the vStorage APIs for Data Protection. That means the backup application is a complete solution, designed specifically to back up virtual machines in a vSphere datacenter. Backup software partners have indicated strong support for the vStorage APIs for Data Protection. Among the partners planning to update their products in 2009 to support these APIs are CA (ArcServe), Commvault (Galaxy Simpana), EMC (Avamar, Networker), HP (Data Protector), IBM (Tivoli Storage Manager), Symantec (Backup Exec, Backup Exec System Recovery, NetBackup Enterprise), and Vizioncore (vRanger Pro).
The vStorage APIs for Data Protection support all of the features included in VMware Consolidated Backup and include several new capabilities as well. The APIs support all storage architectures for backup and restore, over both LAN and SAN. They include backup options for full, incremental, and differential virtual machine images. Additionally, the APIs provide for file-level backup and restore, and allow backup applications to be deployed on both Windows and Linux platforms. The APIs for Data Protection also support snapshots and VSS quiescing.
vNetwork services deliver the most optimal way to integrate networking in virtual environments. Next, let’s take a look at the new vNetwork capabilities in vSphere 4.
With this release, ESX supports both Internet Protocol version 4 and Internet Protocol version 6 environments. IPv6 has been designated by the Internet Engineering Task Force as the successor to IPv4. The adoption of IPv6, both as a standalone protocol and in a mixed environment with IPv4, is rapidly increasing. Most notably, the United States Federal Government requires that all new purchases include IPv6 support. The most immediately obvious difference between IPv4 and IPv6 is address length. IPv6 uses 128-bit addresses rather than the 32-bit addresses used by IPv4. This combats the problem of address exhaustion that is present with IPv4 and eliminates the need for network address translation. Other notable differences include link-local addresses that appear as the interface is initialized, addresses set by router advertisements, and the ability to have multiple IPv6 addresses on an interface. ESX 3.5 included support for IPv6 inside virtual machines. With this release, VMware Tools now supports IPv6 so that IPv6 addresses display in vCenter. Additionally, ESX 4 extends support to the Service Console and VMkernel. Support for IPv6 storage is still experimental. Note that using vCenter with IPv6 connectivity is not supported. Instead, use direct VI Client connections to configure ESX and ESXi servers.
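Python's standard ipaddress module makes the size difference and the link-local distinction mentioned above easy to demonstrate (the sample address is made up):

```python
import ipaddress

# IPv6 addresses are 128 bits, against IPv4's 32 bits.
assert ipaddress.IPv6Network("::/0").num_addresses == 2 ** 128
assert ipaddress.IPv4Network("0.0.0.0/0").num_addresses == 2 ** 32

# Link-local addresses live in fe80::/10 and appear as soon as the
# interface is initialized, before any router or DHCP configuration.
addr = ipaddress.ip_address("fe80::250:56ff:fe00:1")
assert addr.version == 6 and addr.is_link_local
```

The jump from 2^32 to 2^128 possible addresses is what removes the pressure for network address translation.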
VMDirectPath is a new ESX 4 feature that allows device drivers in virtual machines to bypass the virtualization layer so that they can directly access and control a physical device. VMDirectPath I/O provides this capability to I/O device drivers. VMDirectPath I/O relies on DMA address translation in an I/O Memory Management Unit to convert guest physical addresses to host physical addresses. ESX 4 fully supports this first generation of VMDirectPath I/O with the Intel 82598 10 Gigabit Ethernet Controller and provides experimental support for storage I/O devices with the QLogic QLA25xx 8Gb Fibre Channel and the LSI 3442e-R and 3801e (1068 chip based) 3Gb SAS adapters. Support is limited to Intel and AMD CPUs with EPT/NPT/RVI support. VMDirectPath I/O device access is primarily targeted at applications that can benefit from direct access by the guest operating system to the I/O devices. Other virtualization features, such as VMotion, hardware independence, and sharing of physical I/O devices, are not yet available to virtual machines using this feature. However, VMware is planning to extend the hardware support and virtualization features in the next generation of DirectPath I/O.
vNetwork components build on the VI3 networking infrastructure, so let’s quickly review that architecture. VI3 network configuration is done at the host level. Each virtual machine and the service console has one or more of its own virtual network adapters, or vNICs. The operating system and applications talk to a vNIC through a standard device driver or a VMware optimized device driver. The VMkernel also has vNICs for VMotion and IP storage network requirements. Each ESX host has its own virtual switches. On one side of the virtual switch are port groups that connect to virtual machines. On the other side are uplink connections to physical Ethernet adapters on the server where the virtual switch resides. Virtual machines, the service console, and VMkernel components connect to the outside world through the physical Ethernet adapters that are connected to the virtual switch uplinks.
The new vNetwork components move network management to the datacenter level. This diagram shows the new networking architecture. First off, we have distributed switches. These are managed entities configured inside vCenter. Distributed switches provide the same basic functions as standard vSwitches, but they exist across two or more clustered ESX or ESXi hosts. vCenter Server owns the configuration of distributed switches, and the configuration is consistent across all hosts. Like a VI3 standard vSwitch, a distributed switch connects to a physical network via one or more physical Ethernet adapters on the hosts included in the cluster. In this manner, physical NICs become clustered resources to use as required by the networked components. Each distributed switch includes distributed ports. A distributed port represents a port to which you can connect any networking entity, such as a virtual machine, the Service Console, and so on. vCenter Server stores the state of distributed ports in the vCenter database, so networking statistics and policies migrate with virtual machines when they are moved from host to host. This network VMotion feature is key to implementing state-dependent features such as inline IDS/IPS, firewalls, and third-party virtual switches. Be careful not to confuse a distributed switch with a single switch spanning several hosts. Two virtual machines on different hosts can communicate with each other only if both virtual machines have uplinks in the same broadcast domain. Consider a distributed switch a template for the network configuration on each ESX or ESXi host.
vNetwork Appliance APIs allow third-party developers to create distributed switch solutions for use in a VI datacenter. Third-party solutions allow network administrators to extend existing network operations and management into the vSphere datacenter. This diagram shows the basic way a third-party solution plugs in to the vNetwork architecture. A custom control plane is implemented outside of vCenter; for example, it may be implemented as a virtual appliance. The VI Client includes a plugin to provide a management interface. vCenter includes an extension to handle communication with the control plane. On the host, a custom IO plane agent replaces the standard IO plane agent. And the IO plane itself may be replaced to customize forwarding and filtering. The Cisco Nexus 1000v is the first third-party switch to leverage the vNetwork Appliance APIs. Network administrators can use this solution in place of the vNetwork Distributed Switch to extend vCenter to manage Cisco Nexus and Cisco Catalyst switches.
In summary, moving network configuration to the datacenter level offers several advantages. First, it simplifies datacenter setup and administration by centralizing network configuration. For example, adding a new host to a cluster and making it VMotion compatible is much easier. Also, whereas VI3 ports are ephemeral, distributed ports migrate with their clients. So, when you migrate a virtual machine with VMotion, the distributed port statistics and policies move with the virtual machine, thus simplifying debugging and troubleshooting. And, enterprise networking vendors can provide proprietary networking interfaces to monitor, control, and manage virtual networks.
Now let’s turn to the new application services in vSphere 4. Application services are components of the cloud OS that provide built-in controls for service levels; these controls are usable with any application running on any OS inside a virtual machine, and they can be turned on or off easily. We’ll start with services for availability.
VMotion is one example of an application service that helps deliver availability by reducing planned downtime that would otherwise be required for hardware maintenance and load balancing. Enhanced VMotion Compatibility (or EVC) is a new feature introduced in ESX 3.5 Update 2 and VirtualCenter 2.5 Update 2. EVC facilitates safe VMotion across a range of CPU generations. With EVC, it is possible to VMotion between CPUs which previously were considered incompatible. VMotion CPU compatibility relies on CPUIDs for system property information such as vendor name, CPU family and model, and supported features such as SSE3, SSSE3, and Nx/XD. EVC uses AMD-V Extended Migration and Intel® VT FlexMigration technologies to hide from the virtual machine CPUID feature bits which might otherwise cause VMotion incompatibilities. By hiding these incompatible features, it is possible to make newer CPUs compatible with older generation processors. EVC, however, does not hide CPU features like number of cores per CPU, cache size, or any other features that could potentially cause performance penalties. EVC works at the cluster level in the vCenter inventory using CPU baselines to configure all processors included in the EVC-enabled cluster. A baseline is a set of CPU features that is supported by every host in the cluster. When you configure EVC, you set all host processors in the cluster to present the features of a baseline processor. Once enabled for a cluster, hosts that are added to the cluster are automatically configured to the CPU baseline. Hosts that cannot be configured to the baseline are not permitted to join the cluster. Virtual machines in the cluster always see an identical CPU feature set, no matter which host they happen to run on. And since this process is automatic, EVC is very simple to use and requires no specialized knowledge about CPU features and masks.
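Conceptually, an EVC baseline is the intersection of the CPU feature sets of all hosts in the cluster, and a host may join only if it supports every baseline feature. A sketch of that idea (the feature names are examples; real EVC works with CPUID feature bits and masks):

```python
def evc_baseline(host_feature_sets):
    """The baseline is the set of CPU features common to every host."""
    return set.intersection(*host_feature_sets) if host_feature_sets else set()

def can_join(host_features, baseline):
    """A host may join the cluster only if it supports the whole baseline."""
    return baseline <= host_features

older = {"SSE3", "NX"}
newer = {"SSE3", "SSSE3", "SSE4.1", "NX"}
baseline = evc_baseline([older, newer])
assert baseline == {"SSE3", "NX"}  # the newer CPU's extra features are hidden
assert can_join({"SSE3", "SSSE3", "NX"}, baseline)
assert not can_join({"SSE2"}, baseline)  # cannot be configured to the baseline
```

Because every virtual machine sees only the baseline feature set, it can move to any host in the cluster without a CPU compatibility failure.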
All hosts and virtual machines within a cluster must meet the requirements shown here to enable EVC on a cluster. And, after EVC is enabled on the cluster, hosts must meet these requirements to be added to the cluster. Additionally, for EVC to function properly, the applications on the virtual machines must be “well behaved.” By this, we mean that the applications must be written to use the CPUID machine instruction to discover CPU features as recommended by the CPU vendors. vSphere cannot support EVC with ill‐behaved applications that do not follow the CPU vendor recommendations to discover CPU features.
ESX 3.0.1 introduced the concept of Storage VMotion so that you could create a VMFS 3 volume and then migrate virtual machines from a shared VMFS 2 volume to the new VMFS 3 volume without any downtime. Data transfer occurred only within the ESX host, so there was no network connectivity issue. New updates to Storage VMotion were included in ESX 3.5. In that version, virtual machines remain on the same host, but their disks can be moved to other datastores. The process requires no downtime, is transparent to guest operating systems and applications, and has minimal impact on performance. Administration is done via the Remote Command Line Interface and APIs. vSphere 4 Storage VMotion includes even more enhancements. In addition to the Command Line Interface and APIs, you can use the vSphere Client to administer Storage VMotion. And Storage VMotion now works across NFS in addition to Fibre Channel and iSCSI. The previous version of Storage VMotion required that the host have enough resources to support two instances of the virtual machine; this is no longer a requirement. Storage VMotion now provides the option to convert virtual disks from thick to thin formats. It also supports migrating RDMs to RDMs. And for virtual machines that have not been configured to use VMDirectPath I/O, you can use Storage VMotion to convert RDMs to VMDKs. Finally, Storage VMotion leverages features new to vSphere 4. For example, Storage VMotion uses fast suspend/resume and changed block tracking. We’ll explain more about those features in just a minute. Storage VMotion in vSphere 4 does include a couple of limitations worth noting. First, this version does not support migrating virtual machines that have snapshots. And you cannot migrate a virtual machine to a different host and a different datastore simultaneously unless you power off the virtual machine.
This diagram shows what happens when you migrate storage in vSphere 4. First, upon initiating a migration, vSphere copies all virtual machine files except the disks from the old virtual machine directory to a new directory on the destination datastore. Second, vSphere enables Changed Block Tracking on the virtual machine’s disk. Changed block tracking tracks changes to the disk so that vSphere knows which regions of the disk include data. vSphere stores this data in a bitmap that can reside either in memory or as a file. For Storage VMotion, vSphere usually keeps the change bitmap in memory, but for simplicity we’ve shown it next to the disk on the source datastore. Third, vSphere “pre-copies” the virtual machine’s disk and swap file from the disk on the source to the disk on the destination. During this time, the virtual machine is running and may be writing to its disk. Therefore, some regions of the disk change and must be resent. This is where changed block tracking comes in. vSphere first copies the contents of the entire disk to the destination; this is the first pre-copy iteration. It then queries the changed block tracking module to determine what regions of the disk were written to during the first iteration. vSphere performs a second iteration of pre-copy, copying only those regions that were changed during the first iteration. Typically the number of changed regions is significantly smaller than the total size of the disk, so the second iteration takes significantly less time. vSphere continues pre-copying until the amount of modified data is small enough to be copied very quickly. Fourth, ESX invokes fast suspend/resume on the virtual machine. Fast suspend/resume does exactly what its name implies: the virtual machine is quickly suspended and resumed, with the new virtual machine process using the destination virtual machine home and disks.
Before ESX allows the new virtual machine to start running again, the final changed regions of the source disk are copied over to the destination so that the destination disk image is identical to the source. And fifth, once the virtual machine is running on the destination datastore, ESX removes the source home and disks of the virtual machine.
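The iterative pre-copy loop described above can be sketched in a few lines of code. This is a hedged illustration under simplifying assumptions (a toy in-memory disk, a tracker that wraps writes); the names Disk, ChangeTracker, and precopy are invented here for illustration and are not VMware APIs.

```python
class Disk:
    """Toy block device: a dict of block number -> data."""
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})
    def allocated(self):
        return set(self.blocks)
    def read(self, n):
        return self.blocks[n]
    def write(self, n, data):
        self.blocks[n] = data

class ChangeTracker:
    """Stand-in for the changed-block-tracking bitmap: records which
    blocks of a disk are written between drain() calls."""
    def __init__(self, disk):
        self.dirty = set()
        original_write = disk.write
        def tracked_write(n, data):
            self.dirty.add(n)
            original_write(n, data)
        disk.write = tracked_write
    def drain(self):
        changed, self.dirty = self.dirty, set()
        return changed

def precopy(src, dst, tracker, threshold=0):
    """Iteratively copy src to dst while src may still be written to.
    Returns the final (small) set of dirty blocks, which would be copied
    during the fast suspend/resume switchover."""
    dirty = src.allocated()          # first iteration: the whole disk
    while len(dirty) > threshold:
        for n in sorted(dirty):
            dst.write(n, src.read(n))
        dirty = tracker.drain()      # only blocks written during the pass
    return dirty
```

Each pass copies only what changed during the previous pass, which is why the iterations shrink until the residual delta is small enough to transfer while the virtual machine is briefly suspended.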
In vSphere 4, VMware HA introduces several new features for improved monitoring and more flexible management. To begin, HA now offers the option to suspend host monitoring. This option is useful to avoid impacting maintenance activities such as backups and network maintenance. Additionally, vSphere 4 introduces two new admission control policies for reserving failover capacity. Instead of the original method of preparing HA to tolerate some number of host failures, you can now determine how much host capacity to reserve by specifying a percentage of cluster resources or by specifying a failover host.
A new VM Monitoring page has been added to the Cluster Settings dialog box to allow you to control when to monitor the heartbeat of individual virtual machines to help ensure that services remain available. When you select Enable VM Monitoring, HA uses VMware Tools to evaluate whether each virtual machine in the cluster is running by checking for regular heartbeats from the guest operating system. If the VM monitoring service does not receive heartbeats, the service determines whether the virtual machine has failed and, if it has, restarts the virtual machine to restore service. To help diagnose issues, VM Monitoring captures a screenshot of the console before restarting the virtual machine. By default, VM Monitoring captures up to 10 screenshots and saves them in the same directory as the configuration file of the virtual machine. When VM Monitoring is disabled, HA continues to monitor the heartbeat of virtual machines and vCenter alarms are still enabled. However, if there is a loss of heartbeat, HA does not automatically reboot the virtual machine. Monitoring sensitivity allows you to customize how quickly failures are detected. To avoid restarting virtual machines repeatedly for non-transient errors, HA restarts virtual machines only three times during a certain configurable time interval. After virtual machines have been restarted three times, VMware HA makes no further attempts to restart the virtual machines after any subsequent failures until after the specified time has elapsed. HA provides three default settings: High, Medium, and Low. You can also customize the intervals by choosing the Custom option shown here. Under Virtual Machine Settings you can override the default monitoring sensitivity level so certain virtual machines are more or less aggressively monitored. Specific custom values can also be set using advanced options.
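The restart-throttling behavior just described, at most three restarts within a configurable interval, can be modeled as a small sliding-window counter. This is a sketch of the policy's logic only; the class name, method names, and defaults are assumptions for illustration, not VMware's implementation.

```python
from collections import deque

class RestartPolicy:
    """Allow at most max_resets VM restarts within a sliding window
    (seconds); further failures are ignored until the window elapses."""
    def __init__(self, max_resets=3, window=3600):
        self.max_resets = max_resets
        self.window = window
        self.restarts = deque()   # timestamps of recent restarts

    def should_restart(self, now):
        # Drop restart timestamps that have aged out of the window.
        while self.restarts and now - self.restarts[0] > self.window:
            self.restarts.popleft()
        if len(self.restarts) >= self.max_resets:
            return False          # too many recent failures; back off
        self.restarts.append(now)
        return True
```

With the defaults above, a fourth failure inside the hour is not acted upon, matching the behavior of restarting only three times per interval.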
For mission critical applications that can tolerate no downtime or data loss, vSphere 4 introduces VMware Fault Tolerance, or FT. When you enable FT on a virtual machine in an HA-enabled cluster, FT creates a duplicate, secondary copy of the virtual machine on a different host. Then, Record/Replay technology records all executions on the primary virtual machine and replays them on the secondary instance. vLockstep technology ensures the two copies stay synchronized and allows the workload to run on two different ESX/ESXi hosts simultaneously. To the external world, the virtual machines appear as one virtual machine. That is, they have one IP address and one MAC address, and you need only manage the primary virtual machine. Heartbeats and replay information allow the virtual machines to communicate continuously to monitor the status of their complementary virtual machine. If a failure is detected, FT creates a new copy of the virtual machine on another host in the cluster. If the failed virtual machine is the primary, the secondary takes over and a new secondary is established. If the secondary fails, another secondary is created to replace the one that was lost. FT provides a higher level of business continuity than HA but requires more overhead and resources. In the event of a failure, the secondary immediately comes online, and all or almost all information about the state of the virtual machine is preserved. Applications are already running, and data stored in memory does not need to be re-entered or reloaded. VMware HA, on the other hand, restarts virtual machines after a failure. Restarting requires the virtual machine to complete the reboot process, and there is a chance that information about the state of the virtual machine, such as unsaved user-entered data, may be lost. Several typical situations can benefit from the use of VMware FT.
For example: Applications that need to be available at all times, especially those that have long-lasting client connections to maintain during hardware failure, can benefit from FT. FT is also useful in cases where high availability might be provided through Microsoft Cluster Service, but Microsoft Cluster Service is too complicated to configure and maintain. Or, you can use FT for custom applications that have no other way to configure clustering.
This diagram shows the basic FT architecture. When you turn on FT, FT uses a process similar to the VMotion process to create a secondary virtual machine. Record/Replay technology creates log entries of all non-deterministic executions of the primary virtual machine and stores them in a circular buffer on the VMkernel. The configuration file, virtual machine monitor, and VMkernel all share the log buffer. The VMkernel fills and flushes the log buffer asynchronously and, within milliseconds, sends the log entries to the log buffer of the secondary virtual machine. Communication between log buffers is done through a socket on the VMkernel Ethernet adapter. The primary and secondary virtual machines access the same virtual disks on a shared SAN. The primary virtual machine sends both reads and writes to the virtual disks, while the secondary virtual machine sends only reads to the disks. (All writes by the secondary virtual machine are marked as completed without actually being issued.) FT is designed to handle VMkernel, host server, and virtual machine failures. To detect VMkernel and host failures, FT uses network heartbeats over the IP address used for logging. The primary and secondary periodically send ping packets to the logging IP address. If one side does not receive pings within about one second, that side initiates a failover. To detect virtual machine failures, the VMkernel monitors the frequency of log updates from the configuration file and virtual machine monitor. If the log is not updated within the timeout period, the VMkernel posts an action to the virtual machine monitor to force it to append an entry to the log. If the virtual machine monitor does not respond and post a log entry, the VMkernel initiates a failover.
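The ping-timeout rule for detecting VMkernel and host failures boils down to a timestamp check, which can be sketched as follows. HeartbeatMonitor and its one-second default are illustrative assumptions; the real detection logic lives inside the VMkernel.

```python
class HeartbeatMonitor:
    """Toy model of FT heartbeat detection: each side records the time
    of the last ping received on the logging IP; if no ping arrives
    within TIMEOUT seconds, that side declares the peer failed and
    initiates a failover."""
    TIMEOUT = 1.0   # illustrative ~1 second threshold

    def __init__(self, now=0.0):
        self.last_ping = now

    def ping(self, now):
        self.last_ping = now    # a heartbeat arrived

    def peer_failed(self, now):
        return now - self.last_ping > self.TIMEOUT
```

The same pattern (a watchdog on log-update frequency rather than on pings) underlies the virtual machine failure detection described above.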
To enable fault tolerance for a virtual machine, simply right-click the virtual machine in the inventory and select Fault Tolerance > Turn On Fault Tolerance. vCenter Server creates the secondary virtual machine with the same name as the primary virtual machine, but indicates that it is the copy by placing the word secondary in parentheses after the name. You use the same process from the primary virtual machine to turn the feature off. You cannot disable FT from the secondary virtual machine.
VMware Data Recovery is a new backup and recovery product from VMware. It is managed via a vCenter Server plugin and uses the vStorage APIs for Data Protection. It is designed for small to medium size organizations who do not currently have a backup solution or who are looking for a backup solution that is optimized for virtualization. When you open Data Recovery in the vSphere Client for the first time, Data Recovery provides a simple interface to perform basic configuration, such as defining network parameters. After that, wizard interfaces allow you to quickly create, configure, and schedule backup jobs. Data Recovery is an agentless, disk-based solution, allowing for faster restores than solutions that write to backup tape. You can restore individual files or entire images. Multiple restore points for each virtual machine are displayed to easily select a copy to restore from a specific point in time. Data Recovery utilizes built-in data de-duplication technology to save significant disk space. De-duplication eliminates duplicate storage blocks as backup data is streamed to disk. Therefore, you can maintain multiple point-in-time copies of virtual machines using only a fraction of the storage. To efficiently utilize the backup window and available resources, the de-duplication operation occurs as the virtual machine backup is stored to disk.
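The inline block-level de-duplication idea can be sketched with content-addressed storage: identical blocks are stored once, keyed by a hash, and each backup keeps only a recipe of hashes. This is an illustration of the concept using SHA-256 as an assumed fingerprint; it is not Data Recovery's actual on-disk format.

```python
import hashlib

class DedupStore:
    """Toy de-duplicating store: each unique block is kept once."""
    def __init__(self):
        self.blocks = {}   # hash -> block data (stored once)

    def write_stream(self, data, block_size=4096):
        """Store a backup stream; de-duplicate as blocks arrive.
        Returns the recipe (list of block hashes) for later restore."""
        recipe = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)   # skip if already stored
            recipe.append(h)
        return recipe

    def restore(self, recipe):
        """Reassemble a point-in-time copy from its recipe."""
        return b"".join(self.blocks[h] for h in recipe)
```

Because a second point-in-time copy of an unchanged stream adds only a new recipe, not new blocks, multiple restore points consume a fraction of the raw storage.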
Here is an outline of the components that make up VMware Data Recovery. Key features:
- Integrated near-line backup: select which resource pools to back up, and set the schedule
- Efficient data storage: incremental backup stores identical blocks only one time
- Constraint-based scheduling: simple scheduling to back up all VMs at least once per specified time period, during specified hours
- Self-service restore for the VI admin: recover recently deleted images from within the vSphere Client
Now let’s look at some of the key new features related to Security.
VMware VMsafe, announced earlier in 2008, is a set of APIs that enable protection of VMs by a protection engine that:
- Works with the hypervisor to inspect a VM’s memory, CPU, and storage from a higher privilege point
- Is isolated from the malware
- Covers all aspects of security – not limited to network or host
VMware VMsafe-based products from our security partner ecosystem will work with VMware vSphere™ editions to provide higher levels of security than even physical systems.
MORE DETAIL: Security solutions have an inherent problem. Protection engines run in the same context as the malware they protect against, and as a result, malware is able to subvert these engines by simply using the same hooks into the system as the protection engine. Worse, with Longhorn and Vista, Microsoft has enabled PatchGuard, effectively eliminating the kernel hooks available to both the security solutions and the malware. While this helps, it doesn’t change the fact that malware and rootkits still exist and can run in those environments. The context that these security solutions need to protect against is also not limited to one set of interactions (e.g., attacks from the network, from spyware, and from rootkits). Even solutions that are in a safe context (outside the OS) can’t see information from other contexts (e.g., network protection has no host visibility). Security APIs built into the hypervisor allow for two key advantages:
- Better context – protection from outside the OS, from a trusted context
- New capabilities – they can now view all interactions and contexts
Now, new security solutions can be developed and integrated within the VMware virtual infrastructure, and we can protect the VM by inspection of virtual components (CPU, memory, network, and storage). This provides complete integration with VMotion, Storage VMotion, HA, etc. for any new security solution using the APIs.
The end result is an unprecedented level of security for VMs that’s better than the physical infrastructure. These APIs are already being made available to the security ISV ecosystem. We utilize vCenter for role-based privileges to assign protection to any single VM, and VMware certifies the solutions developed by our partners to ensure the security VM is created by a real security ISV and not a malicious hacker. Some potential use cases: An AV virtual appliance that intercepts all storage I/O and is able to scan files as they are read from or written to disk. This can be done without loading an AV agent on each machine. Inline network security for each ESX host. Now you can ensure that ALL network I/O traffic is inspected by an inline appliance, regardless of your virtual networking setup. This includes even inter-VM traffic, and allows state to be transferred from host to host during VMotion so that the security protection is never lost.
vShield Zones is a new Application Service providing fundamental and critical network security for vSphere deployments. Expanding virtualization deployments in the datacenter encompass multiple areas of trust, such as DMZ (demilitarized zone) buffers to the Internet and sensitive data such as credit card information subject to Payment Card Industry (PCI) compliance or corporate financial data covered by Sarbanes-Oxley. These varying trust zones must be segmented with firewalls and other network security. Existing physical appliances require diverting traffic to external chokepoints, splintering ESX resource pools into small fragments and disrupting the seamless vision of an internal computing cloud. vShield Zones is a virtual appliance that allows you to monitor and restrict inter-VM traffic within and between ESX hosts to provide security and compliance within shared resource pools. vCenter integration lets you create network zones based on familiar VI containers such as clusters and VLANs. vShield Zones scans VMs for known applications to present network flows and security policies by application protocol rather than as raw network flows. Virtualization awareness and application awareness increase accuracy and reduce the risk of misconfiguration and noncompliance. Consistent security policies can be assured throughout a VM lifecycle, from initial provisioning to VMotion across various hardware to final decommissioning. A complete view of virtual machines, networks, and security policies allows you to audit security posture fully within the virtual environment to meet defined security SLAs, irrespective of changes to your external physical network and perimeter.
The architecture of vShield Zones is as follows. The vShield Manager (hereafter referred to as the “VSM”) is a virtual appliance that the administrator installs anywhere in the environment. It could be on one of the production ESX hosts, or could even be on a separate standalone host or cluster. The vShield host gateway (hereafter referred to as “the vShield”) is a virtual appliance that is automatically installed on each ESX host by the VSM. The vShield is the point at which monitoring and the firewall are implemented. The VSM is the centralized point of management and control. It is from here that you can view the monitoring results, assign firewall policies, and audit configurations. It’s also the place where you can install or uninstall vShield on individual ESX hosts and perform maintenance-type activities.
Finally, let’s look at some of the new services in vSphere 4 that help to provide scalability for applications.
The ESX/ESXi hypervisor continues to lead the industry with highly scalable support for large servers with high core counts and memory configurations. The VMkernel, a core component of the ESX/ESXi 4.0 hypervisor, is now 64-bit. This provides greater host physical memory capacity, enhanced performance and more seamless hardware support than earlier releases. ESX/ESXi 4.0 supports systems with up to 512GB of RAM. And, ESX/ESXi 4.0 provides headroom for more virtual machines per host and the ability to achieve even higher consolidation ratios on larger machines.
Virtual machines themselves can now scale to accommodate even the largest workloads. And, you can scale up without powering down the virtual machine. ESX/ESXi 4.0 provides support for virtual machines with up to 8 virtual CPUs, allowing larger CPU-intensive workloads to be run on the VMware ESX platform. It is also possible to assign any integer number of virtual CPUs between 1 and 8 to a virtual machine. Up to 256GB of RAM can be assigned to ESX/ESXi 4.0 virtual machines. ESX/ESXi 4.0 introduces Hardware version 7, which adds significant new features including new storage and networking virtual devices. We’ll talk a little more about those in just a minute. Hardware version 7 also includes support for VMDirectPath I/O device access. VMDirectPath I/O enhances CPU efficiency in handling workloads that require constant and frequent access to I/O devices by allowing virtual machines to directly access the underlying hardware devices. Hardware version 7 also provides support for adding and removing virtual devices, adding virtual CPUs, and adding memory to a virtual machine without having to power off the virtual machine. Hardware version 7 is the default for new ESX/ESXi 4.0 virtual machines. Virtual machines that use Hardware version 7 features are not compatible with ESX/ESXi releases prior to version 4.0. However, ESX/ESXi 4.0 will continue to run virtual machines created on hosts running ESX Server versions 2.x and 3.x.
Virtual Machine Hardware 7 (the new version available with ESX 4.0) enables you to hot-add memory and CPU to a VM. Note that the OS in the virtual machine needs to support hot-add of CPU and memory in order to make use of the additional resources.
Another capability supported by Virtual Hardware Version 7 is the ability to hot-add PCI devices. The types of PCI devices you can hot-add and hot-remove include:
- Network cards
- SCSI adapters
- Sound cards
- SCSI disks and CD-ROMs
- USB EHCI controllers
- VMCI
- PCI passthrough devices
Virtual Machine Hardware Version 7 also supports extending VMDKs while the virtual machine is running. In the “Virtual Machine Properties” window you can increase the size of the VMDK as shown here.
vCenter Server 4 improves scalability, streamlines many management processes, and adds new features for better resource and performance management, reduced storage management costs, and reduced complexity in the setup and ongoing management of virtual environments. This table shows some of the many new features and enhancements.
vCenter Server 4 introduces the ability to join multiple vCenter Servers into a linked‐mode group. You can then use the vSphere Client to log on to any single instance of vCenter Server and view and manage the inventories of all the vCenter Servers in the group. Each user sees only the vCenter Server instances for which they have valid permissions. There are several reasons why you may want to link vCenter Servers. For example, you may want to simplify management of inventories associated with remote offices or multiple datacenters. Likewise, you could use Linked Mode to configure a recovery site for disaster recovery purposes. vCenter Server Linked Mode allows for:
- Global role definitions
- Searches for inventory items across multiple vCenter Server instances
- A license model across multiple vCenter Servers
Linked Mode uses Microsoft Active Directory Application Mode (or ADAM) to store and synchronize data across multiple vCenter Server instances. ADAM is an implementation of Lightweight Directory Access Protocol (or LDAP). ADAM is installed automatically as part of the vCenter Server installation. Each ADAM instance stores data from all vCenter Servers in the group. Using peer‐to‐peer networking, the ADAM instances in a group replicate shared global data to the LDAP directory. The global data for each vCenter Server instance includes:
- Connection information (that is, IP addresses and ports)
- Certificates and thumbprints
- Licensing information
- User roles
All vCenter Server instances in a linked‐mode group can access a common view of the global data. The vSphere Client can connect to other vCenter Servers using the connection information retrieved from ADAM.
vCenter Server 4 includes a new orchestration platform and development components that install silently when you install vCenter Server 4. vCenter Orchestrator provides a library of extensible workflows to create and execute automated, configurable processes to manage the vSphere datacenter infrastructure. To understand how Orchestrator works, it’s important to understand the difference between automation and orchestration. Automation provides a way to perform a frequently repeated process without manual intervention. Using a Perl or a PowerShell script to add several ESX hosts to the vCenter Server inventory is an example of automation. Orchestration, on the other hand, provides a way to manage multiple automated processes across heterogeneous systems. An example of orchestration is to automate the entire process of adding several ESX hosts to vCenter Server, updating the CMDB, and sending an email notification. vCenter Orchestrator relies on workflows to create and execute automated processes. Workflows are reusable building blocks that combine actions, decisions, and results that, when performed in a particular order, complete a specific task or process in a virtual environment. Workflows can call upon other workflows, so, for example, you can reuse a workflow that starts a virtual machine in several different workflows. Orchestrator provides a library of more than 400 workflows that encapsulate best practices for common virtual environment management tasks such as provisioning virtual machines, backing up, and performing regular maintenance. Orchestrator also provides libraries of the individual actions that the workflows execute. Orchestrator includes several components to create and execute workflows. The Orchestrator Workflow Engine assembles workflows from the building blocks provided in the Orchestrator's libraries of pre-defined objects and actions.
One such library exposes every function of the vCenter Server API, allowing you to integrate all the functionality provided by vCenter Server into your workflows. Another Orchestrator library provides Java database connectivity functions that allow you to automate processes related to database administration in your workflows. A third library provides XML processing operations, and so on. The workflow engine can also take objects from external libraries that you plug into Orchestrator, allowing you to create your own tailor-made processes or implement functions provided by third-party applications. The Orchestrator Web Service starts workflows and manages their execution via a network or the Web. Orchestrator includes two user interfaces, accessed from the Start menu of the vCenter Server: Use the Orchestrator Configuration interface to configure the components that are related to the workflow engine, such as network, database, LDAP, server certificate, and so on. Use the Orchestrator Client to access the workflow engine to create workflows. You cannot open the Client until Orchestrator has been configured properly. To learn about configuring and using Orchestrator, see the vCenter Orchestrator Administrator’s Guide. To learn about developing applications using Orchestrator, see the vCenter Orchestrator Developer's Guide.
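The idea of workflows as reusable building blocks that call other workflows can be sketched in miniature. This is a hypothetical illustration of the composition pattern only; the names (workflow, add_host, update_cmdb, onboard_hosts) and the dict-based context are invented here and bear no relation to Orchestrator's actual APIs.

```python
def workflow(fn):
    """Mark a function as a reusable workflow building block."""
    fn.is_workflow = True
    return fn

@workflow
def add_host(ctx, name):
    # Automation: a single repeatable step (add a host to inventory).
    ctx["inventory"].append(name)

@workflow
def update_cmdb(ctx, name):
    # Automation: record the host in a configuration database.
    ctx["cmdb"].append(name)

@workflow
def onboard_hosts(ctx, names):
    # Orchestration: chain automated steps across systems, reusing
    # the smaller workflows above, then notify.
    for n in names:
        add_host(ctx, n)
        update_cmdb(ctx, n)
    ctx["notified"] = True   # stand-in for the e-mail notification step
```

The outer workflow reuses the inner ones, mirroring how a workflow that starts a virtual machine can be reused inside several larger workflows.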
The Host Profiles feature allows you to export configuration settings from a gold reference host and save them as a portable set of policies, called a host profile. You can then use this profile to quickly configure other hosts in the datacenter. Configuring hosts using this method drastically reduces the setup time of new hosts: tens of steps are reduced to a single click. Host profiles also eliminate the need for specialized scripts to configure hosts. Additionally, vCenter uses the profile as a configuration baseline, so you can monitor for changes to the configuration, detect discrepancies, and fix them. Host profiles eliminate per-host, manual, or UI-based host configuration and efficiently maintain configuration consistency and correctness across the entire datacenter.
The basic workflow to implement host profiles is: One: Set up and configure a host to use as the reference host. Two: Use the Create Profile wizard to create a profile from the reference host configuration. Then, optionally, you can edit the profile to further customize the policies within the profile. We’ll talk more about editing policies in just a minute. Three: Attach the host profile to other hosts or clusters of hosts. Four: Check to ensure that the hosts to which the host profile is attached are in compliance with the profile. And Five: Apply the profile to the hosts that are not in compliance.
After creating a host profile, you can use the Attach Host/Cluster command to associate a cluster or individual hosts with the profile. Then, after the profile is associated with hosts, you can use the Check Compliance Now link to ensure that the associated hosts remain in compliance with the profile. If the check discovers a non-compliant host, the policies configured differently than the reference host profile show in the Compliance Failures section. To bring a non-compliant host back into compliance, place the host in maintenance mode and click Apply Profile.
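At its core, a compliance check like the one just described is a diff of a host's settings against the reference profile. The sketch below illustrates that idea with made-up setting names; real host profiles cover far more, and this is not vCenter's implementation.

```python
def check_compliance(profile, host_config):
    """Compare a host's configuration against a reference profile.
    Returns {setting: (expected, actual)} for each non-compliant
    setting; an empty dict means the host is compliant."""
    failures = {}
    for key, expected in profile.items():
        actual = host_config.get(key)   # None if the setting is missing
        if actual != expected:
            failures[key] = (expected, actual)
    return failures
```

An empty result corresponds to a host passing Check Compliance Now; a non-empty one corresponds to entries in the Compliance Failures section.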
vApps extend the capabilities of virtual appliances to encapsulate a multi-tier application running on multiple virtual machines. vApps encapsulate not only virtual machines but also their interdependencies and resource allocations, allowing for single-step power operations, cloning, deployment, and monitoring of the entire service. vSphere includes end-to-end vApp support, including creating, running, and updating vApps as well as importing and exporting them in compliance with the Open Virtualization Format (OVF) 1.0 standard. The OVF file format allows for the exchange of virtual appliances across products and platforms. The OVF format offers several advantages: OVF files are compressed, allowing for faster downloads. vCenter validates an OVF file before importing it and ensures that it is compatible with the intended destination server. And OVF files provide for rich metadata, so you can describe important characteristics such as the vApp anatomy, its relationship to other virtual machines, and resource and availability requirements.
The Deploy OVF Template wizard guides you through the process of importing a vApp into vCenter. You can access this wizard two ways: If you know the exact location of the vApp, select File > Deploy OVF Template. When you access the wizard via this command, the first page of the wizard is a Source page where you enter a file name or URL to locate the vApp. The URL may use the schemes http, https, or ftp; the source may be an OVF or OVA file. If the URL requires authentication, the wizard provides a dialog box for entering credentials, which can be saved. Another option is to select File > Browse VA Marketplace. When you choose this option, the Deploy OVF Template wizard opens to a page that allows you to select an OVF from the Virtual Appliance Marketplace. The remaining pages of the wizard provide options to determine how to deploy the vApp so vCenter can manage it. For example, you must choose where to place the vApp in the inventory, where to store the virtual machines, how to join the virtual machines to the network, and so forth. This screen capture shows some of the possible pages of the wizard. Many of the pages correspond to the configuration done during vApp setting customization through the vSphere Client. However, third-party vApps may include additional pages. For example, an OVF descriptor can contain a Deployment Configuration section, which allows the vApp being deployed to have different resource requirements depending on which configuration is chosen. If the OVF descriptor contains a deployment configuration, the wizard shows a Deployment Configuration page so you can choose which configuration to deploy. The various deployments may each have a description that explains when it should be used. Three classic deployment configurations are Evaluation, Production, and Enterprise. Evaluation deploys the vApp with minimal resource requirements suitable for small scale evaluation of the vApp.
Production deploys the vApp with resource requirements suitable for a medium scale production environment. And Enterprise deploys the vApp with resource requirements suitable for a large scale production environment. After you complete the wizard, vCenter imports the vApp into the inventory as specified. You can change any of the settings specified in the wizard by using the Edit Settings command.
vSphere 4 introduces centralized license reporting and management. Key theme: simplification! Better match between how licenses are sold and how they are used. The key points about the new licensing:
- Flex licensing is replaced by simple license keys
- New centralized license key administration in vCenter
- New license portal provides a more accurate view of entitlement
If you upgrade all hosts to ESX/ESXi 4, you no longer need a license server or host license files. All product and feature licenses are encapsulated in 25‐character license keys that you can manage and monitor from the vSphere Client. The Licensing panel of the vSphere Client shows license inventory data by product, license key, and asset. A product is a license to use a vSphere component or feature. A license key is the serial number that corresponds to a product. And an asset is a machine on which a product is installed. For an asset to run certain software legally, the asset must be licensed to do so. The license report displays licensed assets only. The report does not display assets that are unlicensed. You can split some license keys by applying them to multiple assets. For example, you can split a 4‐CPU license by applying it to two 2‐CPU hosts. When you view licenses in the report, you can add an optional description of each license key or of a set of license keys. These labels appear and are editable in the license report. By default, the label field is empty. To export the license data to a file that you can open in a third‐party application, click the Export link. To manage licenses, click the Manage vSphere Licenses link.
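The per-CPU license-splitting arithmetic (a 4-CPU key applied to two 2-CPU hosts) can be illustrated with a tiny capacity model. This is a toy sketch; the LicenseKey class and its behavior are assumptions for illustration, not how vCenter tracks licenses internally.

```python
class LicenseKey:
    """Toy model of a per-CPU license key split across assets."""
    def __init__(self, cpus):
        self.free = cpus          # CPU capacity remaining on the key
        self.assignments = {}     # host name -> CPUs consumed

    def assign(self, host, cpus):
        """Apply part of the key's capacity to a host."""
        if cpus > self.free:
            raise ValueError("not enough capacity left on this key")
        self.free -= cpus
        self.assignments[host] = cpus
```

Once the key's capacity is exhausted, assigning it to a further host fails, which is the constraint the license report makes visible.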
The Guided Consolidation service is now a vCenter Server modular plugin and can be installed on a different system than the vCenter Server. This change allows vCenter Server to perform optimally, with lower overhead around consolidation operations. In addition, the Guided Consolidation service provides better scalability by concurrently analyzing and making consolidation recommendations for up to 500 physical machines at a given time. The Guided Consolidation service, by virtue of being internationalized, is also able to discover and analyze systems running non-English versions of Windows. This release of Converter includes three main enhancements: Converter can now import physical, virtual, and third-party sources to all of the newly supported platforms on ESX/ESXi 4 hosts. Converter supports Windows Server 2008, both as a conversion source and as an installable platform. And, Converter can convert Microsoft Hyper-V virtual machines to VMware virtual machines by treating Hyper-V machines as physical source machines. There are several updates to Update Manager for vSphere 4: Update Manager introduces two significant enhancements to baselines: you can now use Update Manager to upgrade ESX/ESXi hosts, virtual machine hardware, VMware Tools, and virtual appliances; and baseline groups allow you to specify an upgrade baseline and a set of patches in one group. Update Manager also includes a new dashboard to review how machines comply with baselines and baseline groups. And Update Manager includes a Stage wizard so that you can download patches from a remote server to a local server without applying the patches immediately. Staging patches speeds up the remediation process because, when the staged patches and upgrades are applied to a set of hosts, the files are available locally.
The Performance tab now includes an Overview page that displays charts for the most common data counters for CPU, disk, memory, and network metrics. Overview charts provide a quick summary view of resource usage in the datacenter without navigating through multiple charts. Overview charts are displayed side by side, so you can quickly identify bottlenecks and problems associated with related metrics, for example, CPU and memory. To get help with a chart or to understand the meaning of a counter, click the blue question mark. A web page explains how to analyze the chart and what to consider. In addition, the Overview page includes: thumbnail views of hosts, resource pools, clusters, and datastores that allow easy navigation to the individual charts; drill-down capability across multiple levels in the inventory to help isolate the root cause of performance problems quickly; and detailed datastore-level views that show utilization by file type and unused capacity. Chart customization is now done from the Advanced page, which displays charts as they were displayed in previous versions.
The vSphere Client now includes a Storage Views tab for all managed entities in the vSphere inventory (except networks). This tab is powered by a new Web service within the vCenter Management Web services called the Storage Management Service. The Storage Management Service is designed to provide greater insight into the storage infrastructure, particularly in the areas of storage connectivity and capacity utilization. That means you can now quickly view information to answer questions such as: How much space on a datastore is used for snapshots? Or, are there redundant paths to a virtual machine’s storage? All data used to compute the information shown on the tab comes from the vCenter Server database. The Storage Management Service periodically makes direct database calls, computes the information, and stores it in an in-memory cache. A display in the top right corner shows the last time the report was updated and allows you to update it manually as required.
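The refresh pattern described above, a report rebuilt from the database on a timer and cached in memory, with a visible last-updated time and a manual refresh, can be sketched roughly like this. The class and the compute callback are hypothetical stand-ins, not part of any vCenter API:

```python
import time

class ReportCache:
    """Rough sketch of a periodically refreshed, manually refreshable report cache."""

    def __init__(self, compute_report, max_age_seconds=1800):
        self._compute = compute_report   # e.g. runs DB queries and aggregates results
        self._max_age = max_age_seconds
        self._data = None
        self.last_updated = None         # shown to the user, like the tab's corner display

    def get(self):
        # Rebuild the cache if it is empty or older than the refresh interval.
        if self._data is None or time.time() - self.last_updated > self._max_age:
            self.refresh()
        return self._data

    def refresh(self):
        """Manual refresh, analogous to clicking the update link on the tab."""
        self._data = self._compute()
        self.last_updated = time.time()

# Example: a report that would normally come from expensive database queries.
cache = ReportCache(lambda: {"snapshot_space_gb": 12.5}, max_age_seconds=1800)
report = cache.get()   # first access computes and caches the report
```

Between refreshes, every reader gets the cached result, which is why the numbers on the tab can lag slightly behind the live inventory.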
The Maps page provides a graphical topology of the relationship between entities. Maps are very useful for seeing how many paths a virtual machine has to its storage and what targets it can see. They can also assist in troubleshooting by displaying problem entities. Controls are included so you can customize which entities to display and zoom in and out as needed.
The vSphere Client includes several enhancements for better viewing and management of storage devices. The new Devices page on the Storage Configuration tab allows you to view details of all storage devices. To ensure that storage device names are consistent across reboots, ESX and ESXi now use a unique LUN identifier to name storage devices in the user interface and in output from CLI commands. In most cases, the Network Address Authority (NAA) ID is used. The previous naming convention is still visible as the “Runtime Name”; however, this name is not guaranteed to persist across reboots. ESX and ESXi still use iSCSI Qualified Names for iSCSI targets and World Wide Names for Fibre Channel targets. For devices that do not have a unique ID, ESX and ESXi use a VMware Multipath X (MPX) identifier. The Devices page also includes an Owner column so that you can view the PSA multipathing module managing each device. From the Devices page, you can click the Manage Paths link to view and manage path details for a selected device.
We’ve talked a lot about new features in vSphere 4, but we also want to emphasize that vSphere 4 is a platform designed to integrate with the capabilities provided by a broad array of VMware partners, giving customers access to rich functionality that can be leveraged across their physical and virtual environments. Here are just a few examples of ways in which VMware partners are integrating with vSphere to deliver value-added functionality on top of the VMware platform.
This is the summary diagram of vSphere and its components
ESX/ESXi 4.0 adds support for 13 new guest operating systems and experimental support for two more.
Because vSphere 4 is a major new release, many products that build on top of the VMware platform will require an update to support vSphere 4. VMware plans to deliver updates to those products later in 2009 to provide vSphere 4 support. Here is a brief summary of which products will require updates; more details on update versions and release dates will be shared as each product update gets closer to release. For product bundles that include VMware Infrastructure along with a product not yet supported on vSphere 4, in most cases customers purchasing those bundles will receive the appropriate VI3 license keys and will subsequently also be sent the vSphere license keys according to their license entitlements. Customers will continue to have the right to upgrade or downgrade their licenses between VI3 and the corresponding vSphere 4 licenses through the licensing portal.
This is an overview summary of several other updates to the ESX storage stack. We won’t go into more detail on these in this presentation, but please refer to the vSphere 4 documentation for additional detail.
Here are some additional updates to the vNetwork feature set. We will not go into more detail on these here; please refer to the vSphere documentation for more details.
1. What’s New in vSphere 4.0: Technical Overview
2. Introducing VMware vSphere™ Application Services Infrastructure Services <ul><ul><li>ESX </li></ul></ul><ul><ul><li>ESXi </li></ul></ul><ul><ul><li>DRS/DPM </li></ul></ul><ul><ul><li>VMFS </li></ul></ul><ul><ul><li>Thin Provisioning </li></ul></ul><ul><ul><li>VMFS Volume Grow </li></ul></ul><ul><ul><li>Distributed Switch </li></ul></ul>VMware vSphere™ 4.0 Internal Cloud External Cloud <ul><ul><li>VMotion </li></ul></ul><ul><ul><li>Storage VMotion </li></ul></ul><ul><ul><li>HA </li></ul></ul><ul><ul><li>Fault Tolerance </li></ul></ul><ul><ul><li>Data Recovery </li></ul></ul><ul><ul><li>vShield Zones </li></ul></ul><ul><ul><li>VMSafe </li></ul></ul><ul><ul><li>DRS </li></ul></ul><ul><ul><li>Hot Add </li></ul></ul>Availability Security Scalability vCompute vStorage vNetwork *Note vCenter Server and its components are a separate purchase .Net SaaS Grid J2EE Linux Windows Web 2.0 vApp vCenter Suite
3. Infrastructure Services Deliver CapEx and OpEx Savings VMware vSphere™ 4.0 <ul><li>Storage/network optimizations </li></ul><ul><li>Power Management </li></ul><ul><li>VMDirectPath I/O </li></ul><ul><li>CPU/Memory optimization </li></ul><ul><li>DRS </li></ul><ul><li>vStorage Thin Provisioning </li></ul><ul><li>VMFS Volume Grow </li></ul><ul><li>vStorage VMFS </li></ul><ul><li>vNetwork Distributed Switch </li></ul><ul><li>Third party distributed virtual switches </li></ul><ul><li>vNetwork Standard Switch </li></ul>CURRENT NEW Infrastructure Services vCompute vStorage vNetwork Highest consolidation ratios in the industry Most efficient use of hardware resources Low operational overhead
4. vSphere 4.0 Infrastructure Services: vCompute Infrastructure Services VMware vSphere™ 4.0 CURRENT NEW <ul><li>ESX Service Console updates </li></ul><ul><li>Enhanced cluster resource usage views </li></ul><ul><li>Expanded DRS information </li></ul><ul><li>Expanded support for Distributed Power Management </li></ul><ul><li>CPU/Memory optimization </li></ul><ul><li>DRS </li></ul>vCompute vStorage vNetwork
5. ESX 4 Service Console <ul><li>64-bit, 2.6-based Linux kernel compatible with RHEL 5.2 and CentOS 5.2 </li></ul><ul><ul><li>Supports both 32-bit and 64-bit applications </li></ul></ul><ul><ul><li>Console root file system is a VMDK file </li></ul></ul><ul><ul><li>VMkernel runs and owns device drivers </li></ul></ul><ul><ul><li>Network interfaces fully support IPv6 </li></ul></ul><ul><ul><li>Provides enhanced security via Address Space Layout Randomization (ASLR) </li></ul></ul><ul><li>Some features no longer supported </li></ul><ul><ul><li>No longer a development environment </li></ul></ul>Service Console vCompute vStorage vNetwork
6. New Resource Distribution Charts vCompute vStorage vNetwork
7. New DRS Management Pages History tab Recommendations page Faults page Refresh recommendations Apply a subset of recommendations Edit cluster properties Apply all selected recommendations Faults view displays issues that prevented DRS from providing or applying recommendations. Actions taken based on recommendations Customize the display vCompute vStorage vNetwork
8. Scheduled Task to Change Resource Settings Home > Management > Scheduled Tasks > Add To accommodate business priorities that change over time, schedule tasks to change resource settings. vCompute vStorage vNetwork
9. <ul><li>DPM consolidates workloads to reduce power consumption </li></ul><ul><ul><li>Cuts power and cooling costs </li></ul></ul><ul><ul><li>Automates management of energy efficiency </li></ul></ul><ul><li>Supports three wake protocols: </li></ul><ul><ul><li>Intelligent Platform Management Interface (IPMI) </li></ul></ul><ul><ul><li>Integrated Lights-Out (iLO) </li></ul></ul><ul><ul><li>Wake-On-LAN (WOL) </li></ul></ul><ul><li>Configure and test the wake protocol on every host in the cluster </li></ul>VMware DPM Expanded Support Resource Pool Power Optimized Standby Host Server vCompute vStorage vNetwork
10. vSphere 4.0 Infrastructure Services: vStorage Infrastructure Services vCompute vStorage vNetwork VMware vSphere™ 4.0 CURRENT NEW <ul><li>Pluggable Storage Architecture </li></ul><ul><li>iSCSI enhancements </li></ul><ul><li>Thin Provisioning for virtual disks </li></ul><ul><li>VMFS Volume Grow </li></ul><ul><li>vStorage APIs for Data Protection </li></ul><ul><li>VMFS </li></ul><ul><li>Consolidated Backup </li></ul>
11. Enhanced Multipathing with Pluggable Storage Architecture (PSA) SATP PSP HBA 1 HBA 2 NMP PSA VMkernel Storage Stack <ul><ul><li>Storage Array Type Plugins (SATPs) handle path failover, monitor path health, and report changes to the NMP. </li></ul></ul><ul><ul><li>Path Selection Plugins (PSPs) choose the best path. </li></ul></ul>vCompute vStorage vNetwork
12. vStorage APIs for Multipathing Third-Party MPP Third-Party MPP VMware NMP VMware SATP VMware PSP VMware PSP VMware PSP Third-Party PSP VMware SATP VMware SATP Third-Party SATP Pluggable Storage Architecture (PSA) For unique performance and fault-tolerance behavior To accommodate specific storage arrays For more complex I/O load balancing algorithms vCompute vStorage vNetwork
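As a rough illustration of the plugin split on the two slides above, a path selection policy can be modeled as a small strategy interface that the framework queries for each I/O. The class names below are illustrative only and do not correspond to the actual PSA C interfaces:

```python
class PathSelectionPlugin:
    """Illustrative PSP: chooses which physical path carries the next I/O."""
    def select_path(self, paths):
        raise NotImplementedError

class MostRecentlyUsed(PathSelectionPlugin):
    """Stick with the last working path; fail over only when it goes away."""
    def __init__(self):
        self.current = None
    def select_path(self, paths):
        active = [p for p in paths if p["state"] == "active"]
        if self.current in active:
            return self.current
        self.current = active[0]   # fail over to the first healthy path
        return self.current

class RoundRobin(PathSelectionPlugin):
    """Rotate I/O across all healthy paths for load balancing."""
    def __init__(self):
        self.i = 0
    def select_path(self, paths):
        active = [p for p in paths if p["state"] == "active"]
        path = active[self.i % len(active)]
        self.i += 1
        return path

# Two healthy paths to the same LUN (runtime-style names, for illustration):
paths = [{"name": "vmhba1:C0:T0:L0", "state": "active"},
         {"name": "vmhba2:C0:T0:L0", "state": "active"}]
rr = RoundRobin()
first, second = rr.select_path(paths), rr.select_path(paths)
```

The point of the architecture is that a third party can supply a different `select_path` (say, queue-depth-aware balancing) without touching failover handling, which stays in the SATP layer.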
13. Updated iSCSI Stack <ul><ul><li>Significant performance improvements </li></ul></ul><ul><ul><li>No longer requires service console connection to communicate with an iSCSI target </li></ul></ul><ul><ul><li>New iSCSI initiator features </li></ul></ul>Host Configuration > Storage Adapters > Properties vCompute vStorage vNetwork
16. Thin Disk Provisioning Operations Create New Virtual Machine Wizard Clone and Migrate Virtual Machine Wizards <ul><li>A thin-disk option is available when you: </li></ul><ul><li>Create a virtual machine </li></ul><ul><li>Clone to a template </li></ul><ul><li>Clone a virtual machine </li></ul><ul><li>Migrate virtual machine storage (Storage VMotion) </li></ul>vCompute vStorage vNetwork
17. <ul><li>Volume Grow expands an extent so that it fills the available adjacent capacity. </li></ul><ul><ul><li>Single partition provides improved virtual machine availability </li></ul></ul><ul><ul><li>Can grow a volume any number of times, up to the maximum size of a VMFS volume </li></ul></ul><ul><ul><li>Must first grow the LUN backing the VMFS datastore </li></ul></ul><ul><ul><li>The extent being grown must have free space immediately after it in the LUN </li></ul></ul>VMFS Volume Grow Option Add Extent Volume Grow
18. vStorage APIs for Data Protection SAN Storage Backup Proxy Server Centralized Data Mover Snapshots Backup Application vStorage APIs for Data Protection Physical Server or VM (Windows or Linux) Mount
19. Features in vStorage APIs for Data Protection <ul><li>Includes all VCB features </li></ul><ul><li>Also supports: </li></ul><ul><ul><li>All storage architectures for backup and restore, LAN and SAN </li></ul></ul><ul><ul><li>Full, incremental, and differential file-level backup options </li></ul></ul><ul><ul><li>File-level backup and restore </li></ul></ul><ul><ul><li>Windows and Linux guests </li></ul></ul><ul><ul><li>Snapshots and Volume Shadow Copy Service quiescing </li></ul></ul>
21. IPv6 Support <ul><li>Successor to IPv4 </li></ul><ul><ul><li>128-bit addresses (vs. 32-bit in IPv4) </li></ul></ul><ul><ul><li>Link-local addresses that appear when the interface is initialized </li></ul></ul><ul><ul><li>Addresses set by router advertisements </li></ul></ul><ul><ul><li>Ability to have multiple IPv6 addresses on an interface </li></ul></ul><ul><li>Supported Components </li></ul><ul><ul><li>Virtual machines (as of ESX 3.5) </li></ul></ul><ul><ul><li>VMware Tools to display addresses in vCenter Server </li></ul></ul><ul><ul><li>Service console </li></ul></ul><ul><ul><li>VMkernel </li></ul></ul><ul><ul><li>vSphere Client connection to vCenter Server not supported </li></ul></ul>
22. VMDirectPath I/O <ul><li>I/O Device Driver Directly Accesses Physical Device </li></ul><ul><ul><li>Full network support with: </li></ul></ul><ul><ul><ul><li>Intel 82598 10 Gigabit Ethernet Controller </li></ul></ul></ul><ul><ul><ul><li>Broadcom 57710 10 Gigabit network adapter </li></ul></ul></ul><ul><ul><li>Experimental storage I/O device support with: </li></ul></ul><ul><ul><ul><li>QLogic QLA25xx 8Gb Fibre Channel </li></ul></ul></ul><ul><ul><ul><li>LSI 3442e-R and 3801e (1068 chip based) 3Gb SAS adapters </li></ul></ul></ul><ul><ul><li>Virtual machines must be running on the Intel Nehalem platform </li></ul></ul><ul><ul><li>Each virtual machine can connect to up to two passthrough devices </li></ul></ul><ul><ul><li>Increases performance but at the cost of several virtualization features: </li></ul></ul><ul><ul><ul><li>VMotion, Hot add/remove of virtual devices, Suspend and Resume, Record and Replay, Fault Tolerance, High Availability, Memory Over-commitment and page sharing </li></ul></ul></ul>I/O MMU I/O Device Virtualization Layer
23. Standard Switch Architecture Service Console Virtual Physical Physical NICs Physical Switches vNICs vSwitches Port Groups VM Port Group VMotion Port VM Port Group COS Port VMotion Port ESXi Host 1 ESX Host 2 Network configuration at the host level
24. Distributed Switch Architecture Virtual Machines Service Console VMotion Hidden vSwitches (IO plane) Distributed Switch (Control Plane) Distributed Port Groups Service Console ESXi Host 1 ESX Host 2 Virtual Physical vCenter Server
25. Third-Party Distributed Switches vSphere Client Plug-In DB Control Plane vCenter Server IO Plane ESX IO Plane Virtual Control Plane Appliance Agent Agent ESX vCenter Server Extension vNetwork Appliance APIs allow third-party developers to create distributed switch solutions.
26. Benefits of Distributed Switches <ul><li>vNetwork Distributed Switches… </li></ul><ul><ul><li>Simplify datacenter administration </li></ul></ul><ul><ul><li>Enable networking statistics and policies to migrate with virtual machines (Network VMotion) </li></ul></ul><ul><ul><li>Provide for customization and third-party development </li></ul></ul>VMware Infrastructure 3 VMware vSphere 4
27. vSphere 4.0 Application Services: Availability Application Services VMware vSphere™ 4.0 CURRENT NEW <ul><li>Enhanced VMotion compatibility </li></ul><ul><li>Storage VMotion enhancements </li></ul><ul><li>VMware HA enhancements </li></ul><ul><li>VMware Fault Tolerance </li></ul><ul><li>VMware Data Recovery </li></ul><ul><li>VMware HA </li></ul><ul><li>VMotion </li></ul><ul><li>Storage VMotion </li></ul><ul><li>NIC/HBA teaming </li></ul>Availability Security Scalability
28. Enhanced VMotion Compatibility (EVC) EVC Cluster CPU Baseline Feature Set CPUID CPUID CPUID CPUID X… X… X… EVC prevents migrations with VMotion from failing due to incompatible CPUs. K… Availability Security Scalability
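Conceptually, the EVC baseline shown above is the set of CPU features common to every host in the cluster, and guests only ever see that baseline, so a migration can never land on a CPU missing a feature the guest relies on. A toy model (the feature names are abbreviations for illustration, not actual CPUID bits):

```python
def evc_baseline(host_feature_sets):
    """The cluster baseline is the intersection of all hosts' CPU feature sets."""
    return set.intersection(*host_feature_sets)

def guest_cpuid(host_features, baseline):
    """Guests see only baseline features, masking newer host-specific ones."""
    return host_features & baseline

host_a = {"sse2", "sse3", "ssse3", "sse4.1"}   # newer CPU
host_b = {"sse2", "sse3", "ssse3"}             # older CPU, no SSE4.1
baseline = evc_baseline([host_a, host_b])
```

On `host_a` the guest never sees `sse4.1`, which is exactly why VMotion between the two hosts cannot fail a CPUID compatibility check.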
29. EVC Cluster Requirements <ul><li>Hosts </li></ul><ul><ul><li>CPUs from a single vendor, either Intel or AMD </li></ul></ul><ul><ul><li>Running ESX Server 3.5 Update 2 or later </li></ul></ul><ul><ul><li>Connected to vCenter Server </li></ul></ul><ul><ul><li>Hardware virtualization support (AMD‐V or Intel VT) enabled </li></ul></ul><ul><ul><li>AMD No eXecute (NX) or Intel eXecute Disable (XD) technology enabled </li></ul></ul><ul><ul><li>Support hardware live migration (AMD-V Extended Migration or Intel FlexMigration) or have the baseline processor of the intended feature set </li></ul></ul><ul><li>Virtual Machines </li></ul><ul><ul><li>Powered off or migrated out of the cluster when EVC is enabled </li></ul></ul><ul><ul><li>Applications on virtual machines must determine CPU features by using the CPUID instruction </li></ul></ul>Availability Security Scalability
30. Storage VMotion in vSphere 4 <ul><li>Enhancements </li></ul><ul><ul><li>Can administer via vSphere Client </li></ul></ul><ul><ul><li>Supports NFS, Fibre Channel, and iSCSI </li></ul></ul><ul><ul><li>No longer requires 2 x memory </li></ul></ul><ul><ul><li>Supports moving VMDKs from thick to thin formats </li></ul></ul><ul><ul><li>Can migrate RDMs to RDMs and RDMs to VMDKs (non-passthrough) </li></ul></ul><ul><ul><li>Leverages new vSphere 4 features to speed migration </li></ul></ul><ul><li>Limitations </li></ul><ul><ul><li>Virtual machine cannot include snapshots </li></ul></ul><ul><ul><li>VM must be powered off to simultaneously migrate both host and datastore </li></ul></ul>Availability Security Scalability
31. Storage VMotion in vSphere 4 Source Destination 1 2 3 4 5 1. Copy virtual machine files except disks to new datastore 2. Enable changed block tracking on the virtual machine’s disk 3. “Pre-copy” virtual machine’s disk and swap file from source to destination 4. Invoke fast suspend/resume on virtual machine 5. Remove source home and disks of virtual machine Availability Security Scalability
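The five-step sequence on the slide above can be outlined in code. The helper functions are purely illustrative stand-ins for internal vSphere operations; here they just record the order of the steps:

```python
steps = []  # records the order of operations for this illustration

def copy_home_files(vm, src, dst):        steps.append("copy home")
def enable_changed_block_tracking(vm):    steps.append("enable CBT")
def precopy_disks_and_swap(vm, src, dst): steps.append("pre-copy disks")
def fast_suspend_resume(vm, dst):         steps.append("fast suspend/resume")
def remove_source_files(vm, src):         steps.append("remove source")

def storage_vmotion(vm, src, dst):
    """Illustrative outline of the vSphere 4 Storage VMotion flow."""
    copy_home_files(vm, src, dst)             # 1. home files (config, logs), not disks
    enable_changed_block_tracking(vm)         # 2. track guest writes during the copy
    precopy_disks_and_swap(vm, src, dst)      # 3. bulk copy, then re-copy changed blocks
    fast_suspend_resume(vm, dst)              # 4. briefly quiesce, switch to destination
    remove_source_files(vm, src)              # 5. delete old home and disks

storage_vmotion("vm01", "datastore1", "datastore2")
```

Changed block tracking is what removes the old 2x-memory requirement: instead of snapshotting, the pre-copy loop simply re-sends the blocks the guest dirtied while the bulk copy ran.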
32. New HA Cluster Settings Ability to suspend host monitoring Choice of three admission control strategies Availability Security Scalability
33. VM Monitoring Enable automatic restart due to failure of guest operating system Determine how quickly failures are detected Set monitoring sensitivity for individual virtual machines Availability Security Scalability
34. VMware Fault Tolerance (FT) Secondary Primary vLockstep Technology New Secondary vLockstep Technology New Primary VMware FT provides zero-downtime, zero-data-loss protection to virtual machines in an HA cluster. Availability Security Scalability
35. How VMware FT Works VMkernel Log Buffer VMkernel VMM VMM Primary Virtual Machine Secondary Virtual Machine Log Buffer Heartbeat? Record Logs Read/Write Read Single Copy of Disks on Shared Storage Log Update? Log Read? Availability Security Scalability
36. Enable Fault Tolerance with a Single Click Primary Virtual Machine > Summary Tab After you turn on Fault Tolerance, the Status tab on the primary virtual machine shows Fault Tolerance information. Availability Security Scalability
37. VMware Data Recovery <ul><li>VMware’s Backup/Recovery Solution based on APIs for Data Protection </li></ul><ul><ul><li>Agentless disk-based backup and recovery </li></ul></ul><ul><ul><li>De-duplication and incremental backups to save disk space </li></ul></ul>Availability Security Scalability
38. VMware Data Recovery Key Components Storage Servers VMware ESX/ESXi Virtual Machines vCenter Server Data Recovery <ul><li>vCenter Plug-in </li></ul><ul><ul><li>With vSphere Client plug-in, allows configuration and management of backup/recovery appliance </li></ul></ul><ul><ul><li>Wizard driven backup and restore job creation </li></ul></ul><ul><ul><li>Storage of backup configuration in vCenter Server database and awareness of HA/VMotion/DRS </li></ul></ul><ul><li>VMware ESX/ESXi </li></ul><ul><ul><li>Provides VSS support </li></ul></ul><ul><ul><li>Change block tracking functionality allows backups to be more efficient </li></ul></ul><ul><li>Storage </li></ul><ul><ul><li>Any VMFS storage: DAS, iSCSI or Fibre Channel storage plus NFS and CIFS shares as target </li></ul></ul><ul><ul><li>All backed up virtual machines are stored on disk in a deduplicated datastore </li></ul></ul><ul><li>Backup and Recovery Appliance </li></ul><ul><ul><li>OVF appliance </li></ul></ul><ul><ul><li>Leverages vStorage APIs for Data Protection to discover, manage backup and restore </li></ul></ul>Availability Security Scalability
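Deduplicated disk-based backup, as described on the two Data Recovery slides above, can be sketched as a content-addressed block store: identical blocks are stored once and each backup is just a list of block hashes. This is a generic illustration, not VMware Data Recovery's actual on-disk format:

```python
import hashlib

class DedupStore:
    """Stores each unique block once, keyed by its content hash."""

    def __init__(self):
        self.blocks = {}   # sha256 hex digest -> block bytes

    def backup(self, data, block_size=4096):
        """Return a 'recipe' of block hashes; only new blocks consume space."""
        recipe = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # dedup: keep first copy only
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        """Reassemble the original data from its recipe of hashes."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
disk = b"A" * 8192                 # two identical 4 KB blocks
recipe = store.backup(disk)        # stored once despite appearing twice
restored = store.restore(recipe)
```

The same idea makes incremental backups cheap: a second backup of a mostly unchanged disk adds only the blocks whose hashes are not already in the store.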
39. vSphere 4.0 Application Services: Security Application Services VMware vSphere™ 4.0 CURRENT NEW <ul><li>VMware VMsafe </li></ul><ul><li>VMware vShield Zones </li></ul><ul><li>Thin ESXi hypervisor with locked-down interfaces </li></ul><ul><li>No dependence on general-purpose OS </li></ul>Availability Security Scalability
40. VMware VMsafe <ul><ul><li>API that enables protection of VMs by inspection of virtual components in conjunction with hypervisor </li></ul></ul><ul><ul><li>Isolation of protection engine from malware </li></ul></ul><ul><ul><li>Broad ranging coverage of virtual machine CPU, memory, storage and network </li></ul></ul>Protection Engine VMware vSphere™ Application Operating System Availability Security Scalability
41. vShield Zones <ul><ul><li>Capabilities </li></ul></ul><ul><ul><li>Bridge, firewall, or isolate VM zones based on familiar VI containers </li></ul></ul><ul><ul><li>Monitor allowed and disallowed activity by application-based protocols </li></ul></ul><ul><ul><li>One-click flow-to-firewall blocks precise network traffic </li></ul></ul><ul><ul><li>Benefits </li></ul></ul><ul><ul><li>Well-defined security posture within virtual environment </li></ul></ul><ul><ul><li>Monitoring and assured policies, even through VMotion and VM lifecycle events </li></ul></ul><ul><ul><li>Simple zone-based rules reduce policy errors </li></ul></ul>Availability Security Scalability
43. vSphere 4.0 Application Services: Scalability Application Services VMware vSphere™ 4.0 CURRENT NEW <ul><li>Increased host scalability </li></ul><ul><li>8-way SMP and 255 GB of virtual machine RAM </li></ul><ul><li>Hot add of virtual CPU and memory </li></ul><ul><li>Hot plug devices </li></ul><ul><li>Hot extend of virtual disks </li></ul><ul><li>DRS shares and reservations allow apps to shrink and grow based on priority </li></ul>Availability Security Scalability
44. Host Scalability <ul><li>Enhanced performance and higher consolidation rates </li></ul><ul><ul><li>64-bit VMkernel </li></ul></ul><ul><ul><li>512GB host memory </li></ul></ul><ul><ul><li>64 logical CPUs </li></ul></ul><ul><ul><li>256 virtual machines per host </li></ul></ul>App OS App OS App OS App OS App OS App OS App OS App OS App OS App OS App OS App OS 32 Cores 256 GB 192 VMs 512GB 64 Cores 256 VMs Availability Security Scalability
45. Virtual Machine Scalability <ul><li>Dynamic scale-up supports much larger workloads </li></ul><ul><ul><li>8-Way Virtual SMP </li></ul></ul><ul><ul><li>256GB RAM </li></ul></ul><ul><ul><li>Virtual Machine Hardware Version 7 </li></ul></ul><ul><ul><ul><li>New virtual devices </li></ul></ul></ul><ul><ul><ul><li>VMDirectPath I/O </li></ul></ul></ul><ul><ul><ul><li>Hot plug support </li></ul></ul></ul>256 GB 8 CPUs App OS App OS Availability Security Scalability
46. Hot Add for Memory and CPU Virtual Machine > Edit Settings > Options Tab > Memory/CPU Hotplug You must enable Memory and CPU Hot Add so that the options are available on the Hardware tab. Availability Security Scalability
47. Hot Adding and Removing PCI Devices Virtual Machine > Edit Settings > Hardware Tab > Add <ul><li>You can hot-add/remove: </li></ul><ul><ul><li>Network cards </li></ul></ul><ul><ul><li>SCSI adapters </li></ul></ul><ul><ul><li>Sound cards </li></ul></ul><ul><ul><li>SCSI disks and CDROMs </li></ul></ul><ul><ul><li>USB EHCI controller </li></ul></ul><ul><ul><li>VMCI </li></ul></ul><ul><ul><li>PCI passthrough devices </li></ul></ul>Availability Security Scalability
50. vCenter Server Linked Mode Overview <ul><ul><li>Standard vSphere Client can access inventory across multiple vCenters </li></ul></ul><ul><ul><li>View and search across combined inventory of a group of vCenter Servers </li></ul></ul><ul><ul><li>Shared roles and license configurations </li></ul></ul>
51. vCenter Server Linked Mode Architecture <ul><ul><ul><li>Connection information </li></ul></ul></ul><ul><ul><ul><li>Certificates and thumbprints </li></ul></ul></ul><ul><ul><ul><li>Licensing information </li></ul></ul></ul><ul><ul><ul><li>User roles </li></ul></ul></ul>vCenter Server Instance vCenter Server Instance vCenter Server Instance vSphere Client Tomcat Web Service ADAM Instance Tomcat Web Service ADAM Instance Tomcat Web Service ADAM Instance vCenter Server vCenter Server vCenter Server
52. vCenter Orchestrator <ul><li>Use Orchestrator to create and execute workflows that automate virtual infrastructure management processes </li></ul>Workflow Engine vCenter Server XML SSH … Third-Party Plugin Client Configuration Workflow Library Web Service
53. Host Profiles Overview Cluster Reference Host Host profiles reduce setup time and allow you to manage configuration consistency and correctness.
54. Basic Workflow to Implement Host Profiles <ul><li>Host Profile </li></ul><ul><ul><li>Memory Reservation </li></ul></ul><ul><ul><li>Storage </li></ul></ul><ul><ul><li>Networking </li></ul></ul><ul><ul><li>Date and Time </li></ul></ul><ul><ul><li>Firewall </li></ul></ul><ul><ul><li>Security </li></ul></ul><ul><ul><li>Services </li></ul></ul><ul><ul><li>Users and User Groups </li></ul></ul><ul><ul><li>Security </li></ul></ul>Cluster Reference Host 1 2 3 4 5
55. Working with Host Profiles After you create the profile, attach it to hosts/clusters so that you can check compliance and apply it to hosts not in compliance.
56. vApp Overview <ul><li>vApps are multi-tier application services that you can manage as a single inventory item. </li></ul><ul><ul><li>Provides for single-step management </li></ul></ul><ul><ul><li>Eliminates complex setup and configuration </li></ul></ul>… Resource Pool Distributed Virtualization Layer App Server VM vApp OVF Descriptor App Server VM Database VM
57. Deploying vApps <ul><li>File > Deploy OVF Template </li></ul><ul><li>File > Browse VA Marketplace </li></ul>vApps from ISVs may include additional settings to configure.
58. Simplified License Management in vSphere 4 <ul><li>Simple license keys instead of flex </li></ul><ul><ul><ul><li>1 license per edition </li></ul></ul></ul><ul><ul><ul><li>1 key for many hosts </li></ul></ul></ul><ul><li>New centralized license key administration in vCenter </li></ul><ul><ul><ul><li>No separate license server to manage or monitor </li></ul></ul></ul><ul><ul><ul><li>Centralized host and license monitoring through vCenter enabling easy compliance </li></ul></ul></ul><ul><li>New license portal provides more accurate view of entitlement </li></ul>
59. Managing Licenses in vSphere 4 Administration > Licensing Key is a string, not a text file Custom label Manage licenses Export report
60. vCenter Server Plug-in Enhancements <ul><li>Lower overhead and better scalability </li></ul><ul><ul><li>Modular plugin </li></ul></ul><ul><ul><li>Analyzes up to 500 physical machines at a time </li></ul></ul><ul><li>More platforms supported </li></ul><ul><ul><li>Ability to convert to new platforms supported in ESX/ ESXi 4.0 </li></ul></ul><ul><ul><li>Support for Windows Server 2008 as source and platform </li></ul></ul><ul><ul><li>Convert Microsoft Hyper-V VMs to VMware VMs </li></ul></ul><ul><li>Enhanced management and administration </li></ul><ul><ul><li>ESX/ESXi hosts and virtual appliance upgrades </li></ul></ul><ul><ul><li>Baseline groups </li></ul></ul><ul><ul><li>Compliance dashboard </li></ul></ul><ul><ul><li>Patch staging </li></ul></ul>
61. New Performance Charts Thumbnail Views Performance overview charts help to quickly identify bottlenecks and isolate root causes of issues.
62. New Storage Views Tab Adds Insight into Storage Infrastructure The new Storage Views tab provides greater insight into capacity utilization and storage connectivity.
63. Maps View HBA LUN Target
64. Enhanced Views for Storage Devices Host Configuration > Storage > Devices Unique LUN identifier is persistent across reboots. Right-click to rename
66. Summary of VMware vSphere™ Application Services Infrastructure Services <ul><ul><li>ESX </li></ul></ul><ul><ul><li>ESXi </li></ul></ul><ul><ul><li>DRS/DPM </li></ul></ul><ul><ul><li>VMFS </li></ul></ul><ul><ul><li>Thin Provisioning </li></ul></ul><ul><ul><li>VMFS Volume Grow </li></ul></ul><ul><ul><li>Distributed Switch </li></ul></ul>VMware vSphere™ 4.0 Internal Cloud External Cloud <ul><ul><li>VMotion </li></ul></ul><ul><ul><li>Storage VMotion </li></ul></ul><ul><ul><li>HA </li></ul></ul><ul><ul><li>Fault Tolerance </li></ul></ul><ul><ul><li>Data Recovery </li></ul></ul><ul><ul><li>vShield Zones </li></ul></ul><ul><ul><li>VMSafe </li></ul></ul><ul><ul><li>DRS </li></ul></ul><ul><ul><li>Hot Add </li></ul></ul>Availability Security Scalability vCompute vStorage vNetwork *Note vCenter Server and its components are a separate purchase vApp vCenter Suite
67. What’s New in vSphere 4.0: Technical Overview
68. Backup Slides
69. Guest Operating System Support <ul><ul><li>Asianux 3.0 </li></ul></ul><ul><ul><li>CentOS 4 </li></ul></ul><ul><ul><li>Debian 4 </li></ul></ul><ul><ul><li>FreeBSD 6 </li></ul></ul><ul><ul><li>FreeBSD 7 </li></ul></ul><ul><ul><li>OpenServer 5 </li></ul></ul><ul><ul><li>Unixware 7 </li></ul></ul><ul><ul><li>Solaris 8 (experimental) </li></ul></ul><ul><ul><li>Solaris 9 (experimental) </li></ul></ul><ul><ul><li>Solaris 10 </li></ul></ul>New in vSphere 4 <ul><ul><li>OS/2 </li></ul></ul><ul><ul><li>MS-DOS 6.22 </li></ul></ul><ul><ul><li>Windows 3.1 </li></ul></ul><ul><ul><li>Windows 95 </li></ul></ul><ul><ul><li>Windows 98 </li></ul></ul>Support for over 45 guest operating systems
70. <ul><li>vSphere 4.0 is a major new release that will require updates to most current VMware add-on products </li></ul><ul><li>Most products will release updates that will provide vSphere 4.0 compatibility in 2H 2009 </li></ul><ul><li>Customers will still receive VI3 licenses for most bundles containing not-yet-compatible products, but can upgrade/downgrade their license keys at any time </li></ul>VMware Solution Compatibility Compatible with vSphere 4 at GA Compatibility with vSphere 4 planned for 2H 2009 vCenter Heartbeat VMware View VMware Capacity Planner vCenter Site Recovery Manager Converter 4.0 vCenter Lifecycle Manager vCenter Stage Manager vCenter Lab Manager
71. Additional New vStorage Features Summary <ul><ul><li>SCSI-3 Compliant </li></ul></ul><ul><ul><li>Modular Pluggable Storage Architecture (PSA) </li></ul></ul><ul><ul><li>Updated iSCSI stack </li></ul></ul><ul><ul><li>Native SATA support </li></ul></ul><ul><ul><li>MS Server 2008 Failover Clustering support </li></ul></ul><ul><ul><ul><li>Persistent reservations in VMkernel </li></ul></ul></ul><ul><ul><ul><li>LSI Logic SAS (virtual SAS controller) </li></ul></ul></ul><ul><ul><li>New storage virtual devices </li></ul></ul><ul><ul><ul><li>Paravirtual SCSI adapter </li></ul></ul></ul><ul><ul><ul><li>IDE virtual device </li></ul></ul></ul>Optimized Storage Capabilities vCompute vStorage vNetwork
72. Additional New vNetwork Features Summary <ul><li>Tcpip2 </li></ul><ul><ul><li>Based on FreeBSD 6.1 </li></ul></ul><ul><ul><li>Supports IPv6 </li></ul></ul><ul><ul><li>Improved locking and threading capabilities </li></ul></ul><ul><ul><li>Loads by default </li></ul></ul><ul><ul><ul><li>Tcpip2v6 loads when IPv6 is enabled </li></ul></ul></ul><ul><li>VMXNET3 </li></ul><ul><ul><li>MSI/MSI-X support </li></ul></ul><ul><ul><li>Receive side scaling </li></ul></ul><ul><ul><li>IPv6 checksum and TSO over IPv6 </li></ul></ul><ul><ul><li>VLAN offloading </li></ul></ul>Improved performance and extended support