vSphere 5 What's New: Technical
  • Accelerate the VM storage placement decision to a storage pod by capturing VM storage SLA requirements and mapping them to storage with the right characteristics and spare space.
  • 1991: SGI's version of UNIX, IRIX. The first release supported 96MB and the follow-on supported 384MB; list price was $8K for 96MB at release.
  • Note: Why run multiple roles? Lower utilization per role than a single role, and easier to manage than two VMs. How do we get more mailboxes… by running a bigger VM than recommended.
  • Q: Can you put image profiles and VIBs into a version control system, so you can track changes, etc.? A (Evan): A depot is just like any other directory tree, so you can check it into CVS, etc.
  • This series of slides goes through a typical workflow when working with Image Builder
  • Typically there would be an existing default image profile. Admins would clone this one and then make modifications — that way it saves the effort of creating one from scratch
  • Auto Deploy is a new method for provisioning ESXi hosts in vSphere 5.0. At a high level, the ESXi host boots over the network (using PXE/gPXE) and contacts the Auto Deploy Server, which loads ESXi into the host's memory. After loading the ESXi image, the Auto Deploy Server coordinates with vCenter Server to configure the host using Host Profiles and Answer Files (Answer Files are new in 5.0). Auto Deploy eliminates the need for a dedicated boot device, enables rapid deployment of many hosts, and simplifies ESXi host management by eliminating the need to maintain a separate "boot image" for each host.
  • Auto Deploy addresses the traditional challenges of deploying servers/hosts: host images are decoupled from the physical hardware, the boot image and host identity are determined at deployment time, and a standard image is shared across many hosts.
  • Prior to Auto Deploy, the ESXi image, configuration, state, and log files were all stored on the boot device.
  • With Auto Deploy, the ESXi image (binaries, VIBs) is stored off the host in an Image Profile, configuration files are stored in vCenter (Host Profiles), and log files are managed via vCenter add-on components.
  • Auto Deploy is composed of four primary components: PXE boot infrastructure, the Auto Deploy Server, Image Builder, and vCenter Server. The following slides break down the details of each component.
  • Here is the workflow for an in-memory ESXi host deployment. To start, the host boots and sends a DHCP request. The DHCP server redirects the host to the TFTP server, which loads the host with the gPXE image and a gPXE configuration that points to the Auto Deploy host. A minimal sketch of the DHCP side follows below.
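    As a rough illustration only (not from this deck), the DHCP side of this step might look like the ISC dhcpd sketch below. The subnet, server addresses, and the gPXE file name are assumptions; check the Auto Deploy documentation for the exact boot file shipped in the TFTP bundle.
      # Hypothetical ISC dhcpd.conf fragment for Auto Deploy PXE boot (all values are placeholders)
      subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        next-server 192.168.1.10;                  # TFTP server that hosts the gPXE binary
        filename "undionly.kpxe.vmw-hardwired";    # gPXE image; its embedded config points at the Auto Deploy server
      }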
  • The FDM agent is ~50 KB in size and is not tied to vpxd at all.
  • vCenter will pick datastores based on two factors: how many hosts have connectivity to the datastore, and whether the storage is on differing arrays. This is an attempt to provide redundancy in the event of an array failure. If the user chooses a datastore that is not accessible by all hosts, it will be reported as a configuration issue.
  • Screenshot of state representation – Provided in GUI Enhancement section
  • Link Layer Discovery Protocol (LLDP) allows vSphere administrators to discover the capabilities of the physical switch that is connected to a vNetwork Distributed Switch. It also allows network administrators who manage the external physical network infrastructure to learn the capabilities of the vNetwork Distributed Switch. When LLDP is enabled for a particular vDS, users can view properties of the physical switch (such as device ID, software version, and timeout) from the vSphere Client.
  • The LLDP properties shown here were captured using a third party lldpd utility running on a RH5 VM.
  • INTERNAL: https://wiki.eng.vmware.com/NetflowCollectors INTERNAL: https://wiki.eng.vmware.com/NetflowFuncSpec
  • INTERNAL: https://wiki.eng.vmware.com/NetflowCollectors INTERNAL: https://wiki.eng.vmware.com/NetflowFuncSpec Helps customers monitor application flows and measure application performance over time. The NetQoS Multi-Port Collector is a powerful server that captures and processes large amounts of data at an extremely high rate. By passively monitoring large volumes of data-center traffic from multiple ports, the Multi-Port Collector helps NetQoS SuperAgent keep a continuous record of end-to-end system performance. Monitoring IP traffic flows facilitates more accurate capacity planning and ensures that resources are used appropriately in support of organizational goals. It helps IT determine where to apply Quality of Service (QoS) and optimize resource usage, and it plays a vital role in network security to detect Denial-of-Service (DoS) attacks, network-propagated worms, and other undesirable network events.
  • VDS+ uses NetFlow v5 to export data. NetFlow v5 record format: https://bto.bluecoat.com/packetguide/7.2.0/info/netflow5-records.htm
  • nGenius is a network traffic analysis system that provides centralised monitoring of networks, to application level, in both the local area and the wide area. The product has two key components: nGenius probes and the nGenius Performance Manager application.
  • The SPAN feature was introduced on switches because of a fundamental difference between switches and hubs. When a hub receives a packet on one port, it sends out a copy of that packet on all ports except the one on which it was received. Switches, on the other hand, are intelligent devices that build forwarding tables based on source MAC addresses and make forwarding decisions by comparing a packet's destination MAC address with the forwarding table. Thus a switch only sends traffic to one destination port unless it is an unknown/broadcast/multicast MAC. If a customer wants to monitor a switch port's traffic from another port, the switch must support the ability to copy packets to this sniffer port (called a SPAN port). Port Mirroring (DVMirror) provides the same functionality on distributed virtual switches.
  • Click 1: Two VMs sharing a vDS. Click 2: An external physical switch and a monitor connected to one of its ports. Click 3: The host's uplink port is used for both normal and VM traffic. Click 4: VM A communicates with VM B and the outside world. Click 5: VM A's traffic is mirrored through the same uplink; if the mirrored uplink is the one used by the VMs for normal I/O, then an Encapsulation VLAN must be configured on the mirror. An Encapsulation VLAN creates a VLAN ID that encapsulates all frames at the destination ports; if packets already have a VLAN, it is replaced with the new VLAN ID specified here.
  • Click 1: Two VMs sharing a vDS. Click 2: VM A doing I/O via a VLAN port. Click 3: VM A sends I/O to VM B. Click 4: VM B's traffic is mirrored to an uplink. Click 5: If 'Allow Normal IO' is disabled, we can use a dedicated uplink for mirrored traffic. Click 6: However, if the mirrored uplink is the one used by the VMs for normal I/O, then an Encapsulation VLAN must be configured on the mirror. Click 7: VM B's traffic can then be mirrored out on the network on its own VLAN. If Allow Normal I/O is disabled, mirrored traffic will be allowed out on destination ports, but no traffic will be allowed in; this essentially dedicates the uplink to mirrored traffic. An Encapsulation VLAN creates a VLAN ID that encapsulates all frames at the destination ports; if packets already have a VLAN, it is replaced with the new VLAN ID specified here. This allows the captured traffic to be sent to a different host on a different VLAN that shares a trunk port carrying the original traffic, with the original VLAN ID, as well.
  • Before this release, all Virtual Machine traffic was grouped together as VM Traffic Type.
  • 802.1p is only available with the dvSwitch, not stand-alone switches (ATE Jan 2011)
  • 802.1p is only available with the dvSwitch, not standalone switches (ATE Jan 2011). It is not sufficient to provide I/O resources on the server only by programming shares and limits; to provide an end-to-end SLA guarantee, customers need to make sure that the external devices connected to the server also treat the traffic according to its priority. This feature now allows an administrator to determine which traffic is important and should have higher priority on the network.
  • vSphere 5.0 introduces a new version of VMware's file system, VMFS-5. VMFS-5 contains many important architectural changes allowing for greater scalability and performance while reducing complexity. The most compelling enhancements are mentioned on this slide; probably the most visible is support for 64TB volumes with just a single LUN, so there is no need for extents. Another very welcome change is the ability to reclaim dead space for thin-provisioned disks: it enables you to reclaim allocated blocks from a VMFS volume and give them back to the array as free space. We will discuss this in the VAAI section.
  • One noticeable difference between VMFS-3 and VMFS-5 is that VMFS-5 only supports the 1MB block size. In VMFS-3 a larger block size was needed to create a large 2TB file; since we have changed the way pointer blocks work in VMFS-5, the need for a larger block size is moot, and we can now create 2TB files with a 1MB block size. ATS "Complete" implements a locking mechanism that fully leverages the capabilities of the array to reduce stress and enhance scaling for shared volumes. Although a minor change, the different implementation of sub-blocks is important for customers, as it allows greater scale and reduces the overhead often associated with small files: where a .vmx file would always take up 64KB, we now have 8KB sub-blocks, and for really small files we offer 1KB blocks as well. Technically speaking this is great, but what does it mean for your customer? Simply put: larger volumes and better scalability mean fewer volumes are needed and subsequently less management is required. So what if I already have VMFS-3?
  • Upgrade! VMFS upgrades are non-disruptive and supported, so your customers can easily upgrade from VMFS-3 to VMFS-5 without the need for complex migration scenarios. Upgrades can also be done from the command line using the vmkfstools -T option (a hedged example follows below). Note that VMFS-3 volumes upgraded to VMFS-5 maintain their original block size, but they do offer the ability to grow beyond 2TB just like a newly created VMFS-5 volume.
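    As a minimal sketch (not from this deck), checking the filesystem version and performing the in-place upgrade from the ESXi Shell might look like the lines below; the datastore name is a placeholder.
      # Show the current VMFS version and block size of a datastore (name is hypothetical)
      vmkfstools -Ph /vmfs/volumes/Datastore01
      # Upgrade the volume in place from VMFS-3 to VMFS-5 (non-disruptive, but one-way)
      vmkfstools -T /vmfs/volumes/Datastore01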
  • What is VAAI? It stands for vStorage APIs for Array Integration, and that is exactly what it does: it allows tighter integration to reduce the overhead that specific tasks cause on ESXi by offloading those tasks to the array, for instance a Storage vMotion as shown in the diagram. With VI3 the application (in this case Storage vMotion) would buffer the data and then write it back to disk. With vSphere 4.0 we improved this for non-VAAI-capable arrays so that a copy wouldn't go all the way up into the application buffers. With vSphere 4.1 we started offloading these tasks so that the array handles them; generally speaking the arrays are far more efficient at these types of tasks. On top of what we offer today, we've improved some of these "traditional" VAAI primitives, introduced new ones, and added support for NAS storage.
  • vSphere 4.1 introduced T10 compliance for Block Zeroing, allowing vendors that implement the T10 standards to use the default shipped plugin. vSphere 5.0 introduces enhanced support for T10, allowing the use of VAAI primitives without the need to install a plugin and enabling support for many additional storage devices. For those who are not familiar with VAAI, "primitive" is basically another word for a piece of functionality provided by VAAI, a feature. The final point is interesting: in previous versions ATS was used for locks only when there was no contention, and when contention arose we reverted back to SCSI reservations. In this release, ATS is also used in situations where contention arises.
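    As a quick, hedged illustration (not from this deck), the hardware-acceleration status of a device can be inspected from the ESXi Shell; the device identifier below is a placeholder.
      # List devices along with their VAAI (hardware acceleration) support status
      esxcli storage core device list
      # Show per-primitive VAAI status (ATS, Clone, Zero, Delete) for one device
      esxcli storage core device vaai status get -d naa.60000000000000000000000000000001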
  • NAS VAAI plugins are not shipped with vSphere 5.0; these plugins are developed and distributed by storage vendors, but signed by VMware's certification program to guarantee quality. Hardware acceleration for NAS allows faster provisioning and the use of thick virtual disks through two newly introduced VAAI primitives: Full File Clone (similar to Full Copy; allows virtual disks to be cloned by the NAS device) and Reserve Space (allows creation of thick virtual disk files on NAS). Full File Clone enables fast provisioning of virtual machines on NAS devices, similar to the experience "Full Copy" provides for iSCSI/FC arrays.
  • Prior to vSphere 5.0, a virtual disk on NFS was always created as a thin-provisioned disk, with no option to create a thick disk. Starting with vSphere 5.0, the VAAI NAS extensions allow NAS vendors to reserve space for an entire virtual disk, which enables the creation of thick disks on NFS datastores.
  • vSphere 5.0 introduces multiple VAAI enhancements for environments using array-based thin provisioning capabilities. Historically, the two major challenges of thin-provisioned LUNs have been the reclamation of dead space and the monitoring of space usage. Dead space reclamation offers the ability to reclaim blocks of a thin-provisioned LUN on the array when a virtual disk is deleted or migrated to a different datastore, for example by Storage DRS.
  • Step 1: Historically, when VMs were migrated from a datastore, the blocks used by the VM prior to the migration were still reported as in use by the array. This meant that usage statistics from the storage array could be misleading and expensive disk space could be wasted. Step 2: With this new VAAI primitive, the storage device is informed that the blocks are no longer used, resulting in better reporting of disk space. Step 3: Even more important, it allows the array to shrink the volume back in size so that this free space can be re-used for other purposes. This allows administrators to use thin-provisioned LUNs in the most complex scenarios.
  • An out-of-space condition is the nightmare of every storage and virtualization administrator in array-based thin-provisioned environments. Storage over-subscription in thin-provisioned environments can lead to catastrophic scenarios when an out-of-space condition is encountered. This is of course more of an operational issue, but we are trying to help prevent these scenarios to begin with by offering better monitoring capabilities.
  • vSphere 5.0 mitigates these problems and simplifies storage management through the addition of advanced warnings and errors when thresholds are reached for thin-provisioned datastores. These enhancements include a mechanism to temporarily pause a virtual machine when disk space is exhausted, allowing for the allocation of additional space to the datastore or the migration of an existing virtual machine without resulting in the failure of the virtual machine.
  • With vSphere 5.0, multiple enhancements have been introduced to increase efficiency of the Storage vMotion process, to improve overall performance, and for enhanced supportability. Storage vMotion in vSphere 5.0 now also supports the migration of virtual machines with a vSphere Snapshot and the migration of linked clones.
  • The enhancements to Storage vMotion include a new and more efficient migration process through the use of a new feature called Mirror Mode. Mirror Mode enables a single-pass block copy of the source disk to the destination disk by mirroring I/Os to already-copied blocks. Not only has the efficiency of Storage vMotion increased, but also migration time predictability, making it easier to plan migrations and reducing the elapsed time per migration. The Mirror Driver is enabled on a per-virtual-machine basis and resides within the VMkernel. When the Guest OS of the virtual machine undergoing the Storage vMotion initiates a write to an already copied block, the Mirror Driver writes it to both the source and the destination disk.
  • The Storage vMotion control logic is in the VMX. The Storage vMotion thread first creates the destination disk. After that, a stun/unstun of the VM allows the SVM Mirror Driver to be installed; I/Os to the source will be mirrored to the destination. The new driver leverages the Data Mover to implement a single-pass block copy of the source to the destination disk, and in addition it mirrors I/O between the two disks. This is a synchronous write, meaning that the mirror driver acknowledges the write to the Guest OS only when it has received the acknowledgement from both the source and the destination.
  • vSphere 5.0 extends Storage I/O Control (SIOC) to provide cluster-wide I/O shares and limits for NFS datastores. This means that no single virtual machine should be able to create a bottleneck in any environment, regardless of the type of shared storage used. SIOC automatically throttles a virtual machine that is consuming a disparate amount of I/O bandwidth when the configured latency threshold has been exceeded. For those who have never used Storage I/O Control: it throttles on a host level by reducing the device queue depth. In this example that is the data-mining virtual machine, which happens to reside on a different host. To allow other virtual machines using the same datastore to receive their fair share of I/O bandwidth, a share-based fairness mechanism has been created, which is now supported on both NFS and VMFS.
  • This is where most people start to get excited: Storage DRS. So what are people doing today? They try to identify the LUN with the most available disk space and validate that the latency isn't too high; well, in some cases they will do that, and in many cases customers will just hope for the best. So how does that work with Storage DRS? Storage DRS will automatically select the best placement for your VM based on available disk space and current I/O load. You can also use affinity rules, which are similar to the DRS affinity rules; this helps you avoid placing VMs with a similar task on the same datastore, and it can help keep virtual machines together when required.
  • So how exactly does Storage DRS solve these problems? How will it make your life better? First and foremost, initial placement of virtual machines and VMDKs: this placement is based on space and I/O capacity, and Storage DRS will select the best datastore on which to place the virtual machine or virtual disk in the selected datastore cluster. When Storage DRS is set to fully automatic, it will perform automated load-balancing actions; of course this can also be configured as manual, which is actually the default today. Load balancing again is based on space and I/O capacity: if and when required, Storage DRS will make recommendations based on space and I/O capacity, but only when a specific threshold is reached. So what is this datastore cluster?
  • Datastore clusters form the basis of Storage DRS. A datastore cluster is a collection of datastores aggregated into a single unit of consumption from an administrator's perspective. When a datastore cluster is created, Storage DRS can manage the storage resources comparably to how DRS manages compute resources in a cluster. As with a cluster of hosts, a datastore cluster is used to aggregate storage resources, enabling smart and rapid placement of new virtual machines and virtual disks and load balancing of existing workloads. The diagram shows this nicely: when you create a VM, you can select a datastore cluster as opposed to individual LUNs.
  • Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for this virtual machine or virtual machine disk after which a recommendation for initial placement is done based on I/O and space capacity. As just mentioned Initial placement in a manual provisioning process has proven to be very complex in most environments and as such important provisioning factors like current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing… It is Initial Placement where most people will start off with and where most people will benefit from the most as it will reduce operational overhead associated with the provisioning of virtual machines.
  • Ongoing balancing recommendations are made when one or more datastores in a datastore cluster exceed the user-configurable space utilization or I/O latency thresholds. These thresholds are typically defined during the configuration of the datastore cluster. Storage DRS uses vCenter Server's datastore utilization reporting mechanism to make recommendations whenever the configured utilized-space threshold is exceeded. I/O load is currently evaluated every 8 hours by default, with a default latency threshold of 15ms. Only when this I/O latency threshold is exceeded will Storage DRS calculate all possible moves to balance the load accordingly, while considering the cost and the benefit of the migration. If the benefit doesn't last for at least 24 hours, Storage DRS will not make the recommendation.
  • The first block shows the thresholds on which Storage DRS is triggered: 80% for utilized space and 15 milliseconds for I/O latency, meaning that the Storage DRS algorithm is invoked when these thresholds are exceeded. In the case of utilized space this happens when vCenter collects the datastore statistics and notices the threshold has been exceeded; in the case of I/O load balancing this is slightly different, which is what the second block shows. Every 8 hours (currently), Storage DRS evaluates the I/O imbalance and makes recommendations if and when the thresholds are exceeded. Note that these recommendations are only made when the difference between the source and destination is at least 5% and the cost/risk/benefit analysis has a positive result.
  • Datastores can be placed into maintenance mode. Storage DRS will move all registered VMs from VOL1 to the three remaining datastores in the cluster so that VOL1 can be taken offline. This can be used for tasks like storage migrations and when physical settings of a volume need to be changed without downtime or a large risk associated with this action. Be aware that currently only VMs are moved and Templates and ISOs are left untouched
  • Storage DRS affinity rules enable controlling which virtual disks should or should not be placed on the same datastore within a datastore cluster. By default, a virtual machine's virtual disks are kept together on the same datastore. Storage DRS offers three types of affinity rules: VMDK anti-affinity (virtual disks are placed on different datastores), VMDK affinity (virtual disks are kept together on the same datastore), and VM anti-affinity (virtual machines, including their associated disks, are placed on different datastores).
  • You can stop gathering information for a certain time period/window, especially when you know there is going to be a lot of I/O and perhaps an increase in latency.
  • When creating a virtual machine or a new virtual disk, you now have the option to select a datastore cluster instead of a datastore. Based on the selected datastore cluster and the current I/O load and space capacity, Storage DRS will make a recommendation; of course, you are free to ignore this recommendation.
  • For manual or fully automated mode, the recommendations are shown in the Storage DRS tab. These recommendations include details like "utilization before" and "utilization after", which allow you to make a decision based on the provided details. Note that you can always override the recommendations and, for instance, choose to apply only the specific recommendations you feel are appropriate at that point in time.
  • That is where our new API comes into play. vStorage APIs for Storage Awareness (VASA) is a new set of APIs that enables vCenter to see the capabilities of the storage array LUNs/datastores, making it much easier to select the appropriate disk for virtual machine placement. Storage capabilities such as RAID level, thin or thick provisioning, replication state, and much more can now be made visible within vCenter. VASA eliminates the need to maintain massive spreadsheets detailing the storage capabilities of each LUN in order to guarantee the correct SLA to virtual machines.
  • With VASA, storage vendors can provide vSphere with information about the storage environment; it enables tighter integration between storage and the virtual infrastructure, including information about storage health status, configuration, capacity, and thin provisioning. For the first time we have an end-to-end story: the storage array informs the VASA storage provider of its capabilities, and the storage provider in turn informs vCenter, so users can see storage array capabilities from the vSphere Client. Through the new VM Storage Profiles, these storage capabilities can then be displayed in vCenter to assist administrators in choosing the right storage in terms of space, performance, and SLA requirements. This information enables the administrator to take the appropriate actions based on health and usage information.
  • What are some of the issues your customers are facing? The feedback that we generally had was that it was difficult to match SLA requirements with specific tiers of storage. People were maintaining massive spreadsheets to correlate storage volumes with tiers and in its turn with virtual machines. Not only during provisioning but also when manually migrating storage.
  • So what does Profile Driven Storage try to achieve? Simply put: minimize the amount of time required to provision virtual machines. Provisioning virtual machines isn't just selecting a random datastore; you need to know the requirements of the virtual machine and then select the appropriate volume to the best of your knowledge. Profile Driven Storage helps with that by providing better insight into storage characteristics, allowing custom tags, and linking virtual machines to profiles.
  • Today: currently we identify the requirements of the virtual machine, try to find the optimal datastore based on those requirements, and create the virtual machine or disk. In some cases customers even periodically check whether VMs are compliant, but in many cases this is neglected. Storage DRS: Storage DRS only solves that problem partly, as we still need to manually identify the correct datastore cluster, and even when grouping datastores into a cluster we need to manually verify that all LUNs are alike; and again there is that manual periodic check. Storage DRS and Profile Driven Storage: when using Profile Driven Storage and Storage DRS in conjunction, these problems are solved. Datastore clusters can be created based on the characteristics provided through VASA or the custom tags, and when deploying virtual machines a storage profile can be selected, ensuring that the virtual machine will be compliant.
  • Step 1: You can see the storage back end at the bottom; in this example we are using three different arrays. Step 2: Through VASA, specific system capabilities are surfaced; in our example that is RAID-5 (Replicated) and RAID10 (NonReplicated). Step 3: These are passed to vCenter, making it easier to recognize "tiers" within the vSphere Client; note that you can use multiple VASA providers. Step 4: You can also tag datastores with a user-defined tag in case the vendor doesn't provide a VASA plugin, or when you want to use a business-specific tag such as tier definitions or DR level; it can be anything. Step 5: Now you create VM Storage Profiles; these profiles will usually have business-specific or user-friendly tags as well. Step 6: These VM Storage Profiles are linked to the storage capabilities surfaced through VASA or the user-defined capabilities. Step 7: Whenever you create a virtual machine, you can now link it to a specific VM Storage Profile, ensuring the virtual machine or virtual disk ends up on the right datastore. A VM can then be checked for storage compliance: if the VM is placed on storage that has the same capabilities as those defined in the VM Storage Profile, it is said to be compliant.
  • You can still choose other datastores outside of the VM Storage Policy, but these put the Virtual Machine into a non-compliant state.
  • Step 1: The diagram we just showed gave a total overview, but most customers are concerned about just one thing, compliance, so how does this work? As mentioned, capabilities are surfaced through VASA. Step 2: These capabilities are linked to a specific VM Storage Profile. Step 3: When a new virtual machine is created, or an existing virtual machine is tagged, the result (Step 4) will be either compliant or not compliant; it is as simple as that.
  • And of course compliance is also visible at the virtual machine layer!
  • The vSphere Web Client is browser based, allowing for a wide range of platform support, so the environment can be managed from anywhere. It currently supports Firefox on Windows and Linux and IE on Windows; more platform support (Chrome, etc.) will be evaluated as a future enhancement. It supports multiple vCenter instances. Why another client? This is what we are moving to. In the first release, the vSphere Web Client is not a superset of the functionality in the C# client currently available; over time, this will be improved.
  • Common question: Why Flex? Flex gave us the richest and fullest-featured environment at the best performance currently possible. Additionally, it provides huge development libraries and functionality (200,000+ users develop and contribute to the Flex community, per Adobe), while HTML5 and others are still up-and-coming technologies. On performance, Flex outperforms other frameworks and languages in the enterprise, and Flex can run as a web application, offering web scale and ubiquitous access across many browsers and environments.
  • Eclipse-inspired model (in Flex): a thin framework in which extension hosts specify and process extensions. Concepts: Extension, a unit of code or data introduced into the application through the extensibility mechanism; Extension Host, a component that hosts or consumes one or more extensions; Extension Point, an identifiable entry point in the application targeted by extensions, owned by an extension host; Extension Filter, an extension of the extension manager that post-processes a request for extensions based on extension host properties and extension metadata. Bundle deployment and versioning are OSGi based.
  • Over the years, customers have expressed interest in enhancing the patching and upgrade process for vCenter, simplifying the deployment overhead associated with new vCenter instances, and reducing the licensing costs associated with vCenter deployments; non-Windows shops didn't want to have to pay for additional MS licenses in order to use vSphere.
  • Scalability: the VCSA is all about being able to scale; by making deployment, configuration, and management easier and faster, customers can quickly increase the scale of their environment. Visibility: same as vCenter. Automation: deployments of vCenter servers can easily be automated.
  • Disk footprint: distribution package (ZIP of VMX or OVF) approximately 3.6GB; minimum deployed VM footprint approximately 5GB; maximum deployed VM footprint approximately 80GB (tentative maximum disk size). Memory footprint: the memory usage of vCenter on Linux is the same as that of vCenter on Windows; the actual memory usage on ESX is determined by the memory configured for the virtual appliance itself. A typical small inventory needs approximately 2GB of memory for the VM, while a large inventory requires as much as 4 or 6GB (including the OS's memory requirements, of course). It is recommended that users thin provision the appliance and let the disk grow rather than grabbing all the disk up front. NFS mount support is available to accommodate larger log files. DB2 Express is used as the embedded database.
  • Linked Mode requires ADAM today, which is a Windows technology; hence it is not supported with the VCSA. Use of MS SQL requires a database driver to communicate with Linux; we are evaluating options in this area now. Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM), is used to provide directory services for directory-enabled applications; instead of using an organization's AD DS database to store directory-enabled application data, AD LDS can be used to store the data.
  • VMware will fully manage the appliance by providing patches for both the just-enough OS (JeOS) included in the appliance and vCenter itself; this should make it easier for IT administrators to manage vCenter servers deployed in their data center. VUM cannot patch vCenter at present, so users will have to use the appliance configuration GUI to manage the lifecycle until VUM adds support. The issue is not a lack of functionality, but rather the VUM architecture: VUM expects vCenter to be available to patch VMs, and if the appliance has to be updated and rebooted, the update is stranded.
  • Impact of vCenter downtime (component: impact experienced). Virtual machines: management requires direct connections; can't provision new VMs from templates. ESXi servers: management requires direct connections. Performance and monitoring statistics: historical records will have gaps. vMotion / Storage vMotion: unavailable. vSphere DRS: unavailable. vCenter plug-ins (e.g. VUM): unavailable. vSphere HA / FT: HA/FT failover works once, but admission control is unavailable. VMware View: cannot provision new desktop instances.
  • Hardware RAID: the supported setup for hardware RAID on the physical servers is a RAID 1+0 (RAID 10) configuration; this way, if a spindle is lost, it does not affect the underlying volume. VSA Cluster Service: the VSA Cluster Service is a Windows service that installs together with VSA Manager on the vCenter Server machine and maintains the status of the VSA cluster. The service is only used in a VSA cluster with two members and does not provide storage like the other members do; it helps maintain a majority of VSA cluster members in case of failures. The VMs running on the ESXi servers use the shared storage; at install time there should be no running VMs on the ESXi servers.
  • Step 1: This screen displays some of the new features that will be enabled. vMotion and HA will be enabled to provide additional resilience to the VSA; this is a unique selling point, as the VSA facilitates vMotion and HA without a physical storage array providing shared storage, and it also removes the complexity of creating a vSphere HA cluster and configuring vMotion networks. Step 2: The next step is to select a datacenter from your vCenter inventory; in this example, the datacenter pml-pod13 has 3 x ESXi 5.0 hosts. Step 3: In this example, we are building a 3-node cluster. A host audit is run by the installer to verify that hosts are compatible with the VSA; one thing to note about the host audit is that hosts are split into different processor types, and hosts used for the VSA must have the same processor type for vMotion compatibility. Step 4: Provide static IP addresses for all the VSA nodes in the cluster. When the user types in the management IP address, the wizard automatically generates the rest of the IPs in ascending order; the user can of course modify the individual IPs separately.
  • Step 5: This screen allows you to zero out the disks (eagerzeroedthick) as they are created. This will slow down the installation process but should speed up initial access; otherwise the first access to each 1 MB section of disk has to be zeroed (zeroedthick). After a while, once the disk has been accessed fully, the performance should be equivalent no matter which option is chosen. Step 6: The Ready to Install screen displays the previously entered configuration details; if correct, click the Install button. Compare the simplicity of this install with the complexity of deploying a physical SAN, where you may need to purchase and configure a physical switch (iSCSI, FC, FCoE) as well as do masking, zoning, etc.
  • There is no support for rolling upgrade in the 1.0 release, so the whole cluster will have to be placed into maintenance mode. Rolling upgrade is a possible future feature of the product.

Presentation Transcript

  • vSphere 5 – What's New. Your Cloud. Intelligent Virtual Infrastructure. Delivered Your Way. Cloud Infrastructure Product Marketing Team. Confidential © 2009 VMware Inc. All rights reserved.
  • Agenda: Introduction; ESXi Performance Enhancements; Platform; Availability; DRS and vMotion; Networking; Storage; vCenter; Virtual Storage Appliance.
  • Technical Barriers to 100% Virtualization Have Been Falling. Application performance requirements (for 95% of apps) versus what each release supports:
    Resource | 95% of apps require | ESX 1       | ESX 2         | VMware Inf. 3.0/3.5 | vSphere 4     | vSphere 5
    CPU      | 1 to 2 CPUs         | 1 vCPU      | 2 vCPUs       | 4 vCPUs             | 8 vCPUs       | 32 vCPUs
    Memory   | < 4 GB at peak      | 2 GB per VM | 3.6 GB per VM | 16/64 GB per VM     | 256 GB per VM | 1,000 GB per VM
    Network  | < 2.4 Mb/s          | < .5 Gb/s   | .9 Gb/s       | 9 Gb/s              | 30 Gb/s       | > 36 Gb/s
    IOPS     | < 10,000            | < 5,000     | 7,000         | 100,000             | 300,000       | 1,000,000
  • ESXi is the Trusted Place to Run Business Critical Applications. Overview: vSphere 5.0 exclusively utilizes the thin ESXi hypervisor, a 144MB footprint versus 2GB for VMware ESX with the service console. Benefits: smaller security footprint, streamlined deployment and configuration, simplified patching and updating model.
  • vSphere 5.0 "Monster VMs". Overview: create virtual machines with up to 32 vCPUs and 1 TB of RAM. Benefits: 4x the size of previous vSphere versions; run even the largest applications in vSphere, including very large databases.
  • How Powerful is a VM Today? The movie Jurassic Park's computer graphics were created on 15 Silicon Graphics Indigo machines. A vSphere 5.0 VM's 1 TB vRAM limit is 207 times the total RAM on those 15 machines!
  • Bigger Than The Biggest Exchange Configuration: 25% performance improvement (for a 4 vCPU VM). Single Exchange 2010 role on a server, Microsoft recommended maximum = 12 cores/vCPUs: ~17,900 mailboxes. Multiple Exchange 2010 roles on a server, Microsoft recommended maximum = 24 cores/vCPUs: ~17,900 mailboxes. Single VM = 32 vCPU: ~47,000 mailboxes.
  • Welcome Home, Massive Databases. One massive database with 2 billion transactions per day fits in a single VM with 32 vCPUs and 1 TB RAM.
  • Agenda: vSphere 5.0 Platform ESXi CLI ESXi Firewall Image Builder Auto Deploy vSphere Update Manager Platform Enhancements9 Confidential
  • vSphere 5.0 CLI Components ESXi Shell • Rebranded Tech Support Mode • Local and remote (SSH) vCLI • ‗esxcli‘ Command Set • Local and remote CLI ESXi Shell vCLI • New and improved in 5.0 • ‗vicfg‘ Command Set • Remote CLI Only • Other Commands: • vmware-cmd, vmkfstools, etc. vMA PowerCLI • vCLI available for Linux and Windows vMA • vCLI Appliance PowerCLI • Windows CLI Tool10 Confidential
  • vSphere 5.0 CLI Compatibility:
    Commands    | Run Local | Run Remote | ESX/ESXi 4.x | ESXi 5.x
    esxcfg (1)  | Yes       | No         | Yes          | No
    esxcli (2)  | Yes       | Yes        | No           | Yes
    vicfg (3)   | No        | Yes        | Yes          | Yes
    vmware-cmd  | Yes       | Yes        | Yes          | Yes
    vmkfstools  | Yes       | Yes        | Yes          | Yes
    PowerCLI    | No        | Yes        | Yes          | Yes
    (1) 'esxcfg' commands are deprecated in 5.0 (replaced with esxcli). (2) 'esxcli' in 4.x is *not* backward compatible with 5.0. (3) 'vicfg' is used for remote CLI only.
  • ESXi Command Line Why a new ESXi CLI tool? • Console CLI and remote vCLI are different • Need to learn multiple CLIs • Local commands don‘t work remote, remote commands don‘t work locally • Commands evolved from multiple sources using different standards • No formal process for adding or updating commands • Inconsistent output and syntax • Output format changes from command to command • Different commands have different input parameters • Remote CLI limited compared to local CLI ESXCLI establishes a standard with an extensible framework. Going forward ESXCLI commands will be backward compatible12 Confidential
  • ESXi Command Line ESXCLI Overview • Designed with identical local and remote versions • Locally: commands available in ESXi Shell • Remotely: part of vCLI, with additional requirement of authentication and privileges • Uniform syntax • Directory-like layout of commands • Naming is namespace[ namespace …] command, e.g. • esxcli corestorage device list • Discoverable • Commands can be listed in each namespace • Full in-line help; each command has ―--help‖ option The new 5.0 ‗esxcli‘ commands are not backward compatible with prior releases13 Confidential
  • ESXi Command Line ESXCLI Overview (cont.) • ESXCLI replaces the ―esxcfg-*‖ style commands • Primarily used for host configuration • No current support for VM operations (use ―vicfg-‖) • Includes additional functionality not found in ―esxcfg-*‖ • Network Policies, Security, vibs, etc. • Intended to eventually be full replacement for vicfg-* commands14 Confidential
  • Agenda: vSphere 5.0 Platform Welcome ESXi CLI ESXi Firewall Image Builder Auto Deploy vSphere Update Manager Platform Enhancements15 Confidential
  • ESXi 5.0 Firewall Features Capabilities • ESXi 5.0 has a new firewall engine which is not based on iptables. • The firewall is service oriented, and is a stateless firewall. • Users have the ability to restrict access to specific services based on IP address/Subnet Mask. Management • The GUI for configuring the firewall on ESXi 5.0 is similar to that used with the classic ESX firewall — customers familiar with the classic ESX firewall should not have any difficulty with using the ESXi 5.0 version. • There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0). • There is Host Profile support for the ESXi 5.0 firewall. • Customers who upgrade from Classic ESX to ESXi 5.0 will have their firewall settings preserved.16 Confidential
  • The esxcli Network Firewall Namespace: esxcli > network > firewall, with get, set, load, unload, and refresh at the top level; a ruleset namespace with list and set; ruleset allowedip with add, list, and remove; and ruleset rule list.
  • UI: Security Profile. The ESXi firewall can be managed via the vSphere Client: through Configuration > Security Profile, one can observe the enabled incoming/outgoing services, the opened port list for each service, and the allowed IP list for each service. A hedged command-line equivalent follows below.
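    As a minimal, hedged sketch (not from the deck), the equivalent inspection and allowed-IP restriction from the command line might look like the lines below; the ruleset name and subnet are example values.
      # Show overall firewall state and the per-service rulesets
      esxcli network firewall get
      esxcli network firewall ruleset list
      # Restrict the SSH service to a specific management subnet (example values)
      esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
      esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.10.0/24
      esxcli network firewall ruleset allowedip list --ruleset-id sshServer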
  • Agenda: vSphere 5.0 Platform Welcome ESXi CLI ESXi Firewall Image Builder Auto Deploy vSphere Update Manager Platform Enhancements19 Confidential
  • Composition of an ESXi Image Core CIM Hypervisor Providers Plug-in Drivers Components20 Confidential
  • ESXi Image Deployment Challenges • Standard ESXi image from VMware download site is sometimes limited • Doesn‘t have all drivers or CIM providers for specific hardware • Doesn‘t contain vendor specific plug-in components ? Missing CIM provider Missing driver Standard ESXi ISO • Base providers • Base drivers21 Confidential
  • Describing ESXi Components VIB • ―VMware Infrastructure Bundle‖ (VIB) • Software packaging format used for ESXi • Often referred to as a ―Software Package‖ • Used for all components • ESXi Base Image • Drivers • CIM providers • Other components • Can specify relationship with other VIBs • VIBs that it depends on • VIBs that it conflicts with22 Confidential
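    As a hedged aside (not from the deck), installed VIBs on a running ESXi 5.x host can be listed and updated from the command line; the depot path and bundle name below are placeholders.
      # List the VIBs installed on the host
      esxcli software vib list
      # Install or update a VIB from an offline depot bundle (example path)
      esxcli software vib install -d /vmfs/volumes/Datastore01/vendor-driver-depot.zip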
  • Managing Customized ESXi Images. Image Builder: a set of command line utilities for creating and managing image profiles and for building customized ESXi boot images, e.g. an installable ISO or a bundle suitable for PXE installation or flash. The initial version is based on PowerCLI, delivered as a snap-in component bundled as part of VMware's PowerCLI tools. Depot: a repository containing image profiles and VIBs; you can have multiple depots, of two types: on a web server, or encapsulated in a .ZIP file.
  • Building an Image (workflow, shown across several slides): on a Windows host with PowerCLI and the Image Builder snap-in, start a PowerCLI session; activate the Image Builder snap-in; connect to one or more depots (ESXi VIBs, driver VIBs, OEM VIBs); clone and modify an existing image profile; and generate the new image as an ISO or a PXE-bootable image. A hedged PowerCLI sketch follows below.
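    As a minimal sketch under assumptions (the depot paths, profile names, and vendor string are placeholders, and the exact base-profile name varies by build), the workflow above might look like this in PowerCLI with the Image Builder snap-in loaded:
      # Connect to the ESXi offline bundle and a vendor depot (paths are hypothetical)
      Add-EsxSoftwareDepot C:\depots\VMware-ESXi-5.0.0-offline-bundle.zip
      Add-EsxSoftwareDepot C:\depots\vendor-driver-depot.zip
      # List available image profiles, then clone one as a starting point
      Get-EsxImageProfile
      New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi50-Custom" -Vendor "ExampleCorp"
      # Add the vendor driver VIB to the cloned profile (package name is hypothetical)
      Add-EsxSoftwarePackage -ImageProfile "ESXi50-Custom" -SoftwarePackage "vendor-driver"
      # Export the customized profile as an installable ISO and as a PXE/Auto Deploy bundle
      Export-EsxImageProfile -ImageProfile "ESXi50-Custom" -ExportToIso -FilePath C:\images\ESXi50-Custom.iso
      Export-EsxImageProfile -ImageProfile "ESXi50-Custom" -ExportToBundle -FilePath C:\images\ESXi50-Custom.zip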
  • Host Profiles Enhancements: a new feature enables greater flexibility and automation. Using an Answer File, administrators can configure host-specific settings to be used in conjunction with the common settings in the Host Profile, avoiding the need to type in any host-specific parameters; this enables the use of Host Profiles to fully configure a host during an automated deployment. Host Profiles now support a greatly expanded set of configurations, including iSCSI, FCoE, native multipathing, device claiming and PSP device settings, kernel module settings, and more.
  • Agenda: vSphere 5.0 Platform. Welcome; ESXi CLI; ESXi Firewall; Image Builder; Auto Deploy; vSphere Update Manager; Platform Enhancements.
  • What Is Auto Deploy: a new host deployment method introduced in vSphere 5.0, based on PXE boot and working with Image Builder, vCenter Server, and Host Profiles. How it works: PXE boot the server; the ESXi image profile is loaded into host memory via the Auto Deploy Server; configuration is applied using an Answer File / Host Profile; the host is placed/connected in vCenter. Benefits: no boot disk; quickly and easily deploy large numbers of ESXi hosts; share a standard ESXi image across many hosts; the host image is decoupled from the physical server; recover a host without recovering hardware or having to restore from backup.
  • What Is Auto Deploy: without Auto Deploy versus with Auto Deploy. Without: the host image is tied to the physical server (each host needs a full install and config, hosts are not easy to recover, redundant boot disks or a dedicated LUN are required); building hosts takes a lot of time and effort (deploying hosts is repetitive and tedious, with heavy reliance on scripting that needs updating for each new release); and configuration drift between hosts is always a concern (it compromises HA/DR, and managing drift consumes admin resources). With: the host image is decoupled from the server (it runs on any server with matching hardware, the config is stored in a Host Profile, and there is no boot disk); an agile deployment model (deploy many hosts quickly and efficiently, with no pre/post install scripts and no need to update with each release); and host state is guaranteed (a single boot image is shared across hosts, every reboot provides a consistent image, and the need to detect/correct drift is eliminated).
  • What Is Auto Deploy No Boot Disk? Where does it go? Boot Disk Platform Composition: ESXi base, drivers, CIM providers, … Configuration: networking, storage, All information on the state date/time, firewall, admin password, … of the host is stored off the host in vCenter Running State: VM Inventory, HA state, License, DPM configuration Event Recording: log files, core dump33 Confidential
  • What Is Auto Deploy No Boot Disk? Where does it go? Boot Disk Platform Composition: ESXi base, drivers, CIM providers, … Image Profile Configuration: networking, storage, date/time, firewall, admin password, … Host Profile Running State: VM Inventory, HA state, License, DPM configuration vCenter Server Event Recording: log files, core dump Add-on Components34 Confidential
  • Auto Deploy Components:
    Component               | Sub-Components                              | Notes
    PXE Boot Infrastructure | DHCP Server, TFTP Server                    | Set up independently; gPXE file from vCenter; can use the Auto Deploy appliance
    Auto Deploy Server      | Rules Engine, PowerCLI snap-in, Web Server  | Build/manage rules; match a server to an Image Profile and Host Profile; deploy the server
    Image Builder           | Image Profiles, PowerCLI snap-in            | Combine the ESXi image with 3rd-party VIBs to create custom Image Profiles
    vCenter Server          | Stores rules, Host Profiles, Answer Files   | Provides the store for rules; host configs saved in Host Profiles; custom host settings saved in Answer Files
  • What Is Auto Deploy (INTERNAL USE ONLY): target audience for Auto Deploy in vSphere 5.0. Customers with large vSphere deployments (large numbers of ESXi hosts, high host refresh rates); experienced vSphere administrators (comfortable with augmenting and customizing solutions, comfortable with scripting); quick to adopt new technology (comfortable working with new software, beta customers); strong VMware relationship (TAM customers, good relationship with sales and consulting).
  • Auto Deploy Example, Initial Boot (diagram sequence): provision a new host. 1) PXE boot the server: the host sends a DHCP request and receives the gPXE image from TFTP. 2) The host contacts the Auto Deploy Server. 3) The Rules Engine determines the Image Profile, Host Profile, and cluster (e.g. Image Profile X, Host Profile 1, Cluster B). 4) The image is pushed to the host and the host profile is applied; the Image Profile and Host Profile are cached. 5) The host is placed into the cluster.
  • Auto Deploy Example, Subsequent Reboot (diagram sequence): reboot an Auto Deploy host. 1) The host PXE boots (DHCP request, gPXE image). 2) The host contacts the Auto Deploy Server. 3) The Image Profile and Host Profile are loaded from the cache on vCenter. 4) The host is placed into the cluster. A hedged rules-engine sketch follows below.
  • Agenda: vSphere 5.0 Platform Welcome ESXi CLI ESXi Firewall Image Builder Auto Deploy vSphere Update Manager Platform Enhancements48 Confidential
  • New Update Manager Features Optimized Cluster Patching and Upgrade: • Based on available cluster capacity, Update Manager can remediate an optimal number of ESX/ESXi servers simultaneously without virtual machine downtime. • For scenarios where turnaround time is more important than virtual machine uptime, you can choose to remediate all ESX servers in a cluster simultaneously. Less Downtime for VMware Tools Upgrade • Can schedule an upgrade to occur at the time of the next virtual machine reboot. More Flexible Update Manager Download Service (UMDS) • Configure multiple download URLs. • Restrict downloads to only those ESX product versions and types that are relevant to your environment. New Update Manager Utility: • Helps users reconfigure the setup of Update Manager. • Change the database password and proxy authentication. • Replace the SSL certificates for Update Manager. 49 Confidential
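  For reference, a hedged sketch of driving a cluster remediation from the Update Manager PowerCLI plug-in (installed separately from core PowerCLI). The cluster and baseline names are placeholders; options such as parallel remediation are normally controlled in the remediation wizard or baseline settings rather than shown here.

    $cluster  = Get-Cluster -Name "ClusterB"
    $baseline = Get-Baseline -Name "Critical Host Patches"

    Attach-Baseline -Entity $cluster -Baseline $baseline   # attach the baseline to the cluster
    Scan-Inventory  -Entity $cluster                        # scan hosts for compliance
    Get-Compliance  -Entity $cluster                        # review results before remediating
    Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false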
  • ESX to ESXi Migration with VMware Update Manager Supported Paths • Migration from ESX ("Classic") 4.x to ESXi 5.0 • For VUM-driven migration, pre-4.x hosts have to be upgraded to 4.x first • It may be better simply to do a fresh install of ESXi 5.0 Preservation of Configuration Information • Most standard configurations will be preserved, but not all: • Information that's not applicable to ESXi will not be preserved, e.g. • /etc/yp.conf (no NIS in ESXi) • /etc/sudoers (no sudo in ESXi) • Any additional custom configuration files will not be preserved, e.g. • Any scripts added to /etc/rc.d 50 Confidential
  • ESXi Migration and Third-Party Software Supported components • Upgrade of third-party components limited to • Cisco Nexus 1000v • EMC PowerPath • During upgrade, if either of these is detected on starting host • Target ESXi image is checked for presence of these modules • If found, upgrade proceeds • If not found, option provided to override and proceed • Otherwise, halt All other components • Starting host not checked for other third-party software • Upgrade process will not preserve anything • Up to Admins to take care of replacing51 Confidential
  • Upgrade Compatibility Gives administrators the flexibility to upgrade the environment in a phased manner. Feature | ESX/ESXi 4.x | ESXi 5.x: VMware Tools 4.x – Yes / Yes; VMware Tools 5.x – Yes / Yes; VMFS-3 – Yes / Yes; VMFS-5 – No / Yes; Virtual Hardware¹ – 3, 4, 7 / 4, 7, 8. (1. ESXi 5.0 supports upgrading Virtual Hardware version 3 and later.) 52 Confidential
  • Agenda: vSphere 5.0 Platform Welcome ESXi CLI ESXi Firewall Image Builder Auto Deploy vSphere Update Manager Platform Enhancements53 Confidential
  • New Virtual Machine Features vSphere 5.0 supports the industry's most capable virtual machines. VM Scalability: 32 virtual CPUs per VM and 1TB RAM per VM – 4x the previous capabilities! Richer Desktop Experience: 3D graphics, client-connected USB devices, USB 3.0 devices, Smart Card readers for VM console access. Broader device coverage and other new features: VM BIOS boot order config API and PowerCLI interface, EFI BIOS, UI for multi-core virtual CPUs, support for Mac OS X servers, extended VMware Tools compatibility. (Items that require HW version 8 were shown in orange on the original slide.) 54 Confidential
  • vSphere 5 Availability55 Confidential
  • Running Business-Critical Applications with Confidence vSphere HA provides the right availability services with groundbreaking simplicity for any application Allows for: • Protection of Tier 1 Applications • Restart of VM upon Application Failure • VM High Availability • Virtual Machine Health Monitoring • Host High Availability • Host Monitoring • Zero downtime VM recovery upon host failure56 Confidential
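  A minimal PowerCLI sketch of turning on the HA protection described above for an existing cluster; the cluster name is a placeholder and per-VM monitoring settings are left at their defaults.

    $cluster = Get-Cluster -Name "ClusterB"

    # Enable host monitoring/restart of VMs and admission control for failover capacity
    Set-Cluster -Cluster $cluster -HAEnabled:$true -HAAdmissionControlEnabled:$true -Confirm:$false

    # VM Health Monitoring and heartbeat-datastore selection are configured per cluster
    # in the vSphere Client (Cluster Settings > vSphere HA).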
  • Release Enhancement Summary Complete re-write of vSphere HA Provides a foundation for increased scale and functionality • Eliminates common issues (DNS resolution) Multiple Communication Paths • Can leverage storage as well as the mgmt network for communications • Enhances the ability to detect certain types of failures and provides redundancy IPv6 Support Enhanced Error Reporting • One log file per host eases troubleshooting efforts Enhanced User Interface Enhanced Deployment Mechanism57 Confidential
  • vSphere HA Primary Components Every host runs an agent. • Referred to as 'FDM' or Fault Domain Manager • One of the agents within the cluster is chosen to assume the role of the Master • There is only one Master per cluster during normal operations • All other agents assume the role of Slaves. There is no more Primary/Secondary concept with vSphere HA. 58 Confidential
  • The Master Role An FDM Master monitors: • ESX hosts and virtual machine availability. • All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted. • The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it. An FDM Master manages: • The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster. • The list of protected VMs. The Master updates this list after each user-initiated power-on or power-off. 59 Confidential
  • The Slave Role A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master. It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring. It monitors the health of the Master; if the Master should fail, it participates in the election process for a new Master. It maintains a list of powered-on VMs. 60 Confidential
  • Storage-Level Communications One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication. The datastores used for this are referred to as ‗Heartbeat Datastores‘. ESX 01 ESX 03 This provides for increased communication redundancy. Heartbeat datastores are used as a communication channel only when the management network is lost - such as in the case of isolation or network partitioning. ESX 02 ESX 0461 Confidential
  • Storage-Level Communications Heartbeat Datastores allow a Master to: • Monitor availability of Slave hosts and the VMs running on them. • Determine whether a host has become network isolated rather than network ESX 01 ESX 03 partitioned. • Coordinate with other Masters - since a VM can only be owned by only one master, masters will coordinate VM ownership thru datastore communication. • By default, vCenter will automatically pick 2 datastores. These 2 datastores can also be selected by the user. ESX 02 ESX 0462 Confidential
  • Storage-Level Communications Host availability can be inferred differently, depending on storage used: • For VMFS datastores, the Master reads the VMFS heartbeat region. • For NFS datastores, the Master monitors ESX 01 ESX 03 a heartbeat file that is periodically touched by the Slaves. Virtual Machine Availability is reported by a file created by each Slave which lists the powered on VMs. Multiple Master Coordination is done by using file locks on the datastore. ESX 02 ESX 0463 Confidential
  • VM Protection States A protected VM is a VM for which vSphere HA guarantees that a restart attempt will be made in the event of a failure. A VM becomes protected when vCenter is informed by the Master that the VM is protected. • When vCenter detects that the VM is powered on, it informs the Master. The Master then updates its list of protected VMs, after which it informs vCenter that the VM is protected. • When a VM is powered off, the process is repeated and the VM is considered not protected. This is a change from previous versions of vSphere HA, where the power-on task for a VM would not complete until HA became aware that this was a protected VM. • This allows power-on tasks to complete faster, even if the VM has not yet been designated as protected at the time the task completes. • It does create an explicit dependency on vCenter. 64 Confidential
  • VM Protection Flow When a VM is first powered on, it goes into the unprotected state. It stays unprotected until the Master tells vCenter that it has written the protection information to disk. Periodically (e.g., once every 5 minutes), vCenter compares its list to the protected VM list last reported by the Master; if any deltas exist, vCenter updates the Master. A VM becomes unprotected when: • It is powered off. • It is vMotioned out of the cluster. • Its host is disconnected from vCenter. • Its host is put into Maintenance Mode. • When a host is placed into Maintenance Mode, the summary screen of the host displays the fact that the HA agent has been disabled. 65 Confidential
  • HA States A new host property to report the HA state of a host. The state is reported on host summary panel and optionally in the host list. Possible States include: • N/A (HA not configured) • Election (Master election in progress) • Master (Can be more than one) • Connected (To Master over network) • Network Partitioned • Network Isolated • Dead • Agent Unreachable • Initialization Error • Unconfig Error66 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info Cluster • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)67 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)68 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Admission Control: Failover Host(s)69 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)70 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)71 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)72 Confidential
  • UI Changes Cluster Summary Screen • Advanced Runtime Info • Cluster Status • Configuration Issues Cluster – Hosts tab VM Summary: HA Protection Cluster Configuration: Datastore Heartbeating Admission Control: Failover Host(s)73 Confidential
  • vSphere 5 DRS and vMotion74 Confidential
  • Agenda: vMotion, DRS/DPM and Resource Pools Welcome vMotion Enhancements DRS/DPM Enhancements Resource Pools75 Confidential
  • vSphere 5.0 – vMotion Enhancements The original vMotion keeps getting better! Multi-NIC Support • Support up to four 10Gbps or sixteen 1Gbps NICs. (ea. NIC must have its own IP). • Single vMotion can now scale over multiple NICs. (load balance across multiple NICs). • Faster vMotion times allow for a higher number of concurrent vMotions. Reduced Application Overhead • Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success. • Ensures less than 1 Second switchover time in almost all cases. Support for higher latency networks (up to ~10ms) • Extend vMotion capabilities over slower networks.76 Confidential
  • vSphere 5.0 – vMotion Enhancements Improved Error Reporting • More detailed logging allows for better troubleshooting. • Access detailed error messages directly from VC through VOB integration. Improved Resource Pool Integration • vMotion now puts VMs in the proper resource pool vs. waiting for DRS.77 Confidential
  • Agenda: vMotion, DRS/DPM and Resource Pools Welcome vMotion Enhancements DRS/DPM Enhancements Resource Pools78 Confidential
  • vSphere 5.0 – DRS/DPM Enhancements DRS/DPM improvements focus on cross-product integration. • Introduce support for "Agent VMs." • An Agent VM is a special-purpose VM tied to a specific ESXi host. • An Agent VM cannot / should not be migrated by DRS or DPM. • Special handling of Agent VMs is now afforded by DRS & DPM. A DRS/DPM cluster hosting Agent VMs: • Accounts for Agent VM reservations (even when powered off). • Waits for Agent VMs to be powered on and ready before placing client VMs. • Will not try to migrate an Agent VM (Agent VMs are pinned to their host). Maintenance Mode / Standby Mode Support • Agent VMs do not have to be evacuated for a host to enter maintenance or standby mode. • When a host enters maintenance/standby mode, Agent VMs are powered off (after client VMs are evacuated). • When a host exits maintenance/standby mode, Agent VMs are powered on (before client VMs are placed). 79 Confidential
  • Agenda: vMotion, DRS/DPM and Resource Pools Welcome vMotion Enhancements DRS/DPM Enhancements Resource Pools80 Confidential
  • vSphere 5.0 – Resource Pool Enhancements Resource Pool improvements focus on consistency and usability. Resource Pool management is now consistent for clustered and non-clustered hosts being managed by vCenter Server. • In the past, resource pool settings were stored on the hosts when not part of a cluster and in vCenter once placed in a cluster. • Led to confusion as behavior was different for non-cluster and cluster hosts. • In 5.0, Resource Pool settings are now stored in vCenter for both non-clustered hosts and clustered hosts. • This also enables support for Auto Deploy hosts running in a standalone/non-clustered mode. We now prevent direct host access to resource pool settings when host is managed by vCenter Server. • Attempts to modify Resource Pool settings outside of vCenter are now blocked. • In the past, host level changes would appear to succeed only to be ignored/overridden by vCenter, leading to confusion. • UI now shows if a host is being managed through vCenter or managed locally (direct access).81 Confidential
  • vSphere 5.0 – Resource Pool Enhancements Improved behavior when host is disconnected from vCenter Server. • If host loses access to vCenter (i.e. server outage or network failure), user can connect directly to the host and override vCenter to take control. • No longer requires restarting vpxa.82 Confidential
  • vSphere 5 Networking83 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements84 Confidential
  • New Networking Features Two broad categories of features Network Discovery and Visibility/Monitoring features • LLDP • NetFlow • Port Mirror I/O Consolidation (10 Gig) related features • New traffic types • User Defined Network Resource Pool (VM traffic) • Host Based Replication traffic • 802.1p Tagging (QoS)85 Confidential
  • Agenda: Networking Section  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements86 Confidential
  • What Is a Discovery Protocol? A discovery protocol is a data link layer network protocol used to discover the capabilities of network devices. Discovery protocols allow customers to automate the deployment process in a complex environment through the ability to • Discover the capabilities of network devices • Discover the configuration of neighboring infrastructure The vSphere infrastructure supports the following discovery protocols: • CDP • LLDP LLDP is a standards-based, vendor-neutral discovery protocol (IEEE 802.1AB). 87 Confidential
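  A hedged sketch of enabling LLDP on a vSphere Distributed Switch from PowerCLI. The distributed-switch cmdlets shown here ship with the VDS PowerCLI module added in later PowerCLI releases; on vSphere 5.0 the same setting is available in the vSphere Client under the vDS properties (Discovery Protocol). The switch name is a placeholder.

    Get-VDSwitch -Name "dvSwitch01" |
        Set-VDSwitch -LinkDiscoveryProtocol "LLDP" -LinkDiscoveryProtocolOperation "Both"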
  • LLDP Neighbour Info – vSphere Side Sample output using LLDPD Utility88 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements89 Confidential
  • What Is NetFlow? NetFlow is a networking protocol that collects IP traffic information as records and sends them to third-party collectors such as CA NetQoS, NetScout, etc. (Diagram: VM traffic from VMs on a VDS host flows over a trunk to the physical switch, with a NetFlow session to the collector.) The collector/analyzer reports on various information such as: • Current top flows consuming the most bandwidth • Which flows are behaving irregularly • Number of bytes a particular flow has sent and received in the past 24 hours 90 Confidential
  • NetFlow Usage NetFlow helps customers monitor application flows and measure application performance over time. It also helps in capacity planning and in ensuring that I/O and network resources are utilized appropriately by different applications. The NetFlow capability in the vSphere infrastructure provides complete visibility into virtual infrastructure traffic: • VM-to-VM traffic on the same host • VM traffic across hosts • VM-to-physical infrastructure traffic This visibility into virtual infrastructure traffic allows customers to • Perform security and compliance analysis • Do profiling and billing • Perform intrusion detection and prevention, and network forensics 91 Confidential
  • What Is a Flow? A flow is a sequence of packets that share seven properties: • Source & destination IP address • Source & destination port • Input & output interface ID • Protocol By definition, a flow is unidirectional. Flows are processed and stored by supported network devices as flow records, which are then sent to a NetFlow collector for additional analysis. 92 Confidential
  • NetFlow with Third-Party Collectors (Diagram: a vDS host exporting internal and external flows over NetFlow sessions to third-party collectors such as NetScout nGenius and CA NetQoS.) 93 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements94 Confidential
  • What Is Port Mirroring? Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected to another switch port. Port mirroring is also referred to as SPAN (Switched Port Analyzer) on Cisco switches. Port mirroring overcomes the limitations of promiscuous mode • By providing granular control over which traffic can be monitored • Ingress source • Egress source It helps in troubleshooting network issues by providing access to: • Intra-host VM traffic • Inter-host VM traffic 95 Confidential
  • Port Mirror Traffic Flow When the Mirror Destination Is a VM (Diagram: ingress and egress source examples for both intra-host and inter-host VM traffic, with the mirrored flow delivered via the VDS to a destination VM or external system.) 96 Confidential
  • Traffic Flow When the Mirror Destination Is an Uplink Port (Encapsulation VLAN Session) (Diagram: the uplink carries both normal and mirrored traffic, with the mirrored traffic encapsulated in a dedicated VLAN towards the physical switch and monitor.) 97 Confidential
  • Traffic Flow When the Mirror Destination Is an Uplink Port (Allow Normal I/O Disabled) (Diagram: the uplink is dedicated to mirrored traffic towards the physical switch and monitor.) 98 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements99 Confidential
  • What Is Network I/O Control (NETIOC)? Network I/O control is a traffic management feature of vSphere Distributed Switch (vDS). In consolidated I/O (10 gig) deployments, this feature allows customers to: • Allocate Shares and Limits to different traffic types. • Provide Isolation • One traffic type should not dominate others • Guarantee Service Levels when different traffic types compete Enhanced Network I/O Control — vSphere 5.0 builds on previous versions of Network I/O Control feature by providing: • User-defined network resource pools • New Host Based Replication Traffic Type • QoS tagging100 Confidential
  • Network I/O Control (NETIOC) Usage When customers are deploying Tier 1 Apps on virtual infrastructure they can utilize this advanced feature to reserve I/O resources for those important business critical applications giving them SLA guarantees. Service providers who are deploying public clouds and are serving multiple tenants can now provision I/O resources per tenant based on each tenant‘s need.101 Confidential
  • NETIOC VM Groups (Diagram: a 20 Gig total-bandwidth vSphere Distributed Switch with Network I/O Control dividing two 10 GigE uplinks among vMotion, iSCSI, NFS, FT, HBR and VM traffic, plus user-defined resource pools VMRG1, VMRG2 and VMRG3.) 102 Confidential
  • NETIOC VM Traffic (Diagram: per-traffic-type resource pool settings on the vSphere Distributed Switch – each traffic type (vMotion, management, NFS, iSCSI, FT, VR, VM and per-tenant pools) is assigned shares, an optional limit in Mbps and an optional 802.1p tag; limits are enforced per team by the shaper, shares are enforced per uplink by the scheduler, with load-based teaming across uplinks.) 103 Confidential
  • Network I/O Control Enhancements – User-Defined Network RP104 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements105 Confidential
  • What Is 802.1p Tagging (QoS) ? 802.1p is an IEEE standard for enabling Quality of Service at MAC level. 802.1p tagging helps provide end-to-end Quality of Service when: • All network switches in the environment treat traffic according to the tags • Tags are added based on the priority of the application/workload Customers will now be able to tag any traffic flowing out of the vSphere infrastructure.106 Confidential
  • 802.1p Tag for Resource Pool The vSphere infrastructure does not provide QoS based on these tags. The vDS simply tags the packets according to the resource pool setting; it is up to the physical switch to understand the tag and act upon it. 107 Confidential
  • Agenda: vSphere 5.0 Networking  Introduction  LLDP  NetFlow  Port Mirror  NETIOC – New Traffic Types  802.1p Tagging – QoS  Performance Improvements108 Confidential
  • Performance Improvements Multicast performance improvements • Multiple VMs on an ESX host receiving multicast traffic from the same source will see improved • Throughput • CPU efficiency TCP/IP stack improvements • vmknics will see the following improvements • Higher throughput with small messages • Better IOPS scaling for iSCSI traffic 109 Confidential
  • Key Takeaways Features addressing network administrators' concerns • Visibility into the virtual infrastructure • Troubleshooting and monitoring support • Automation in network configuration and management Enhanced features for Network I/O Control • Ability to carve up I/O resources (SLAs) • End-to-end QoS Available only on the vSphere Distributed Switch 110 Confidential
  • vSphere 5 Storage111 Confidential
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet112 Confidential
  • Introduction to VMFS-5 Enhanced Scalability • Increase the size limits of the filesystem & support much larger single extent VMFS-5 volumes. • Support for single extent 64TB Datastores. Better Performance • Uses VAAI locking mechanism with more tasks. Easier to manage and less overhead • Space reclamation on thin provisioned LUNs. • Smaller sub blocks. • Unified Block size.113 Confidential
  • VMFS-5 Versus VMFS-3 Feature Comparison: 2TB+ VMFS volumes – VMFS-3: Yes (using extents) / VMFS-5: Yes; Support for 2TB+ physical RDMs – No / Yes; Unified block size (1MB) – No / Yes; Atomic Test & Set enhancements (part of VAAI, locking mechanism) – No / Yes; Sub-blocks for space efficiency – 64KB (max ~3k) / 8KB (max ~30k); Small file support – No / 1KB. 114 Confidential
  • VMFS-3 to VMFS-5 Upgrade The Upgrade to VMFS-5 is clearly displayed in the vSphere Client under Configuration → Storage view. It is also displayed in the Datastores → Configuration view. The upgrade is non-disruptive.115 Confidential
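  Besides the vSphere Client, VMFS versions can be checked and the non-disruptive upgrade driven from the command line. This is a hedged sketch using PowerCLI's Get-EsxCli; the datastore and host names are placeholders, and the exact method signature exposed by Get-EsxCli can vary between releases (the equivalent ESXi shell command is "esxcli storage vmfs upgrade -l <label>").

    # List datastores with their filesystem versions
    Get-Datastore | Select-Object Name, Type, FileSystemVersion

    # Upgrade a VMFS-3 volume in place via the esxcli namespace
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")
    $esxcli.storage.vmfs.upgrade("Datastore1")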
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet116 Confidential
  • VAAI – Introduction vStorage API for Array Integration = VAAI. VAAI's main purpose is to leverage array capabilities: • Offloading tasks to reduce overhead • Benefiting from enhanced array mechanisms The "traditional" VAAI primitives have been improved, and multiple new primitives have been introduced. Support for NAS! (Diagram: without VAAI the hypervisor moves data between LUNs through the fabric; with VAAI the array copies LUN 01 to LUN 02 directly.) 117 Confidential
  • VAAI Primitive Updates in vSphere 5.0 vSphere 4.1 shipped a default plugin for Write Same because that primitive was fully T10 compliant; ATS and Full Copy were not. • The T10 organization is responsible for SCSI standardization (SCSI-3), a standard used by many storage vendors. vSphere 5.0 has all 3 primitives, now T10 compliant, integrated in the ESXi stack. • This allows arrays which are T10 compliant to leverage these primitives with a default VAAI plugin in vSphere 5.0. It should also be noted that the ATS primitive has been extended in vSphere 5.0 / VMFS-5 to cover even more operations, resulting in even better performance and greater scalability. 118 Confidential
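  A hedged sketch of checking which VAAI primitives a device reports, using the esxcli namespace exposed through Get-EsxCli (shell equivalent: "esxcli storage core device vaai status get"). The host name is a placeholder and the output field names may differ slightly by release.

    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")

    # $null = no device filter, i.e. report all devices; the output lists the
    # ATS, Clone, Zero and Delete status per device
    $esxcli.storage.core.device.vaai.status.get($null)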
  • Introducing VAAI NAS Primitives With this primitive, we will enable hardware acceleration/offload features for NAS datastores. The following primitives are defined for VAAI NAS: • Full File Clone – Similar to the VMFS block cloning. Allows offline VMDKs to be cloned by the Filer. • Note that hot migration via Storage vMotion on NAS is not hardware accelerated. • Reserve Space – Allows creation of thick VMDK files on NAS. NAS VAAI plugins are not shipped with ESXi 5.0. These plugins will be developed and distributed by the storage vendors, but signed by the VMware certification program.119 Confidential
  • VAAI NAS: Thick Disk Creation Without the VAAI NAS primitives, only the Thin format is available. With the VAAI NAS primitives, Flat (thick), Flat pre-initialized (eager zeroed thick) and Thin formats are available. (Screenshots: the provisioning dialog without and with VAAI.) 120 Confidential
  • Introducing VAAI Thin Provisioning What are the driving factors behind VAAI Thin Provisioning? • Provisioning new LUNs to a vSphere environment (cluster) is complicated. • Often requires involvement from multiple people, creating delays in the provisioning process Strategic Goal: • We want to make the act of physical storage provisioning in a vSphere environment extremely rare. • LUNs should be extended across a large address space and able to handle any VM workload. VAAI TP features include: • Dead space reclamation. • Monitoring of the space.121 Confidential
  • VAAI Thin Provisioning – Dead Space Reclamation Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion. vSphere conveys block information to the storage system via VAAI, and the storage system reclaims the dead blocks. • Storage vMotion, VM deletion and swap file deletion can trigger the thin LUN to free physical space. • ESXi 5.0 uses a standard SCSI command for dead space reclamation. (Diagram: blocks freed on VMFS volume A after migration to VMFS volume B are reclaimed on the array.) 122 Confidential
  • Current "Out Of Space" User Experience (before the VAAI extensions): no space-related warnings, no mitigation steps available; on space exhaustion, VMs and the LUN go offline. 123 Confidential
  • "Out Of Space" User Experience with VAAI Extensions: a space exhaustion warning appears in the UI; the admin can evacuate with Storage vMotion or add space; on space exhaustion, only the affected VMs are paused while the LUN stays online awaiting space allocation. 124 Confidential
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet125 Confidential
  • Storage vMotion – Introduction In vSphere 5.0, a number of new enhancements were made to Storage vMotion. • Storage vMotion will work with Virtual Machines that have snapshots, which means coexistence with other VMware products & features such as VDR & HBR. • Storage vMotion will support the relocation of linked clones. • Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode & Storage Load Balancing (Space or Performance).126 Confidential
  • Storage vMotion Architecture Enhancements (1 of 2) In vSphere 4.1, Storage vMotion uses the Changed Block Tracking (CBT) method to copy disk blocks between source & destination. The main challenge in this approach is that the disk pre-copy phase can take a while to converge, and can sometimes result in Storage vMotion failures if the VM was running a very I/O intensive load. Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy mechanism. In vSphere 5.0, Storage vMotion uses a new mirroring architecture to provide the following advantages over previous versions: • Guarantees migration success even when facing a slower destination. • More predictable (and shorter) migration time.127 Confidential
  • Storage vMotion Architecture Enhancements (2 of 2) (Diagram: the mirror driver sits in the VMkernel below the VMM/guest OS and mirrors guest writes to both the source and destination disks, while the userworld datamover copies the remaining blocks.) 128 Confidential
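  For reference, a minimal PowerCLI sketch of invoking a Storage vMotion; the VM name, target datastore and disk format are placeholders. With Storage DRS (covered later in this deck) the target can be a datastore cluster instead of an individual datastore.

    # Relocate a running VM's storage, optionally converting the disks to thin format
    Move-VM -VM (Get-VM -Name "App01") `
            -Datastore (Get-Datastore -Name "Datastore2") `
            -DiskStorageFormat Thin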
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet129 Confidential
  • Storage I/O Control Phase 2 and Refreshing Memory In many customer environments, storage is mostly accessed from storage arrays over SAN, iSCSI or NAS. One ESXi host can affect the I/O performance of others by issuing a large number of requests on behalf of one of its virtual machines. The throughput/bandwidth available to each ESXi host can therefore vary drastically, leading to highly variable I/O performance for VMs. To ensure stronger I/O guarantees, Storage I/O Control was introduced in vSphere 4.1 for block storage; it guarantees an allocation of I/O resources on a per-VM basis. As of vSphere 5.0 we also support SIOC for NFS-based storage! This capability is essential to provide better performance for I/O-intensive and latency-sensitive applications such as database workloads, Exchange servers, etc. 130 Confidential
  • Storage I/O Control Refreshing Memory (Diagram: "what you see" versus "what you want to see" – without SIOC, online store, Microsoft Exchange and data mining VMs contend freely for the same NFS/VMFS datastore; with SIOC, the important (VIP) workloads get their share of the datastore's I/O.) 131 Confidential
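  A hedged PowerCLI sketch of enabling Storage I/O Control on a datastore and adjusting the congestion (latency) threshold; the datastore name and the 30 ms value are placeholders. Per-VM shares and limits are then set on each VM's virtual disks.

    Get-Datastore -Name "NFS-Datastore01" |
        Set-Datastore -StorageIOControlEnabled:$true -CongestionThresholdMillisecond 30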
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet132 Confidential
  • What Does Storage DRS Solve? Without Storage DRS: • Identify the datastore with the most disk space and lowest latency. • Validate which virtual machines are placed on the datastore and ensure there are no conflicts. • Create the virtual machine and hope for the best. With Storage DRS: • Automatic selection of the best placement for your VM. • Advanced balancing mechanism to avoid storage performance bottlenecks or "out of space" problems. • VM or VMDK affinity rules. 133 Confidential
  • What Does Storage DRS Provide? Storage DRS provides the following: 1. Initial Placement of VMs and VMDKS based on available space and I/O capacity. 2. Load balancing between datastores in a datastore cluster via Storage vMotion based on storage space utilization. 3. Load balancing via Storage vMotion based on I/O metrics, i.e. latency. Storage DRS also includes Affinity/Anti-Affinity Rules for VMs and VMDKs; • VMDK Affinity – Keep a VM‘s VMDKs together on the same datastore. This is the default affinity rule. • VMDK Anti-Affinity – Keep a VM‘s VMDKs separate on different datastores. • Virtual Machine Anti-Affinity – Keep VMs separate on different datastores. Affinity rules cannot be violated during normal operations.134 Confidential
  • Datastore Cluster An integral part of SDRS is the creation of a group of datastores called a datastore cluster. • Datastore cluster without Storage DRS – simply a group of datastores. • Datastore cluster with Storage DRS – a load-balancing domain similar to a DRS cluster. A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS which makes it more than just a folder. (Example: four 500GB datastores aggregated into a 2TB datastore cluster.) 135 Confidential
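  A hedged sketch of building a datastore cluster from PowerCLI. The datastore-cluster cmdlets shown here were added in a later PowerCLI release; on vSphere 5.0 the same steps are performed in the vSphere Client's New Datastore Cluster wizard. All names are placeholders.

    # Create the datastore cluster and add existing datastores to it
    $dsc = New-DatastoreCluster -Name "DatastoreCluster01" -Location (Get-Datacenter "DC01")
    Get-Datastore -Name "Vol1","Vol2","Vol3","Vol4" | Move-Datastore -Destination $dsc

    # Turn on Storage DRS; space/I/O thresholds stay at their defaults (80%, 15 ms)
    Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated -IOLoadBalanceEnabled:$true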
  • Storage DRS Operations – Initial Placement (1 of 4) Initial placement – VM/VMDK create/clone/relocate. • When creating a VM you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore. • SDRS selects a datastore based on space utilization and I/O load. • By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to place VMDKs on different datastores. (Example: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.) 136 Confidential
  • Storage DRS Operations – Load Balancing (2 of 4)Load balancing – SDRS triggers on space usage & latency threshold. Algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded. • Space utilization statistics are constantly gathered by vCenter, default threshold 80%. • I/O load trend is currently evaluated every 8 hours based on a past day history, default threshold 15ms. Load Balancing is based on I/O workload and space which ensures that no datastore exceeds the configured thresholds. Storage DRS will do a cost / benefit analysis! For I/O load balancing Storage DRS leverages Storage I/O Control functionality.137 Confidential
  • Storage DRS Operations – Thresholds (3 of 4)138 Confidential
  • Storage DRS Operations – Datastore Maintenance Mode Datastore Maintenance Mode • Evacuates all VMs & VMDKs from the selected datastore. • Note that this action will not move VM templates. • Currently, SDRS only handles registered VMs. (Example: placing VOL1 of a datastore cluster of VOL1–VOL4 into maintenance mode.) 139 Confidential
  • Storage DRS Operations (4 of 4) Affinity rules within a datastore cluster: VMDK affinity – keep a virtual machine's VMDKs together on the same datastore; maximizes VM availability when all disks are needed in order to run; on by default for all VMs. VMDK anti-affinity – keep a VM's VMDKs on different datastores; useful for separating the log and data disks of database VMs; can select all or a subset of a VM's disks. VM anti-affinity – keep VMs on different datastores; similar to DRS anti-affinity rules; maximizes availability of a set of redundant VMs. 140 Confidential
  • SDRS Scheduling SDRS allows you to create a schedule to change its settings. This is useful for scenarios where you don't want VMs to migrate between datastores, or when I/O latency is expected to rise temporarily and would skew the statistics, e.g. during VM backups. 141 Confidential
  • So What Does It Look Like? Provisioning…142 Confidential
  • So What Does It Look Like? Load balancing. The Storage DRS tab shows "utilization before" and "after". There is always the option to override the recommendations. 143 Confidential
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet144 Confidential
  • What Is the vStorage API for Storage Awareness (VASA)? VASA is an extension of the vSphere Storage APIs: a set of vCenter-based extensions. It allows storage arrays to integrate with vCenter for management functionality via server-side plug-ins or Vendor Providers. This in turn allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. VASA enables several features. • For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage. • It also provides array-internal information that helps several Storage DRS use cases work optimally with various arrays. 145 Confidential
  • Storage Compliancy Once the VASA Provider has been successfully added to vCenter, the VM Storage Profiles should also display the storage capabilities provided to it by the Vendor Provider. The above example contains a ‗mock-up‘ of some possible Storage Capabilities as displayed in the VM Storage Profiles. These are retrieved from the Vendor Provider.146 Confidential
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet147 Confidential
  • Why Profile Driven Storage? (1 of 2) Problem Statement 1. Difficult to manage datastores at scale • Including: capacity planning, differentiated data services for each datastore, maintaining capacity headroom, etc. 2. Difficult to correctly match VM SLA requirements to available storage • Because: manually choosing between many datastores and more than one storage tier • Because: VM requirements are not accurately known or may change over the VM's lifecycle Related trends • Newly virtualized Tier-1 workloads need stricter VM storage SLA promises • Because: other VMs can impact the performance SLA • Scale-out storage mixes VMs with different SLAs on the same storage 148 Confidential
  • Why Profile Driven Storage? (2 of 2) Save OPEX by reducing repetitive planning and effort! Minimize per-VM (or per VM request) "thinking" or planning for storage placement. • Admin needs to plan for optimal space and I/O balancing for each VM. • Admin needs to identify VM storage requirements and match them to physical storage properties. Increase the probability of "correct" storage placement and use (minimize the need for troubleshooting, minimize time spent troubleshooting). • Admin needs more insight into storage characteristics. • Admin needs the ability to custom-tag available storage. • Admin needs an easy means to identify incorrect VM storage placement (e.g. on an incorrect datastore). 149 Confidential
  • Save OPEX by Reducing Repetitive Planning and Effort! Today: initial setup (identify storage characteristics, group datastores), then for every VM – identify requirements, find the optimal datastore, create the VM, and periodically check compliance. With Storage DRS + Profile Driven Storage: initial setup (discover storage characteristics, group datastores), then for every VM – select a VM storage profile, create the VM, and periodically check compliance. 150 Confidential
  • Key Concepts Overview (Diagram: a vSphere admin using vCenter creates VMDK storage tiers/profiles for production OS and data VMDKs. A profile can reference system capabilities surfaced by the storage vendors through the vStorage APIs – e.g. "vendor1_Tier1: RAID 5, replicated" or "vendor2_TypeA: RAID 10, non-replicated" – as well as user-defined datastore capabilities such as "DR protected", across a heterogeneous storage backend (EMC, EQL, NTAP, …).) 151 Confidential
  • Selecting a Storage Profile During Provisioning By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible. The Celerra_NFS datastore is the only datastore which meets the GOLD Profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.152 Confidential
  • Storage Capabilities & VM Storage Profiles (Diagram: a VM Storage Profile, referencing storage capabilities surfaced by VASA or user-defined, is associated with a VM; the VM is compliant when it resides on a datastore with those capabilities and not compliant otherwise.) 153 Confidential
  • VM Storage Profile Compliance Policy Compliance is visible from the Virtual Machine Summary tab.154 Confidential
  • Agenda: vStorage – What‘s New Introduction VMFS-5 vStorage API for Array Integration Storage vMotion Storage I/O Control Storage DRS VMware API for Storage Awareness Profile Driven Storage FCoE – Fibre Channel over Ethernet155 Confidential
  • Introduction Fiber Channel over Ethernet (FCoE) is an enhancement that expands Fiber Channel into the Ethernet by combining two leading-edge technologies (FC and the Ethernet) The FCoE adapters that VMware supports generally fall into two categories, hardware FCoE adapters and software FCoE adapters which uses an FCoE capable NIC • Hardware FCoE adapters were supported as of vSphere 4.0. The FCoE capable NICs are referred to as Converged Network Adapters (CNAs) which facilitate network and storage traffic. ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.156 Confidential
  • Software FCoE Adapters (1 of 2) A software FCoE adapter is software code that performs some of the FCoE processing. It can be used with a number of NICs that support partial FCoE offload. Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI. 157 Confidential
  • Software FCoE Adapters (2 of 2) Once the Software FCoE is enabled, a new adapter is created, and discovery of devices can now take place.158 Confidential
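  A hedged sketch of activating the software FCoE adapter from the command line via the esxcli namespace exposed through Get-EsxCli (shell equivalents: "esxcli fcoe nic list" / "esxcli fcoe nic discover -n vmnicX"). The host and vmnic names are placeholders, and exact method arguments can vary by release.

    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")

    $esxcli.fcoe.nic.list()               # NICs with partial FCoE offload support
    $esxcli.fcoe.nic.discover("vmnic4")   # activate the software FCoE adapter on vmnic4
    $esxcli.fcoe.adapter.list()           # the new FCoE adapter should now appear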
  • Conclusion vSphere 5.0 has many new compelling storage features. VMFS volumes can be larger than ever before. • They can contain many more virtual machines due to VAAI enhancements and architectural changes. Storage DRS and Profile Driven Storage will help solve traditional problems with virtual machine provisioning. The administrative overhead will be severely reduced. • VASA surfacing storage characteristics. • Creating Profiles through Profile Driven Storage. • Combining multiple datastores in a large aggregate.159 Confidential
  • vSphere 5 vCenter Server160 Confidential
  • Agenda: What‘s New in vCenter Server vSphere Web Client • Overview • Architecture • New Functionality • Summary vCenter Server Appliance • Introduction • Components and Features • Deployment/Management • Summary vCenter Heartbeat161 Confidential
  • vSphere Web Client Architecture The vSphere Web Client runs within a browser. A Flex application server provides a scalable back end. The Query Service obtains optimized data live from the core vCenter Server process, with vCenter in either single or Linked Mode operation. 162 Confidential
  • Why Flex? Flex provides us with the richest and fullest-featured development platform available. • Extensive amount of libraries to use • Technologies such as HTML5 and others are still in development • Provides the best performance • Scales to the web Comparison – Web Client vs. Windows Client: Scalability – 50 VCs / 100,000 VMs vs. 10 VCs / 10,000 VMs; Platform independence – browser-based (Windows, Linux) vs. Windows native; Extensibility – rich extension points vs. one plug-in. 163 Confidential
  • Extension Points Launch bar, tabs, inventory objects and sidebar portlets are all extension points; partners can create custom actions and add right-click extensions. 164 Confidential
  • Features of the vSphere Web Client Customize the GUI • Create custom views to reflect the information you need to see, the way you like to see it165 Confidential
  • Features of the vSphere Web Client Ready Access to Common Actions • Quick access to common tasks provided out of the box166 Confidential
  • Features of the vSphere Web Client Support interrupt driven workflows • Allow jumping in and out of workflows easily – continuing exactly from where you left off without having to repeat a process167 Confidential
  • Features of the vSphere Web Client Advanced Search Functionality • New advanced search functionality allows administrators to save and run search find information quickly – Even across multiple vCenters!168 Confidential
  • Features of the vSphere Web Client Extendable Functionality • Possible for partners and end users to add features and functionality Easily create new tabs for information Create portlets for instant access to information169 Confidential
  • Current Use Case The vSphere Web Client is tailored to meet the needs of VM administrators in the first release. This includes: • VM Management • VM Provisioning • Edit VM, VM power ops, snapshots, migration • VM Resource Management • View all vSphere objects (hosts, clusters, datastores, folders, etc.) • Basic health monitoring • Viewing the VM console remotely • Search through large, complex environments • Save search queries, and quickly run them to find detailed information • vApp Management • vApp provisioning, vApp editing, vApp power operations 170 Confidential
  • Summary The vSphere Web Client enables you to respond to business faster • Provides a common, cross platform capable user experience • Enables admins to accomplish tasks more effectively • Customize the GUI • Ready Access to Common Actions • Support interrupt driven workflows • Advanced Search Functionality • Scales to Cloud levels • Easily Extensible171 Confidential
  • Agenda: What‘s New in vCenter Server vSphere Web Client • Overview • Architecture • New Functionality • Summary vCenter Server Appliance • Introduction • Components and Features • Deployment/Management • Summary vCenter Heartbeat172 Confidential
  • Customer Feedback Customers consistently provide feedback on their desire to: Simplify Management • Upgrades/Patching • Standardization Reduce Deployment Costs • Deployment Overhead • Licensing173 Confidential
  • Introducing vCenter Server Appliance The vCenter Server Appliance is the answer! • Simplifies deployment and configuration • Streamlines patching and upgrades • Reduces the TCO for vCenter Enables companies to respond to business faster! (Diagram: VMware vCenter Server Virtual Appliance – Automation, Visibility, Scalability.) 174 Confidential
  • Component Overview The vCenter Server Appliance (VCSA) consists of: • A pre-packaged 64-bit application running on SLES 11 • Distributed with sparse disks • Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed • Memory footprint • A built-in enterprise-level database, with optional support for a remote Oracle database • Limits are the same for vCenter Server on Windows and the VCSA: • Embedded DB – 5 hosts / 50 VMs • External DB – <1,000 hosts / <10,000 VMs (64-bit) • A web-based configuration interface 175 Confidential
  • Feature Overview vCenter Server Appliance supports: • The vSphere Web Client • Authentication through AD and NIS • Feature parity with vCenter Server on Windows • Except – • Linked Mode support • Requires ADAM (AD LDS) • IPv6 support • External DB Support • Oracle is the only supported external DB for the first release • No vCenter Heartbeat support • HA is provided through vSphere HA176 Confidential
  • vCenter Server Appliance Deployment Simply deploy from an OVF template! • Installation takes ~5 minutes 177 Confidential
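  A hedged PowerCLI sketch of deploying the appliance OVF instead of using the vSphere Client wizard; the OVF path, VM name, host and datastore are placeholders. After power-on, configuration continues in the appliance's web interface as described below.

    $vmhost = Get-VMHost -Name "esx01.example.com"

    Import-VApp -Source "C:\vcsa\VMware-vCenter-Server-Appliance.ovf" `
                -Name "VCSA-01" -VMHost $vmhost `
                -Datastore (Get-Datastore -Name "Datastore1") -DiskStorageFormat Thin

    Start-VM -VM "VCSA-01"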
  • Configuration Complete configuration is possible through a powerful web-based interface!178 Confidential
  • Management Increases lifecycle management efficiency! • Upgrades consist of deploying a new vCenter Server Appliance • Built in configuration migration utilities which automatically import configuration data from previous installations • Patches can be installed from the appliance configuration web enabled GUI. • Being a virtual machine allows all the features one would expect, such as: • VMware HA for high availability • Snapshots for quick and easy backups • Self-Contained Appliance • Administrators do not (and should not) have to login to VCSA to administer.179 Confidential
  • Summary vCenter Server Appliance provides customers with: • A quick and easy deployment mechanism that provides consistent results • A reduction in deployment costs, licensing costs, and overall management • A solution that they can leverage the best features of virtualization with • And most of all… The ability to take care of business needs faster!180 Confidential
  • Agenda: What‘s New in vCenter Server vSphere Web Client • Overview • Architecture • New Functionality • Summary vCenter Server Appliance • Introduction • Components and Features • Deployment/Management • Summary vCenter Heartbeat181 Confidential
  • vCenter Heartbeat  What is it? • High availability for vCenter Server  What does it help protect against? • Failures that occur with: • Hardware, Networks, OS, vCenter Application • Loss of key vSphere features and functions if vCenter is unavailable182 Confidential
  • What‘s New in vCenter Heartbeat v6.4 Enhanced architecture allows the active and VCENTER.VMWARE.COM standby nodes to be reachable over the PRIMARY SERVER SECONDARY SERVER network at the same vCenter Heartbeat vCenter Heartbeat time, enabling both to be patched and managed. vCenter Server vCenter Server WAN MONITORING REPLICATION Data FAILOVER Data SWITCHBACK LAN OS OS HEARTBEAT SERVER1.VMWARE.COM CHANNEL SERVER2.VMWARE.COM183 Confidential
  • What‘s New in vCenter Heartbeat v6.4  Better integration with VMware vCenter Server: • A New Plug-in to the vSphere Client provides monitoring and management of vCenter Server Heartbeat from the vSphere Client • Heartbeat events will register in the vCenter Recent Tasks and display in the vSphere Client • Heartbeat alerts will register in the vCenter Alarms and display in the vSphere Client  Support for: • VMware vCenter Server v5.0 • VMware View Composer v5.0 • Microsoft SQL Server 2008 R2184 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary185 Confidential
  • Introduction (1 of 3) In vSphere 5.0, VMware releases a new storage appliance called the VSA. • VSA stands for "vSphere Storage Appliance." • This appliance is aimed at our SMB (Small-Medium Business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage. • Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware virtual infrastructure, such as vSphere HA & vMotion. • Customers who deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide them with shared storage. 186 Confidential
  • Introduction (2 of 3) (Diagram: three ESXi servers, each running a VSA virtual machine, managed by VSA Manager in the vSphere Client and each exporting an NFS datastore.) Each ESXi server has a VSA deployed to it as a virtual machine. The appliances use the available space on the local disk(s) of the ESXi servers and present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures. 187 Confidential
  • Introduction (3 of 3) The NFS datastores exported from the VSA can now be used as shared storage on all of the ESXi servers in the same datacenter. The VSA creates shared storage out of local storage for use by a specific set of hosts. This means that vSphere HA & vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers. There is a CAPEX saving for SMB customers, as there is no longer a need to purchase dedicated SAN or NAS devices to achieve shared storage. There is also an OPEX saving, as the VSA can be managed by the vSphere administrator; no dedicated SAN skills are needed to manage the appliances. The installation & configuration is also much simpler than that of a physical storage array or other storage appliances. 188 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary189 Confidential
  • Supported VSA Configurations The vSphere Storage Appliance can be deployed in two configurations: • 2 x ESXi 5.0 server configuration • Deploys 2 vSphere Storage Appliances, one per ESXi server, plus a VSA Cluster Service on the vCenter server • 3 x ESXi 5.0 server configuration • Deploys 3 vSphere Storage Appliances, one per ESXi server • Each of the servers must contain a new/vanilla install of ESXi 5.0. • During the configuration, the user selects a datacenter and is then presented with a list of ESXi servers in that datacenter. • The installer checks the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment. • The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes. • The installer then 'creates' the storage cluster by aggregating and virtualizing each server's local storage to present a logical pool of shared storage. 190 Confidential
  • Two-Member VSA (Diagram: vCenter Server runs VSA Manager and the VSA Cluster Service; each of the two VSA nodes exports one datastore – Volume 1 and Volume 2 – and holds a replica of the other node's volume.) 191 Confidential
  • Three-Member VSA (Diagram: vCenter Server runs VSA Manager; each of the three VSA nodes exports one datastore – Volumes 1, 2 and 3 – and holds a replica of another node's volume.) 192 Confidential
  • Simplified UI for VSA Cluster Configuration Once the VSA Manager installation has completed and the VSA manager plug-in is enabled in vCenter, select the datacenter in the vCenter inventory and select the VSA Manager tab. The following is displayed:193 Confidential
  • Simplified UI for VSA Cluster Configuration Wizard steps 1–4: Introduction, Datacenter Selection, ESXi Host Selection, IP Address Assignment. 194 Confidential
  • Simplified UI for VSA Cluster Configuration Wizard steps 5–6: Select Disk Format, Ready to Install. 195 Confidential
  • VSA Manager The VSA Manager helps an administrator perform the following tasks: • Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster • Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores to the ESXi hosts • Monitor, maintain, and troubleshoot a VSA cluster196 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary197 Confidential
  • Resilience Many storage arrays are a single point of failure (SPOF) in customer environments. VSA is very resilient to failures. If a node fails in the VSA cluster, another node will seamlessly take over the role of presenting its NFS datastore. The NFS datastore that was being presented from the failed node will now be presented from the node that holds its replica (mirror copy). The new node will use the same NFS server IP address that the failed node was using for presentation, so that any VMs that reside on that NFS datastore will not be affected by the failover.198 Confidential
  • Resilience Diagram (Diagram: failover in a two-member VSA cluster – when one node fails, the surviving node presents the failed node's datastore from its replica, using the same NFS server IP address.) 199 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary200 Confidential
  • Maintenance Mode There are two types of maintenance mode: • Whole VSA Cluster Maintenance Mode • Single VSA Node Maintenance Mode A user can put a particular VSA node into maintenance mode in order to reconfigure the VSA in some way, e.g. a rolling upgrade. Since only one VSA is taken offline, the storage volumes supplied by the storage cluster remain online, and there is no need to migrate any VMs whose guest operating systems use that storage. This does mean, however, that at least 2 volumes will be degraded with the loss of one VSA. 201 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary202 Confidential
  • VSA Cluster Node/Member Replacement Due to various reasons, a VSA cluster member might stop responding. If a VSA cluster member stops responding or powers off, its status changes to Offline in the VSA Manager tab. Different reasons might contribute to the Offline status. If an admin cannot bring the VSA Cluster member back online by resetting it, another option available to the admin is to replace the VSA cluster member.203 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary204 Confidential
  • VSA Cluster Recovery In the event of a vCenter server loss, the VSA cluster can be recovered with a vanilla install of vCenter server if a customer does not have a good backup. There must be no configuration changes to the ESXi servers or the VSA cluster members during the vCenter server outage. The admin will have to re-install the VSA plugin. When vSphere Client is launched and the VSA tab is selected, it will contain two options (the same options visible during the initial install). In this case the admin can choose to Recover the VSA cluster.205 Confidential
  • Agenda: vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary206 Confidential
  • vSphere Storage Appliance: Summary Simple manageability – installed, configured and managed via vCenter; abstraction from the underlying hardware. Delivers high availability – resilient to server failures; highly available during disk (spindle) failure; provides a storage framework for vMotion, HA and DRS. Creates shared storage – pools server disk capacity to form shared storage; leverages vSphere thin provisioning for space utilization; enables storage scalability. 207 Confidential
  • Questions?208 Confidential