The VSA provides shared storage for SMB customers without requiring a separate SAN or NAS device. It deploys virtual storage appliances on each ESXi host that replicate data across hosts, providing resilience to failures. The VSA manager in vCenter automates deployment and management of the VSA cluster, mounting NFS datastores for use across all ESXi hosts. This allows features like vMotion and HA without a dedicated storage device.
2. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
3. Introduction (1 of 3) In vSphere 5.0, VMware released a new storage appliance called the VSA. VSA stands for "vSphere Storage Appliance". This appliance is aimed at our SMB (Small-Medium Business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage. It is the SMB market that we wish to go after with this product; our aim is to move these customers from Essentials to Essentials+. Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware Virtual Infrastructure, such as vSphere HA & vMotion. Customers who decide to deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide them with shared storage.
4. Introduction (2 of 3) vSphere vSphere vSphere vSphere Client VSA VSA VSA VSA Manager NFS NFS NFS Each ESXi server has a VSA deployed to it as a Virtual Machine. The appliances use the available space on the local disk(s) of the ESXi servers & present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.
5. Introduction (3 of 3) The NFS datastores exported from the VSA can now be used as shared storage on all of the ESXi servers in the same datacenter. The VSA creates shared storage out of local storage for use by a specific set of hosts. This means that vSphere HA & vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers. There is a CAPEX saving for SMB customers, as there is no longer a need to purchase a dedicated SAN or NAS device to achieve shared storage. There is also an OPEX saving, as the VSA may be managed by the vSphere Administrator; no dedicated SAN skills are needed to manage the appliances. The installation & configuration is also much simpler than that of a physical storage array or other storage appliances.
6. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
7. Supported VSA Configurations The vSphere Storage Appliance can be deployed in two configurations: 2 x ESXi 5.0 servers configuration Deploys 2 vSphere Storage Appliances, one per ESXi server, & a VSA Cluster Service on the vCenter server. 3 x ESXi 5.0 servers configuration Deploys 3 vSphere Storage Appliances, one per ESXi server. Each of the servers must contain a new/vanilla install of ESXi 5.0. During the configuration, the user selects a datacenter. The user is then presented with a list of ESXi servers in that datacenter. The installer checks the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment. The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes. It then 'creates' the storage cluster by aggregating and virtualizing each server's local storage to present a logical pool of shared storage.
8. [Diagram: VSA cluster with 2 members. The vCenter Server runs VSA Manager and the VSA Cluster Service to manage the cluster. ESXi-1 and ESXi-2 each run a VSA: one presents VSA Datastore 1 from Volume 1 and holds the replica of Volume 2; the other presents VSA Datastore 2 from Volume 2 and holds the replica of Volume 1.]
9. [Diagram: VSA cluster with 3 members. The vCenter Server runs VSA Manager to manage the cluster (no VSA Cluster Service). ESXi-1, ESXi-2 and ESXi-3 each run a VSA presenting one datastore (VSA Datastore 1/2/3 from Volume 1/2/3), and each holds the replica of another member's volume.]
10. Simplified UI for VSA Cluster Configuration Once the VSA Manager installation has completed and the VSA manager plug-in is enabled in vCenter, select the datacenter in the vCenter inventory and select the VSA Manager tab. The following is displayed:
11. Simplified UI for VSA Cluster Configuration [Wizard screenshots: 1 Introduction, 2 Datacenter Selection, 3 ESXi host Selection, 4 IP Address Assignment]
12. Simplified UI for VSA Cluster Configuration [Wizard screenshots: 5 Select Disk Format, 6 Ready to Install]
13. VSA Manager The VSA Manager helps an administrator perform the following tasks: Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores to the ESXi hosts Monitor, maintain, and troubleshoot a VSA cluster
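The automatic mount step above amounts to a cross-product: every ESXi host mounts every NFS volume the cluster exports, which is what makes the datastores "shared". The sketch below is an illustrative model only; `plan_nfs_mounts` and the host/datastore names are hypothetical, not a VMware API.

```python
# Hypothetical model of VSA Manager's automount step: each host mounts
# every exported datastore, so all datastores are visible everywhere.

def plan_nfs_mounts(hosts, exports):
    """Return every (host, datastore) pair that needs an NFS mount."""
    return [(host, export) for host in hosts for export in exports]

mounts = plan_nfs_mounts(
    hosts=["esxi-1", "esxi-2", "esxi-3"],
    exports=["VSADs-1", "VSADs-2", "VSADs-3"],
)
assert len(mounts) == 9  # 3 hosts x 3 datastores, each mounted on every host
```

In the real product this is driven by VSA Manager through vCenter; the point of the sketch is simply that shared storage here means "all hosts mount all exports", not that hosts share a single device.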
14. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
15. Resilience Many storage arrays are a single point of failure (SPOF) in customer environments. The VSA is very resilient to failures. If a node fails in the VSA cluster, another node will seamlessly take over the role of presenting its NFS datastore. The NFS datastore that was being presented from the failed node will now be presented from the node that holds its replica (mirror copy). The new node will use the same NFS server IP address that the failed node was using for presentation, so that any VMs that reside on that NFS datastore will not be affected by the failover.
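The failover behaviour above can be modelled as a simple reassignment: the volume's presenting node changes to the replica holder, while the NFS server IP stays with the volume. All names in this sketch (`fail_over`, the node and volume labels) are hypothetical, not VMware code.

```python
# Illustrative model of VSA failover: the node holding the replica takes
# over presenting the failed node's volume, keeping the same NFS server IP
# so clients' mount references never change.

def fail_over(exports, replicas, failed_node):
    """Reassign each export of the failed node to the node with its replica."""
    for volume, owner in exports.items():
        if owner["node"] == failed_node:
            owner["node"] = replicas[volume]  # replica holder becomes presenter
            # owner["ip"] is deliberately untouched: the IP moves with the volume
    return exports

exports = {"vol1": {"node": "esxi-1", "ip": "10.0.0.11"},
           "vol2": {"node": "esxi-2", "ip": "10.0.0.12"}}
replicas = {"vol1": "esxi-2", "vol2": "esxi-1"}
after = fail_over(exports, replicas, failed_node="esxi-1")
assert after["vol1"]["node"] == "esxi-2"    # replica holder now presents vol1
assert after["vol1"]["ip"] == "10.0.0.11"   # same NFS server IP, VMs unaffected
```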
16. [Diagram: Failover in a VSA cluster with 2 hosts. The vCenter Server runs VSA Manager and the VSA Cluster Service. When one member fails, the surviving VSA presents both VSA Datastore 1 and VSA Datastore 2, serving the failed member's datastore from its local replica volume.]
17. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
18. Maintenance Mode There are two types of Maintenance Mode: Whole VSA Cluster Maintenance Mode Single VSA Node Maintenance Mode A user can put a particular VSA node into maintenance mode in order to reconfigure the VSA in some way, e.g. a rolling upgrade. Since only one VSA is being taken offline, the storage volumes being supplied by the storage cluster will remain online, and there is no need to migrate any VMs that are running guest operating systems using that storage. This does mean, however, that at least 2 volumes will be degraded with the loss of one VSA.
19. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
20. VSA Cluster Node/Member Replacement A VSA cluster member might stop responding or power off for various reasons; when it does, its status changes to Offline in the VSA Manager tab. If an admin cannot bring the VSA cluster member back online by resetting it, another option available to the admin is to replace the VSA cluster member.
21. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
22. VSA Cluster Recovery In the event of a vCenter server loss, the VSA cluster can be recovered with a vanilla install of vCenter server if a customer does not have a good backup. There must be no configuration changes to the ESXi servers or the VSA cluster members during the vCenter server outage. The admin will have to re-install the VSA plugin. When the vSphere Client is launched and the VSA tab is selected, it will contain two options (the same options visible during the initial install). In this case the admin can choose to Recover the VSA cluster.
23. You Are Here vSphere Storage Appliance Introduction Installation & Configuration VSA Cluster Resilience Maintenance Mode Replacing a VSA Node Recovering a VSA Cluster Summary
24. vSphere Storage Appliance: Summary Simple manageability Installed, configured and managed via vCenter Abstraction From Underlying Hardware Delivers High Availability Resilient to server failures Highly available during disk (spindle) failure Provides storage framework for vMotion, HA and DRS Creates Shared Storage Pools server disk capacity to form shared storage Leverages vSphere thin provisioning for space utilization Enables storage scalability
Welcome. We have many customers today who purchase vSphere to do workload consolidation, but invariably the next question is about availability: how do I protect this virtualized environment? The obvious answer is to use vSphere features like vMotion & vSphere HA. For many SMB customers, purchasing a storage array to provide shared storage for vSphere availability features like vMotion, HA & DRS is usually cost-prohibitive & not an option. With the VSA, we are addressing this by providing a low-cost, simple-to-deploy shared storage solution. This will allow SMB customers to move from Essentials to Essentials+ to get vSphere availability features.
We are basically providing our SMB customers with a low-cost, easy-to-deploy storage appliance which will provide shared storage. This now means that SMB customers who want availability can purchase the VSA with Essentials+, & this will give them vMotion and vSphere HA. We are looking at an Essentials+ option which has the VSA bundled at a discount.
ESXi 5.0 Hosts Customers require two or three newly installed (green-field) ESXi 5.0 hosts to create a VSA cluster. The ESXi hosts must not have any virtual machines deployed & must have the default logical network configuration (Management portgroup & VM portgroup). VSA Manager [click thru animation here] VSA Manager is a vCenter Server 5.0 extension that you install on a vCenter Server machine. After you install it, & the plugin is enabled, you can see the VSA Manager tab in the vSphere Client. VSA Manager will deploy & afterwards monitor the VSA cluster. vSphere Storage Appliance The VSA is essentially a SUSE VM which has been re-engineered with additional features by VMware. [click thru animation here] The VSA manages the data replication/redundancy by dividing the local storage in two, using one half as the mirror source and the other half as the mirror destination. [click thru animation here] It then exposes the source disk as an NFS export over the network, which allows it to be mounted by all the ESXi servers in the VSA cluster. This is all done automatically by the installer. There is no customer intervention required.
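Because each node dedicates half of its local storage to holding replicas, the usable shared capacity of the cluster is roughly half the raw local total (before the RAID 10 layout on the physical disks halves raw capacity again). The arithmetic below is a back-of-envelope sketch with illustrative figures, not VMware-documented sizing; `usable_capacity_gb` is a hypothetical helper.

```python
# Rough capacity model for a VSA cluster: half of every node's local
# storage is the mirror destination for another node's volume, so only
# half of the raw total is presented as shared datastores.

def usable_capacity_gb(per_node_raw_gb, nodes):
    """Approximate usable shared capacity: half of the raw cluster total."""
    return per_node_raw_gb * nodes / 2  # the other half holds replicas

# e.g. three nodes with 1000 GB of local storage each (after RAID 10):
assert usable_capacity_gb(per_node_raw_gb=1000, nodes=3) == 1500.0
```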
The 2 member cluster uses a VSA Cluster Service on the vCenter Server. The 3 member cluster does not require this service. The VSA Manager installation/configuration steps are fool-proof, meaning that an administrator without any SAN skills can deploy it quickly & painlessly. This is a really cool selling point when compared to some other VSAs on the market. If the hosts are unsuitable for any reason (e.g. hardware not supported, networking not configured), the installer will report it and will not allow you to proceed.
Hardware RAID The supported setup for hardware RAID on the physical servers is a RAID 1+0 (RAID 10) configuration. This way, if a spindle is lost, it does not affect the underlying volume. VSA Cluster Service The VSA Cluster Service is a Windows service that installs together with VSA Manager on the vCenter Server machine and maintains the status of the VSA cluster. The service is only used in a VSA cluster with two members and does not provide storage as the other members do. It helps to maintain a majority of VSA cluster members in case of failures. The VMs running on the ESXi servers use the shared storage. At install time there should be no running VMs on the ESXi servers.
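The "majority" role of the VSA Cluster Service can be illustrated with simple vote counting: with only two storage members, losing one leaves a 1-of-2 tie, so the cluster service acts as a third, non-storage voter. This is a hypothetical model of the majority rule, not VMware's actual implementation; `has_majority` is an illustrative name.

```python
# Why a 2-member VSA cluster needs the VSA Cluster Service: it supplies a
# tiebreaker vote so the surviving node still holds a strict majority.

def has_majority(total_voters, surviving_voters):
    """A cluster stays up only with a strict majority of voters alive."""
    return surviving_voters > total_voters / 2

# Two storage nodes alone: one failure leaves 1 of 2 votes -> no majority.
assert not has_majority(total_voters=2, surviving_voters=1)
# Two nodes plus the cluster service: one node failure leaves 2 of 3 votes.
assert has_majority(total_voters=3, surviving_voters=2)
```

This also matches the slide on supported configurations: a 3-member cluster already has an odd number of voters, so no extra service is needed there.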
Note that there is no need for a VSA Cluster Service in this configuration.Remember that these ESXi servers must be newly installed with ESXi 5.0, and must not participate in any other cluster configuration.
This is visible in vSphere, in the new VSA tab. For the first deployment, choose New Installation. We will discuss the Recover VSA Cluster option shortly.
Step 1: This screen displays some of the new features which will be enabled. vMotion & HA will be enabled to provide additional resilience to the VSA. This is a unique selling point, as the VSA will facilitate vMotion & HA without a physical storage array providing shared storage. It also removes the complexity of creating a vSphere HA cluster and configuring vMotion networks. Step 2: The next step is to select a datacenter from your vCenter inventory. In this example, the datacenter pml-pod13 has 3 x ESXi 5.0 hosts. Step 3: In this example, we are building a 3 node cluster. A host audit is run by the installer to verify that hosts are compatible with the VSA. One thing to note about the host audit is that hosts are split into different processor types; hosts used for the VSA must have the same processor type for vMotion compatibility. Step 4: Provide static IP addresses for all the VSA nodes in the cluster. When the user types in the Management IP address, the wizard automatically generates the rest of the IPs in ascending order. The user can of course modify the individual IPs separately.
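The IP pre-fill behaviour in Step 4 can be sketched with Python's standard `ipaddress` module: given the first management IP, the remaining addresses follow in ascending order. The function name and the number of addresses generated are illustrative assumptions, not the wizard's real logic.

```python
# Sketch of Step 4's pre-fill: successive IPs counted up from the first
# management address the user types. Purely illustrative.
import ipaddress

def suggest_ips(first_ip, count):
    """Generate `count` consecutive IPv4 addresses starting at first_ip."""
    base = ipaddress.IPv4Address(first_ip)
    return [str(base + i) for i in range(count)]

assert suggest_ips("192.168.1.10", 4) == [
    "192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"]
```

As the slide notes, the generated values are only defaults; in the real wizard each field can still be edited individually.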
Step 5: This screen allows you to zero out the disks (eagerzeroedthick) as they are created. This will slow down the installation process, but it avoids the first-write penalty of the default (zeroedthick) format, where each 1 MB section of disk has to be zeroed on first access. After a while, once the disk has been accessed fully, the performance should be equivalent no matter which option is chosen. Step 6: The Ready to Install screen displays the previously entered configuration details. If correct, click the Install button. Compare the simplicity of this install with the complexity of deploying a physical SAN: you may need to purchase and configure a physical switch (iSCSI, FC, FCoE), as well as do masking, zoning, etc.
The installer will need Adobe Flash Player installed – it will prompt for this if it is needed.
The important point here is that failover is seamless. By transferring the NFS server IP address of the failed VSA to the new VSA, the ESXi host continues to use the same mount point reference, and any VMs on that mount point are oblivious to any changes having occurred.
There is no support for rolling upgrade in the 1.0 release, so the whole cluster will have to be placed into maintenance mode. Rolling upgrade is a possible future feature of the product.
An administrator is effectively replacing one ESXi member with a new one. This will create a new VSA and sync it up to the degraded volumes; the new VSA is then prompted to take ownership of presenting one of the NFS datastores.