VMware vSphere 5 seminar
PowerPoint presentation from Raido's VMware vSphere 5 seminar, November 10, 2011

Transcript

  • 1. VMware Cloud Infrastructure – What's New in vSphere 5
  • 2. 2010 (stack diagram): vSphere at the base, vCenter management, vShield security, and vCloud Director on top.
  • 3. 2011 Cloud Infrastructure Launch (stack diagram): vSphere, vCenter management, new vShield security, and vCloud Director.
  • 4. Agenda • vSphere 5.0 Platform • vSphere 5.0 Networking • vSphere 5.0 Availability • vSphere 5.0 vMotion, DRS/DPM • vCenter Server 5.0 • vSphere 5.0 vStorage • vSphere 5.0 Storage Appliance (VSA) • VMware vCenter Site Recovery Manager v5.0
  • 5. vSphere 5.0 – Platform • Platform Enhancements • ESXi Firewall • Image Builder • Auto Deploy
  • 6. New Virtual Machine Features § vSphere 5.0 supports the industry's most capable VMs: 32 virtual CPUs per VM and 1TB RAM per VM (4x previous capabilities), 3D graphics, client-connected USB devices, USB 3.0 devices, smart card readers for VM console access, EFI BIOS, VM BIOS boot order configuration via API and PowerCLI, a UI for multi-core virtual CPUs, support for Mac OS X servers, and extended VMware Tools compatibility. (The slide groups these under VM Scalability, Richer Desktop Experience, Broader Device Coverage, and Other New Features; items requiring HW version 8 are shown in blue.)
  • 7. vSphere 5.0 – Platform • Platform Enhancements • ESXi Firewall • Image Builder • Auto Deploy • vSphere Update Manager
  • 8. ESXi 5.0 Firewall Features § Capabilities • ESXi 5.0 has a new firewall engine which is not based on iptables. • The firewall is service oriented and stateless. • Users have the ability to restrict access to specific services based on IP address/subnet mask. § Management • The GUI for configuring the firewall on ESXi 5.0 is similar to that used with the classic ESX firewall; customers familiar with the classic ESX firewall should not have any difficulty using the ESXi 5.0 version. • There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0). • There is Host Profile support for the ESXi 5.0 firewall. • Customers who upgrade from classic ESX to ESXi 5.0 will have their firewall settings preserved.
  • 9. UI: Security Profile § The ESXi firewall can be managed via the vSphere Client. § Through Configuration > Security Profile, one can observe the enabled incoming/outgoing services, the opened port list for each service, and the allowed IP list for each service.
  • 10. UI: Security Profile > Services > Properties § Through the service properties, one can configure whether a service should be started automatically. § Services can also be stopped & started on the fly.
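The new firewall can be driven from PowerCLI as well as from the UI above. A minimal sketch, assuming a vCenter connection and hypothetical server names; the esxcli commands in the comments are the ESXi 5.0 shell equivalents and worth verifying on a live host:

```powershell
# Minimal sketch: inspect and adjust the ESXi 5.0 firewall with PowerCLI.
# "vcenter.lab.local" and "esx01.lab.local" are hypothetical names.
Connect-VIServer -Server vcenter.lab.local
$vmhost = Get-VMHost -Name "esx01.lab.local"

# List all firewall services and whether they are enabled
Get-VMHostFirewallException -VMHost $vmhost |
    Select-Object Name, Enabled, IncomingPorts, OutgoingPorts |
    Format-Table -AutoSize

# Disable a single service, e.g. the SSH server ruleset
Get-VMHostFirewallException -VMHost $vmhost -Name "SSH Server" |
    Set-VMHostFirewallException -Enabled:$false

# Restricting a service to an IP range is done with the new esxcli interface
# from the ESXi shell (shown here as comments; verify the flags on a host):
#   esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
#   esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24
```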
  • 11. vSphere 5.0 – Platform • Platform Enhancements • ESXi Firewall • Image Builder • Auto Deploy
  • 12. Composition of an ESXi Image (diagram): core hypervisor, CIM providers, plug-in components, and drivers.
  • 13. ESXi Image Deployment § Challenges • The standard ESXi image from the VMware download site is sometimes limited • It doesn't have all drivers or CIM providers for specific hardware • It doesn't contain vendor-specific plug-in components (diagram: standard ESXi ISO with base providers and base drivers, with a missing CIM provider and a missing driver)
  • 14. Building an Image (diagram): start a PowerCLI session on a Windows host with PowerCLI and the Image Builder snap-in installed.
  • 15. Building an Image (diagram): activate the Image Builder snap-in.
  • 16. Building an Image (diagram): connect to one or more software depots containing ESXi VIBs, driver VIBs, and OEM VIBs.
  • 17. Building an Image (diagram): clone and modify an existing image profile.
  • 18. Building an Image (diagram): generate the new image as an ISO image or a PXE-bootable image.
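The five steps above map onto the Image Builder cmdlets roughly as follows. This is a hedged sketch: the depot locations, profile names, and driver VIB name are placeholders rather than real artifacts.

```powershell
# Hedged sketch of the Image Builder workflow from slides 14-18.
# Depot locations, profile names and the driver VIB name are placeholders.
Add-PSSnapin VMware.ImageBuilder                  # step 2: activate the snap-in

# Step 3: connect to one or more software depots (base ESXi VIBs plus a vendor depot)
Add-EsxSoftwareDepot "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
Add-EsxSoftwareDepot "C:\depots\vendor-driver-depot.zip"

# See which image profiles the depots offer
Get-EsxImageProfile | Select-Object Name, Vendor

# Step 4: clone an existing profile and add an extra driver VIB to it
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi-5.0.0-custom" -Vendor "Lab"
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.0.0-custom" -SoftwarePackage "net-exampledriver"

# Step 5: generate the new image as an installable ISO and as an offline bundle
Export-EsxImageProfile -ImageProfile "ESXi-5.0.0-custom" -ExportToIso    -FilePath "C:\images\ESXi-5.0.0-custom.iso"
Export-EsxImageProfile -ImageProfile "ESXi-5.0.0-custom" -ExportToBundle -FilePath "C:\images\ESXi-5.0.0-custom.zip"
```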
  • 19. vSphere 5.0 – Platform • Platform Enhancements • ESXi Firewall • Image Builder • Auto Deploy
  • 20. vSphere 5.0 – Auto Deploy Overview (diagram: vCenter Server with Auto Deploy, image profiles, and host profiles provisioning vSphere hosts) • Deploy and patch vSphere hosts in minutes using a new "on the fly" model • Coordination with vSphere Host Profiles § Benefits • Rapid provisioning: initial deployment and patching of hosts • Centralized host and image management • Reduced manual deployment and patch processes
  • 21. Deploying a Datacenter Has Just Gotten Much Easier. Before: roughly 30 minutes per host, repeated 37 more times, for a total of about 20 hours. After: a total of about 10 minutes.
  • 22. Auto Deploy Example – Initial Boot: provision a new host. (Diagram: vCenter Server holding image profiles, host profiles, and a rules engine; the Auto Deploy server ("waiter") with TFTP and DHCP; depots of ESXi, driver, and OEM VIBs.)
  • 23. Auto Deploy Example – Initial Boot, step 1: the server PXE boots; DHCP directs it to TFTP, which serves the gPXE image.
  • 24. Auto Deploy Example – Initial Boot, step 2: the host sends an HTTP boot request to the Auto Deploy server.
  • 25. Auto Deploy Example – Initial Boot, step 3: the rules engine determines the image profile, host profile, and cluster (in the example: Image Profile X, Host Profile 1, Cluster B).
  • 26. Auto Deploy Example – Initial Boot, step 4: the image is pushed to the host and the host profile applied; the image profile and host profile are cached.
  • 27. Auto Deploy Example – Initial Boot, step 5: the host is placed into the cluster.
  • 28. vSphere 5.0 – Networking • LLDP • NetFlow • Port Mirror • NETIOC – New Traffic Types
  • 29. What Is a Discovery Protocol? (Link Layer Discovery Protocol) § A discovery protocol is a data link layer network protocol used to discover the capabilities of network devices. § A discovery protocol allows customers to automate the deployment process in a complex environment through its ability to: • Discover the capabilities of network devices • Discover the configuration of neighboring infrastructure § The vSphere infrastructure supports the following discovery protocols: • CDP (standard vSwitches & distributed vSwitches) • LLDP (distributed vSwitches) § LLDP is a standards-based, vendor-neutral discovery protocol (802.1AB).
  • 30. LLDP Neighbour Info § Sample output using the LLDPD utility.
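Since PowerCLI 5.0 has no dedicated cmdlet for switching a distributed vSwitch from CDP to LLDP, a sketch through the vSphere API could look like the following. The switch name is a placeholder, and the LinkDiscoveryProtocolConfig property names should be checked against the vSphere 5.0 API reference.

```powershell
# Sketch: enable LLDP (listen + advertise) on a distributed vSwitch via the API.
# "dvSwitch01" is a placeholder; verify the property names before use.
$dvs = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{ "Name" = "dvSwitch01" }

$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $dvs.Config.ConfigVersion
$spec.LinkDiscoveryProtocolConfig = New-Object VMware.Vim.LinkDiscoveryProtocolConfig
$spec.LinkDiscoveryProtocolConfig.Protocol  = "lldp"   # "cdp" is the alternative
$spec.LinkDiscoveryProtocolConfig.Operation = "both"   # listen, advertise, or both

$dvs.ReconfigureDvs($spec)
```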
  • 31. vSphere 5.0 – Networking • LLDP • NetFlow • Port Mirror • NETIOC – New Traffic Types
  • 32. What Is NetFlow? § NetFlow is a networking protocol that collects IP traffic information as flow records and sends them to third-party collectors such as CA NetQoS and NetScout. (Diagram: VM traffic and a NetFlow session flowing from the vDS host over the physical switch trunk to the collector.) § The collector/analyzer reports on information such as: • The current top flows consuming the most bandwidth • Which flows are behaving irregularly • The number of bytes a particular flow has sent and received in the past 24 hours
  • 33. NetFlow with Third-Party Collectors (diagram: internal and external flows from the vDS host, with NetFlow sessions to a NetScout nGenius collector and a CA NetQoS collector; external systems shown outside the host).
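A hedged sketch of pointing a vDS at a NetFlow collector through the same API route as the LLDP example. The VMwareIpfixConfig property set below is written from memory of the vSphere 5.x API and the collector address is a placeholder, so treat it as an outline to verify rather than a recipe.

```powershell
# Outline only: point a vDS at a NetFlow collector via the IPFIX section of the
# vDS config spec. Property names are recalled from the vSphere 5.x API; verify.
$dvs = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{ "Name" = "dvSwitch01" }

$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $dvs.Config.ConfigVersion
$spec.IpfixConfig = New-Object VMware.Vim.VMwareIpfixConfig
$spec.IpfixConfig.CollectorIpAddress = "192.168.20.10"  # third-party collector (e.g. CA NetQoS, NetScout)
$spec.IpfixConfig.ActiveFlowTimeout  = 60
$spec.IpfixConfig.IdleFlowTimeout    = 15
$spec.IpfixConfig.SamplingRate       = 0                # 0 = sample every packet
$spec.IpfixConfig.InternalFlowsOnly  = $false

$dvs.ReconfigureDvs($spec)
# NetFlow monitoring is then switched on per distributed portgroup
# (the IpfixEnabled policy in the portgroup's port settings).
```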
  • 34. vSphere 5.0 Networking • LLDP • NetFlow • Port Mirror • NETIOC – New Traffic Types
  • 35. What Is Port Mirroring (DVMirror)? § Port mirroring is the capability on a network switch to send a copy of the network packets seen on a switch port to a network monitoring device connected to another switch port. § Port mirroring is also referred to as SPAN (Switched Port Analyzer) on Cisco switches. § Port mirroring overcomes the limitations of promiscuous mode by providing granular control over which traffic is monitored: • Ingress source • Egress source § It helps in troubleshooting network issues by providing access to: • Inter-VM traffic • Intra-VM traffic
  • 36. Port Mirror Traffic Flow When the Mirror Destination Is a VM (diagrams: inter-VM and intra-VM traffic on the vDS, each shown with an ingress source and an egress source mirrored to a destination VM; the intra-VM case involves an external system; legend distinguishes mirror flow from VM traffic).
  • 37. vSphere 5.0 Networking • LLDP • NetFlow • Port Mirror • NETIOC – New Traffic Types
  • 38. What Is Network I/O Control (NETIOC)? § Network I/O Control is a traffic management feature of the vSphere Distributed Switch (vDS). § In consolidated I/O (10GbE) deployments, this feature allows customers to: • Allocate shares and limits to different traffic types • Provide isolation (one traffic type should not dominate others) • Guarantee service levels when different traffic types compete § Enhanced Network I/O Control: vSphere 5.0 builds on the previous version of the feature by providing: • User-defined network resource pools • A new host-based replication (HBR) traffic type • QoS tagging
  • 39. NETIOC VM Groups (diagram: a vNetwork Distributed Switch with Network I/O Control dividing 20 Gb of total bandwidth across 10GigE uplinks between vMotion, iSCSI, NFS, FT, HBR, VM traffic, and user-defined resource pools for VM groups VMRG1, VMRG2, and VMRG3).
  • 40. NETIOC VM Traffic (diagram: "Pepsi" and "Coke" VM groups on vNetwork distributed portgroups, with load-based teaming, a shaper, and schedulers on the vNetwork Distributed Switch; an example table assigns shares, an optional limit in Mbps, and an optional 802.1p tag to each traffic type – vMotion, management, NFS, iSCSI, FT, HBR, VM, Pepsi, Coke; limits are enforced per team and shares per uplink).
  • 41. vSphere 5.0 – Availability
  • 42. vSphere HA Primary Components § Every host runs an agent • Referred to as 'FDM' or Fault Domain Manager • One of the agents within the cluster is chosen to assume the role of the Master • There is only one Master per cluster during normal operations • All other agents assume the role of Slaves § There is no more Primary/Secondary concept with vSphere HA (diagram: ESX 01–04 managed by vCenter)
  • 43. The Master Role § An FDM Master monitors: • ESX hosts and virtual machine availability • All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted. • The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it. § An FDM Master manages: • The list of hosts that are members of the cluster, updating this list as hosts are added to or removed from the cluster • The list of protected VMs. The Master updates this list after each user-initiated power on or power off.
  • 44. The Slave Role § A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master. § It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring. § It monitors the health of the Master. If the Master fails, it participates in the election process for a new Master. § It maintains the list of powered-on VMs.
  • 45. Storage Level Communications § One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication. § The datastores used for this are referred to as 'Heartbeat Datastores'. § This provides increased communication redundancy. § Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
  • 46. Storage Level Communications § Heartbeat datastores allow a Master to: • Monitor the availability of Slave hosts and the VMs running on them • Determine whether a host has become network isolated rather than network partitioned • Coordinate with other Masters: since a VM can be owned by only one Master, Masters coordinate VM ownership through datastore communication § By default, vCenter automatically picks 2 heartbeat datastores. These datastores can also be selected by the user.
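For reference, enabling HA and influencing the number of heartbeat datastores can be scripted. A small sketch, assuming a cluster named "Production" and that New-AdvancedSetting accepts the ClusterHA type as remembered:

```powershell
# Sketch: enable vSphere HA on a cluster and raise the number of heartbeat
# datastores per host via the das.heartbeatDsPerHost advanced option.
# The -Type value "ClusterHA" is given as remembered and worth verifying.
$cluster = Get-Cluster -Name "Production"

Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false

# vCenter picks 2 heartbeat datastores per host by default; ask for 4 instead
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.heartbeatDsPerHost" -Value 4 -Confirm:$false
```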
  • 47. vSphere 5.0 – vMotion, DRS/DPM
  • 48. vSphere 5.0 – vMotion § The original vMotion keeps getting better! § Multi-NIC support • Supports up to four 10Gbps or sixteen 1Gbps NICs (each NIC must have its own IP address) • A single vMotion can now scale over multiple NICs (load balanced across multiple NICs) • Faster vMotion times allow for a higher number of concurrent vMotions § Reduced application overhead • The Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success • Ensures a switchover time of less than one second in almost all cases § Support for higher-latency networks (up to ~10ms) • Extends vMotion capabilities over slower networks
  • 49. Multi-NIC Throughput (chart: vMotion throughput in Gbps scaling from one to two to three NICs; the three-NIC result was limited by the throughput of the PCI-E bus in this particular setup).
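Setting up the extra vMotion vmknics is a one-liner per NIC in PowerCLI. A sketch with placeholder switch, portgroup, and IP values:

```powershell
# Sketch: add two vMotion-enabled vmkernel adapters so a single vMotion can be
# load-balanced across two NICs. Switch, portgroup and IP values are lab placeholders.
$vmhost  = Get-VMHost        -Name "esx01.lab.local"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"

# Each vMotion vmknic needs its own IP address; in practice each portgroup is
# also pinned to a different active uplink in its NIC teaming policy.
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-01" `
    -IP 10.0.50.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-02" `
    -IP 10.0.51.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
```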
  • 50. vSphere 5.0 – DRS/DPM § DRS/DPM improvements focus on cross-product integration • Introduce support for "Agent VMs" • An Agent VM is a special-purpose VM tied to a specific ESXi host • An Agent VM cannot / should not be migrated by DRS or DPM • Special handling of Agent VMs is now afforded by DRS & DPM § A DRS/DPM cluster hosting Agent VMs: • Accounts for Agent VM reservations (even when powered off) • Waits for Agent VMs to be powered on and ready before placing client VMs • Will not try to migrate an Agent VM (Agent VMs are pinned to their host) § Maintenance mode / standby mode support • Agent VMs do not have to be evacuated for a host to enter maintenance or standby mode • When a host enters maintenance/standby mode, Agent VMs are powered off (after client VMs are evacuated) • When a host exits maintenance/standby mode, Agent VMs are powered on (before client VMs are placed)
  • 51. vSphere 5.0 – vCenter Server
  • 52. vSphere Web Client Architecture § The vSphere Web Client runs within a browser. § A Flex application server provides a scalable back end. § The Query Service obtains live data from the core vCenter Server process. § vCenter can run in either single or Linked Mode operation.
  • 53. Extension Points (diagram): launchbar, tabs, inventory objects, sidebar extensions, and portlets; extensions can create custom actions and add right-click extensions.
  • 54. Features of the vSphere Web Client § Ready access to common actions • Quick access to common tasks provided out of the box
  • 55. Introducing the vCenter Server Appliance § The vCenter Server Appliance is the answer! • Simplifies deployment and configuration • Streamlines patching and upgrades • Reduces the TCO for vCenter § Enables companies to respond to the business faster! (VMware vCenter Server Virtual Appliance: visibility, automation, scalability)
  • 56. Component Overview § The vCenter Server Appliance (VCSA) consists of: • A pre-packaged 64-bit application running on SLES 11, distributed with sparse disks • Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed • Memory footprint (table on the original slide) • A built-in enterprise-level database with optional support for a remote Oracle database • Limits are the same for vCenter Server and the VCSA: embedded DB up to 5 hosts/50 VMs, external DB up to <300 hosts/<3000 VMs (64-bit) • A web-based configuration interface
  • 57. Feature Overview § The vCenter Server Appliance supports: • The vSphere Web Client • Authentication through AD and NIS • Feature parity with vCenter Server on Windows, except: • Linked Mode support (requires ADAM / AD LDS) • IPv6 support • External DB support (Oracle is the only supported external DB for the first release) • vCenter Heartbeat support (HA is provided through vSphere HA instead)
  • 58. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 59. Introduction to VMFS-5 § Enhanced scalability • Increases the size limits of the filesystem & supports much larger single-extent VMFS-5 volumes • Support for single-extent 64TB datastores § Better performance • Uses the VAAI locking mechanism for more tasks § Easier to manage and less overhead • Space reclamation on thin provisioned LUNs • Smaller sub-blocks • Unified block size
  • 60. VMFS-5 vs VMFS-3 Feature Comparison: 2TB+ VMFS volumes – VMFS-3: yes (using extents), VMFS-5: yes; support for 2TB+ physical RDMs – VMFS-3: no, VMFS-5: yes; unified block size (1MB) – VMFS-3: no, VMFS-5: yes; Atomic Test & Set enhancements (part of VAAI, locking mechanism) – VMFS-3: no, VMFS-5: yes; sub-blocks for space efficiency – VMFS-3: 64KB (max ~3k), VMFS-5: 8KB (max ~30k); small file support (1KB) – VMFS-3: no, VMFS-5: yes.
  • 61. VMFS-3 to VMFS-5 Upgrade § The upgrade to VMFS-5 is clearly displayed in the vSphere Client under the Configuration > Storage view. § It is also displayed in the Datastores Configuration view. § Upgrades are non-disruptive.
  • 62. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 63. VAAI – Introduction § vStorage APIs for Array Integration = VAAI. § VAAI's main purpose is to leverage array capabilities: • Offloading tasks to reduce overhead • Benefiting from enhanced array mechanisms § The "traditional" VAAI primitives have been improved. § Multiple new primitives have been introduced. § Support for NAS! (Diagram: application, hypervisor, fabric, and array layers, contrasting a non-VAAI copy through the host with a VAAI copy offloaded between LUN 01 and LUN 02.)
  • 64. VAAI Primitive Updates in vSphere 5.0 § vSphere 4.1 shipped a default plugin for Write Same because that primitive was fully T10 compliant; ATS and Full Copy were not. • The T10 organization is responsible for SCSI standardization (SCSI-3), a standard used by many storage vendors. § vSphere 5.0 has all three primitives, now T10 compliant, integrated in the ESXi stack. • This allows arrays which are T10 compliant to leverage these primitives with the default VAAI plugin in vSphere 5.0. § The ATS primitive has also been extended in vSphere 5.0 / VMFS-5 to cover even more operations, resulting in even better performance and greater scalability.
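Whether a given device actually benefits from these primitives can be checked per LUN. A hedged sketch using the esxcli namespace exposed through Get-EsxCli; the exact property path and argument list should be verified on a live host:

```powershell
# Hedged sketch: per-device VAAI support as reported by
# "esxcli storage core device vaai status get", called through Get-EsxCli.
# Verify the property path and output columns on a real ESXi 5.0 host.
$vmhost = Get-VMHost -Name "esx01.lab.local"
$esxcli = Get-EsxCli -VMHost $vmhost

# $null = report on all devices; output covers ATS, Clone (XCOPY),
# Zero (WRITE SAME) and Delete (UNMAP) support
$esxcli.storage.core.device.vaai.status.get($null) |
    Format-Table -AutoSize
```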
  • 65. Introducing VAAI NAS Primitives § With these primitives, hardware acceleration/offload features are enabled for NAS datastores. § The following primitives are defined for VAAI NAS: • Full File Clone – similar to VMFS block cloning; allows offline VMDKs to be cloned by the filer. • Note that hot migration via Storage vMotion on NAS is not hardware accelerated. • Reserve Space – allows creation of thick VMDK files on NAS. § NAS VAAI plugins are not shipped with ESXi 5.0. These plugins will be developed and distributed by the storage vendors, but signed by the VMware certification program.
  • 66. VAAI NAS: Thick Disk Creation § Without the VAAI NAS primitives, only the thin format is available. § With the VAAI NAS primitives, flat (thick), flat pre-initialized (eager zeroed thick), and thin formats are available. (Screenshots: non-VAAI vs. VAAI provisioning dialogs.)
  • 67. Introducing VAAI Thin Provisioning § What are the driving factors behind VAAI TP? • Provisioning new LUNs to a vSphere environment (cluster) is complicated. § Strategic goal: • We want to make the act of physical storage provisioning in a vSphere environment extremely rare. • LUNs should be incredibly large address spaces and should be able to handle any VM workload. § VAAI TP features include: • Dead space reclamation • Monitoring of the space
  • 68. VAAI Thin Provisioning – Dead Space Reclamation § Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion. § vSphere conveys block information to the storage system via VAAI, and the storage system reclaims the dead blocks. • Storage vMotion, VM deletion, and swap file deletion can trigger the thin LUN to free some physical space. • ESXi 5.0 uses a standard SCSI command for dead space reclamation. (Diagram: a Storage vMotion from VMFS volume A to VMFS volume B.)
  • 69. Current "Out of Space" User Experience (diagram): no space-related warnings and no mitigation steps available; on space exhaustion, the VMs and the LUN go offline.
  • 70. "Out of Space" User Experience with VAAI Extensions (diagram): a space exhaustion warning appears in the UI; the admin can evacuate with Storage vMotion or add space; on space exhaustion, affected VMs are paused while the LUN stays online awaiting space allocation.
  • 71. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 72. Storage vMotion – Introduction § In vSphere 5.0, a number of new enhancements were made to Storage vMotion. • Storage vMotion will work with virtual machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR. • Storage vMotion will support the relocation of linked clones. • Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for storage maintenance mode & storage load balancing (space or performance).
  • 73. Storage vMotion Architecture Enhancements (1 of 2) § In vSphere 4.1, Storage vMotion uses the Changed Block Tracking (CBT) method to copy disk blocks between source & destination. § The main challenge in this approach is that the disk pre-copy phase can take a while to converge, and can sometimes result in Storage vMotion failures if the VM is running a very I/O-intensive load. § Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy mechanism. § In vSphere 5.0, Storage vMotion uses a new mirroring architecture to provide the following advantages over previous versions: • Guarantees migration success even when facing a slower destination • More predictable (and shorter) migration time
  • 74. Storage vMotion Architecture Enhancements (2 of 2) (diagram: VMM/guest layer with the guest OS and mirror driver, the datamover in the VMkernel/userworld, copying from the source to the destination datastore).
  • 75. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 76. Storage I/O Control Phase 2 and Refreshing Memory § In many customer environments, storage is mostly accessed from storage arrays over SAN, iSCSI, or NAS. § One ESXi host can affect the I/O performance of others by issuing a large number of requests on behalf of one of its virtual machines. § Thus the throughput/bandwidth available to each ESXi host may vary drastically, leading to highly variable I/O performance for VMs. § To ensure stronger I/O guarantees, Storage I/O Control was implemented in vSphere 4.1 for block storage; it guarantees an allocation of I/O resources on a per-VM basis. § As of vSphere 5.0, SIOC is also supported for NFS-based storage! § This capability is essential to provide better performance for I/O-intensive and latency-sensitive applications such as database workloads, Exchange servers, etc.
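Enabling SIOC and weighting a particular VM's disks can be scripted. A hedged sketch with placeholder names, assuming the Set-Datastore SIOC parameters behave as remembered from PowerCLI 5.x:

```powershell
# Hedged sketch: enable Storage I/O Control on a datastore and give one VM's
# disks higher shares. Names are placeholders; verify the SIOC parameters of
# Set-Datastore against your PowerCLI build.
Get-Datastore -Name "Datastore01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30

# Under contention, the Exchange VM's disks now win more of the datastore's I/O
$vm = Get-VM -Name "Exchange01"
Set-VMResourceConfiguration -Configuration (Get-VMResourceConfiguration -VM $vm) `
    -Disk (Get-HardDisk -VM $vm) -DiskSharesLevel High
```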
  • 77. Storage I/O Control – Refreshing Memory (diagram comparing "what you see" with "what you want to see" for online store, Microsoft Exchange, and data mining VMs sharing an NFS/VMFS datastore).
  • 78. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 79. What Does Storage DRS Solve? § Without Storage DRS: • Identify the datastore with the most disk space and the lowest latency. • Validate which virtual machines are placed on the datastore and ensure there are no conflicts. • Create the virtual machine and hope for the best. § With Storage DRS: • Automatic selection of the best placement for your VM. • Advanced balancing mechanism to avoid storage performance bottlenecks or "out of space" problems. • Affinity rules.
  • 80. What Does Storage DRS Provide? § Storage DRS provides the following: 1. Initial placement of VMs and VMDKs based on available space and I/O capacity. 2. Load balancing between datastores in a datastore cluster via Storage vMotion based on storage space utilization. 3. Load balancing via Storage vMotion based on I/O metrics, i.e. latency. § Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs: • VMDK Affinity – keep a VM's VMDKs together on the same datastore. This is the default affinity rule. • VMDK Anti-Affinity – keep a VM's VMDKs separate, on different datastores. • Virtual Machine Anti-Affinity – keep VMs separate, on different datastores. § Affinity rules cannot be violated during normal operations.
  • 81. Datastore Cluster § An integral part of SDRS is the ability to create a group of datastores called a datastore cluster. • Datastore cluster without Storage DRS – simply a group of datastores. • Datastore cluster with Storage DRS – a load balancing domain similar to a DRS cluster. § A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS that makes it more than just a folder. (Diagram: a 2TB datastore cluster made up of four 500GB datastores.)
  • 82. Storage DRS Operations – Initial Placement (1 of 4) § Initial placement – VM/VMDK create/clone/relocate. • When creating a VM you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore. • SDRS will select a datastore based on space utilization and I/O load. • By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs placed on different datastores. (Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB, and 275GB available.)
  • 83. Storage DRS Operations – Load Balancing (2 of 4) § Load balancing – SDRS triggers on space usage & latency thresholds. § The algorithm makes migration recommendations when the I/O response time and/or space utilization thresholds have been exceeded. • Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%. • The I/O load trend is currently evaluated every 8 hours based on the past day's history; the default threshold is 15ms. § Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds. § Storage DRS will do a cost/benefit analysis! § For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.
  • 84. Storage DRS Operations – Thresholds (3 of 4)
  • 85. Storage DRS Operations – Datastore Maintenance Mode § Datastore maintenance mode: • Evacuates all VMs & VMDKs from the selected datastore. • Note that this action will not move VM templates. • Currently, SDRS only handles registered VMs. (Diagram: placing VOL1 in maintenance mode within a 2TB datastore cluster of VOL1–VOL4.)
  • 86. Storage DRS Operations (4 of 4) – affinity rules within a datastore cluster: § VMDK affinity – keep a virtual machine's VMDKs together on the same datastore; maximizes VM availability when all disks are needed in order to run; on by default for all VMs. § VMDK anti-affinity – keep a VM's VMDKs on different datastores; useful for separating the log and data disks of database VMs; can select all or a subset of a VM's disks. § VM anti-affinity – keep VMs on different datastores; similar to DRS anti-affinity rules; maximizes the availability of a set of redundant VMs.
  • 87. SDRS Scheduling § SDRS allows you to create a schedule to change its settings. This can be useful for scenarios where you don't want VMs to migrate between datastores, or when I/O latency might rise, giving false negatives, e.g. during VM backups.
  • 88. So What Does It Look Like? Provisioning…
  • 89. So What Does It Look Like? Load Balancing. § It will show the utilization "before" and "after." § There is always the option to override the recommendations.
  • 90. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 91. What Are the vStorage APIs for Storage Awareness (VASA)? § VASA is an extension of the vSphere Storage APIs: vCenter-based extensions that allow storage arrays to integrate with vCenter for management functionality via server-side plug-ins or vendor providers. § This in turn allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. § VASA enables several features: • For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage. • It also provides array internal information that helps several Storage DRS use cases work optimally with various arrays.
  • 92. Storage Compliancy § Once the VASA provider has been successfully added to vCenter, the VM Storage Profiles should also display the storage capabilities provided to it by the vendor provider. § The example on the slide contains a 'mock-up' of some possible storage capabilities as displayed in the VM Storage Profiles; these are retrieved from the vendor provider.
  • 93. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 94. Why Profile Driven Storage? (1 of 2) § Problem statement: 1. It is difficult to manage datastores at scale • Including capacity planning, differentiated data services for each datastore, maintaining capacity headroom, etc. 2. It is difficult to correctly match VM SLA requirements to the available storage • Because the admin must manually choose between many datastores and more than one storage tier • Because VM requirements are not accurately known or may change over the VM's lifecycle § Related trends • Newly virtualized Tier-1 workloads need stricter VM storage SLA promises, because other VMs can impact the performance SLA • Scale-out storage mixes VMs with different SLAs on the same storage
  • 95. Why Profile Driven Storage? (2 of 2) Save OPEX by reducing repetitive planning and effort! § Minimize per-VM (or per VM request) "thinking" or planning for storage placement. • The admin needs to plan for optimal space and I/O balancing for each VM. • The admin needs to identify VM storage requirements and match them to physical storage properties. § Increase the probability of "correct" storage placement and use (minimize the need for, and time spent on, troubleshooting). • The admin needs more insight into storage characteristics. • The admin needs the ability to custom-tag available storage. • The admin needs an easy means to identify incorrect VM storage placement (e.g. on an incorrect datastore).
  • 96. Save OPEX by Reducing Repetitive Planning and Effort! (diagram comparing three workflows) § Today: identify requirements, find the optimal datastore, create the VM, periodically check compliance; initial setup: identify storage characteristics. § With Storage DRS: identify requirements, create the VM, periodically check compliance; initial setup: group datastores. § With Storage DRS + Profile-Driven Storage: select a VM storage profile and create the VM; initial setup: discover storage characteristics and group datastores.
  • 97. Storage Capabilities & VM Storage Profiles (diagram): a VM Storage Profile is associated with a VM and references storage capabilities surfaced by VASA or defined by the user; VMs are shown as compliant or not compliant with their profile.
  • 98. Selecting a Storage Profile During Provisioning § By selecting a VM Storage Profile, datastores are now split into compatible & incompatible. § The Celerra_NFS datastore is the only datastore which meets the GOLD profile requirements, i.e. it is the only datastore that has our user-defined storage capability associated with it.
  • 99. VM Storage Profile Compliance § Policy compliance is visible from the virtual machine's Summary tab.
  • 100. vSphere 5.0 – vStorage • VMFS 5.0 • vStorage API for Array Integration • Storage vMotion • Storage I/O Control • Storage DRS • VMware API for Storage Awareness • Profile Driven Storage • FCoE – Fibre Channel over Ethernet
  • 101. Introduction § Fibre Channel over Ethernet (FCoE) is an enhancement that expands Fibre Channel into Ethernet by combining two leading-edge technologies (FC and Ethernet). § The FCoE adapters that VMware supports generally fall into two categories: hardware FCoE adapters and software FCoE adapters which use an FCoE-capable NIC. • Hardware FCoE adapters have been supported since vSphere 4.0. § FCoE-capable NICs are referred to as Converged Network Adapters (CNAs), which carry both network and storage traffic. § ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.
  • 102. Software FCoE Adapters (1 of 2) § A software FCoE adapter is software code that performs some of the FCoE processing. § This adapter can be used with a number of NICs that support partial FCoE offload. § Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI.
  • 103. Software FCoE Adapters (2 of 2) § Once the software FCoE adapter is enabled, a new adapter is created and discovery of devices can take place.
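Activation can also be scripted. A hedged sketch using the esxcli fcoe namespace through Get-EsxCli, where vmnic4 is a placeholder and the discover method's argument list is an assumption to verify:

```powershell
# Hedged sketch: activate a software FCoE adapter on a partially offloading CNA.
# "vmnic4" is a placeholder; the discover() argument list is an assumption.
$vmhost = Get-VMHost -Name "esx01.lab.local"
$esxcli = Get-EsxCli -VMHost $vmhost

$esxcli.fcoe.nic.list()              # NICs capable of partial FCoE offload
$esxcli.fcoe.nic.discover("vmnic4")  # activate software FCoE on that NIC
$esxcli.fcoe.adapter.list()          # the new software FCoE vmhba should appear

# The same activation is available in the vSphere Client under
# Configuration > Storage Adapters > Add > Software FCoE Adapter.
```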
  • 104. Conclusion § vSphere 5.0 has many new compelling storage features. § VMFS volumes can be larger than ever before. • They can contain many more virtual machines due to VAAI enhancements and architectural changes. § Storage DRS and Profile-Driven Storage will help solve traditional problems with virtual machine provisioning. § The administrative overhead will be greatly reduced by: • VASA surfacing storage characteristics • Creating profiles through Profile-Driven Storage • Combining multiple datastores into a large aggregate
  • 105. vSphere Storage Appliance (VSA)
  • 106. Introduction (1 of 3) § In vSphere 5.0, VMware releases a new storage appliance called the VSA. • VSA is an acronym for "vSphere Storage Appliance." • This appliance is aimed at SMB (small and medium business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage. • It is the SMB market we wish to go after with this product; the aim is to move these customers from Essentials to Essentials+. • Without access to a SAN or NAS array, these SMB customers are excluded from many of the top features available in a VMware virtual infrastructure, such as vSphere HA & vMotion. • Customers who decide to deploy a VSA can now benefit from many additional vSphere features without having to purchase a SAN or NAS device to provide shared storage.
  • 107. Introduction (2 of 3) (diagram: the VSA Manager in the vSphere Client managing a VSA on each of three vSphere hosts, each presenting an NFS datastore) § Each ESXi server has a VSA deployed to it as a virtual machine. § The appliances use the available space on the local disk(s) of the ESXi servers and present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.
  • 108. Introduction (3 of 3) § The NFS datastores exported from the VSA can now be used as shared storage on all of the ESXi servers in the same datacenter. § The VSA creates shared storage out of local storage for use by a specific set of hosts. § This means that vSphere HA & vMotion can now be made available on low-end (SMB) configurations, without external SAN or NAS servers. § There is a CAPEX saving for SMB customers, as there is no longer a need to purchase a dedicated SAN or NAS device to achieve shared storage. § There is also an OPEX saving, as the VSA can be managed by the vSphere administrator and no dedicated SAN skills are needed to manage the appliances.
  • 109. Supported VSA Configurations § The vSphere Storage Appliance can be deployed in two configurations: • 2 x ESXi 5.0 server configuration – deploys 2 vSphere Storage Appliances, one per ESXi server, plus a VSA cluster service on the vCenter server • 3 x ESXi 5.0 server configuration – deploys 3 vSphere Storage Appliances, one per ESXi server § Each of the servers must contain a new/vanilla install of ESXi 5.0. § During the configuration, the user selects a datacenter and is then presented with a list of ESXi servers in that datacenter. § The installer checks the compatibility of each of these physical hosts to make sure they are suitable for VSA deployment. § The user must then select which compatible ESXi servers should participate in the VSA cluster, i.e. which servers will host VSA nodes. § The installer then creates the storage cluster by aggregating and virtualizing each server's local storage to present a logical pool of shared storage.
  • 110. Two-Member VSA (diagram): vCenter Server runs the VSA Manager and the VSA Cluster Service; each member exports one volume as a VSA datastore (VSA Datastore 1, VSA Datastore 2) and holds the replica of the other member's volume.
  • 111. Three-Member VSA (diagram): vCenter Server runs the VSA Manager; each of the three members exports one volume as a VSA datastore (VSA Datastores 1–3) and holds the replica of another member's volume.
  • 112. VSA Manager § The VSA Manager helps an administrator perform the following tasks: • Deploy vSphere Storage Appliance instances onto ESXi hosts to create a VSA cluster • Automatically mount the NFS volumes that each vSphere Storage Appliance exports as datastores to the ESXi hosts • Monitor, maintain, and troubleshoot a VSA cluster
  • 113. Resilience § Many storage arrays are a single point of failure (SPOF) in customer environments. § The VSA is very resilient to failures. § If a node fails in the VSA cluster, another node will seamlessly take over the role of presenting its NFS datastore. § The NFS datastore that was being presented from the failed node will now be presented from the node that holds its replica (mirror copy). § The new node will use the same NFS server IP address that the failed node was using, so any VMs that reside on that NFS datastore will not be affected by the failover.
  • 114. What's New in VMware vCenter Site Recovery Manager v5.0 – Technical
  • 115. vCenter Site Recovery Manager Ensures Simple, Reliable DR § Site Recovery Manager complements vSphere to provide the simplest and most reliable disaster protection and site migration for all applications. § Provides cost-efficient replication of applications to the failover site • Built-in vSphere Replication • Broad support for storage-based replication § Simplifies management of recovery and migration plans • Replaces manual runbooks with centralized recovery plans • From weeks to minutes to set up a new plan § Automates failover and migration processes for reliable recovery • Enables frequent non-disruptive testing • Ensures fast, automated failover • Automates failback processes
  • 116. SRM Provides Broad Choice of Replication Options (diagram: vCenter Server and Site Recovery Manager at both sites, with VMs replicated between the sites via vSphere Replication or storage-based replication). vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites. Storage-based replication: high-performance replication for business-critical applications in larger sites.
  • 117. SRM of Today's High-Level Architecture (diagram): at both the "protected" and "recovery" sites, a vSphere Client with the SRM plug-in, an SRM server with a storage replication adapter (SRA), a vCenter Server, and ESX hosts; array-based replication software replicates VMFS volumes between the SAN arrays.
  • 118. Technology – vSphere Replication § Adding native replication to SRM • Virtual machines can be replicated regardless of what storage they live on • Enables replication between heterogeneous datastores • Replication is managed as a property of a virtual machine • Efficient replication minimizes the impact on VM workloads • Provides a guest-level copy of the VM and not a copy of the VM itself
  • 119. vSphere Replication Details § Replication granularity per virtual machine • You can opt to replicate all or a subset of the VM's disks • You can create the initial copy in any way you want, even via sneaker net! • You have the option to place the replicated disks where you want • Disks are replicated in a group-consistent manner § Simplified replication management • The user selects the destination location for the target disks • The user selects the Recovery Point Objective (RPO) • The user can supply the initial copy to save on bandwidth § Replication specifics • Changes on the source disks are tracked by ESX • Deltas are sent to the remote site • Does not use VMware snapshots
  • 120. Replication UI § Select VMs to replicate from within the vSphere Client via right-click options. § This can be done on one VM, or on multiple VMs at the same time!
  • 121. vSphere Replication 1.0 Limitations § Focus is on the virtual disks of powered-on VMs. • ISOs and floppy images are not replicated. • Powered-off/suspended VMs are not replicated. • Non-critical files are not replicated (e.g. logs, stats, swap, dumps). § vSR works at the virtual device layer. • Independent of disk format specifics. • Independent of primary-side snapshots. • Snapshots work with vSR: the snapshot is replicated, but the VM is recovered with the snapshots collapsed. • Physical RDMs are not supported. § FT, linked clones, and VM templates are not supported with HBR. § Automated failback of vSR-protected VMs will come later, but will be supported in the future. § Virtual hardware version 7 or later is required in the VM.
  • 122. SRM Architecture with vSphere Replication (diagram): at both the "protected" and "recovery" sites, a vSphere Client with the SRM plug-in, an SRM server, a vCenter Server, and a vSphere Replication management server (vRMS); a vSphere Replication server (vRS) at the recovery site; vSphere Replication agents (vRA) on the ESX hosts; storage with VMFS volumes at both sites.
  • 123. SRM Scalability – maximums (none enforced): protected virtual machines total: 3,000; protected virtual machines in a single protection group: 500; protection groups: 250; simultaneously running recovery plans: 30; vSphere Replication-protected virtual machines: 500.
  • 124. Workflow § Currently there are two workflows: DR event failover and test.
  • 125. Planned Migration § New is planned migration: it will shut down the protected VMs and then synchronize them! Planned migration ensures application consistency and no data loss during migration. • Graceful shutdown of production VMs in an application-consistent state • Data sync to complete replication of the VMs • Recover the fully replicated VMs
  • 126. Failback § Description: • A "single button" fails back all recovered VMs • Interfaces with storage to automatically reverse replication • Replays existing recovery plans, so new virtual machines are not part of failback § Benefits: • Facilitates DR operations for enterprises that are mandated to perform a true failover as part of DR testing • Simplifies the recovery process after a disaster (Diagram: reverse replication from Site B (recovery) back to Site A (primary).)
  • 127. Failback § To fail back, you first perform a planned migration, followed by a reprotect. Then, to do the actual failback, you run a recovery. § The slide shows a successful recovery of a planned migration.
  • 128. Failback (continued) § Reprotect is now almost complete…
  • 129. Failback (continued) § Replication now goes in reverse – to the protected side.
  • 130. Failback (continued) § Now we are ready to fail over to our original side – the protected site!
  • 131. DR Event
  • 132. Dependencies § There is more functionality to help manage multi-tier applications.
  • 133. Dependencies (continued)
  • 134. Dependencies (continued) – VM Startup Order (diagram: VMs such as the master database, databases, app servers, Apache web servers, Exchange, mail sync, and desktops arranged into startup groups 1 through 5).