Iocg Whats New In V Sphere
Storage-related VMware enhancements in vSphere

1. Simplify, Virtualize and Protect Your Datacenter
Cost Savings and Business Continuity with VMware's Latest vSphere Solution

2. Cloud Computing: What Does It Mean?
- Fighting complexity in the data center
- Good is good, but cheap is sometimes better

3. Cloud Computing and Economic Recovery
- Data centers represent expensive "pillars of complexity" for companies of all sizes, which is why they are being threatened by cloud computing.
- Spending $1 to acquire infrastructure and then spending $8 to manage it is unacceptable.
- The idea is to let companies view all resources (internal and in the cloud) as a single "private cloud," and easily move data and applications among various data centers and cloud providers as needed.

4. Datacenter Challenges
- IT managers are looking for increased levels of resource utilization.
- Storage waste is measured in "unused" but allocated (aka stranded) storage.
  - Storage disk utilization is below 50% in most datacenters.
  - Storage hardware vendors have released support for "thin provisioning" in their storage arrays.
- Reducing CPU overhead between the host server and storage is another method of increasing storage efficiency.
  - This reduction in overhead can greatly increase the throughput of a given system.

5. Conclusion
- Carefully compare the cost of private vs. public clouds.
- Outsourcing hardware and services often comes at a prohibitive cost.
- Housing your hardware in a secure 24x7x365 facility is the best insurance against unexpected downtime and unmet SLAs.

6.
- Background
- Review of Basics
- VMware on SAN Integration

7. Traditional DAS (Direct-Attached Storage)
- A popular method for deploying applications was to install each on a dedicated server.
- Each server is separately attached to a dedicated external SCSI storage array, requiring high storage maintenance with difficult scalability and provisioning. Different vendor platforms cannot share the same external array.
- External SCSI storage array = stranded capacity.
- A parallel SCSI-3 connection provides throughput of approximately 200 MB/s after overhead.

8. SAN-Attached Storage
- FC SANs offer a shared, high-speed, dedicated block-level infrastructure independent of the LAN (FC switches at 200/400/800 MB/s), connecting servers with FC HBAs to FC storage arrays and tape libraries.
- IP SANs with iSCSI use Ethernet switches and standard NICs instead.
- Applications are able to run anywhere.

9.
- Physical servers represent the "before" illustration, running one application per server.
- VMware Converter can migrate physical machines to virtual machines running on ESX in the "after" illustration.

10. What is a Virtual Machine?
- Users see a software platform like a physical computer running an OS and application.
- The ESX hypervisor sees a discrete set of files on shared storage:
  - Configuration file (.vmx)
  - Virtual disk file (.vmdk)
  - NVRAM settings file
  - Log file

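That file-per-VM view can also be inspected programmatically. The sketch below is a minimal illustration using the later pyVmomi Python bindings (not a tool mentioned in this deck); the vCenter hostname, credentials and VM name are placeholder assumptions.

```python
# Minimal sketch: list the files that make up one VM (.vmx, .vmdk, nvram, logs).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk the inventory for a VM by name.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

# layoutEx.file enumerates every file backing the VM, with its role and size.
for f in vm.layoutEx.file:
    print(f"{f.type:15} {f.size:>12} bytes  {f.name}")

Disconnect(si)
```
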
11. ESX Architecture
- The hypervisor shares hardware resources (CPU, memory, disk and NIC) among VMs.
- Three ESX4 versions:
  - Standard ESX installed on supported hardware
  - ESXi installed on supported hardware (without the Service Console)
  - ESXi Embedded, hard-coded in OEM server firmware (not upgradeable)
- vSphere is also known as ESX4.

12. Storage Overview
- Industry storage technology: locally attached (internal or external DAS), Fibre Channel (high-speed SCSI on a SAN), iSCSI or IP SAN (SCSI over standard TCP/IP), and NAS (file-level shares on the LAN).
- VMware datastore format types: VMFS, NFS, and Raw Device Mappings (RDM).

13. ESX Datastore and VMFS
- Datastores are logical storage units on a physical LUN (disk device) or on a disk partition.
- Datastore format types are VMFS or NFS (RDMs are for individual VMs).
- Datastores can hold VM files, templates and ISO images, or the RDM pointer used to access raw data.
- A volume/LUN on the storage array is mounted on ESX as a VMFS datastore holding the VM files.

14. VMware Deployment Conclusions
- Adopting a SAN is a precondition for implementing the VMware server virtualization features that require "shared storage".
- SANs consolidate and share disk resources to save costs on wasted space and reduce outages and downtime.
- iSCSI is the IP SAN storage protocol of choice for organizations with tight budgets.
- FC is the SAN storage protocol of choice for mission-critical, high-performance applications.
- Choosing a storage system that supports both iSCSI and FC connections provides the most flexibility and scalability.

15. vSphere Storage Management and Efficiency Features

16. New vSphere Storage Features
- Storage efficiency
  - Virtual disk thin provisioning
  - Improved iSCSI software initiator
- Storage control
  - New vCenter storage capabilities
  - Dynamic expansion of VMFS volumes
- Storage flexibility and enhanced performance
  - Enhanced Storage VMotion
  - Pluggable Storage Architecture
  - Paravirtualized SCSI and DirectPath I/O

17. Thin Provisioning

18. Disk Thin Provisioning in a Nutshell
- Thin provisioning was designed to handle unpredictable VM application growth.
  - On the one hand, you don't want to over-allocate disk space that may never be used.
  - On the other hand, you don't want to under-allocate disk space, which forces an administrator to grow the disk later.
- Thin provisioning adopts a "shared disk pool" approach to disk capacity allocation, automating the underlying administration.
- All you have to do is ensure the overall disk pool never runs out of available space.

19. Disk Thin Provisioning Comparison
- Without thin provisioning (aka thick): if you create a 500 GB virtual disk, the VM consumes the entire 500 GB of VMFS datastore allocated to it.
- With thin provisioning: if you create a 500 GB virtual disk but only 100 GB is used, only 100 GB of the VMFS datastore is consumed, even though 500 GB is technically allocated to the VM for growth.
- Thin disks can be created when the VM is deployed or during a VM migration.

20. Disk Thin Provisioning Defined
- A method to increase the efficiency of storage utilization.
  - A VM's virtual disk sees its full size but uses only the amount of underlying storage its application needs (out of the datastore's shared pool).
  - The initial allocation of a virtual disk requires 1 MB of space in the datastore (the level of disk granularity).
  - Additional 1 MB chunks of storage are allocated as demand grows, with some capacity lost to metadata.
- Capacity allocation is comparable to airlines overbooking flights.
  - Airlines can reassign seats from booked passengers who do not show up.
  - Thin provisioning reallocates unused storage to other VMs while they continue to grow into the available capacity on the fly.

21. Without VMware / Without Thin Provisioning / With Thin Provisioning (diagram)
- Traditional servers with DAS (direct-attached storage): totally stranded storage devices; what you see is what you get.
- ESX servers on a SAN with a thick LUN: a 500 GB virtual disk with 100 GB of application usage leaves 400 GB unused but allocated.
- With a thin LUN, all VMs see the capacity allocated, but the LUN consumes only what is actually used.

22. Thin Provisioning
- Virtual machine disks consume only the amount of physical space in use at a given time.
  - The VM sees the full logical disk size at all times.
  - Full reporting and alerting on consumption.
- Benefits
  - Significant improvement in actual storage utilization.
  - Eliminates the need to over-provision virtual disk capacity.
  - Reduces storage costs by up to 50%.
  - Can convert "thick" to "thin" in conjunction with a Storage VMotion data migration.
- Example: 120 GB allocated to thin VM disks, with 60 GB used.

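As an illustration of creating the kind of thin-format virtual disk described above, here is a minimal sketch using the later pyVmomi Python bindings (shown for illustration only). The VM name, capacity and connection details are placeholder assumptions; the key point is the thinProvisioned flag on the disk backing.

```python
# Minimal sketch: add a thin-provisioned virtual disk to an existing VM.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

# Reuse the VM's existing SCSI controller and pick a free unit number (7 is reserved).
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))
used = {d.unitNumber for d in vm.config.hardware.device
        if getattr(d, "controllerKey", None) == controller.key}
unit = next(u for u in range(16) if u != 7 and u not in used)

disk = vim.vm.device.VirtualDisk()
disk.capacityInKB = 500 * 1024 * 1024            # 500 GB logical size
disk.controllerKey = controller.key
disk.unitNumber = unit
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk.backing.diskMode = "persistent"
disk.backing.thinProvisioned = True              # only written blocks consume datastore space

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
```
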
23. Virtual Disk Thin Provisioning Configured (screenshot)

24. Thin Disk Provisioning Operations (screenshot)

25. Improved Storage Management
- Datastores are now managed as objects within vCenter, showing all components in the storage layout and their utilization levels.
- Details for each datastore reveal which ESX servers are accessing its capacity.

26. Thin Provisioning Caveats
- There is capacity overhead in thin LUNs for handling individual VM allocations (metadata consumes some space).
- Check storage vendor compatibility for thin provisioning support (it depends on the storage hardware vendor).
- Understand how to configure alerts well in advance of running out of physical storage.
  - Hosts attempting to write to a completely full thin LUN can cause loss of the entire datastore.
- VMs with thin-provisioned disks do not work with VMware Fault Tolerance (FT requires eager-zeroed thick disks).

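Because the main operational risk above is a thin pool silently filling up, a simple capacity report can complement vCenter alarms. The sketch below is illustrative only (pyVmomi, placeholder connection details); summary.uncommitted is the space promised to thin disks but not yet written.

```python
# Minimal sketch: report free space and thin-provisioning overcommit per datastore.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)

GB = 1024 ** 3
for ds in view.view:
    s = ds.summary
    uncommitted = s.uncommitted or 0              # promised to thin disks, not yet written
    provisioned = s.capacity - s.freeSpace + uncommitted
    pct_free = 100.0 * s.freeSpace / s.capacity
    flag = "  <-- check capacity" if pct_free < 15 or provisioned > s.capacity else ""
    print(f"{s.name:20} cap {s.capacity / GB:7.1f} GB  free {s.freeSpace / GB:7.1f} GB "
          f"({pct_free:4.1f}%)  provisioned {provisioned / GB:7.1f} GB{flag}")
```
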
27. Thin Provisioning Conclusions
- For VMs expected to grow frequently or unpredictably, consider adopting thin provisioning while monitoring disk capacity utilization.
- The more VMs sharing a thin-provisioned datastore, the faster it will fill up, so size the initial capacity accordingly.
- If there is any risk of forgetting to grow a thin-provisioned datastore when it gets full, do not adopt thin provisioning.
- If a thin-provisioned LUN runs out of disk space, all data could be lost.

28. iSCSI Software Initiator

29. iSCSI Software Initiator in a Nutshell
- iSCSI is a more affordable storage protocol than Fibre Channel, but it is slower and better suited to lighter VM workloads.
- The vSphere iSCSI stack has been tweaked and tuned to use less CPU time and deliver better throughput.
  - Software iSCSI (NIC-based) runs at the ESX layer.
  - Hardware iSCSI uses an HBA leveraged by ESX.
- The vSphere iSCSI configuration process is easier and no longer requires a Service Console connection to communicate with the iSCSI target.

30. What is iSCSI?
- An IP SAN sends blocks of data over the TCP/IP protocol (a network traditionally used for file transfers).
- To address the cost of FC-switched SANs, storage vendors added support for basic Ethernet switch connections (GigE, 1000 Mbps).
- ESX hosts connecting to an iSCSI SAN require an initiator:
  - Software iSCSI relies on an Ethernet NIC.
  - Hardware iSCSI uses a dedicated HBA.
- Normally a host server connects through only one of the two storage connection types (FC SAN or IP SAN).

31. iSCSI Software Initiator Key Improvements
- The vSphere goal is CPU efficiency.
- The software iSCSI stack has been entirely rewritten.
  - (NIC-dependent protocols push the ESX CPU; 10 Gb NICs push the CPU roughly 10x harder.)
- ESX4 uses the optimized TCP/IP2 stack, tuned for IPv6, locking and multi-threading.
- Reduced use of atomics and pre-fetching of locks, with better use of internal locks (low-level ESX programming).
- Better cache memory efficiency through optimized cache affinity settings.

32. Why is Ordinary Software iSCSI Slow?
- iSCSI is sometimes referred to as a "bloated" protocol because of the high overhead and inefficiency of the IP network, compared with Fibre Channel Protocol running over a high-speed, dedicated FC SAN.
- The faster the network, the higher the drag on the host CPU for the added processing.

33. vSphere Software iSCSI Configuration
- iSCSI datastore configuration is easier and more secure.
- It no longer requires a Service Console connection to communicate with an iSCSI target (an unnecessary configuration step).
- New iSCSI initiator features include bi-directional CHAP authentication for better security (a two-way initiator/target handshake).
- Changes on the General tab are global and propagate down to each target.

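The same configuration can be scripted. The sketch below is a hedged illustration with pyVmomi (placeholder host name and target address); it enables the software initiator, adds a send target for dynamic discovery, and rescans. CHAP settings are left to the vSphere Client as described above.

```python
# Minimal sketch: enable the software iSCSI initiator on one ESX host,
# point it at a send target, and rescan for new devices and VMFS volumes.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
storage = host.configManager.storageSystem

storage.UpdateSoftwareInternetScsiEnabled(True)   # turn on the software initiator

# Find the software iSCSI HBA and register the array's discovery address.
hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
           if isinstance(a, vim.host.InternetScsiHba))
target = vim.host.InternetScsiHba.SendTarget(address="192.168.50.10", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

storage.RescanAllHba()                            # discover LUNs behind the target
storage.RescanVmfs()                              # pick up any VMFS datastores on them
```
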
34. iSCSI Performance Improvements (benchmark chart)
- The software iSCSI stack shows the biggest improvement.

35. Software iSCSI Conclusions
- Software iSCSI has been considered unsuitable for many VM application workloads because of its high overhead.
- vSphere has tuned its stack for sending VM block transmissions over TCP/IP (IP SAN) to offset the natively slower iSCSI protocol.
- Once 10 Gbit iSCSI is widely supported, heavier workloads will run better.
- Consider a test/dev environment to evaluate software iSCSI hosts prior to deploying mission-critical VMs.

36. Dynamic Expansion of VMFS Volumes

37. Dynamic Storage Growth in a Nutshell
- A VMFS virtual disk that a VM application has outgrown can now be dynamically expanded (with no reboot).
- Prior to vSphere, the only option for increasing the size of an existing VM's virtual disk was adding new LUN partitions ("spanning extents") rather than growing the original LUN.
- Corruption or loss of one extent (partition) in a spanned virtual disk resulted in the loss of all combined extents, which is risky.
- Hot Extend now allows the virtual disk to grow dynamically up to 2 TB.
- Thin provisioning governs capacity usage within the datastore; Hot Extend resizes the VM's virtual disk.

38. Without Hot Disk Extend: LUN Spanning
- Before: 20 GB; added: 20 GB; after: 40 GB.
- Each 20 GB extent (virtual disk) becomes a separate partition (a file system with its own drive letter) in the guest OS.
- If one spanned extent is lost, the entire volume becomes corrupt.

39. Hot Extend VMFS Volume Growth Option
- Hot Extend Volume Growth expands a LUN as a single extent so that it fills the available adjacent capacity (e.g., 20 GB grown to 40 GB).
- Used to increase the size of a virtual disk.
  - Only flat virtual disks in persistent mode.
  - No snapshots in virtual mode.
- vSphere 4 VMFS volumes can grow an expanded LUN up to a 2 TB virtual disk.

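Once the SAN administrator has grown the underlying LUN, the datastore can be expanded into the new space. The sketch below is a pyVmomi illustration under placeholder names; it relies on QueryVmfsDatastoreExpandOptions to compute a valid expand spec rather than building one by hand.

```python
# Minimal sketch: grow a VMFS datastore into free space on its (already grown) LUN.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
ds_system = host.configManager.datastoreSystem

ds = next(d for d in ds_system.datastore if d.name == "Datastore01")
options = ds_system.QueryVmfsDatastoreExpandOptions(datastore=ds)
if options:
    # Use the server-computed spec for the first expandable extent.
    ds_system.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
else:
    print("No adjacent free capacity found for", ds.name)
```
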
40. Dynamic Expansion Up to the VM
- Storage level: dynamic LUN expansion of the LUN presented as one datastore (SAN admin).
- ESX level: datastore volume growth for the datastore holding the VM virtual disks (ESX admin).
- VM guest OS level: hot virtual disk extend.

41. Virtual Disk Hot Extend Configuration
- Example: increase a virtual disk from 2 GB to 40 GB. It must be a non-system virtual disk.
- After updating the VM properties, use the guest OS to extend or format the file system so it uses the newly allocated disk space.
- Ultimate VM application capacity is not always predictable at the outset.

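The equivalent of that configuration step, scripted: a pyVmomi sketch (placeholder VM and disk label) that raises capacityInKB on an existing non-system disk. The guest file system still has to be extended inside the OS afterwards.

```python
# Minimal sketch: hot-extend an existing virtual disk to 40 GB.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

# Pick the data disk by its label (must not be the system/boot disk, per the slide).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 2")
disk.capacityInKB = 40 * 1024 * 1024              # new size: 40 GB

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
# Finish inside the guest OS: extend the partition/file system to use the new space.
```
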
42. Virtual Disk Hot Extend Conclusion
- VMs that require more disk space can use the new Virtual Disk Hot Extend to grow an existing LUN to a larger size.
- The alternative is to add "extents" representing separate file system partitions; the drawback is the loss of the entire virtual disk volume if just one of the added extents fails.
- Thin provisioning allows VM virtual disks to grow into their configured capacity by allocating space on the fly (avoiding wasted datastore space).
- Virtual Disk Hot Extend allows a VM guest OS to enlarge its original capacity (the VM application can grow beyond the originally configured size).

43. Storage VMotion

44. Storage VMotion in a Nutshell
- Storage VMotion (SVM) enables live migration of virtual machine disks from one datastore to another with no disruption or downtime.
- This hot migration of the storage location allows easy movement of a VM's data; like VMotion, it reduces service disruptions without server downtime.
- Minimizes disruption when rebalancing or retiring storage arrays, reducing or eliminating planned storage downtime.
- Simplifies array migrations and upgrades, reducing I/O bottlenecks by moving virtual machine disks while the VM remains up and running.
- ESX 3.5 limitations: FC datastores only, RCLI with no GUI, reliance on snapshot technology for migrations, experimental usage.

45. Enhanced Storage VMotion Features
- New GUI capabilities and full integration into vCenter.
- Migration from FC, iSCSI or NFS to any of the three storage protocols.
- Migrate from thick or thin LUNs to the opposite virtual disk format during Storage VMotion.
- A new Changed Block Tracking method moves the VM's home disk to a new datastore (without using a VM snapshot).
- Storage VMotion moves the location of the data while the VM stays online.

46. Storage VMotion Benchmarks (chart)
- Changed Block Tracking replaces snapshot technology: less CPU processing consumed, a shorter time to migrate data, and fewer resources consumed in the process.

47. Storage VMotion New Capabilities
- How the Changed Block Tracking scheme improves migrations:
  - Speeds up the migration process.
  - Reduces the former excessive memory and CPU requirements; it no longer requires 2x memory.
  - Leverages "fast suspend/resume" with changed block tracking to speed up migration.
- Supports moving VMDKs from thick to thin formats, or migrating RDMs to VMDKs.
  - RDMs support storage vendor agents that access the disk directly.

48. Storage VMotion Benefits
- Avoids the downtime required when coordinating the needs of application owners, virtual machine owners and storage administrators.
- Moves a running VM to a different datastore when performance suffers on an over-subscribed ESX host or datastore.
- Easily reallocates stranded or unclaimed storage (i.e., wasted space) non-disruptively by moving a VM to a larger-capacity storage LUN.

49. Storage VMotion: How it Works
1. Copy the VM to its new location on the destination array (e.g., from an FC source array to an iSCSI destination array).
2. Start tracking changes in a delta of changed blocks.
3. Fast suspend and resume the VM on the new destination disk.
4. Copy the remaining changed disk blocks.
5. Delete the original VM on the source.
- Use it to improve I/O workload distribution.

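Scripted, the whole sequence above reduces to a single relocate call; the platform handles the copy, changed-block catch-up and cleanup. A minimal pyVmomi sketch with placeholder VM and datastore names:

```python
# Minimal sketch: Storage VMotion a running VM's disks to another datastore.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find(vim.VirtualMachine, "app-server-01")
target_ds = find(vim.Datastore, "iscsi-datastore-02")

spec = vim.vm.RelocateSpec(datastore=target_ds)   # only the storage location changes
WaitForTask(vm.RelocateVM_Task(spec=spec))        # the VM stays powered on throughout
```
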
50. Storage VMotion Pre-requisites
- Prior to running Storage VMotion:
- Remove snapshots from the VMs to be migrated.
- RDMs must be in persistent mode (not virtual mode).
- The ESX host requires a VMotion license.
- The ESX host must have access to both the source and target datastores.
- You cannot VMotion (move the VM between hosts) concurrently with a Storage VMotion data migration.
- Up to four concurrent Storage VMotion migrations are supported.

51. Storage VMotion Conclusion
- The built-in GUI provides more efficient, flexible storage options and easier processing of data migrations while VMs are running.
- Most suitable uses include:
  - When a datastore becomes full.
  - When a VM's application data requires faster or slower disk access (tiered storage).
  - When moving data to a new storage vendor.
  - To migrate RDM to VMFS, or thick to thin (or vice versa).

52. Paravirtualized SCSI (PV SCSI)

53. Paravirtualized SCSI in a Nutshell
- PVSCSI is a high-performance virtual storage adapter.
- It follows the VMI paravirtualization standard supported by some guest OSs.
- Designed for virtual machine applications requiring better throughput and lower CPU utilization.
- Best suited for environments with very I/O-intensive guest applications.
- Improves efficiency by:
  - Reducing the cost of virtual interrupts.
  - Batching the processing of I/O requests.
  - Batching I/O completion interrupts.
  - Reducing the number of context switches between the guest and the VMM.

54. PV SCSI
- A Serial-Attached SCSI (SAS) paravirtualized PCIe storage adapter (PCI Express or local SCSI bus).
  - A virtual adapter with a hardware specification written by VMware, with drivers for Windows Server 2003/2008 and RHEL 5.
  - Provides functionality similar to VMware's BusLogic, LSILogic and LSILogic SAS adapters.
  - Supports MSI-X, PME and MSI capabilities in the device ("Message Signaled Interrupts" use in-band rather than out-of-band PCI memory space for lower interrupt latency).
- Configure the PVSCSI drive in the VM's settings.

55. PV SCSI Key Benefits
- Efficiency gains from PVSCSI can result in:
  - An additional 50 percent CPU savings for Fibre Channel (FC).
  - Up to 30 percent CPU savings for iSCSI.
- Lower overhead and higher CPU efficiency in I/O processing.
  - Higher throughput (92% higher IOPS) and lower latency (45% less latency).
  - Better VM scalability (more VMs/vCPUs per host).
- Configuring PVSCSI may require VM downtime to move virtual disks (.vmdk) to the new adapter. It only works on VM data drives; it is not supported for boot drives.

56. VMware Performance Testing: Reduced CPU Usage (chart)
- An additional 50% CPU savings for Fibre Channel (FC) and up to 30 percent CPU savings for iSCSI.
- Less CPU usage and overhead; FC HBAs offer the least overhead.

57. PV SCSI Configuration
- In the VM's properties, highlight the hard drive and select SCSI (1:0) or higher; you will see the new SCSI controller added.
- Click Change Type and select VMware Paravirtual.

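Adding the paravirtual adapter can also be done through the API. A minimal pyVmomi sketch (placeholder VM name); existing data disks would then be re-attached to the new controller, typically with the VM powered off.

```python
# Minimal sketch: add a second SCSI controller of type VMware Paravirtual (PVSCSI).
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

pvscsi = vim.vm.device.ParaVirtualSCSIController()
pvscsi.busNumber = 1                              # SCSI(1:x), keeping SCSI(0:x) for the boot disk
pvscsi.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=pvscsi)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
```
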
58. PV SCSI Use Cases
- The performance factors argue for adopting PVSCSI, but ultimately the decision depends mostly on the VM application workload.
- Other factors to consider include vSphere Fault Tolerance, which cannot be enabled on a VM using PVSCSI.
- VMware recommends creating a primary adapter for the disk that hosts the system software (boot disk) and a separate PVSCSI adapter for the disk that stores user data.

59. PV SCSI Conclusions
- PVSCSI improves VM access time to disk, positively impacting application performance through the storage stack (ESX storage adapter to SAN storage).
- The VM's paravirtualized SCSI adapter improves disk communication and application response time.
- With PVSCSI-supported hardware and a supported guest OS, simply create a new .vmdk file for a VM's data applications.

60. Pluggable Storage Architecture (PSA)

61. Pluggable Storage Architecture in a Nutshell
- Multipathing technology optimizes I/O throughput across multiple SAN connections between a host and a storage system (see the example on the next slide).
- Previously, the vmkernel could not use third-party storage plug-ins to spread I/O load across all available SAN fibre paths (known as "multipathing"), relying instead on the less efficient native MPIO scheme.
- vSphere now integrates third-party vendor solutions to improve host throughput and failover.
- Third-party plug-ins install on the ESX host and require a reboot, with functionality depending on the storage hardware controller type (active/active or active/passive).

62. Pluggable Storage Architecture
- ESX 3.5 did not support third-party storage vendor multipathing software; it required the native MPIO driver, which was not optimized for dynamic load balancing and failover.
- vSphere ESX 4 allows storage partners to write plug-ins for their specific capabilities.
- Dynamic multipathing and load balancing on active/active arrays replaces the less intelligent native multipathing (basic round-robin or failover).

63. Pluggable Storage Architecture (PSA)
- Two classes of third-party plug-ins:
  - Basic path-selection plug-ins (PSPs) optimize the choice of which path to use for active/passive arrays.
  - Full storage array type plug-ins (SATPs) allow load balancing across multiple paths and path selection for active/active arrays.
- Terminology: NMP is VMware's generic Native Multipathing (the default without a vendor plug-in); a PSP is a Path Selection Plug-in; a third-party PSP is a vendor-written path management plug-in; a SATP is a vendor Storage Array Type Plug-in.

64. Pluggable Storage Architecture (PSA)
- By default, VMware provides a generic multipathing plug-in (MPP) called NMP (Native Multipathing).

65. Enhanced Multipathing with Pluggable Storage Architecture
- Each ESX4 host applies one of the plug-in options based on the storage vendor's choices.

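To see which plug-in and path-selection policy each device actually ends up with, the host's multipath state can be read back. A pyVmomi sketch under placeholder names; the policy strings printed (e.g. VMW_PSP_FIXED, VMW_PSP_RR) are what NMP reports for devices it manages.

```python
# Minimal sketch: list each SAN device, its path-selection policy and its paths.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")

multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
for lun in multipath.lun:
    policy = lun.policy.policy if lun.policy else "n/a"
    print(f"{lun.id}  policy={policy}  paths={len(lun.path)}")
    for p in lun.path:
        print(f"    {p.name}  state={p.state}")
```
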
66. VMDirectPath I/O (Experimental)
- VMDirectPath I/O enables virtual machines to directly access underlying hardware devices by binding a physical FC HBA to a single guest OS.
- Enhances CPU efficiency for workloads that require constant and frequent access to I/O devices.
- This feature maps a single HBA to a single VM and does not allow the HBA to be shared by more than one virtual machine.
- Other virtualization features, such as VMotion, hardware independence and the sharing of physical I/O devices, are not available to virtual machines using VMDirectPath.

67. Third-party PSPs (vendor table)

68. Higher-Performance API for Multipathing
- Experimental support for the following storage I/O devices:
  - QLogic QLA25xx 8Gb Fibre Channel
  - Emulex LPe12000 8Gb Fibre Channel
  - LSI 3442e-R and 3801e (1068-chip based) 3Gb SAS adapters
- vSphere claims a 3x performance increase, to over 300,000 I/O operations per second, which bodes well for most mission-critical applications.

69. EMC PowerPath/VE
- Integrates the popular path management software directly into the ESX vmkernel, handling I/O below the VM, guest OS, application, database and file system, but above the HBA.
- All guest OS I/O runs through PowerPath using a "pseudo device" that acts like a traffic cop, directing processing to the appropriate data path.
- PowerPath/VE removes administrative overhead by providing 1) dynamic load balancing across ALL paths and 2) dynamic path failover and recovery.
- EMC's PowerPath/VE and the generic NMP cannot manage the same device simultaneously.
- Licensing is on a per-socket basis (like VMware).

70. PSA Conclusions
- The former ESX3 native MPIO (multipathing I/O) did not support third-party plug-ins, lumping all VM workloads onto a single path with no load balancing.
- vSphere's Pluggable Storage Architecture supports third-party plug-ins, with more choices for allocating multiple VMs across the ESX paths (typically four) from the host HBAs through the SAN down to the storage system.
- Storage vendors write their own plug-ins for PSA to manage dynamic path failover, failback and load balancing.
- Variations in the type of storage controller also affect PSA configuration.

71. Improved VM Availability and Failover

72. Fault Tolerance (FT)

73. FT in a Nutshell
- VMs running in an ordinary HA cluster of ESX hosts (with or without DRS) experience downtime during automatic failover if an ESX host goes down.
- FT-protected VMs in an HA cluster never go down: a ghost-image VM running on a second ESX host survives the loss of the primary VM running on the failed host.

74. HA vs. FT
- HA
  - A simple high-availability cluster solution, like MSCS.
  - Leads to VM interruptions during host failover.
- FT (new to vSphere)
  - Improves on HA by ensuring VMs never go down.
  - Designed for the most mission-critical applications.
  - Does not inter-operate with some other VMware features.

75. New Fault Tolerance
- Provides continuous protection for a VM when a host fails (takes VMware HA to the next level).
  - Included in the vSphere Advanced, Enterprise and Enterprise Plus editions.
- Limit of four FT-enabled VMs per ESX host.

76. Fault Tolerance (FT) Technology
- FT uses "Record and Replay" technology to record the primary VM's activity and play it back on the secondary VM.
- FT creates a ghost-image VM on another ESX host sharing the same virtual disk file as the primary VM.
  - Essentially both VMs function as a single VM.
- It transfers CPU and virtual device inputs from the primary VM (record) to the secondary VM (replay), relying on a heartbeat between the ESX hosts. The FT logging NIC carries the lockstep traffic.

77. FT Lockstep Technology
- Requires an identical processor on the secondary ESX host (clock speed within 400 MHz) to monitor and verify the operation of the first processor.
- The VM is kept in sync on the secondary ESX host, receiving the same inputs.
- Only the primary VM produces output (i.e., disk writes and network transmits).
- The secondary VM's output is suppressed by the network until it becomes a primary VM. Both hosts send heartbeat signals through the logging NICs.
- Essentially both VMs function as a single VM.

78. FT System Requirements
- ESX hardware requires the same family of processors.
  - Specific processors that support lockstep technology are needed.
  - HV (hardware virtualization) must be enabled.
  - Turn OFF power management in the BIOS (ESX can never be down).
- Primary and secondary ESX hosts must run the same build of ESX (no mixing of ESX3 and ESX4).
  - Primary and secondary ESX hosts must be in an HA cluster.
  - At least three ESX hosts in the HA cluster for every single host failure to be tolerated.
- At least gigabit NICs are required.
  - At least two teamed NICs on separate physical switches (one for VMotion, one for FT logging, and one NIC as a shared failover for both).
  - 10 Gbit NICs support jumbo frames for a performance boost.

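For completeness, FT protection itself is switched on per VM (the vSphere Client's "Turn On Fault Tolerance" action), which corresponds to creating the secondary VM through the API. The pyVmomi sketch below is an assumption-laden illustration (placeholder names, and it presumes the cluster already satisfies every requirement listed above).

```python
# Minimal sketch: enable FT on a VM by asking vCenter to create its secondary copy.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "payroll-db-01")

# Let vCenter pick a compatible host for the secondary; a specific host
# could be passed via the optional 'host' argument instead.
WaitForTask(vm.CreateSecondaryVM_Task())
```
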
79. Other FT Configuration Restrictions
- FT-protected VMs must share the same storage, with a no-single-point-of-failure design (multipathing, redundant switches, NIC teaming).
- No thin or sparse virtual disks; only eager-zeroed thick disks on VMFS-3 formatted volumes (otherwise FT converts them to thick).
- No datastores using RDM (Raw Device Mapping) in physical compatibility mode (virtual compatibility mode is supported).
- Remove MSCS clustering of VMs before protecting them with FT.
- No DRS with FT (only manual VMotion).
- No simultaneous Storage VMotion without disabling FT first.
- No FT VM backup using the vStorage API/VCB or VMware Data Recovery (these require snapshots, which are not supported with FT).
- No NPIV (N-Port ID Virtualization), which assigns unique HBA addresses to each VM sharing a single HBA.

80. Other FT Configuration Guidelines
- VM hardware must be upgraded to version 7.
- No support for VM paravirtualization (PVSCSI).
- No VM snapshots are supported with FT.
- No VMs with NPT/EPT (Nested/Extended Page Tables), hot-plug devices or USB.
- VMs cannot use more than one vCPU (SMP is not supported).
- Run the VMware Site Survey utility to verify software and hardware support.
- Enable host certificate checking (enabled by default) before adding the ESX host to vCenter Server.

81. FT Conclusions
- Fault Tolerance keeps mission-critical VMs online even if an ESX host fails, taking HA to the next level.
- Consider adoption if the CPU hardware, GigE NICs, VM hardware and guest OS support are available and there are VMs with high SLAs configured in an ESX HA cluster.
- FT currently does not integrate with SMP vCPUs, PVSCSI, VM snapshots, thin LUNs, RDM or Storage VMotion.
- SMBs might save on licensing costs with a competing product called Marathon everRun.

82. Data Recovery

83. Data Recovery in a Nutshell
- Backing up an ESX3 VM to tape using ordinary VCB is complex because of the difficult integration with tape libraries and third-party backup software.
- vSphere's Data Recovery solution copies ESX4 VMs to disk without the need for third-party backup software.
- Backup and recovery is implemented through a wizard-driven GUI that creates a disk-based backup of a VM on a separate disk.
- Data Recovery copies a VM's files to a different disk while the VM is running.
- Data Recovery uses VM snapshots to eliminate downtime.

84. Data Recovery
- Data Recovery provides faster restores to disk than tape-based backup solutions.
- The Data Recovery appliance is deployed as an OVF template (a pre-configured VM).
- The virtual appliance must be added to the vCenter Server inventory.
- Licensing is based on the number of ESX hosts being backed up.

85. vSphere Data Recovery vs. VCB
- New Data Recovery
  - Implemented via a virtual appliance (a pre-configured VM).
  - D2D (disk-to-disk) model: VM backups on shared disk.
  - Easy, wizard-driven backup and restore job creation for SMBs.
  - Agentless, disk-based backup and recovery tool, leveraging disk as the destination storage.
- Current VCB
  - Implemented via a VCB proxy server interacting with hosts and tape libraries (manual configuration).
  - D2D2T (disk-to-snapshot-to-tape) model.
  - Complicated integration between ESX host VMs, third-party backup software and tape hardware.
  - Agent-based or agentless solution designed for enterprise data protection.

86. Data Recovery Key Components (diagram)

87. Implementation Considerations
- Not compatible with ESX/ESXi 3.x / VirtualCenter 2.5 and older.
- VMs must be upgraded to hardware version 7 to leverage changed block tracking for faster generation of the changes to be transferred.
- Update VMware Tools on Windows VMs to enable VSS, which properly quiesces the VM prior to the snapshot.
- Does not back up the snapshot tree (only active VMs).
- Destination disk selection impacts performance ("you get what you pay for").
- Using shared storage allows off-LAN backups, leading to faster data transfer and minimal LAN load.

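The quiescing behavior referenced above (VMware Tools/VSS) shows up in the snapshot call itself. A minimal pyVmomi sketch with placeholder names, illustrating the kind of quiesced, memory-less snapshot a disk-based backup takes before copying the VM's files:

```python
# Minimal sketch: take a quiesced snapshot (no memory state) such as a backup
# job would create before reading the VM's disks, then remove it afterwards.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "file-server-01")

WaitForTask(vm.CreateSnapshot_Task(name="pre-backup",
                                   description="quiesced via VMware Tools/VSS",
                                   memory=False, quiesce=True))

# ... the backup job would copy the VM's files here ...

snap = vm.snapshot.currentSnapshot                 # consolidate once the backup is done
WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))
```
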
88. Next Evolution of VCB Shipping with vSphere (diagram)
- An improved API enables native integration with partner backup applications.

89. Data Recovery Conclusions
- Prior to vSphere, backup was a complicated command-line configuration requiring integration with a VCB proxy server, tape library drivers and backup software.
- vSphere Data Recovery is a good substitute for VCB when adequate disk storage is available and backup to tape is less essential.
- VM downtime or interruption for Data Recovery backup and restore to disk is non-existent, thanks to VM snapshot technology.
- The next revision of VCB will integrate the two solutions.

90. New vSphere Licensing

91. Understanding vSphere Licensing
- vSphere has a fully redesigned licensing scheme.
  - VMware is no longer issuing VI 3 licenses.
- License administration is built directly into vCenter Server.
  - No separate license server.
- A single key contains all advanced features for an ESX host.
  - License keys are simple 25-character strings instead of complex text files.
  - Each key encodes a CPU quantity that determines how many ESX host CPUs can use it; keys can be split among multiple ESX hosts.
  - If you upgrade to add DRS or other features, you receive a replacement license key.

92. Legacy VI3 vCenter License Server Topology
- Licenses were stored on a license server in the datacenter, running on a separate VM or server alongside VirtualCenter Server, its database and Active Directory.
- When an ESX server booted, it learned from the license server the available per-processor licenses and supported features.
- Before ESX3, licenses were installed locally for each feature.

93. New vSphere ESX License Configuration
- Click License Features on the Configuration tab to add a license key (the 25-character key bundling all features), view licensed product features and assign the key to an ESX host.
- Operates only on ESX4 hosts.
- A license server stays in place if you upgrade to vSphere; it affects only legacy hosts.
- In the navigation bar: Home -> Administration -> Licensing.

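The same add-and-assign flow can be driven through the vSphere API's license manager. A pyVmomi sketch under placeholder names; the 25-character key below is fictitious.

```python
# Minimal sketch: register a 25-character vSphere license key in vCenter and
# assign it to one ESX host.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

KEY = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"              # placeholder key
content.licenseManager.AddLicense(licenseKey=KEY)  # add to vCenter's inventory of keys

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")

assigner = content.licenseManager.licenseAssignmentManager
assigner.UpdateAssignedLicense(entity=host._moId, licenseKey=KEY,
                               entityDisplayName=host.name)
```
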
94. Upgrading to vSphere License Keys
- Existing VI 3.x licenses will not work on ESX 4.
- You must activate new vCenter 4 and ESX 4 license keys, received via email and added to the portal.
- Customers can log into the license portal at http://www.vmware.com/licensing/license.portal

95. New License Count
- This example shows how the new and old license counts map.
- NOTE: VMware vSphere licenses are sold in 1-CPU increments, whereas VMware Infrastructure 3.x licenses were sold in 2-CPU increments.
- Single-CPU licensing is now available, so a 2-CPU license may be split and used on two single-CPU physical hosts.

96. License Downgrade Options
- Available if you purchased vSphere licenses and wish to convert them to VI 3.
- Allowed for Standard, Advanced, Enterprise and Enterprise Plus.
- Example: 20 vSphere Advanced licenses downgrade to 10 dual-CPU VI3 Standard licenses via the vSphere Licensing Portal (2-CPU licenses per ESX3 host).
- ESX 4 Single CPU and Essentials (Plus) are not downgradeable.

97. vSphere Upgrade Requirements
- Some downtime is required to upgrade from VI 3.x environments to vSphere 4.
- Upgrade vCenter Server.
- Upgrade ESX/ESXi hosts.
- Upgrade VMs to hardware version 7 (the new hardware version).
- Optionally adopt PVSCSI (the new paravirtualized SCSI driver).

98. vSphere Compatibility Lists
- Compatibility of existing VMware products
  - View, Lab Manager and Site Recovery Manager are not on the list yet.
- Hardware and guest OS compatibility lists
  - Check the minimum levels of underlying memory and CPU.
- Database and patches for vCenter Server
  - Oracle 9i and SQL 2000 are no longer supported.
- Take a complete backup of the vCenter Server and its database prior to the upgrade.

99. Survey of Upgrade Timing
- The majority of 140 votes are waiting at least 3-6 months.
- The preference to allow some time before implementation (the survey shows 6 months) indicates interest in fuller support for vSphere 4 in the near future.

100. VMware Upgrade Conclusion
- Consider adopting or upgrading to vSphere if your server hardware is 64-bit with hardware VT assist and the other hardware and guest OS prerequisites are met for the new vSphere feature sets you want.
- If your VMs are mission-critical (can never go down) or involve heavy workloads (needing faster processing), consider a vSphere adoption/upgrade.
- Software iSCSI performance and single-processor licensing are attractive to SMBs.
- If you currently have a support contract, the upgrade process is easy. If you are deploying vSphere for the first time without 64-bit servers, a license downgrade will be required.

101. Q&A
- How would you describe your current datacenter?
- What have you identified as your biggest datacenter issues?
- What is the estimated timing of your next upgrade initiative?
- Have you deployed 64-bit server hardware?
- What stage of SAN adoption are you at?
- Is your backup window shrinking?
- Please feel free to send us any questions after the presentation.

102. VMware on SAN Design Questions
- How many servers do I currently have, and how many do I add every year (aka server sprawl)?
- How much time do IT staff spend setting up new servers with operating systems and applications?
- How often do my servers go down?
- Is our IT budget shrinking?
- How difficult is it to convert a physical machine to a virtual machine with each option?
- What hardware and guest OSs are supported?

103. Vendor-Neutral Design Benefits
- There are four main benefits to designing SANs independent of any single equipment vendor.
- The relative importance of these benefits will change depending on your priorities and which vendors you choose.
- The benefits in some cases are:
  - Lower costs
  - Getting the best possible technology
  - Greater flexibility for future technology improvements
  - Non-proprietary, non-exclusive models

104. Closing Remarks
- Thank you for joining us.
- Let us know how we can assist you with your next datacenter project.

Copyright I/O Continuity Group, LLC
