
Elastic vSphere?


This presentation discusses design considerations around the use of stretched clusters with VMware vSphere. It was presented to the Denver VMUG on September 28, 2010.



  1. Elastic vSphere?
     Design Considerations for Building Stretched Clusters
     Scott Lowe, vSpecialist, EMC Corporation
     VCDX #39, VMware vExpert
  2. Agenda
     - Reasons for building stretched clusters
     - Storage configurations for stretched clusters
     - Design considerations for stretched clusters
     - EMC VPLEX in stretched clusters
     - Q&A
  3. Reasons for Building Stretched Clusters
     Valid reasons:
     - Provide high availability across sites
     - Balance workloads across sites
     Invalid reasons:
     - Because you can / because it's cool (is that a valid business justification?)
     - Enable vMotion over distance (stretched clusters are not a prerequisite)
     - Use vMotion as a DR mechanism (vMotion doesn't apply when both ends aren't up and available)
  4. Storage Configurations for Stretched Clusters
     A review of storage configurations to support stretched cluster designs
  5. Stretched SAN Configuration
     - Literally just stretching the SAN fabric between locations
     - Requires synchronous replication
     - Limited in distance to ~100 km in most cases
     - Typically read/write in one location, read-only in the second location
     - Implementations with only a single storage controller at each location create a single point of failure (SPoF)
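The ~100 km limit for synchronous replication follows from propagation delay in fiber: every write must be acknowledged by the remote site before it completes. A back-of-the-envelope sketch (the figures are approximations, not from the talk):

```python
# Rough estimate of the write-latency penalty from synchronous
# replication over distance. Light in fiber travels at roughly
# 200,000 km/s (about 2/3 of c), so ~200 km per millisecond, and a
# synchronous write needs at least one full round trip before it
# can be acknowledged.

FIBER_SPEED_KM_PER_MS = 200.0  # approximate propagation speed in fiber

def min_replication_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay added to every write."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (10, 50, 100):
    print(f"{km:>3} km: >= {min_replication_rtt_ms(km):.1f} ms per synchronous write")
```

At 100 km this works out to at least 1 ms of added latency per write, before any switching or array overhead, which is why the practical distance cap sits where it does.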
  6. Stretched SAN Configuration
     [Diagram: stretched SAN with read/write storage at one site and read-only storage at the other]
  7. Distributed Virtual Storage Configuration
     - Leverages new storage technologies to distribute storage across multiple sites
     - Requires synchronous mirroring
     - Limited in distance to ~100 km in most cases
     - Read/write storage in both locations; employs data locality algorithms
     - Typically uses multiple controllers in a scale-out fashion
     - Must address "split brain" scenarios
  8. Distributed Virtual Storage Configuration
     [Diagram: distributed virtual storage with read/write access at both sites]
  9. EMC VPLEX Overview
     - EMC VPLEX falls into the distributed virtual storage category
     - Keeps data synchronized between two locations while providing read/write storage simultaneously at both
     - Uses a scale-out architecture with multiple engines per cluster and two clusters in a Metro-Plex
     - Supports both EMC and non-EMC arrays behind the VPLEX
  10. Preferred Site in VPLEX Metro
      - VPLEX Metro provides read/write storage in two locations at the same time (AccessAnywhere)
      - In a failure scenario, VPLEX uses "detach rules" to prevent split brain
      - A preferred site is defined on a per-distributed-virtual-volume basis
      - The preferred site remains read/write; I/O is suspended at the non-preferred site
      - Invoked only by entire cluster failure, entire site failure, or cluster partition
      [Diagram: distributed virtual volume spanning preferred and non-preferred sites, connected by IP/FC links for the Metro-Plex]
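The detach-rule behavior can be sketched as a toy decision function (illustrative only, not VPLEX's actual logic; the function and site names are made up):

```python
# Sketch of how a per-volume preferred site avoids split brain: when
# the inter-site link is lost, only the preferred site keeps the
# distributed virtual volume read/write; the other site suspends I/O
# rather than risk divergent writes.

def volume_state_after_partition(site: str, preferred_site: str) -> str:
    """State of a distributed virtual volume at `site` once the
    cluster partitions and the detach rule fires."""
    return "read/write" if site == preferred_site else "io-suspended"

print(volume_state_after_partition("A", preferred_site="A"))  # read/write
print(volume_state_after_partition("B", preferred_site="A"))  # io-suspended
```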
  11. Design Considerations for Stretched Clusters
      A review of design considerations and design impacts when using stretched clusters
  12. Stretched Cluster Considerations #1
      Consideration: without read/write storage at both sites, roughly half the VMs incur a storage performance penalty.
      With stretched SAN configurations:
      - VMs running in one site access storage in the other site
      - This adds latency to every I/O operation
      With distributed virtual storage configurations:
      - Read/write storage is provided at both sites, so this doesn't apply
  13. Stretched Cluster Considerations #2
      Consideration: prior to vSphere 4.1, you can't control HA/DRS behavior.
      With stretched SAN configurations:
      - Additional latency is introduced when VM storage resides in the other location
      - Storage vMotion is required to remove this latency
      With distributed virtual storage configurations:
      - Need to keep cluster behaviors in mind
      - Data is accessed locally thanks to data locality algorithms
  14. Stretched Cluster Considerations #3
      Consideration: with vSphere 4.1, you can use DRS host affinity rules to control HA/DRS behavior.
      With all storage configurations:
      - Doesn't address HA primary/secondary node selection
      With stretched SAN configurations:
      - Beware of single-controller implementations
      - Storage latency is still present in the event of a controller failure
      With distributed virtual storage configurations:
      - Plan for cluster failure/cluster partition behaviors
  15. Stretched Cluster Considerations #4
      Consideration: host affinity rules don't affect VMware HA admission control.
      With all storage configurations:
      - Must configure admission control for a 50% failure in order to guarantee resource availability
      - Can't configure "per site" admission control rules
      - Impacts the reasons people build stretched clusters, especially workload balancing
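To see what the 50% rule costs, a quick sketch with hypothetical cluster figures (the host count and GHz values are invented for illustration):

```python
# Why a stretched cluster pays a steep admission-control tax: since
# either entire site can fail, HA must hold back half the cluster's
# resources, not the ~10-25% a single-site cluster might reserve.

def usable_capacity_ghz(total_ghz: float, reserved_pct: int) -> float:
    """CPU capacity left for running VMs after HA admission control
    reserves `reserved_pct` percent for failover."""
    return total_ghz * (100 - reserved_pct) / 100

total = 8 * 24.0  # hypothetical: 8 hosts x 24 GHz each = 192 GHz
print(usable_capacity_ghz(total, 25))  # single-site-style reserve: 144.0 GHz usable
print(usable_capacity_ghz(total, 50))  # stretched-cluster reserve: 96.0 GHz usable
```

Half the cluster sits idle as failover headroom, which undercuts the workload-balancing argument for building the stretched cluster in the first place.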
  16. Stretched Cluster Considerations #5
      Consideration: there is no supported way to control VMware HA primary/secondary node selection.
      With all storage configurations:
      - Limits cluster size to 8 hosts (4 in each site)
      - das.preferredprimaries is not a supported mechanism for controlling primary/secondary node selection
      - Methods for increasing the number of primary nodes are also not supported by VMware
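The 8-host limit follows from classic VMware HA electing 5 primary nodes: if all 5 land in one site and that site fails, no failover occurs. With at most 4 hosts per site, the pigeonhole principle guarantees a surviving primary. A brute-force check of that reasoning (illustrative sketch, not VMware's election code):

```python
from itertools import combinations

# Classic VMware HA elects 5 primary nodes; failover coordination
# stops if every primary is lost at once. This checks whether some
# election outcome puts all 5 primaries in a single site.

PRIMARY_COUNT = 5

def site_failure_can_kill_all_primaries(hosts_per_site: int,
                                        total_hosts: int) -> bool:
    """True if any possible primary election places all primaries
    within one site of `hosts_per_site` hosts."""
    hosts = range(total_hosts)
    site_a = set(range(hosts_per_site))  # model one site's hosts
    return any(set(p) <= site_a for p in combinations(hosts, PRIMARY_COUNT))

print(site_failure_can_kill_all_primaries(4, 8))   # False: 4 hosts can't hold 5 primaries
print(site_failure_can_kill_all_primaries(5, 10))  # True: one site could hold all 5
```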
  17. Stretched Cluster Considerations #6
      Consideration: stretched HA/DRS clusters require Layer 2 adjacency (or equivalent) at the network layer.
      With all storage configurations:
      - Complicates the network infrastructure
      - Involves technologies like OTV and VPLS/Layer 2 VPNs
      With stretched SAN configurations:
      - Can't leverage vMotion at distance without storage latency
      With distributed virtual storage configurations:
      - Data locality enables vMotion at distance without latency
  18. Stretched Cluster Considerations #7
      Consideration: the network lacks site awareness, so stretched clusters introduce new networking challenges.
      With all storage configurations:
      - The movement of VMs from one site to another doesn't update the network
      - VM movement causes "horseshoe routing" (LISP, an emerging networking standard, helps address this)
      - You'll need to use multiple isolation addresses in your VMware HA configuration
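One way to handle the isolation-address point is to give HA a pingable address in each site via the das.isolationaddress advanced options, so a host can distinguish losing one site's network from being fully isolated. A sketch of the cluster advanced settings (the IP addresses are placeholders):

```
das.isolationaddress1 = 10.1.0.1    # example: gateway reachable in site A
das.isolationaddress2 = 10.2.0.1    # example: gateway reachable in site B
das.usedefaultisolationaddress = false
```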
  19. Horseshoe Routing
      [Diagram: horseshoe routing]
  20. Stretched Cluster Recommendations
      - Use separate HA/DRS clusters in each datacenter
      - Use separate distributed VMFS datastores for each cluster
      - Use vMotion to move VMs as needed between clusters
      - Keep preferred/non-preferred site behavior in mind!
      - Try to keep related VMs together in a site
      - Change detach rules to switch the preferred site if necessary
      - A VMware KB article is available discussing HA/DRS clusters with VPLEX
  21. For More Information…
      - VMware support with NetApp MetroCluster:
      - Using VPLEX Metro with VMware HA:
      - vMotion over Distance Support with VPLEX Metro:
      - The Case For and Against Stretched ESX Clusters:
  22. Q&A