Exploring Stretched Clusters

This presentation was given at VMware Partner Exchange (PEX) 2012 in Las Vegas at the EMC boot camp. It compares stretched clusters with VMware Site Recovery Manager (SRM) and offers best practices for building stretched clusters when that is the right solution.

Speaker notes
  • N+2 redundancy allows for a host failure in each site (no real control over which servers can fail)
  • "Should" rules allow vSphere HA to violate them if the need arises (such as restarting a VM in the other site)
  • Deploy active/active storage; understand read/write performance penalties; look for architectures that can minimize or eliminate such penalties
  • Moving VMs between replicated/synchronized datastores could cause additional storage traffic and impact performance
  • This protects against loss of connectivity to vCenter Server due to an inter-site link outage
  • Consider the use of vApps; ensure that vApps will actually help

    1. Before we start
       • Get involved! Ask questions and participate.
       • If you use Twitter, feel free to tweet about this session (use hashtag #VMwarePEX or #PEX)
       • I encourage you to take photos or videos of this session and share them online
       • This presentation will be made available online after the event
    2. EXPLORING STRETCHED CLUSTERS
       Examining the use of stretched clusters in a VMware vSphere environment
       Scott Lowe, VCDX 39
       CTO, VMware Affinity Team
       Author, Mastering VMware vSphere 5
       Blogger, http://blog.scottlowe.org
    3. Part 1: Stretched Cluster or SRM?
    4. Part 1 Agenda
       • Quick review of terminology
       • Comparing SRM and vMSC requirements
       • Comparing SRM and vMSC advantages
       • Comparing SRM and vMSC disadvantages
       • Mixing SRM and vMSC
    5. New vMSC HCL category
       • vMSC = vSphere Metro Storage Cluster
       • Introduces some new terms:
          • Uniform access = "stretched SAN"
          • Non-uniform access = "distributed virtual storage"
       • Provides boundaries for supportability of stretched cluster configurations
    6. RPO versus RTO
       • RPO = Recovery Point Objective
          • RPO is a measure of how much data loss the organization is willing to sustain
       • RTO = Recovery Time Objective
          • RTO is a measure of how long the organization is willing to wait before recovery is complete
    7. DR versus DA
       • DA = Disaster avoidance
          • Seeks to protect applications and data before a disaster occurs
          • How often do you know in advance that a disaster is going to occur?
       • DR = Disaster recovery
          • Seeks to recover applications and data after a disaster occurs
       • Think of DA as vMotion and DR as vSphere HA
    8. Requirements for SRM
       • Some form of supported storage replication (synchronous or asynchronous)
       • Layer 3 connectivity
       • No minimum inter-site bandwidth requirements (driven by SLA/RPO/RTO)
       • No maximum latency between sites (driven by SLA/RPO/RTO)
       • At least two vCenter Server instances
    9. Requirements for vMSC
       • Some form of supported synchronous active/active storage architecture
       • Stretched Layer 2 connectivity
       • 622 Mbps bandwidth (minimum) between sites
       • Less than 5 ms latency between sites (10 ms with vSphere 5 Enterprise Plus/Metro vMotion)
       • A single vCenter Server instance
    10. Advantages of SRM
       • Defined startup orders (with prerequisites)
       • No need for stretched Layer 2 connectivity (but supported)
       • The ability to simulate workload mobility without affecting production
       • Supports multiple vCenter Server instances (including in Linked Mode)
    11. Advantages of vMSC
       • The possibility of non-disruptive workload migration (disaster avoidance)
       • No need to deal with the issues of changing IP addresses
       • Potential for running active/active data centers and more easily balancing workloads between them
       • Typically a near-zero RPO with an RTO of minutes
       • Requires only a single vCenter Server instance
    12. Disadvantages of SRM
       • Typically higher RPO and RTO than stretched clusters
       • Workload mobility is always disruptive
       • Requires at least two vCenter Server instances
       • Operational overhead from managing protection groups and protection plans
    13. Disadvantages of vMSC
       • Greater physical networking complexity due to the stretched Layer 2 connectivity requirement
       • Greater cost resulting from higher-end networking equipment, more bandwidth, and an active/active storage solution
       • No ability to test workload mobility
       • Operational overhead from management of DRS host affinity groups
       • Supports only a single vCenter Server instance
    14. What about a mixed architecture?
       • It can be done, but it has its own set of design considerations
       • For any given workload, it's an "either/or" situation
    15. Diagram of a mixed architecture
    16. Part 2: Building Stretched Clusters
    17. Part 2 Agenda
       • vSphere recommendations
       • Storage recommendations
       • Networking recommendations
       • Operational recommendations
    18. vSphere recommendations
       • Use vSphere 5
       • Use DRS host affinity groups
       • Run vSphere HA with N+2 capacity
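To make the N+2 guidance concrete, here is a minimal PowerCLI sketch (vCenter and cluster names are hypothetical) that enables vSphere HA and DRS and has admission control reserve capacity for two host failures, so a host can be lost in each site and the surviving hosts can still restart the affected VMs.

```powershell
# Minimal sketch: enable HA/DRS and reserve N+2 failover capacity.
# "vcenter.example.com" and "StretchedCluster01" are illustrative names.
Connect-VIServer -Server "vcenter.example.com"

Get-Cluster -Name "StretchedCluster01" |
    Set-Cluster -HAEnabled:$true `
                -HAAdmissionControlEnabled:$true `
                -HAFailoverLevel 2 `
                -DrsEnabled:$true `
                -Confirm:$false
```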
    19. Use vSphere 5
       • vSphere 5 eliminates some vSphere HA limitations
       • vSphere 5 introduces the vMSC HCL category
    20. Use vSphere DRS host affinity groups
       • Allows you to mimic site awareness
       • Use PowerCLI to address manageability concerns
       • Use "should" rules instead of "must" rules
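A hedged PowerCLI sketch of the host affinity approach: group Site A hosts and VMs, then bind them with a "should run on" rule so vSphere HA can still restart the VMs in the other site if needed. The DRS group cmdlets shown here come from later PowerCLI releases (2012-era scripts drove the same API through Get-View), and all cluster, host, and VM naming patterns are illustrative assumptions.

```powershell
# Sketch only: build Site A host/VM groups and a "should run on" affinity rule.
# Cluster, host, and VM naming patterns are assumptions for illustration.
$cluster    = Get-Cluster -Name "StretchedCluster01"
$siteAHosts = Get-VMHost -Location $cluster -Name "esx-a-*"
$siteAVMs   = Get-VM     -Location $cluster -Name "a-*"

$hostGroup = New-DrsClusterGroup -Name "SiteA-Hosts" -Cluster $cluster -VMHost $siteAHosts
$vmGroup   = New-DrsClusterGroup -Name "SiteA-VMs"   -Cluster $cluster -VM $siteAVMs

# "ShouldRunOn" (not "MustRunOn") lets vSphere HA violate the rule during a site failure
New-DrsVMHostRule -Name "SiteA-VMs-prefer-SiteA" -Cluster $cluster `
    -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn
```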
    21. Using PowerCLI with host affinity groups
       • Use some sort of unique property to "group" VMs
       • Use this "grouping" to automate VM placement into groups
       • Run the PowerCLI script regularly to ensure correct VM group assignment
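One way to implement the slide above, as a sketch: use a naming convention (a hypothetical site prefix in the VM name) as the grouping property, and re-run the script on a schedule so new or renamed VMs land in the correct DRS VM group. Group names and prefixes are assumptions, not from the deck.

```powershell
# Sketch: keep DRS VM groups in sync with a site prefix in the VM name.
# Group names and the "a-"/"b-" prefixes are illustrative assumptions.
$cluster = Get-Cluster -Name "StretchedCluster01"
$groupA  = Get-DrsClusterGroup -Cluster $cluster -Name "SiteA-VMs"
$groupB  = Get-DrsClusterGroup -Cluster $cluster -Name "SiteB-VMs"

foreach ($vm in Get-VM -Location $cluster) {
    if ($vm.Name -like "a-*" -and $groupA.Member.Name -notcontains $vm.Name) {
        Set-DrsClusterGroup -DrsClusterGroup $groupA -VM $vm -Add   # add missing Site A VMs
    }
    elseif ($vm.Name -like "b-*" -and $groupB.Member.Name -notcontains $vm.Name) {
        Set-DrsClusterGroup -DrsClusterGroup $groupB -VM $vm -Add   # add missing Site B VMs
    }
}
```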
    22. Storage recommendations
       • Use storage from the vMSC category
       • Be aware of storage performance considerations
       • Account for storage availability
       • Plan Storage DRS carefully
       • Use profile-driven storage
    23. Account for storage availability
       • Consider cross-connect topology
       • Ensure multiple storage controllers at each site for availability
       • Provide redundant and independent inter-site storage connections
       • With VPLEX, use the third-site cluster witness
    24. Plan Storage DRS carefully
       • Align datastore clusters to site/array boundaries
       • Don't combine stretched/non-stretched datastores
       • Understand the impact of SDRS on your storage solution
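A sketch of the first two bullets above, with hypothetical datacenter, datastore-cluster, and datastore names; it assumes a PowerCLI release whose Move-Datastore cmdlet can place datastores into datastore clusters.

```powershell
# Sketch: one datastore cluster for stretched datastores and one for Site B
# local datastores, so Storage DRS never mixes the two. Names are illustrative.
$dc = Get-Datacenter -Name "Metro-DC"

$stretchedDsc = New-DatastoreCluster -Name "Stretched-Datastores"   -Location $dc
$siteBDsc     = New-DatastoreCluster -Name "SiteB-Local-Datastores" -Location $dc

Get-Datastore -Name "stretched-*"   | Move-Datastore -Destination $stretchedDsc
Get-Datastore -Name "siteb-local-*" | Move-Datastore -Destination $siteBDsc
```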
    25. Use profile-driven storage
       • Use user-defined capabilities to model site topology
       • Create VM storage profiles to provide site affinity
       • Can help avoid operational concerns with VM placement
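The slide refers to vSphere 5 user-defined storage capabilities; the sketch below expresses the same idea with the later tag-based SPBM cmdlets, since that is the mechanism available in current PowerCLI. Tag, datastore, and policy names are assumptions.

```powershell
# Sketch: model site topology with a datastore tag, then build a storage policy
# that only matches Site A datastores. All names are illustrative.
$siteCat  = New-TagCategory -Name "Site" -Cardinality Single -EntityType Datastore
$siteATag = New-Tag -Name "SiteA" -Category $siteCat

# Tag every datastore presented from the Site A array
foreach ($ds in Get-Datastore -Name "sitea-*") {
    New-TagAssignment -Tag $siteATag -Entity $ds
}

# Policy used by VM storage profiles/placement to provide site affinity
$rule    = New-SpbmRule -AnyOfTags $siteATag
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "SiteA-Affinity" -AnyOfRuleSets $ruleSet
```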
    26. Networking recommendations
       • Plan for different traffic patterns
       • Where possible, separate management traffic onto a vSwitch
       • Incorporate redundant and independent inter-site network connections
       • Minimize latency as much as possible
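For the second bullet, a sketch of carving management traffic onto its own vSwitch on one host; host name, uplink NICs, and IP addressing are all hypothetical.

```powershell
# Sketch: dedicated vSwitch and vmkernel port for management traffic.
# Host name, uplink NICs, and addressing are illustrative assumptions.
$esx     = Get-VMHost -Name "esx-a-01.example.com"
$uplinks = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic4","vmnic5"
$vswitch = New-VirtualSwitch -VMHost $esx -Name "vSwitch-Mgmt" -Nic $uplinks

New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vswitch -PortGroup "Management" `
    -IP "192.168.10.11" -SubnetMask "255.255.255.0" -ManagementTrafficEnabled:$true
```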
    27. Operational recommendations
       • Account for backup/restore in your design
       • Handle inter-site vMotion carefully
       • Don't split multi-tier apps across sites
    28. Backup/restore for stretched clusters
       • Consider a solution with client-side deduplication to reduce WAN traffic
       • A mechanism to reduce restore traffic would be nice to have as well
       • Might be able to leverage the storage solution itself for restores
          • Restore to the local side
          • Allow the storage solution to replicate to the remote side
    29. Handling inter-site vMotion
       • An inter-site vMotion will impact DRS host affinity rules
       • An inter-site vMotion could require storage configuration updates
       • Review inter-site vMotions to:
          • Reconcile DRS host affinity rules and VM locations
          • Reconcile storage availability rules and VM locations
          • Assess the impact on other operational areas
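A sketch of that reconciliation step: flag any VM in a site's DRS VM group that is now running on a host outside that site's host group after an inter-site vMotion. Cluster and group names carry over from the earlier hypothetical examples.

```powershell
# Sketch: report Site A VMs that ended up on hosts outside Site A.
# Cluster and group names are the same illustrative ones used earlier.
$cluster    = Get-Cluster -Name "StretchedCluster01"
$siteAVMs   = (Get-DrsClusterGroup -Cluster $cluster -Name "SiteA-VMs").Member
$siteAHosts = (Get-DrsClusterGroup -Cluster $cluster -Name "SiteA-Hosts").Member

foreach ($vm in $siteAVMs) {
    $runningOn = (Get-VM -Name $vm.Name).VMHost        # refresh current placement
    if ($siteAHosts.Name -notcontains $runningOn.Name) {
        Write-Warning "$($vm.Name) is running on $($runningOn.Name), outside Site A"
    }
}
```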
    30. Questions & Answers
    31. © Copyright 2012 EMC Corporation. All rights reserved.
