Building Business Continuity Solutions With Hyper-V

Slide notes
  • STEP 3 – optionally, DPM captures System State of operating system every week
  • RECOVERY STEP 1 = VMM automates starting of Virtual Machines within host(s); RECOVERY STEP 2 (option A) = DPM restores data into VMs
  • RECOVERY STEP 1 = VMM automates starting of Virtual Machines within host(s); RECOVERY STEP 2 (option B) = LUNs from DPM reattached to VM heads, using iSCSI initiators
  • Users re-attach to production machines, unaware that their servers are now virtualized stand-ins.
  • Live Migration migrates a running VM with minimal disruption of the VM's services. The goal is that open TCP/IP connections do not time out. Live Migration steps: establish a connection between source and target host; transfer VM configuration and device information; transfer VM memory; suspend the source VM and transfer state; resume the target VM.
  • CSV enables multiple Windows Servers to access SAN storage with a single consistent namespace for all volumes on all hosts. Multiple hosts can access the same Logical Unit Number (LUN) on SAN storage.
  • Cluster Shared Volumes (CSV) implements hybrid sharing for LUNs. One node owns the storage namespace (e.g. directory structure) and metadata; other nodes can own the storage backing an individual file's data (e.g. a VHD). Benefits of CSV: all VM VHDs can be stored on one LUN; high-performance access to VHDs from owning nodes; all nodes have read/write access to file data; seamless VM movement between nodes (any node can read from the VHD, so there is no need to change LUN ownership); seamless LUN ownership transition (persistent handles allow the cluster to move a LUN without interruption to VM operation).

    1. 1. Rohit Gulati | Partner Technical Consultant Microsoft
    2. 2. Session Objectives & Agenda Virtualization and High Availability Types of high availability enabled by virtualization Enabling a highly available cluster with virtual machines Demo: Windows Server 2008 Cluster Creation Demo: Making a Virtual Machine highly available Stretch Clusters and Hyper-V Guest Clustering Best Practices Hyper-V and NLB Disaster Recovery and Virtualization What’s new in R2
    3. 3. Session Prerequisites Knowledge of Windows Server 2008 Knowledge of Microsoft Hyper-V Cluster Experience NOT REQUIRED!
    4. 4. Windows Server 2008 With Hyper-V Technology Can be installed on both Windows Server 2008 Full and Core; production servers can be configured as a minimal-footprint Server Core role
    5. 5. Virtualization and High Availability
    6. 6. Virtualization And High Availability On a physical server, downtime is bad but affects only one workload; with virtualization the value of the physical server goes up, and downtime is far worse because multiple workloads are affected
    7. 7. Microsoft Hyper-V Quick Migration Planned downtime: quickly move virtualized workloads to service underlying hardware (more common than unplanned). Unplanned downtime: automatic failover to other nodes on hardware or power failure (not as common, and more difficult)
    8. 8. Quick Migration Fundamentals – Planned Downtime Save entire virtual machine state; move storage connectivity (VHDs) from origin to destination host; restore virtual machine and run
    9. 9. Quick Migration Planned Downtime Active server requires servicing; move virtualized workloads to a standby server (3 + 1 servers, shared VHDs over Ethernet)
    10. 10. Quick Migration Unplanned Downtime Active server loses power; virtual machines automatically restart on the next cluster node; if there is not enough memory, the failover automatically moves to the next node until done (3 + 1 servers, shared VHDs over Ethernet)
    11. 11. Quick Migration Storage Considerations (1/2) Pass-through disks provide enhanced I/O performance but require the VM configuration file to be stored separately from the virtual machine's storage; create a file share on the cluster and store VM configuration files for virtual machines that use pass-through disks. One LUN per VM is the best practice; you can provision more than one VM per LUN, but they all fail over as a unit
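The "one LUN per VM" guidance above comes down to failover granularity: every VM whose storage lives on a given LUN moves with that LUN. The following is a minimal Python sketch of that grouping; the VM-to-LUN mapping is hypothetical, not read from a real cluster.

```python
# Sketch of why "one LUN per VM" is the best practice: VMs whose storage
# shares a LUN belong to the same failover unit and always move together.
from collections import defaultdict

def failover_units(vm_to_lun):
    """Group VMs by the LUN backing their storage."""
    units = defaultdict(list)
    for vm, lun in vm_to_lun.items():
        units[lun].append(vm)
    return dict(units)

# Hypothetical mapping: VM1 and VM2 share LUN1, so they fail over as a unit.
print(failover_units({"VM1": "LUN1", "VM2": "LUN1", "VM3": "LUN2"}))
# {'LUN1': ['VM1', 'VM2'], 'LUN2': ['VM3']}
```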
    12. 12. Quick Migration Storage Considerations (2/2) Leverage MPIO solutions for path availability and I/O throughput; leverage VM provisioning via the GUID volume path (\\?\Volume{GUID}) instead of a drive letter; deploy the KB951308 cluster update to support mount points or volumes with no drive letter, GUID volume path copy, and more than one virtual machine in a "Services or Applications" group
    13. 13. Other Types Of High Availability Application clusters (SQL, Exchange) are suited for stateful applications; a stateless HA approach is enabled with Hyper-V’s enhanced networking support
    14. 14. Comparing High Availability To Fault Tolerance Continuous replication of VM memory contents; a fail-through concept rather than failover; hardware or component failure goes undetected by applications; requires redundant hardware configurations deployed in lockstep with special interconnects. Vendors with FT Hyper-V solutions: Marathon, Stratus Technologies
    15. 15. Cluster Creation and Quick Migration
    16. 16. Cluster Management – VMM 2008 Add an entire Hyper-V host cluster in a single step; automatic detection of node additions/removals; specify the number of node failures to sustain while keeping all HA VMs running; Intelligent Placement ensures that new HA VMs will not overcommit the cluster; node failures automatically trigger overcommit recalculation
    17. 17. Configuring A VM To Be Highly Available 1. Create resource group; 2. Find available disk resource; 3. Move the disk resource to the group; 4. Create VM Configuration resource; 5. Create VM resource; 6. Set resource dependencies (VM Resource, VM Config Resource, Disk Resource)
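A rough Python model of the six steps above, just to make the wiring explicit: a resource group holding a disk resource, a VM configuration resource, and a VM resource, with dependencies deciding the order they come online. The class and method names are illustrative, not the Failover Clustering API.

```python
# Minimal sketch of the resource wiring from the slide: the VM resource
# depends on its configuration resource, which depends on the disk resource.
class ResourceGroup:
    def __init__(self, name):
        self.name = name
        self.resources = {}       # resource name -> list of dependencies

    def add(self, resource, depends_on=()):
        self.resources[resource] = list(depends_on)

    def online_order(self):
        """Bring dependencies online before the resources that need them."""
        ordered, seen = [], set()
        def visit(res):
            for dep in self.resources[res]:
                if dep not in seen:
                    visit(dep)
            if res not in seen:
                seen.add(res)
                ordered.append(res)
        for res in self.resources:
            visit(res)
        return ordered

group = ResourceGroup("VM1 Group")                               # step 1
group.add("Disk Resource")                                       # steps 2-3
group.add("VM Config Resource", depends_on=["Disk Resource"])    # step 4
group.add("VM Resource", depends_on=["VM Config Resource"])      # steps 5-6
print(group.online_order())
# ['Disk Resource', 'VM Config Resource', 'VM Resource']
```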
    18. 18. Placement And Cluster Reserve Cluster reserve = 1 node YES Clustered Host 1 Clustered Host 2 Clustered Host 3
    19. 19. Placement And Cluster Reserve Cluster reserve = 1 node NO Clustered Host 1 Clustered Host 2 Clustered Host 3
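The two placement slides above boil down to a capacity question: with a cluster reserve of one node, can every HA VM still be hosted after any single host fails? Below is a small Python sketch of that check; it packs greedily by memory only, while VMM's Intelligent Placement weighs more factors, so the host sizes and VM sizes are purely illustrative.

```python
# Minimal sketch of a cluster-reserve check (not the actual VMM placement
# algorithm): can every HA VM still be hosted if `reserve` nodes fail?
from itertools import combinations

def fits(host_capacities_mb, vm_memory_mb):
    """Greedy packing of VM memory onto hosts, largest VMs first."""
    free = sorted(host_capacities_mb, reverse=True)
    for vm in sorted(vm_memory_mb, reverse=True):
        free.sort(reverse=True)
        if free and free[0] >= vm:
            free[0] -= vm
        else:
            return False
    return True

def survives_reserve(host_capacities_mb, vm_memory_mb, reserve=1):
    """True if all HA VMs fit on the remaining hosts for every possible
    combination of `reserve` failed nodes."""
    hosts = list(host_capacities_mb)
    for failed in combinations(range(len(hosts)), reserve):
        remaining = [c for i, c in enumerate(hosts) if i not in failed]
        if not fits(remaining, vm_memory_mb):
            return False
    return True

# Three 16 GB hosts with a reserve of one node: 24 GB of HA VMs still fits
# after any single failure; 40 GB of HA VMs would over-commit the cluster.
print(survives_reserve([16384] * 3, [8192, 8192, 8192], reserve=1))   # True
print(survives_reserve([16384] * 3, [8192] * 5, reserve=1))           # False
```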
    20. 20. Stretch Clusters
    21. 21. Geographically Diverse Clusters Stretching Nodes across the river used to be good enough… But businesses are now demanding more!
    22. 22. Stretch Clusters (Long Distance) Allow cluster nodes to communicate across network routers: no more having to connect nodes with VLANs! Cluster heartbeat timeouts are configurable: increase them to extend geographically dispersed clusters over greater distances, or decrease them to detect failures faster and take recovery actions for quicker failover. Mirrored storage between stretched locations, with hardware- or software-based replication
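The "increase to extend, decrease to detect faster" trade-off is easiest to see as arithmetic: the time to declare a node dead is roughly the heartbeat interval times the number of missed heartbeats tolerated. The Python sketch below uses hypothetical numbers, not Windows Server cluster defaults.

```python
# Back-of-the-envelope illustration of the heartbeat trade-off: longer
# intervals and higher miss thresholds tolerate WAN latency between sites,
# but failures take longer to detect.
def failure_detection_time(heartbeat_interval_s, missed_heartbeats):
    return heartbeat_interval_s * missed_heartbeats

# Tuned down for fast local failover vs. tuned up for a long-distance stretch cluster.
print(failure_detection_time(1.0, 5))    # 5.0 seconds to declare a node dead
print(failure_detection_time(2.0, 10))   # 20.0 seconds, more tolerant of WAN hiccups
```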
    23. 23. Guest Clustering
    24. 24. Guest Clustering File, Print, DNS, DHCP, SQL, etc
    25. 25. Guest Clustering iSCSI Target
    26. 26. Guest Clustering – Best Practices Private virtual network for cluster communication Virtual network dedicated for ISCSI traffic
    27. 27. Hyper-V and NLB
    28. 28. Why Network Load Balancing? Network Load Balancing can distribute individual TCP/IP requests across the cluster; can load balance multiple server requests, from the same client or from several clients, across multiple hosts in the cluster; can automatically detect and recover from a failed or offline computer; and can automatically rebalance the network load when servers are added or removed
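A toy Python sketch of the behavior described above: spread requests across hosts, take a failed host out of rotation, and rebalance when hosts are added. It is deliberately simple round-robin, not the distributed filtering algorithm the Windows NLB driver actually uses, and the host names are made up.

```python
# Illustrative load-balancing sketch: dispatch requests across hosts and
# rebalance whenever the host set changes.
import itertools

class ToyLoadBalancer:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self._rebalance()

    def _rebalance(self):
        # Rebuild the rotation whenever hosts are added or removed.
        self._cycle = itertools.cycle(self.hosts) if self.hosts else iter(())

    def add_host(self, host):
        self.hosts.append(host)
        self._rebalance()

    def remove_host(self, host):
        self.hosts.remove(host)
        self._rebalance()

    def dispatch(self, request):
        host = next(self._cycle)
        return f"{request} -> {host}"

cluster = ToyLoadBalancer(["WEB1", "WEB2"])
print(cluster.dispatch("GET /store"))   # GET /store -> WEB1
cluster.add_host("WEB3")                # seasonal demand: scale out
cluster.remove_host("WEB2")             # failed host taken out of rotation
print(cluster.dispatch("GET /store"))   # served by a remaining host
```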
    29. 29. Hyper-V and Network Load Balancing June: Web Store working well. Nov: Operations Manager notices seasonal demand and signals Virtual Machine Manager to deploy an additional Web Server. Dec: Even more customer demand means that another Web Server will be rapidly deployed. (Operations Manager: monitoring system health and performance, enforces business policy, integrates with Virtual Machine Manager. Virtual Machine Manager: rapid deployment, centralized Virtual Machine Library.)
    30. 30. Virtualization and Disaster Recovery
    31. 31. Disaster Recovery Staging SVR1 Data Protection every 15 minutes … DPM SVR2 Data Protection every 15 minutes … DPM Data Protection Manager 2007 SP1 SVR3 Data Protection every 15 minutes … DPM
    32. 32. Disaster Recovery Staging P2V – Physical to Virtual weekly … VMM SVR1 Data Protection every 15 minutes … DPM P2V – Physical to Virtual weekly … VMM SVR2 Data Protection every 15 minutes … DPM Data Protection Manager 2007 SP1 Virtual Machine Manager 2008 P2V – Physical to Virtual weekly … VMM SVR3 Data Protection every 15 minutes … DPM
    33. 33. Disaster Recovery Staging P2V – Physical to Virtual weekly … VMM SVR1 Data Protection every 15 minutes … DPM System State daily … DPM P2V – Physical to Virtual weekly … VMM SVR2 Data Protection every 15 minutes … DPM System State daily … DPM Data Protection Manager 2007 SP1 Virtual Machine Manager 2008 P2V – Physical to Virtual weekly … VMM SVR3 Data Protection every 15 minutes … DPM System State daily … DPM
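Taken together, slides 31-33 describe a per-server protection schedule: DPM data protection every 15 minutes, a daily System State capture, and a weekly P2V conversion via VMM. Purely as a summary, here is that plan expressed as data in a short Python sketch; the class and field names are illustrative, not a DPM or VMM object model.

```python
# Sketch of the disaster-recovery staging plan as a simple schedule.
from dataclasses import dataclass

@dataclass
class ProtectionJob:
    server: str
    tool: str        # "DPM" or "VMM"
    action: str
    interval: str

def staging_plan(servers):
    plan = []
    for server in servers:
        plan.append(ProtectionJob(server, "DPM", "data protection", "every 15 minutes"))
        plan.append(ProtectionJob(server, "DPM", "System State capture", "daily"))
        plan.append(ProtectionJob(server, "VMM", "P2V conversion", "weekly"))
    return plan

for job in staging_plan(["SVR1", "SVR2", "SVR3"]):
    print(f"{job.server}: {job.action} via {job.tool}, {job.interval}")
```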
    34. 34. Disaster Recovery Staging SVR1 SVR1 data SVR2 Data Protection Manager 2007 SP1 Virtual Machine Manager 2008 Windows Server 2008 SVR3 Hyper-V
    35. 35. Disaster Recovery Staging SVR1 SVR2 Data Protection Manager 2007 SP1 Virtual Machine Manager 2008 Windows Server 2008 SVR3 Hyper-V
    36. 36. Disaster Recovery Staging SVR1 SVR2 Data Protection Manager 2007 SP1 Virtual Machine Manager 2008 Windows Server 2008 SVR3 Hyper-V
    37. 37. Live Migration in Hyper-V R2
    38. 38. Live Migration Overview Moving a virtual machine from one server to another without loss of service
    39. 39. Live Migration Live Migration via Cluster Manager (in-box UI); Live Migration via Virtual Machine Manager (orchestrate migrations via policy). Moving from Quick to Live Migration: Guest OS limitations? No. Changes to VMs needed? No. Changes to storage infrastructure? No. Changes to network infrastructure? No. Update to WS 2008 R2 Hyper-V? Yes
    40. 40. How does Live Migration work? Prerequisites: Source and Destination computers running WS08 R2 Source and destination nodes must be part of a Failover Cluster Files used by the VM must be located on shared storage Failover Cluster Source Node Destination Node .BIN .VSV .XML .VHD Storage
    41. 41. How does Live Migration work? Phase 1: Setup Create TCP connection between source and destination nodes Transfer VM configuration data to destination node Setup a skeleton for the VM on the destination node Configuration Data Source Node Destination Node .BIN .VSV .XML .VHD Network Storage
    42. 42. How does Live Migration work? Phase 2: Memory transfer Transfer the content of the VM’s memory to the destination node Track pages modified by the VM, retransfer these pages Pause the VM before the final transfer pass Memory Content Source Node Destination Node .BIN .VSV .XML .VHD Network Storage
    43. 43. How does Live Migration work? Phase 3: State transfer and VM restore Save register and device state of VM on source node Transfer saved state and storage ownership to destination node Restore VM from saved state on destination node Cleanup VM on source node Saved State Source Node Destination Node .BIN .VSV .XML .VHD Network Storage
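Phases 2 and 3 are the classic pre-copy pattern: keep copying memory while the VM runs, retransfer whatever it dirties, then pause briefly for the last pass and the state handoff. The self-contained Python sketch below walks through that loop; the SourceVM and TargetHost classes are stand-ins for illustration, not Hyper-V or WMI objects.

```python
# Illustrative pre-copy memory-transfer loop (phases 2-3 of Live Migration).
import random

class SourceVM:
    def __init__(self, num_pages):
        self.pages = {i: f"page-{i}" for i in range(num_pages)}
        self.dirty = set(self.pages)          # everything must be sent at first
        self.running = True

    def pages_to_transfer(self):
        batch = {i: self.pages[i] for i in self.dirty}
        self.dirty.clear()
        return batch

    def simulate_workload(self):
        # While running, the guest keeps dirtying a few pages.
        if self.running:
            self.dirty.update(random.sample(sorted(self.pages), k=8))

class TargetHost:
    def __init__(self):
        self.pages = {}

    def receive(self, batch):
        self.pages.update(batch)

def live_migrate(vm, target, dirty_threshold=16, max_passes=10):
    # Phase 2: iterative pre-copy while the VM keeps running; retransfer
    # pages the guest modified since the previous pass.
    for _ in range(max_passes):
        target.receive(vm.pages_to_transfer())
        vm.simulate_workload()
        if len(vm.dirty) <= dirty_threshold:
            break
    # Pause the VM only for the short final pass, then hand over (phase 3).
    vm.running = False
    target.receive(vm.pages_to_transfer())
    return target

vm = SourceVM(num_pages=1024)
target = live_migrate(vm, TargetHost())
print(len(target.pages) == 1024, len(vm.dirty) == 0)   # True True
```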
    44. 44. Quick Migration (Windows Server 2008 Hyper-V): 1. Save state: create VM on the target, write VM memory to shared storage. 2. Move virtual machine: move storage connectivity from source host to target host via Ethernet. 3. Restore state and run: take VM memory from shared storage and restore on target, run. Live Migration (Windows Server 2008 R2 Hyper-V): 1. VM state/memory transfer: create VM on the target, move memory pages from the source to the target via Ethernet. 2. Final state transfer and virtual machine restore: pause virtual machine, move storage connectivity from source host to target host via Ethernet. 3. Un-pause and run. Host 1 Host 2 Host 1 Host 2
    45. 45. Cluster Shared Volumes: Migration & Storage NEW Cluster Shared Volumes (CSV) in Windows Server 2008 R2 Overview CSV provides a single consistent file namespace; all Windows Server 2008 R2 servers see the same storage. Highly recommended for live migration scenarios
    46. 46. Cluster Shared Volumes All servers “see” the same storage
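The speaker notes describe CSV's hybrid model: one coordinator node owns the namespace and metadata, while every node reads and writes file data on the shared LUN directly, so no LUN ownership change is needed to run a VHD from another node. Below is a small Python sketch of that split; the classes and method names are illustrative, not the CSV filter driver's interface.

```python
# Minimal sketch of CSV-style hybrid sharing: metadata operations go to the
# coordinator node, data I/O goes straight to the shared LUN from any node.
class SharedLun:
    def __init__(self):
        self.files = {}          # path -> bytearray (the VHD contents)

class CsvNode:
    def __init__(self, name, lun, coordinator=None):
        self.name = name
        self.lun = lun
        self.coordinator = coordinator or self   # coordinator owns metadata

    # Metadata operations (create/extend/rename) are redirected to the
    # coordinator so there is a single owner of the namespace.
    def create_file(self, path, size):
        self.coordinator.lun.files[path] = bytearray(size)

    # Data I/O goes directly to the LUN from any node: no ownership change
    # is needed to read or write a VHD that another node "owns".
    def write(self, path, offset, data):
        self.lun.files[path][offset:offset + len(data)] = data

    def read(self, path, offset, length):
        return bytes(self.lun.files[path][offset:offset + length])

lun = SharedLun()
node1 = CsvNode("HOST1", lun)                     # coordinator
node2 = CsvNode("HOST2", lun, coordinator=node1)

node2.create_file("Volume1/vm1.vhd", size=4096)   # metadata -> coordinator
node2.write("Volume1/vm1.vhd", 0, b"boot sector") # direct data I/O
print(node1.read("Volume1/vm1.vhd", 0, 11))       # b'boot sector'
```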
    47. 47. Hot Add/Remove Storage Overview Add VHDs and pass-through disks to, and remove them from, a running VM without requiring a reboot. Hot-add/remove disk applies to VHDs and pass-through disks attached to the virtual SCSI controller
    48. 48. Resources Virtualization Home Page: www.microsoft.com/virtualization Virtualization Solution Accelerators: www.microsoft.com/vsa MAP tool : http://microsoft.com/map Hyper-V Green Tool : http://hyper-green.com
    49. 49. © 2009 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
