3 Hyper-V


  1. Virtualization Tour: Windows Server 2008 R2 Hyper-V
  2. Agenda of Hyper-V
     • Windows Server Hyper-V fundamentals
     • Planning Deployment of Virtualization Solutions
     • Microsoft Assessment & Planning (MAP) Toolkit
     • Windows Server 2008 R2 Hyper-V
  3. Core Scenarios for Hyper-V
  4. Server Consolidation
     • Issues: underutilized hardware, excessive power consumption, expensive space across datacenters and branch offices
     • Solutions: minimize capital expenditures, reduce operating costs, improve service levels
     • Consolidation is the fastest way to reduce costs
  5. Virtualization Workloads
     • Management of the workloads is key, not just the virtual machines
  6. Concept of Dynamic Data Center
     • Servers provisioned on demand (compute, storage, network)
     • Efficient operations: energy usage, resource utilization
     • Virtualization provides mobility
  7. Hyper-V Architecture
     [Architecture diagram: the Windows hypervisor sits at Ring -1 on "Designed for Windows" server hardware. The parent partition runs Windows Server 2008, with the VM Service, WMI Provider, and VM Worker Processes in user mode (Ring 3) and the Windows kernel, Virtualization Service Providers (VSPs), and IHV drivers in kernel mode (Ring 0). Child partitions run Windows Server 2003/2008 with Virtualization Service Clients (VSCs), Xen-enabled Linux with a Linux VSC and hypercall adapter (enlightenments, from Microsoft/XenSource), or a non-hypervisor-aware OS via emulation; enlightened partitions communicate with the parent over the VMBus.]
  8. Hyper-V Architecture (continued)
     [Same architecture diagram as slide 7.]
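
The WMI Provider in the parent partition is the programmatic management surface for Hyper-V. As a rough illustration of that interface (not part of the original deck), the Python sketch below lists the virtual machines on a host; it assumes Windows Server 2008/2008 R2 with the Hyper-V role, the third-party `wmi` package (with pywin32), and the pre-2012 root\virtualization WMI namespace.

      # Hedged sketch: enumerate VMs through the Hyper-V WMI provider in the parent partition.
      # Assumes Windows Server 2008/2008 R2 with the Hyper-V role and `pip install wmi`.
      import wmi

      # Hyper-V on 2008/2008 R2 exposes its management classes in root\virtualization.
      conn = wmi.WMI(namespace=r"root\virtualization")

      # Msvm_ComputerSystem returns the host plus one instance per virtual machine;
      # the host instance carries the Caption "Hosting Computer System".
      for system in conn.Msvm_ComputerSystem():
          if system.Caption != "Hosting Computer System":
              print(system.ElementName, "- EnabledState:", system.EnabledState)
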
  9. Power Management
  10. Agenda of Hyper-V
      • Windows Server Hyper-V fundamentals
      • Planning Deployment of Virtualization Solutions
      • Microsoft Assessment & Planning (MAP) Toolkit
      • Windows Server 2008 R2 Hyper-V
  11. Online Optimization Self-Assessment & ROI Tools
  12. Core IO Self-Assessment Sample
  13. Demo: Online ROI Tools
  14. Microsoft Assessment & Planning (MAP) Toolkit
      • Secure and agentless inventory
      • Comprehensive data analysis
      • In-depth readiness reporting
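
MAP's agentless inventory works over standard remote management interfaces such as WMI rather than installed agents. The Python sketch below is only a rough illustration of that idea (it is not MAP code); it assumes the `wmi` package and credentials with remote WMI access, and the server name is a placeholder.

      # Hedged sketch of agentless, WMI-based inventory, in the spirit of what MAP collects.
      # Not MAP itself. Assumes `pip install wmi` and remote WMI/DCOM access to the target.
      import wmi

      TARGET = "server01.contoso.local"  # hypothetical machine name

      conn = wmi.WMI(computer=TARGET)  # agentless: nothing is installed on the target

      for cs in conn.Win32_ComputerSystem():
          print("Model:", cs.Manufacturer, cs.Model, "| RAM (bytes):", cs.TotalPhysicalMemory)
      for cpu in conn.Win32_Processor():
          print("CPU:", cpu.Name, "| cores:", cpu.NumberOfCores)
      for osys in conn.Win32_OperatingSystem():
          print("OS:", osys.Caption, osys.Version)
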
  15. MAP Server Virtualization & Consolidation Wizard
      [Diagram: workflow between the MAP tool user and MAP.]
  16. Case Studies & Results
      • http://www.microsoft.com/optimization
      • http://www.microsoft.com/optimization/about/newsandreviews.mspx
  17. Demo: Planning deployment of virtualization solutions with MAP
  18. Agenda of Hyper-V
      • Windows Server Hyper-V fundamentals
      • Planning Deployment of Virtualization Solutions
      • Microsoft Assessment & Planning (MAP) Toolkit
      • Windows Server 2008 R2 Hyper-V
  19. Better flexibility
      • Live Migration
      • Cluster Shared Volumes
      • Hot add/remove of storage
      • Processor compatibility mode for live migration
      Improved performance
      • Improved memory management
      • TCP Offload support
      • Virtual Machine Queue (VMQ) support
      • Improved networking
      Greater scalability
      • 64 logical processor support
      • Enhance Green IT with Core Parking
  20. Live Migration
      • Overview: moving a virtual machine from one server to another without loss of service
      • Benefits: enables new scenarios
      • Load balancing VMs for power
      • Load balancing VMs for CPU
      • Upgrade of host hardware and maintenance
  21. Live Migration
      • Live Migration via Cluster Manager: in-box UI
      • Live Migration via Virtual Machine Manager: orchestrate migrations via policy
      • Moving from Quick to Live Migration:
        Guest OS limitations? No
        Changes to VMs needed? No
        Changes to storage infrastructure? No
        Changes to network infrastructure? No
        Update to WS 2008 R2 Hyper-V? Yes
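
For the in-box path, a clustered VM's live migration can also be driven from the Failover Clustering PowerShell module. The sketch below simply shells out to that cmdlet from Python; the VM group and host names are placeholders, and it assumes it runs on a Windows Server 2008 R2 cluster node with administrative rights.

      # Hedged sketch: start a live migration of a clustered VM by calling the
      # Failover Clustering cmdlet from Python. Names below are placeholders.
      import subprocess

      vm_group = "SQL-VM01"      # hypothetical clustered VM resource group
      target_node = "HV-HOST02"  # hypothetical destination Hyper-V host

      command = (
          "Import-Module FailoverClusters; "
          f"Move-ClusterVirtualMachineRole -Name '{vm_group}' "
          f"-Node '{target_node}' -MigrationType Live"
      )
      subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
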
  22. Quick Migration vs. Live Migration
      • Quick Migration (Windows Server 2008 Hyper-V): save state (create the VM on the target, write VM memory to shared storage); move the virtual machine (move storage connectivity from the source host to the target host via Ethernet); restore state and run (read VM memory back from shared storage on the target, then run)
      • Live Migration (Windows Server 2008 R2 Hyper-V): VM state/memory transfer (create the VM on the target, copy memory pages from the source to the target via Ethernet); final state transfer and virtual machine restore (pause the virtual machine, move storage connectivity from the source host to the target host via Ethernet); un-pause and run
  23. Cluster Shared Volumes (CSV)
      • Overview: CSV provides a single consistent file namespace; all Windows Server 2008 R2 servers see the same storage
      • Benefits:
      • Easy setup; uses NTFS
      • No reformatting of SANs
      • Create one big data store
      • No more drive letter problems
      • Existing tools just work
      • Highly recommended for live migration scenarios
  24. CSV Compatibility
      • No special hardware requirements; same requirements as a standard cluster disk (iSCSI, Fibre Channel, SAS)
      • No directory structure or depth limitations
      • No proprietary file system; standard NTFS
      • Supports the Hyper-V workload
      • It just works!
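
Because CSV uses ordinary NTFS cluster disks, turning it on is largely a matter of enabling the cluster-level setting and adding an existing clustered disk. A hedged sketch, again driving the Windows Server 2008 R2 Failover Clustering cmdlets from Python; the disk name is a placeholder.

      # Hedged sketch: enable Cluster Shared Volumes and add an NTFS cluster disk to CSV.
      # Run on a cluster node with administrative rights; the disk name is a placeholder.
      import subprocess

      disk_resource = "Cluster Disk 1"  # hypothetical clustered disk, already NTFS

      command = (
          "Import-Module FailoverClusters; "
          "(Get-Cluster).EnableSharedVolumes = 'Enabled'; "
          f"Add-ClusterSharedVolume -Name '{disk_resource}'"
      )
      subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
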
  25. Network Prioritization
      • Plan your internal cluster networks for CSV use
      • Select the best network for your needs
      • Give networks a "cost" (metric)
      • Lower value = higher priority (private)
      • Higher value = lower priority (public)
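
On Windows Server 2008 R2, CSV and internal cluster traffic prefer the cluster network with the lowest metric, so prioritization comes down to inspecting the metrics and overriding them where needed. A hedged sketch; the network name is a placeholder.

      # Hedged sketch: list cluster network metrics, then pin a low (high-priority) metric
      # on the intended CSV/private network. Lower metric = higher priority.
      import subprocess

      csv_network = "Cluster Network 1"  # hypothetical private/CSV network

      command = (
          "Import-Module FailoverClusters; "
          "Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric; "
          f"(Get-ClusterNetwork '{csv_network}').Metric = 900"
      )
      subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
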
  26. CSV Benefits
      • Simplify storage management: consolidate VMs on a CSV disk; individual VMs can fail over on a shared LUN; removes the drive letter dependency
      • Improves live migration
      • Increases reliability: dynamic I/O redirection, node fault tolerance, network fault tolerance, storage access redirection
      • CSV gives your cluster higher availability [chart: availability, 99.9% vs. 99.99%]
  27. Hot Addition & Removal of Storage
      • Add/remove disks to/from virtual machines while they are running
      • Virtual hard disk (VHD) and pass-through disks
      • Must be attached to a virtual SCSI controller
      • Benefits:
      • Enables storage growth in VMs without downtime
      • Enables additional datacenter backup scenarios
      • Enables new SQL/Exchange scenarios
  28. Processor Compatibility Mode
      • Live migration across different CPU versions within the same processor family
      • Intel-to-Intel and AMD-to-AMD; does NOT enable cross-platform migration from Intel to AMD or vice versa
      • Enabled on a per-VM basis
      • The VM is brought down to the lowest common denominator of instruction sets available
      • No specific hardware requirements needed
      • Migration flexibility within clusters and across a broader range of Hyper-V host hardware
  29. How Processor Compatibility Mode Works
      • VM NOT in processor compatibility mode: the VM sees processor features X, Y, Z on Host A (features X, Y, Z); migration to Host B (features X, Y) fails
      • VM in processor compatibility mode: the VM sees only processor features X, Y; migration from Host A (features X, Y, Z) to Host B (features X, Y) succeeds
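
In the R2 UI this is a per-VM checkbox under the processor settings; it also surfaces through the Hyper-V WMI provider as a flag on the VM's processor settings. The sketch below only reads that flag; the property name is given as I recall it for the pre-2012 namespace and should be verified before relying on it.

      # Hedged sketch: read the per-VM processor compatibility flag via the Hyper-V WMI provider.
      # Assumes Windows Server 2008 R2, `pip install wmi`, and the pre-2012 root\virtualization namespace.
      import wmi

      conn = wmi.WMI(namespace=r"root\virtualization")

      # Msvm_ProcessorSettingData holds per-VM processor settings; LimitProcessorFeatures is
      # believed to correspond to the "processor compatibility mode" checkbox.
      for proc in conn.Msvm_ProcessorSettingData():
          print(proc.InstanceID, "compatibility mode:", proc.LimitProcessorFeatures)
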
  30. Demo: Hot Addition & Removal of Storage
  31. Better flexibility
      • Live Migration
      • Cluster Shared Volumes
      • Hot add/remove of storage
      • Processor compatibility mode for live migration
      Improved performance
      • Improved memory management
      • TCP Offload support
      • Virtual Machine Queue (VMQ) support
      • Improved networking
      Greater scalability
      • 64 logical processor support
      • Enhance Green IT with Core Parking
  32. Second Level Address Translation (SLAT)
      • New processor features improve performance and reduce load on the Windows hypervisor
      • AMD: Nested Page Tables (NPT); Intel: Extended Page Tables (EPT)
      • Benefits:
      • Improved memory management performance
      • Fewer in-memory copies
      • Hypervisor memory overhead decreases from ~5% to ~1% of total physical memory
      • Largest improvement with large working sets (Terminal Services/SQL)
  33. VM Chimney (TCP Offload Support)
      • Overview: TCP/IP traffic in a VM can be offloaded to a physical NIC on the host computer; disabled by default
      • Benefits: reduces CPU burden; networking offload improves performance; Live Migration is fully supported with full TCP offload
      • Cautions: not all applications benefit from Chimney; it works best for long-lived connections with large data transfers and for applications with pre-posted buffers; Chimney-capable hardware supports a fixed number of offloaded connections, shared between all VMs
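
The per-virtual-NIC offload option is configured in the VM's network adapter settings, while the Chimney feature itself is switched on or off at the operating-system level with netsh, in both host and guest. A hedged sketch of the OS-level switch only.

      # Hedged sketch: inspect and enable the OS-level TCP Chimney Offload setting via netsh.
      # This does not change the Hyper-V virtual NIC offload option. Requires elevation.
      import subprocess

      # Show current global TCP settings, including the Chimney Offload state.
      subprocess.run(["netsh", "int", "tcp", "show", "global"], check=True)

      # Turn Chimney Offload on (the slide notes VM Chimney is disabled by default).
      subprocess.run(["netsh", "int", "tcp", "set", "global", "chimney=enabled"], check=True)
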
  34. Virtual Machine Queue (VMQ)
      • Overview: disabled by default
      • The NIC can DMA packets directly into VM memory
      • Each VM device buffer is assigned to one of the NIC's queues
      • Avoids packet copies in the VSP
      • Avoids route lookup in the virtual switch (based on the VMQ queue ID)
      • Allows the NIC to appear, in effect, as multiple NICs (queues) on the physical host
      • Best performance gains are seen on 10G NICs (highly recommended)
      • Benefits: the host no longer has the device DMA data into its own buffer, which shortens the I/O path (performance gain)
  35. Jumbo Frame Support
      • Overview: enables up to 6x larger payload per packet
      • Ethernet frames > 1,500 bytes; the ad hoc standard is ~9 KB
      • Benefits: improves throughput; reduces CPU utilization for large file transfers
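
Jumbo frames only help when enabled end to end: the physical switch, the host NIC, the virtual switch, and the guest's virtual NIC. The sketch below shows just the operating-system side, inspecting and raising the IP MTU with netsh; the interface name is a placeholder and the NIC driver's own jumbo frame property must be set separately.

      # Hedged sketch: view current MTUs and raise one interface to a ~9000-byte MTU.
      # Interface name is a placeholder; requires elevation and driver/switch support.
      import subprocess

      interface = "Local Area Connection"  # hypothetical interface name

      # List interfaces with their current MTU values.
      subprocess.run(["netsh", "interface", "ipv4", "show", "subinterfaces"], check=True)

      # Set a jumbo-frame MTU persistently.
      subprocess.run(
          ["netsh", "interface", "ipv4", "set", "subinterface",
           interface, "mtu=9000", "store=persistent"],
          check=True,
      )
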
  36. Better flexibility
      • Live Migration
      • Cluster Shared Volumes
      • Hot add/remove of storage
      • Processor compatibility mode for live migration
      Improved performance
      • Improved memory management
      • TCP Offload support
      • Virtual Machine Queue (VMQ) support
      • Improved networking
      Greater scalability
      • 64 logical processor support
      • Enhance Green IT with Core Parking
  37. 64 Logical Processor Support
      • Overview: Hyper-V can utilize up to 64 of the logical processors presented to Windows Server 2008 R2
      • Benefits: significantly increases host server density; makes it easy to provide multiple processors per virtual machine
  38. Windows Server 2008 R2 Core Parking
      • Overview: virtual machines are scheduled for density, as opposed to dispersion, which allows idle cores to be "parked" (slept) by putting them in deep C-states
      • Benefits: significantly enhances Green IT by reducing the power required for CPUs
  39. Windows Server 2008 (16 LP w/o Core Parking)
  40. Windows Server 2008 R2 (16 LP w/ Core Parking)
      [Screenshot: several processors shown as "parked"]
  41. Resources
      • Microsoft Virtualization Solution Accelerators
      • Hyper-V Getting Started Guide
      • Hyper-V Step-by-Step Guide: Testing Hyper-V and Failover Clustering
      • Virtualization Newsgroup
