Windows Server 8 Hyper-V Networking (Aidan Finn)



  1. Windows Server 8 Hyper-V Networking
     Aidan Finn, MVP (Virtual Machine), @joe_elway
  2. About Aidan Finn
     • MVP (Virtual Machine)
     • Technical Sales Lead at MicroWarehouse
     • Working in IT since 1996
     • Experienced with Windows Server/Desktop, System Center, virtualisation, and IT infrastructure
     • Blog:
     • Twitter: @joe_elway
  3. Writing
  4. Just Announced
  5. WARNING!
     • All content in this presentation is subject to change
     • We have not even reached the beta release – currently the Developer Preview release
     • A lot of material to cover – more in this sub-topic than in all of W2008 R2 Hyper-V
  6. Agenda
     • NIC Teaming
     • Storage optimisation
     • Workload mobility
     • Performance & optimisations
     • Extensible Hyper-V Switch
     • Security
     • Fabric convergence
     • Host network architectures
  7. Windows Server 8 Hyper-V Plans
     • Great Big Hyper-V Survey 2011:
       – Conducted by me, Hans Vredevoort, and Damian Flynn in August 2011 (before the Windows 8 Developer Preview)
       – Who's deploying it:
         • 27.21% interested
         • 62.01% planning
         • 8.09% undecided
         • 2.7% not interested
  8. NIC Teaming & Windows 2008 R2
     • KB968703: no support from Microsoft
       – Use HP/Dell/Broadcom/Intel drivers/software
       – Complicates deployment & support
     • Great Big Hyper-V Survey of 2011:
       – 27.94% found NIC teaming to be the biggest challenge in their Hyper-V deployment
       – 27.21% said networking was their biggest issue
     • One of the last objections raised by VMware enthusiasts
  9. NIC Teaming & Windows Server 8
     • Built into the OS and supported
       – Simplified deployment & support
     • Load balancing and failover (LBFO)
     • Aggregate bandwidth
     • Use different model & vendor NICs!
     • Opens up interesting opportunities
     • One more VMware wall knocked down
  10. NIC Teaming
      [Architecture diagram: a user-mode LBFO admin GUI and WMI-based LBFO provider configure, via IOCTL, a kernel-mode IM MUX that performs frame distribution/aggregation, failure detection, and the control protocol implementation across NIC 1–3; the team's virtual miniport is exposed through the protocol edge to the Hyper-V Extensible Switch, with the NICs attached to the physical network switch]
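      The in-box teaming above is driven from PowerShell. A minimal sketch, using the cmdlet names from the released Windows Server 2012 (they may differ in the Developer Preview); the adapter names are placeholders:

      ```powershell
      # Create a switch-independent team from two NICs, which may be of
      # different models and vendors. Adapter names are placeholders.
      New-NetLbfoTeam -Name "HostTeam" `
          -TeamMembers "Ethernet 1", "Ethernet 2" `
          -TeamingMode SwitchIndependent `
          -LoadBalancingAlgorithm HyperVPort

      # Verify the team and its member NICs.
      Get-NetLbfoTeam -Name "HostTeam"
      Get-NetLbfoTeamMember -Team "HostTeam"
      ```

      HyperVPort distribution suits a team that will back a Hyper-V switch, as each vNIC's traffic stays on one team member.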
  11. Scaling File Sharing Traffic
      • CPU utilisation is a challenge for high-I/O SMB traffic
      • Solution: Remote Direct Memory Access (RDMA)
        – A secure way to enable a DMA engine to transfer buffers
        – Built into Windows Server 8
      • Why care about SMB? More to come …
  12. SMB 2.2
      Used by the File Server and Cluster Shared Volumes
      • Scalable, fast, and efficient storage access
      • Minimal CPU utilisation for I/O
      • High throughput with low latency
      • Multi-channel
        – NIC Teaming
        – Much greater I/O speeds
      • Required hardware:
        – InfiniBand
        – 10 GbE with RDMA
  13. And SMB 2.2 Enables
      • Storage of VMs on file shares without performance compromise
      • Affordable, scalable & continuously available storage
        – Active/Active file share cluster
        – VMs stored on UNC paths
      • Live Migration between non-clustered hosts
        – VMs on file shares
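      Storing a VM on a UNC path looks like any other placement from PowerShell. A sketch using the released Windows Server 2012 cmdlets (the Developer Preview may differ); \\FS01\VMStore is a hypothetical continuously available share:

      ```powershell
      # Create a VM whose configuration and virtual disk both live on an
      # SMB file share rather than local or SAN storage.
      New-VM -Name "VM01" `
          -MemoryStartupBytes 1GB `
          -Path "\\FS01\VMStore" `
          -NewVHDPath "\\FS01\VMStore\VM01\VM01.vhdx" `
          -NewVHDSizeBytes 40GB
      ```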
  14. Multi-Tenant Cloud Flexibility & Security
      • Great Big Hyper-V Survey of 2011:
        – 28.68% considering hybrid cloud deployment
      • A public cloud (hosting) or large private cloud (centralisation) hosts many organisations
        – Trust issues
        – Compliance & regulations
      • The hosting company requires flexibility & mobility of virtual workloads
        – Virtualisation is mobile
        – But network addresses are not
  15. Network Virtualisation
      [Diagram: a Woodgrove VM and a Contoso VM on one physical server, each attached to its own virtual network over one physical network]
      • Hyper-V machine virtualisation: run multiple virtual servers on a physical server; each VM has the illusion it is running as a physical server
      • Hyper-V network virtualisation: run multiple virtual networks on a physical network; each virtual network has the illusion it is running as a physical fabric
  16. Network Virtualisation Benefits
      • No need to re-address virtual workloads
        – Retain communications and LOB application SLAs
      • Enable easy migration of a private cloud to a multi-tenant public cloud
      • Enable Live Migration mobility of workloads within the data centre
        – Move virtual workloads between network footprints
  17. Virtual Machine Queue
      • Static (non-VMQ) networking can become overloaded during high I/O loads
      • Virtual Machine Queue (VMQ)
        – Added in Windows 2008 R2
        – Offloads the burden from the parent partition to the network controller, accelerating network I/O throughput
      • Can overload CPU cores
  18. Dynamic Virtual Machine Queue (DVMQ)
      [Diagram: three root partitions, each with CPUs 0–3 above a physical NIC, comparing No VMQ, Static VMQ, and Windows Server 8 Dynamic VMQ]
      • Adaptive network processing across CPUs to provide optimal power and performance across changing workloads
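      VMQ is controlled per virtual NIC. A sketch with the released Windows Server 2012 cmdlets (possibly different in the Developer Preview); "VM01" is a placeholder VM name:

      ```powershell
      # A weight above 0 makes the vNIC eligible for a hardware queue;
      # 0 disables VMQ for this vNIC entirely.
      Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100

      # Inspect the current VMQ setting on the VM's adapters.
      Get-VMNetworkAdapter -VMName "VM01" | Select-Object Name, VmqWeight
      ```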
  19. Single Root I/O Virtualization (SR-IOV)
      [Diagram: without SR-IOV, a VM's traffic passes through the Hyper-V Switch in the root partition (routing, VLAN filtering, data copy) on its way to the physical NIC; with SR-IOV, the VM's virtual function bypasses the switch and talks directly to the SR-IOV physical NIC]
  20. Hyper-V Live Migration Policy
      • No new features that prevent Live Migration
      • For example, an SR-IOV enabled VM being live migrated to a host without SR-IOV:
        – Switches from the SR-IOV virtual function to the Hyper-V switch on the original host
        – Live Migration then takes place
        – Zero downtime
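      As a hedged sketch of turning SR-IOV on (cmdlets from the released Windows Server 2012, which may differ in the Developer Preview; names are placeholders):

      ```powershell
      # IOV can only be enabled when the external switch is created,
      # and the NIC, firmware, and chipset must all support it.
      New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 1" -EnableIov $true

      # Ask for a virtual function on the VM's NIC; 0 means never use
      # SR-IOV, so the VM falls back to the software switch path.
      Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100
      ```

      Because the fallback path through the software switch always exists, Live Migration to a non-SR-IOV host keeps working, as the slide above describes.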
  21. More Optimisations
      • Receive Side Scaling (RSS)
        – Spread network I/O across many processors
        – Incompatible with VMQ on the same NIC
      • Receive Side Coalescing (RSC)
        – Coalesce received packets to reduce network-generated interrupts
      • IPsec Task Offload (IPsecTO)
        – Moves the workload from the host's CPU to a dedicated processor on the network adapter
  22. Virtual Network -> Virtual Switch
      • In 2008/R2:
        – A VM has a vNIC
        – The vNIC connects to a virtual network (aka virtual switch)
          • Remember that we have something new called Network Virtualisation to abstract IP addresses
        – The virtual network connects to a pNIC in the host
      • In Windows Server 8:
        – The Extensible Hyper-V Virtual Switch
        – Supports unified tracing for network diagnostics
  23. Extensible Hyper-V Virtual Switch
      [Diagram: VM NICs and the host NIC attach to the Hyper-V Switch; between its protocol and miniport edges sits an extension stack of capture, WFP, filtering, and forwarding extensions (certified extensions supported), above the physical NIC]
  24. Cloud & Security
      • Great Big Hyper-V Survey 2011:
        – 42.65% concerned about private cloud security
      • You cannot trust tenants in a multi-tenant cloud
        – Tenant vs hosting company
        – Tenant vs tenant
      • We've been using physical security:
        – Firewall
          • Requires centralised skills & is slow to configure
          • Gets complicated
        – VLANs
          • Never intended for security
          • Restricted number per physical network
  25. Windows Server 8 & Security
      • Software is easier & quicker to configure
        – Automate with provisioning
      • Port ACLs
        – Define allowed communication paths between virtual machines based on IP range or MAC address
      • PVLAN (Private VLAN)
        – VLAN-like isolation domains created in Hyper-V
      • DHCP Guard
        – Isolate rogue virtual DHCP servers
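      The three features above are all per-vNIC settings. A sketch using the released Windows Server 2012 cmdlets (Developer Preview names may differ); the VM name, addresses, and VLAN IDs are placeholders:

      ```powershell
      # Port ACL: deny this tenant VM traffic to/from an address range.
      Add-VMNetworkAdapterAcl -VMName "Tenant1-VM" `
          -RemoteIPAddress "10.0.0.0/8" -Direction Both -Action Deny

      # DHCP Guard: drop DHCP server replies originating from this VM,
      # isolating a rogue virtual DHCP server.
      Set-VMNetworkAdapter -VMName "Tenant1-VM" -DhcpGuard On

      # PVLAN: place the vNIC in an isolated private VLAN domain.
      Set-VMNetworkAdapterVlan -VMName "Tenant1-VM" `
          -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200
      ```

      Because these are just cmdlets, they can be scripted into tenant provisioning, which is the "automate with provisioning" point above.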
  26. Cloud & Network Performance
      • Can aggregate bandwidth with NIC teaming
      • The hosting company must control network bandwidth utilisation:
        – "Give him enough rope and he'll hang himself"
        – Prioritise important applications
        – Limit tenants based on fees paid
        – Guarantee SLAs
      • Network Quality of Service (QoS)
  27. QoS
      • Configured using PowerShell
      • Minimum bandwidth policy:
        – Enforce bandwidth allocation – SLA
        – Redistribute unused bandwidth
        – Efficiency & consolidation
      • Maximum bandwidth policy:
        – Cross-charge for expensive bandwidth
      • Possibly combine with network resource metering
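      A minimal sketch of both policy types, using the released Windows Server 2012 cmdlets (the Developer Preview may differ; VM names are placeholders):

      ```powershell
      # Minimum bandwidth: a relative weight, enforced only under
      # contention; unused bandwidth is redistributed to other vNICs.
      # Requires a switch created with -MinimumBandwidthMode Weight.
      Set-VMNetworkAdapter -VMName "Tenant1-VM" -MinimumBandwidthWeight 50

      # Maximum bandwidth: a hard cap in bits per second (~100 Mbps),
      # e.g. to limit a tenant to what their fee covers.
      Set-VMNetworkAdapter -VMName "Tenant2-VM" -MaximumBandwidth 100000000
      ```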
  28. A 2008 R2 Clustered Host
      • 6 NICs:
        – Parent
        – VM
        – Redirected I/O
        – Live Migration
        – 2 * iSCSI
      • NIC teaming?
      • Backup?
      • Lot$ of NIC$ – consider the costs of 10 GbE
  29. Physical Isolation
      [Diagram: a host with separate physical NICs for management, cluster, live migration, and storage traffic, plus the Hyper-V Extensible Switch carrying VM 1 and VM 2]
      • The traditional approach
      • Multiple physical NICs
      • ACLs for guests
  30. Data Center Bridging (DCB)
      [Diagram: PowerShell and WMI drive traffic classification for the Windows network stack and storage stack; DCB converges the LAN miniport and iSCSI miniport onto shared hardware]
  31. Converged Fabric
      • A new possibility
      • Consolidate all those NICs onto a simpler network
      • Take advantage of:
        – 10 GbE/InfiniBand networking: bandwidth & VM density
        – NIC Teaming: aggregation and fault tolerance, e.g. lots of 1 GbE NICs
        – DCB: converge very different protocols
        – QoS: guarantee performance SLAs
      • Lots of variations
  32. Management and Guest Isolation
      [Diagram: one 10 GbE NIC carrying management, cluster/storage, and live migration traffic alongside the Hyper-V Extensible Switch for VM 1 and VM 2]
      • 10 GbE NIC for the parent partition
      • ACLs for guests
      • DCB to converge protocols
      • QoS for SLAs
  33. Using Network Offloads for Increased Scale
      [Diagram: RSS serving the native path and VMQ serving the Hyper-V Extensible Switch path on the same host]
      • Scalability offloads take advantage of all CPU cores
        – Receive Side Scaling (RSS) for the native path
        – Virtual Machine Queue (VMQ) for the Hyper-V Switch path
  34. Converged Fabrics (1 NIC)
      [Diagram: management, cluster/storage, live migration, and VM traffic all carried through the Hyper-V Extensible Switch over a single NIC]
      • ACLs for all switch ports
      • QoS for Management OS traffic
  35. Converged Fabrics (2 NICs)
      [Diagram: as the 1-NIC design, with two NICs teamed beneath the Hyper-V Extensible Switch]
      • ACLs for all switch ports
      • QoS for Management OS traffic
      • NIC Teaming for LBFO
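      A converged fabric like this can be built in a few lines. A hedged sketch with the released Windows Server 2012 cmdlets (the Developer Preview may differ); adapter names, vNIC names, and weights are illustrative assumptions:

      ```powershell
      # Team the two physical NICs (placeholder adapter names).
      New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "Ethernet 1", "Ethernet 2"

      # One extensible switch on top of the team, using weight-based
      # minimum bandwidth so QoS can protect each traffic class.
      New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
          -MinimumBandwidthMode Weight -AllowManagementOS $false

      # Management OS vNICs replace the dedicated physical NICs; each
      # traffic class gets its own vNIC and a QoS weight.
      foreach ($nic in @(@{Name = "Manage";         Weight = 10},
                         @{Name = "LiveMigration";  Weight = 30},
                         @{Name = "ClusterStorage"; Weight = 30})) {
          Add-VMNetworkAdapter -ManagementOS -Name $nic.Name -SwitchName "ConvergedSwitch"
          Set-VMNetworkAdapter -ManagementOS -Name $nic.Name -MinimumBandwidthWeight $nic.Weight
      }
      ```

      The remaining weight headroom is shared by the guest vNICs, so tenants still get bandwidth when the host traffic classes are idle.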
  36. Sample Documented Configuration
      [Diagram: clustered Hyper-V hosts, each with teamed 10 GbE NICs (RSS, DCB, QoS) beneath the Hyper-V Extensible Switch, connected through a DCB-capable 10 GbE switch to a clustered scale-out file server; a 1 GbE switch for management and an HBA to the SAN on the file server side]
      • No network legacy concerns (green field)
      • Hyper-V clustered
      • Converged 10 GbE with DCB for QoS
      • Scale-out clustered File Server
  37. For More Information
      • The original Build Windows 2011 sessions:
        – SAC-439T
        – SAC-437T
        – SAC-430T
  38. The End
      Thanks to Hyper-V.nu
      Aidan Finn
      • @joe_elway