Windows Server 8 Hyper-V Networking (Aidan Finn) – Presentation Transcript

  • Windows Server 8 Hyper-V Networking Aidan Finn, MVP (Virtual Machine) @joe_elway http://www.aidanfinn.com
  • About Aidan Finn• MVP (Virtual Machine)• Technical Sales Lead at MicroWarehouse• Working in IT since 1996• Experienced with Windows Server/Desktop, System Center, virtualisation, and IT infrastructure.• Blog: http://www.aidanfinn.com• Twitter: @joe_elway
  • Writing
  • Just Announced
  • WARNING!• All content in this presentation is subject to change• We have not even reached beta release – Currently Developer Preview Release• A lot of material to cover – More in this sub-topic than in all of W2008 R2 Hyper-V
  • Agenda• NIC Teaming• Storage optimisation• Workload mobility• Performance & optimisations• Extensible Hyper-V Switch• Security• Fabric convergence• Host network architectures
  • Windows Server 8 Hyper-V Plans• Great Big Hyper-V Survey 2011: – Conducted by me, Hans Vredevoort, and Damian Flynn in August 2011 (before Win 8 Dev Prev) – Who’s deploying it: • 27.21% interested • 62.01% planning • 8.09% undecided • 2.7% not interested
  • NIC Teaming & Windows 2008 R2• KB968703: No support from Microsoft – Use HP/Dell/Broadcom/Intel drivers/software – Complicates deployment & support• Great Big Hyper-V Survey of 2011 – 27.94% found NIC teaming to be biggest challenge in Hyper-V deployment – 27.21% said networking was their biggest issue• One of the last objections by VMware enthusiasts
  • NIC Teaming & Windows Server 8• Built into the OS and supported – Simplified deployment & support• Load balancing and failover (LBFO)• Aggregate bandwidth• Use different model & vendor NICs!• Opens up interesting opportunities• One more VMware wall knocked down
  • NIC Teaming (architecture diagram): the LBFO admin GUI and WMI LBFO provider drive the LBFO configuration DLL in user mode, which controls an intermediate MUX driver in kernel mode handling frame distribution/aggregation, failure detection, and the control protocol across NIC 1–3; the team is exposed as a virtual miniport to the protocol edge and the Hyper-V Extensible Switch, with the physical NICs uplinked to the network switch
  • Scaling File Sharing Traffic• CPU utilisation is a challenge for high I/O SMB traffic• Solution: Remote Direct Memory Access (RDMA) – A secure way to enable a DMA engine to transfer buffers – Built into Windows Server 8• Why care about SMB? More to come …
  • SMB 2.2 – Used by File Server and Clustered Shared Volumes• Scalable, fast and efficient storage access• Minimal CPU utilization for I/O• High throughput with low latency• Multi-channel – NIC Teaming – Much greater I/O speeds• Required hardware – InfiniBand – 10 GbE w/ RDMA
  • And SMB 2.2 Enables• Storage of VMs on file shares without performance compromise• Affordable scalable & continuously available storage – Active/Active file share cluster – VMs stored on UNC paths• Live Migration between non-clustered hosts – VMs on file shares
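
As an illustrative sketch of what this enables (the VM name, share path and sizes are invented; the cmdlets are from the shipping Windows Server 2012 release rather than the Developer Preview):

    # Create a VM whose configuration and VHDX both live on an SMB file share
    New-VM -Name "Web01" -MemoryStartupBytes 2GB -Path "\\FS-Cluster\VMStore\Web01" -NewVHDPath "\\FS-Cluster\VMStore\Web01\Web01.vhdx" -NewVHDSizeBytes 60GB

    # Live-migrate the running VM between two non-clustered hosts that can both reach the share
    Move-VM -Name "Web01" -DestinationHost "HyperVHost2"
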
  • Multi-Tenant Cloud Flexibility & Security• Great Big Hyper-V Survey of 2011 – 28.68% considering hybrid cloud deployment• A public cloud (hosting) or large private cloud (centralisation) has lots of hosted organisations – Trust issues – Compliance & regulations• Hosting company requires flexibility & mobility of virtual workloads – Virtualisation is mobile – But networking addresses are not
  • Network Virtualisation (diagram: Woodgrove and Contoso VMs and networks sharing the same physical server and physical network) – Hyper-V machine virtualisation: run multiple virtual servers on a physical server; each VM has the illusion it is running as a physical server – Hyper-V network virtualisation: run multiple virtual networks on a physical network; each virtual network has the illusion it is running as a physical fabric
  • Network Virtualisation Benefits• No need to re-address virtual workloads – For example 192.168.1.0/24 to 10.100.25.0/24 – Retain communications and LOB app SLA• Enable easy migration of private cloud to multi- tenant public cloud• Enable Live Migration mobility of workloads within the data centre – Move virtual workloads between network footprints
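
A rough sketch of the policy behind this, using the network virtualisation cmdlets that appeared in the released product (the addresses, virtual subnet ID and MAC are invented, and the Developer Preview interface may well differ):

    # Map the tenant's customer address (CA) onto a provider address (PA) on the fabric,
    # so the VM keeps 192.168.1.10 even though the datacentre network is 10.100.25.0/24
    New-NetVirtualizationLookupRecord -CustomerAddress "192.168.1.10" -ProviderAddress "10.100.25.5" -VirtualSubnetID 5001 -MACAddress "00155D010A01" -Rule "TranslationMethodEncap"
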
  • Virtual Machine Queue• Static (non-VMQ) networking can become overloaded during high I/O loads• Virtual Machine Queue (VMQ) – Added in Windows 2008 R2 – Offloads burden from the parent to the network controller, to accelerate network I/O throughput• Can overload CPU cores
  • Dynamic Virtual Machine Queue (DVMQ) (diagram: CPU 0–3 queue placement with no VMQ, static VMQ, and Windows Server 8 dynamic VMQ) – Adaptive network processing across CPUs to provide optimal power and performance across changing workloads
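
On the released bits, VMQ is surfaced through the NetAdapter module; a small sketch follows (the adapter name is a placeholder, and the dynamic spreading across cores is handled by the OS rather than a separate setting):

    # Check whether the NIC and its driver expose VMQ and how many queues are available
    Get-NetAdapterVmq -Name "10GbE-1"

    # Enable VMQ; Windows Server 8 then moves queue processing between CPU cores as load changes
    Enable-NetAdapterVmq -Name "10GbE-1"
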
  • Single Root I/O Virtualization (SR-IOV) (diagram: network I/O path with and without SR-IOV) – Without SR-IOV, traffic flows through the Hyper-V Switch in the root partition (routing, VLAN filtering, data copy) to the physical NIC – With SR-IOV, the VM's virtual function talks directly to the SR-IOV physical NIC, bypassing the switch
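
A hedged sketch of enabling SR-IOV with the released cmdlets (switch, adapter and VM names are examples; the NIC, firmware and chipset must also support IOV):

    # IOV can only be turned on when the virtual switch is created
    New-VMSwitch -Name "IOVSwitch" -NetAdapterName "10GbE-2" -EnableIov $true

    # Request a virtual function for this VM's network adapter (0 disables, 1-100 enables)
    Set-VMNetworkAdapter -VMName "Web01" -IovWeight 100
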
  • Hyper-V Live Migration Policy• No new features that prevent Live Migration• For example, SR-IOV enabled VM being live migrated to host without SR-IOV – Switches from SR-IOV virtual function to Hyper-V switch on original host – Live Migration then takes place – Zero downtime
  • More Optimisations• Receive Side Scaling (RSS) – Share network I/O across many processors – Incompatible with VMQ on the same NIC• Receive Side Coalescing (RSC) – Consolidates interrupts generated by network traffic• IPsec Task Offload (IPsecTO) – Moves the workload from the host’s CPU to a dedicated processor on the network adapter
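
These offloads are toggled per adapter; a minimal sketch with the released NetAdapter cmdlets (the adapter name is a placeholder, and whether each offload is available depends on the NIC):

    # Receive Side Scaling: spread native-path receive processing across CPU cores
    Enable-NetAdapterRss -Name "10GbE-1"

    # Receive segment coalescing: merge received segments to reduce interrupt load
    Enable-NetAdapterRsc -Name "10GbE-1"

    # IPsec task offload: push IPsec processing onto the adapter's own processor
    Enable-NetAdapterIPsecOffload -Name "10GbE-1"
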
  • Virtual Network -> Virtual Switch• In 2008/R2: – A VM has a vNIC – The vNIC connects to a virtual network (aka virtual switch) • Remember that we have something new called Network Virtualisation to abstract IP addresses – The virtual network connects to a pNIC in the host• In Windows Server 8: – The Extensible Hyper-V Virtual Switch – Supports unified tracing for network diagnostics
  • Extensible Hyper-V Virtual Switch (diagram): VM NICs and the host NIC connect to the Hyper-V Switch, whose extension stack (capture extensions, filtering extensions such as WFP, and forwarding extensions, all certifiable) sits between the extension protocol and extension miniport layers above the physical NIC
  • Cloud & Security• Great Big Hyper-V Survey 2011: – 42.65% concerned about private cloud security• You cannot trust tenants in multi-tenant cloud – Tenant VS hosting company – Tenant VS Tenant• We’ve been using physical security: – Firewall • Requires centralised skills & slow to configure • Gets complicated – VLANs • Never intended for security • Restricted number per physical network
  • Windows Server 8 & Security• Software easier & quicker to configure – Automate with provisioning• Port ACLs – Define allowed communication paths between virtual machines based on IP range or MAC address.• PVLAN (Private VLAN) – VLAN-like domains created in Hyper-V• DHCP Guard – Isolate rogue virtual DHCP servers
  • Cloud & Network Performance• Can aggregate bandwidth with NIC teaming• Hosting company must control network bandwidth utilisation: – “Give him enough rope and he’ll hang himself” – Prioritise important applications – Limit tenants based on fees paid – Guarantee SLAs• Network Quality of Service (QoS)
  • QoS• Configured using PowerShell• Minimum bandwidth policy: – Enforce bandwidth allocation - SLA – Redistribute unused bandwidth – Efficiency & consolidation• Maximum bandwidth policy – Cross charge for expensive bandwidth• Possibly combine with network resource metering
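
Since the deck only says QoS is PowerShell-configured, here is a hedged sketch using the released cmdlets (the switch, VM names, weights and the 100 Mbit/s cap are all examples):

    # Create the switch in weight-based minimum-bandwidth mode
    New-VMSwitch -Name "TenantSwitch" -NetAdapterName "VMTeam" -MinimumBandwidthMode Weight

    # Minimum bandwidth: guarantee a relative share (SLA); unused capacity is redistributed
    Set-VMNetworkAdapter -VMName "Tenant1-Web" -MinimumBandwidthWeight 20

    # Maximum bandwidth: cap a tenant at 100 Mbit/s (value is in bits per second)
    Set-VMNetworkAdapter -VMName "Tenant2-Web" -MaximumBandwidth 100000000

    # Optionally meter traffic so expensive bandwidth can be cross-charged
    Get-VM -Name "Tenant2-Web" | Enable-VMResourceMetering
    Get-VM -Name "Tenant2-Web" | Measure-VM
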
  • A 2008 R2 Clustered Host• 6 NICs: – Parent – VM – Redirected I/O – Live Migration – 2 * iSCSI• NIC teaming?• Backup?• Lot$ of NIC$. Consider costs of 10 GbE
  • Physical Isolation (diagram: server with dedicated Manage, Cluster/Storage, and Live Migration NICs plus a Hyper-V Extensible Switch serving VM 1 and VM 2)• Traditional approach• Multiple physical NICs• ACLs for guests
  • Data Center Bridging (DCB) (diagram: PowerShell and WMI drive traffic classification across the Windows network stack and Windows storage stack, with DCB applied at the LAN miniport and iSCSI miniport)
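
A rough sketch of carving up a converged link with the DCB cmdlets from the shipped release (the priority value and percentage are examples; the physical switch must also support DCB/ETS):

    # Classify SMB traffic into 802.1p priority 3
    New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

    # Reserve 50% of the link for that priority using the ETS algorithm
    New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

    # Turn on DCB processing on the physical adapter
    Enable-NetAdapterQos -Name "10GbE-1"
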
  • Converged Fabric• A new possibility• Consolidate all those NICs to a simpler network• Take advantage of: – 10 GbE/Infiniband networking: Bandwidth & VM density – NIC Teaming: Aggregation and fault tolerance, e.g. lots of 1 GbE NICs – DCB: Converge very different protocols – QoS: Guarantee performance SLA• Lots of variations
  • Management and Guest Isolation (diagram: Manage, Cluster/Storage, and Live Migration traffic on the parent partition, Hyper-V Extensible Switch serving VM 1 and VM 2)• 10 GbE NIC for parent partition• ACLs for guests• DCB to converge protocols• QoS for SLA
  • Using Network Offloads for Increased Scale (diagram: RSS on the native path, VMQ on the Hyper-V Extensible Switch path)• Scalability offloads take advantage of all CPU cores – Receive Side Scaling (RSS) for the native path – Virtual Machine Queue (VMQ) for the Hyper-V Switch path
  • Converged Fabrics (1 NIC) (diagram: Manage, Live Migration, Cluster/Storage, and VM traffic all carried by a single Hyper-V Extensible Switch)• ACLs for all switch ports• QoS for Management OS traffic
  • Converged Fabrics (2 NICs) (diagram: all traffic carried by one Hyper-V Extensible Switch bound to a two-NIC team)• ACLs for all switch ports• QoS for Management OS traffic• NIC Teaming for LBFO
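
Pulling the pieces together, a hedged end-to-end sketch of this two-NIC converged design as it can be scripted on the released bits (all names, weights and team settings are invented):

    # Team the two 10 GbE NICs
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE-1","10GbE-2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # One extensible switch on top of the team, weight-based QoS, no default host vNIC
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Management OS virtual NICs for each traffic type
    Add-VMNetworkAdapter -ManagementOS -Name "Manage" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "ClusterCSV" -SwitchName "ConvergedSwitch"

    # QoS weights so each traffic type keeps its SLA share of the converged bandwidth
    Set-VMNetworkAdapter -ManagementOS -Name "Manage" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "ClusterCSV" -MinimumBandwidthWeight 30
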
  • Sample Documented Configuration (diagram: clustered Windows Hyper-V Servers and a clustered scale-out File Server, each with teamed 10 GbE NICs using RSS and DCB behind the Hyper-V Extensible Switch with QoS, connected through a DCB-capable 10 GbE switch, plus 1 GbE management links and HBAs to the SAN)• No network legacy concerns (green field)• Hyper-V clustered• Converged 10 GbE with DCB for QoS• File Server clustered with scale-out
  • For More Information• The original Build Windows 2011 sessions: – http://channel9.msdn.com/events/BUILD/BUILD2011 – SAC-439T – SAC-437T – SAC-430T
  • The End – Thanks to Hyper-V.nu – Aidan Finn• @joe_elway• http://www.aidanfinn.com