NIC Teaming and Converged Fabric

Hyper-V.nu meeting 16-04-2013, NIC teaming and Converged Fabric, Marc van Eijk

Comments

  • Hi Marc, thanks for this useful presentation. I have some questions regarding NIC teaming, please clarify if you can. In slide 13, as you have shown, NIC teaming can take four forms. Working with Windows Server 2012 I found that switch dependent teaming can be used in two modes, static (generic) and dynamic (LACP), and that both can be combined with the Address Hash and Hyper-V Port distribution modes, so in total there are four combinations within switch dependent mode. Could you clarify static teaming with Hyper-V Port, and LACP with Address Hash? Thanks again. (Saurabh Mishra)
  • Hi Saurabh Mishra, thanks for your comment. LACP provides additional error checking on where and what cables are connected compared to static teaming. Besides that, the functionality is the same as static teaming, so in the slide I have grouped them together as one. The traffic distribution is the same for static teaming and LACP. Hope this answers your question. There is a great session by Don Stanwyck that will also give you more insight. You can find the recording here: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/WSV314 Thanks, Marc

Transcript: NIC Teaming and Converged Fabric

  1. NIC Teaming & Converged Fabric
  2. Hyper-V | Private Cloud | Hosted Cloud | www.hyper-v.nu | Marc van Eijk (@_marcvaneijk), DUVAK
  3. Agenda: NIC Teaming | Quality of Service | Designs | System Center VMM 2012 SP1
  4. NIC Teaming
  5. Windows Server 2008 R2: dedicated 1 Gb networks per function (Management, Live Migration, Cluster, Storage with MPIO) next to the Hyper-V Switch for VM traffic.
  6. Windows Server 2012 NIC Teaming: members, connect, distribute, combine.
  7. NIC Teaming team members: physical NICs (Ethernet, Windows logo required) are the team members; on top of the NIC team sit team NICs (tNICs, in default mode or tagged with a VLAN such as 100 or 200) and, through the Hyper-V Switch, virtual NICs (vNICs) for Management, Live Migration, Cluster and the VMs.
  8. NIC Teaming connection modes: Switch Independent, and Switch Dependent with static teaming or LACP. LACP negotiates through LACPDUs carrying the system LACP priority, system MAC address, port LACP priority, port number and operational key.
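     A minimal PowerShell sketch of the connection modes, assuming two physical adapters named NIC1 and NIC2 (placeholder names); pick one of the three:

        # Switch independent: the switch is unaware of the team, no switch configuration needed
        New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 -TeamingMode SwitchIndependent
        # Switch dependent, static: the port group is configured manually on the switch
        New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 -TeamingMode Static
        # Switch dependent, LACP: membership is negotiated with the switch through LACPDUs
        New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 -TeamingMode Lacp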
  9. NIC Teaming load distribution modes: Address Hash with TransportPorts (source and destination TCP ports and IP addresses), IPAddresses (source and destination IP addresses) or MacAddresses (source and destination MAC addresses), and HyperVPort, which distributes per Hyper-V Switch port and pairs with D-VMQ.
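     And a sketch of switching the load distribution mode afterwards (Team1 is the placeholder team from the previous sketch):

        # Hash on source and destination TCP ports and IP addresses
        Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm TransportPorts
        # Hash on source and destination IP addresses
        Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm IPAddresses
        # Hash on source and destination MAC addresses
        Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm MacAddresses
        # Traffic of each Hyper-V Switch port is pinned to one team member, so D-VMQ stays effective
        Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm HyperVPort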
  10. Matrix (1/4), Switch Independent / Address Hash: native mode teaming with switch diversity; active/standby; teaming in a VM; workloads with heavy outbound and light inbound traffic.
  11. Matrix (2/4) adds Switch Independent / HyperVPort: maximum use of Virtual Machine Queues (VMQs); more VMs than team members; suitable when one team member provides enough bandwidth per VM.
  12. Matrix (3/4) adds Switch Dependent / Address Hash: native teaming with maximum performance and no switch diversity; one VM needs more bandwidth than one team member.
  13. Matrix (4/4, the complete matrix) adds Switch Dependent / HyperVPort: company policy requires LACP; more VMs than team members; suitable when one team member provides enough bandwidth per VM.
  14. Matrix (recap): Switch Independent / HyperVPort and Switch Dependent / Address Hash highlighted on the Hyper-V Switch / NIC team diagram.
  15. NIC Teaming in a VM (guest teaming): supported in Switch Independent mode with Address Hash, limited to a maximum of 2 vNICs, each connected to an external Hyper-V Switch; typically used to protect SR-IOV virtual functions (VF) on the physical NICs.
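     Guest teaming first has to be allowed on the VM's network adapters; a minimal sketch, assuming a VM named VM01 and the default adapter names inside the guest:

        # On the Hyper-V host: allow the guest to team its vNICs (needed for SR-IOV failover)
        Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On
        # Inside the guest: team the two vNICs, switch independent with address hash
        New-NetLbfoTeam -Name GuestTeam -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts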
  16. NIC Teaming PowerShell:
        Get-Command -Module NetLbfo
        New-NetLbfoTeam Team1 NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort
            Teaming modes: SwitchIndependent, Static, LACP
            Load balancing algorithms: TransportPorts, IPAddresses, MacAddresses, HyperVPort
        Add-NetLbfoTeamMember NIC1 Team1    (add a physical NIC to Team1)
        Add-NetLbfoTeamNIC Team1 83         (add a team NIC to Team1 with VLAN ID 83)
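     A quick verification sketch with the matching Get cmdlets from the same NetLbfo module:

        Get-NetLbfoTeam                      # team name, teaming mode, load balancing algorithm, status
        Get-NetLbfoTeamMember -Team Team1    # physical NICs in the team and their state
        Get-NetLbfoTeamNic -Team Team1       # team interfaces (tNICs) and their VLAN IDs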
  17. NIC Teaming demo: two failover cluster hosts, each carrying Management, Live Migration, Cluster and VM traffic over a Hyper-V Switch on NIC teams.
  18. Quality of Service
  19. Hyper-V Switch & Quality of Service. Minimum bandwidth mode: Weight (default), Absolute (bits per second), or None.
        New-VMSwitch "VSwitch" -MinimumBandwidthMode Weight -NetAdapterName "Team" -AllowManagementOS $false
        Set-VMSwitch "VSwitch" -DefaultFlowMinimumBandwidthWeight 50
            (in Absolute mode, Set-VMSwitch takes -DefaultFlowMinimumBandwidthAbsolute instead)
        Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VSwitch"
        Add-VMNetworkAdapter -ManagementOS -Name "Live Migration" -SwitchName "VSwitch"
        Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "VSwitch"
        Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
        Set-VMNetworkAdapter -ManagementOS -Name "Live Migration" -MinimumBandwidthWeight 30
        Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live Migration" -Access -VlanId 11
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 12
     (Diagram: Management, Live Migration and Cluster vNICs with weights 10, 30 and 10, a default flow weight of 50 for the VMs on VLAN IDs 58 and 63, all on a Hyper-V Switch over a NIC team.)
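     A small verification sketch after the commands above; the switch name "VSwitch" comes from the slide, and the property names are the ones the Hyper-V module exposes (check Get-Member if they differ on your build):

        # Bandwidth mode and default flow reservation on the switch
        Get-VMSwitch "VSwitch" | Format-List Name, BandwidthReservationMode, DefaultFlowMinimumBandwidth*
        # Weight assigned to each management OS vNIC
        Get-VMNetworkAdapter -ManagementOS |
            Select-Object Name, @{ Name = 'Weight'; Expression = { $_.BandwidthSetting.MinimumBandwidthWeight } }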
  20. Hyper-V Switch & Quality of Service guidelines: let the default flow carry the VMs, keep the total weight at 100, give critical workloads their own weight, and leave gaps between the weight values (in the example: default flow 50, Management 10, Live Migration 30, Cluster 10).
  21. Quality of Service, Absolute versus Weight: in the weight example the Default Flow [50] and Cluster [10] are each using only 5, leaving 90 available bandwidth; the remaining active weights are Management [10] and Live Migration [30], 40 in total, so Management can grow to 90 / 40 * 10 = 22.5 and Live Migration to 90 / 40 * 30 = 67.5. In Absolute mode each flow (Default Flow, Management, Cluster, Live Migration) reserves a fixed number of bits per second [BPS] instead.
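     The same arithmetic as a short PowerShell sketch (numbers taken from the slide; idle capacity is shared in proportion to the weights of the flows that still want bandwidth):

        # Default Flow and Cluster only use 5 each, so 90 remains to be divided
        $available = 90
        # Flows that are still competing, with their configured weights
        $active = @{ 'Management' = 10; 'Live Migration' = 30 }
        $totalWeight = ($active.Values | Measure-Object -Sum).Sum    # 40
        foreach ($flow in $active.Keys) {
            '{0} can grow to {1}' -f $flow, ($available / $totalWeight * $active[$flow])
        }
        # Management can grow to 22.5, Live Migration can grow to 67.5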
  22. Quality of Service, Absolute versus Weight (overview): Absolute reserves bits per second for the Default Flow, Management, Cluster and Live Migration; Weight assigns relative shares: Default Flow [50], Management [10], Cluster [10], Live Migration [30].
  23. Quality of Service demo: failover cluster hosts carrying Management, Live Migration, Cluster and VM traffic over Hyper-V Switches on LACP NIC teams (one with HyperVPort, one with Address Hash distribution), with bandwidth weights such as 90 and 10.
  24. Designs
  25. Upgrade with existing hardware: the dedicated Windows Server 2008 R2 networks (Management, Live Migration, Cluster and the Hyper-V Switch) move onto NIC teams in Windows Server 2012.
  26. Converged designs: a single team carrying Management, Live Migration, Cluster and VM traffic through one Hyper-V Switch; a variant with the VMs isolated on their own team; and Data Center Bridging (DCB).
  27. iSCSI designs: iSCSI traffic stays outside the NIC team and uses MPIO, either on the hosts (optionally over dedicated switches) or inside a VM, alongside the converged Management, Live Migration, Cluster and VM traffic.
  28. NIC Teaming & SMB 3.0: the converged team carries Management, Live Migration, Cluster and VM traffic, while SMB 3.0 storage traffic uses separate NICs that bring RDMA hardware, SMB Multichannel and RSS.
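     A hedged sketch for checking the SMB 3.0 side with standard SMB and NetAdapter cmdlets (note that in Windows Server 2012 RDMA is not used on adapters that are members of a team, which is why the SMB NICs stay outside the NIC team):

        # RDMA and RSS capabilities of the physical NICs
        Get-NetAdapterRdma
        Get-NetAdapterRss
        # Interfaces SMB Multichannel can use, and the connections it actually established
        Get-SmbClientNetworkInterface
        Get-SmbMultichannelConnection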
  29. System Center VMM 2012 SP1: the converged design applied to existing hosts and through bare metal deployment, with Management, Live Migration, Cluster and VM traffic over a Hyper-V Switch and NIC team, modeled as a Logical Switch.
  30. Many, many thanks to:
