Windows Server 2012 includes several new and improved networking features for Hyper-V. They improve performance and scalability by offloading more processing to the network interface card. New features include improved Receive Side Scaling, Receive Segment Coalescing, Dynamic Virtual Machine Queuing, Single Root I/O Virtualization, and NIC teaming. Together these features address availability, reliability, security, and complexity for virtualized workloads.
This is the deck that I used at the January 2012 Hyper-V.nu event in Amsterdam, Netherlands. It focuses on the Build announced details on Windows Server 8 Hyper-V networking.
7. Socket, NUMA, Core, K-Group
Processor: One physical processor, which can consist of one or more NUMA nodes. Today a physical processor ≈ a socket, with multiple cores.
Non-uniform memory architecture (NUMA) node: A set of logical processors and cache that are close to one another.
Core: One processing unit, which can consist of one or more logical processors.
Logical processor (LP): One logical computing engine from the perspective of the operating system, application or driver. In effect, a logical processor is a thread (think hyper-threading).
Kernel Group (K-Group): A set of up to 64 logical processors.
8. Advanced Network Features (1)
Receive Side Scaling (RSS)
Receive Segment Coalescing (RSC)
Dynamic Virtual Machine Queuing (DVMQ)
Single Root I/O Virtualization (SR-IOV)
NIC Teaming
RDMA/Multichannel support for virtual machines on SMB 3.0
9. Receive Side Scaling (RSS)
Windows Server 2012 scales RSS to the next generation of servers & workloads.
Spreads interrupts across all available CPUs, even for very large scale hosts.
RSS now works across K-Groups.
RSS is even "NUMA aware" to optimize performance.
Now load balances UDP traffic across CPUs.
40% to 100% more throughput (backups, file copies, web).
10. [Diagram: an RSS NIC with 8 queues spreading incoming packets across NUMA nodes 0-3]
RSS improves scalability on multiple processors / NUMA nodes by distributing TCP/UDP receive traffic across the cores in different nodes / K-Groups.
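The spreading behaviour above can be sketched in a few lines of Python. This is a conceptual illustration only: the hash and indirection table are stand-ins for the Toeplitz hash and table a real RSS NIC implements.

```python
# Conceptual RSS sketch: hash each flow's 4-tuple, use the hash to index an
# indirection table, and service that flow's interrupts on the chosen CPU.
# Different flows land on different CPUs; one flow always stays on one CPU.
import hashlib

NUM_CPUS = 8  # logical processors available to RSS (illustrative)
indirection_table = [i % NUM_CPUS for i in range(128)]

def rss_cpu(src_ip, src_port, dst_ip, dst_port):
    """Map a TCP/UDP 4-tuple to the CPU that services its receive work."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return indirection_table[h % len(indirection_table)]

# 32 client flows towards one SMB server port get spread over the cores.
flows = [("10.0.0.%d" % i, 49000 + i, "10.0.0.200", 445) for i in range(32)]
cpus_used = {rss_cpu(*f) for f in flows}
print(f"{len(flows)} flows spread over {len(cpus_used)} CPUs")
```

Because the hash is deterministic, packets of one connection never bounce between cores, which preserves in-order processing per flow.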
11. Receive Segment Coalescing (RSC)
Coalesces packets in the NIC so the stack processes fewer headers.
Multiple packets belonging to a connection are coalesced by the NIC into a larger packet (max of 64 KB) and processed within a single interrupt.
10-20% improvement in throughput & CPU workload; offload to NIC.
Enabled by default on all 10 Gbps adapters.
12. Receive Segment Coalescing
[Diagram: a NIC with RSC coalescing incoming packets into a larger buffer]
RSC helps by coalescing multiple inbound packets into a larger buffer or "packet", which reduces per-packet CPU costs as fewer headers need to be processed.
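The coalescing idea can be sketched as a greedy merge of consecutive segments of one connection, capped at the 64 KB limit the slide mentions (segment sizes here are illustrative):

```python
# Conceptual RSC sketch: merge consecutive in-order segments of one TCP
# connection into large buffers of at most 64 KB, so the host stack
# processes one header per buffer instead of one per wire packet.
MAX_COALESCED = 64 * 1024

def coalesce(segment_sizes, max_size=MAX_COALESCED):
    """Greedily merge consecutive segment sizes without exceeding max_size.
    Returns the coalesced buffer sizes the stack actually sees."""
    buffers, current = [], 0
    for size in segment_sizes:
        if current + size > max_size and current > 0:
            buffers.append(current)   # flush the full buffer to the stack
            current = 0
        current += size
    if current:
        buffers.append(current)
    return buffers

segments = [1460] * 100            # 100 MSS-sized segments, one connection
buffers = coalesce(segments)
print(f"headers processed: {len(buffers)} instead of {len(segments)}")
# → headers processed: 3 instead of 100
```

The per-packet fixed costs (header parsing, per-interrupt work) shrink roughly in proportion to the reduction in buffer count.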
13. Dynamic Virtual Machine Queue (DVMQ)
VMQ is to virtualization what RSS is to native workloads. It makes sure that routing, filtering etc. is done by the NIC in queues, and that the interrupts for those queues don't all get handled by one processor (CPU 0).
Most inbox 10 Gbps Ethernet adapters support this. Enabled by default.
[Diagram: network I/O path with VMQ vs. without VMQ]
14. Dynamic Virtual Machine Queue (DVMQ)
[Diagram: three root partitions, each with CPUs 0-3 above a physical NIC, comparing No VMQ, Static VMQ and Dynamic VMQ]
Adaptive: optimal performance across changing workloads.
15. Single-Root I/O Virtualization (SR-IOV)
Reduces CPU utilization for processing network traffic.
Reduces latency path.
Increases throughput.
Requires:
Chipset: Interrupt & DMA remapping
BIOS support
CPU: Hardware virtualization, EPT or NPT
[Diagram: network I/O path without SR-IOV (root partition with Hyper-V Switch doing routing, VLAN filtering and data copy, virtual NIC over VMBUS) vs. with SR-IOV (a Virtual Function of the physical NIC assigned directly to the virtual machine)]
16. SR-IOV Enabling & Live Migration
Turn On IOV:
Enable IOV (VM NIC property)
Virtual Function is "assigned"
"NIC" automatically created
Traffic flows through VF; software path is not used
Live Migration:
Switch back to software path
Remove VF from VM
Migrate as normal
Post Migration:
Reassign Virtual Function, assuming resources are available
VM has connectivity even if:
Switch not in IOV mode
IOV physical NIC not present
Different NIC vendor
Different NIC firmware
[Diagram: the virtual machine's network stack failing over between a Virtual Function and the software NIC / software switch (IOV mode) path during live migration]
17. NIC Teaming
Customers are dealing with way too many issues.
NIC vendors would like to get rid of supporting this.
Microsoft needs this to be competitive & complete the solution stack + reduce support issues.
18. NIC Teaming
Teaming modes: switch dependent, switch independent.
Load balancing: address hash, Hyper-Port.
Hashing modes: 4-tuple, 2-tuple, MAC address.
Active/Active & Active/Standby.
Vendor agnostic.
[Diagram: LBFO architecture - the LBFO admin GUI and WMI drive the LBFO provider and LBFO configuration DLL in user mode; via IOCTL, the kernel-mode IM MUX handles frame distribution/aggregation, failure detection and control protocol implementation between a virtual miniport exposed to the Hyper-V Extensible Switch and NICs 1-3 attached to the network switch]
19. NIC Teaming (LBFO)
[Diagram: parent NIC teaming vs. guest NIC teaming. Parent: a guest running any OS sits on a Hyper-V virtual switch over an LBFO team of SR-IOV NICs, so SR-IOV is not exposed to the guest. Guest: a guest running Windows Server 2012 teams two NICs itself, each backed by its own Hyper-V virtual switch and SR-IOV NIC]
20. SMB Direct (SMB over RDMA)
What:
Addresses congestion in the network stack by offloading the stack to the network adapter.
Advantages:
Scalable, fast and efficient storage access.
High throughput, low latency & minimal CPU utilization.
Load balancing, automatic failover & bandwidth aggregation via SMB Multichannel.
Scenarios:
High performance remote file access for application servers like Hyper-V, SQL Server, IIS and HPC.
Used by File Server and Cluster Shared Volumes (CSV) for storage communications within a cluster.
Required hardware:
RDMA-capable network interface (R-NIC); three types: iWARP, RoCE & InfiniBand.
[Diagram: SMB client and server over a network with RDMA support, the R-NICs bypassing the user/kernel stack down through NTFS and SCSI to the disk]
21. SMB Multichannel
Multiple connections per SMB session.
Full throughput: bandwidth aggregation with multiple NICs; multiple CPU cores engaged when using Receive Side Scaling (RSS).
Automatic failover: SMB Multichannel implements end-to-end failure detection; leverages NIC teaming if present, but does not require it.
Automatic configuration: SMB detects and uses multiple network paths.
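The aggregation-plus-failover behaviour reduces to a simple rule: a session's usable bandwidth is the sum over its healthy paths. A tiny sketch, with purely illustrative NIC names and speeds:

```python
# Conceptual SMB Multichannel sketch: one session runs over several
# connections (one or more per NIC); bandwidth aggregates across healthy
# paths, and losing one path just removes its term from the sum.
def session_bandwidth(nics):
    """nics: list of (name, gbps, healthy). Sum the healthy paths."""
    return sum(gbps for _, gbps, healthy in nics if healthy)

paths = [("NIC 1", 10, True), ("NIC 2", 10, True)]
print(session_bandwidth(paths))   # → 20  (both 10 GbE paths aggregated)

paths = [("NIC 1", 10, False), ("NIC 2", 10, True)]
print(session_bandwidth(paths))   # → 10  (automatic failover to survivor)
```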
22. SMB Multichannel, Single NIC Port
1 session, without Multichannel: no failover; can't use full 10 Gbps; only one TCP/IP connection; only one CPU core engaged.
1 session, with Multichannel: no failover; full 10 Gbps available; multiple TCP/IP connections; Receive Side Scaling (RSS) helps distribute load across CPU cores.
[Diagram: SMB client and server, each with one RSS-capable 10GbE NIC through a 10GbE switch; per-core CPU utilization sits on one core without Multichannel and spreads across cores 1-4 with it]
23. SMB Multichannel, Multiple NIC Ports
1 session, without Multichannel: no automatic failover; can't use full bandwidth; only one NIC engaged; only one CPU core engaged.
1 session, with Multichannel: automatic NIC failover; combined NIC bandwidth available; multiple NICs engaged; multiple CPU cores engaged.
[Diagram: SMB clients and servers, each with two RSS-capable 10GbE NICs through 10GbE switches; without Multichannel only one NIC per session carries traffic, with Multichannel both do]
24. SMB Multichannel & NIC Teaming
1 session, NIC Teaming without Multichannel: automatic NIC failover; can't use full bandwidth; only one NIC engaged; only one CPU core engaged.
1 session, NIC Teaming with Multichannel: automatic NIC failover (faster with NIC Teaming); combined NIC bandwidth available; multiple NICs engaged; multiple CPU cores engaged.
[Diagram: SMB clients and servers with teamed pairs of 10GbE and 1GbE NICs through matching switches; Multichannel engages both team members instead of one]
25. SMB Direct & Multichannel
1 session, without Multichannel: no automatic failover; can't use full bandwidth; only one NIC engaged; RDMA capability not used.
1 session, with Multichannel: automatic NIC failover; combined NIC bandwidth available; multiple NICs engaged; multiple RDMA connections.
[Diagram: SMB clients and servers with pairs of 54Gb InfiniBand and 10GbE R-NICs through matching switches; Multichannel drives both R-NICs with multiple RDMA connections]
26. SMB Multichannel Auto Configuration
Auto configuration looks at NIC type/speed => the same NICs are used for RDMA/Multichannel (it doesn't mix 10 Gbps with 1 Gbps, or RDMA with non-RDMA).
Let the algorithms work before you decide to intervene.
Choose adapters wisely for their function.
[Diagram: four client/server pairs with mixed 10GbE, 1GbE, 32Gb InfiniBand and wireless adapters, showing which matching subset auto configuration selects in each case]
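Under the stated rules (prefer RDMA, then link speed, never mix unlike NICs), the selection can be sketched as below. The tuple format and tie-breaking are illustrative simplifications of what SMB actually negotiates:

```python
# Conceptual auto-configuration sketch: rank candidate NICs by
# (RDMA-capable, speed), then keep only the NICs identical to the best one,
# so 10 Gbps is never mixed with 1 Gbps and RDMA never with non-RDMA.
def select_paths(nics):
    """nics: list of (name, speed_gbps, rdma). Return the NICs SMB uses."""
    best = max(nics, key=lambda n: (n[2], n[1]))   # RDMA first, then speed
    return [n for n in nics if (n[2], n[1]) == (best[2], best[1])]

nics = [("R-NIC1", 10, True), ("R-NIC2", 10, True),
        ("NIC3", 10, False), ("NIC4", 1, False)]
print([n[0] for n in select_paths(nics)])   # → ['R-NIC1', 'R-NIC2']
```

This mirrors the slide's advice: the slower and non-RDMA adapters are simply left out, which is why adapter choice matters more than manual tuning.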
27. Networking Features Cheat Sheet
[Table: rates Large Send Offload (LSO), Receive Segment Coalescing (RSC), Receive Side Scaling (RSS), Virtual Machine Queues (VMQ), Remote DMA (RDMA) and Single Root I/O Virtualization (SR-IOV) against the metrics lower latency, higher scalability, higher throughput and lower path length]
28. Advanced Network Features (2)
Consistent Device Naming
DCTCP/DCB/QoS
DHCP Guard/Router Guard/Port Mirroring
Port ACLs
IPsec Task Offload for Virtual Machines (IPsecTOv2)
Network virtualization & Extensible Switch
30. DCTCP Requires Less Buffer Memory
1 Gbps flow controlled by TCP: needs 400 to 600 KB of memory; TCP sawtooth visible.
1 Gbps flow controlled by DCTCP: requires 30 KB of memory; smooth.
31. Datacenter TCP (DCTCP)
Windows Server 2012 deals with network congestion by reacting to the degree, not merely the presence, of congestion.
DCTCP aims to achieve low latency, high burst tolerance and high throughput with small-buffer switches.
Requires Explicit Congestion Notification (ECN, RFC 3168) capable switches.
The algorithm is enabled when it makes sense (low round trip times, i.e. in the data center).
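"Reacting to the degree of congestion" is the heart of DCTCP: the sender tracks the fraction of ECN-marked packets and cuts its window in proportion to that fraction, instead of halving it on any sign of congestion. A sketch of the sender-side reaction (gain and window sizes illustrative):

```python
# Conceptual DCTCP sender sketch: alpha is a moving average of the fraction
# of ECN-marked packets; the congestion window is reduced by alpha/2 rather
# than the fixed 1/2 classic TCP uses, so mild congestion means a mild cut.
G = 1 / 16          # gain for the moving average of the marked fraction

def dctcp_update(cwnd, alpha, acked, marked):
    """One observation window: 'marked' of 'acked' packets carried ECN marks."""
    frac = marked / acked if acked else 0.0
    alpha = (1 - G) * alpha + G * frac            # smoothed congestion level
    if marked:
        cwnd = max(1.0, cwnd * (1 - alpha / 2))   # proportional back-off
    return cwnd, alpha

cwnd, alpha = 100.0, 0.0
cwnd, alpha = dctcp_update(cwnd, alpha, acked=100, marked=5)  # mild congestion
print(round(cwnd, 2))   # → 99.84  (a small cut, not the 50 classic TCP makes)
```

Because the cuts are small and frequent, the queue in the switch stays short and steady, which is why DCTCP gets by with ~30 KB of switch buffer instead of hundreds.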
33. Datacenter TCP (DCTCP)
Running out of buffer in a switch gets you into stop/go hell: a boatload of green, orange & red lights along your way.
Big buffers mitigate this but are very expensive.
http://www.flickr.com/photos/mwichary/3321222807/ http://www.flickr.com/photos/bexross/2636921208/
34. Datacenter TCP (DCTCP)
You want to be in a green wave: Windows Server 2012 & ECN provide network traffic control by default.
http://www.flickr.com/photos/highwaysagency/6281302040/
http://www.telegraph.co.uk/motoring/news/5149151/Motorists-to-be-given-green-traffic-lights-if-they-stick-to-speed-limit.html
35. Data Center Bridging (DCB)
Prevents congestion in NIC & network by reserving bandwidth for particular traffic types.
Windows Server 2012 provides support & control for DCB, and tags packets by traffic type.
Provides lossless transport for mission critical workloads.
36. DCB is like a car pool lane …
http://www.flickr.com/photos/philopp/7332438786/
37. DCB Requirements
1. Enhanced Transmission Selection (IEEE 802.1Qaz)
2. Priority Flow Control (IEEE 802.1Qbb)
3. (Optional) Data Center Bridging Exchange protocol
4. (Not required) Congestion Notification (IEEE 802.1Qau)
38. Hyper-V QoS Beyond the VM
Manage the network bandwidth with a Maximum (value) and/or a Minimum (value or weight).
[Diagram: the management OS (live migration, storage and management traffic) and VMs 1-n sharing an LBFO team of two 10 GbE physical NICs through the Hyper-V virtual switch]
40. Default Flow per Virtual Switch
Customers may group a number of VMs that each don't have minimum bandwidth. They will be bucketed into a default flow, which has a minimum weight allocation. This is to prevent starvation.
[Diagram: a 1 Gbps Hyper-V Extensible Switch with two Gold-tenant VMs of unspecified weight and a default flow with weight 10]
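Minimum bandwidth by weight is just proportional sharing: each flow (including the default flow) is guaranteed its weight's share of the link. A sketch, with the weights and link speed purely illustrative:

```python
# Conceptual minimum-bandwidth sketch: divide the link in proportion to
# per-flow weights; VMs without an explicit minimum share the default flow's
# weight, so they can never be starved by the weighted tenants.
def min_bandwidth(link_gbps, weights):
    """Return each flow's guaranteed share of the link, by weight."""
    total = sum(weights.values())
    return {flow: link_gbps * w / total for flow, w in weights.items()}

# Two Gold-tenant VMs with explicit weights; everyone else in the default flow.
shares = min_bandwidth(1.0, {"VM1": 45, "VM2": 45, "default": 10})
print(shares["default"])   # → 0.1  (the default flow keeps a guaranteed 10%)
```

Note these are minimums, not caps: an idle tenant's share is redistributed, but under contention every flow can always claim its guaranteed fraction.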
41. Maximum Bandwidth for Tenants
One common customer pain point is that WAN links are expensive.
Cap VM throughput to the Internet to avoid bill shock.
[Diagram: a Unified Remote Access Gateway behind the Hyper-V Extensible Switch, with traffic capped below 100 Mb towards the Internet and unlimited towards the intranet]
42. Bandwidth Network Management
Manage the network bandwidth with a Maximum and a Minimum value.
SLAs for hosted virtual machines.
Control per VM, not per host.
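A maximum-bandwidth cap of this kind is commonly implemented as a token bucket; here is a sketch of that general technique (the slides don't specify Hyper-V's internal mechanism, and all rates and sizes below are illustrative):

```python
# Conceptual token-bucket sketch for a per-VM maximum: tokens accrue at the
# configured cap rate; a frame may be sent only if enough tokens remain, so
# sustained throughput never exceeds the cap while short bursts are absorbed.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate               # steady-state cap (units per second)
        self.capacity = burst          # maximum burst allowance
        self.tokens = burst

    def refill(self, seconds):
        """Accrue allowance for elapsed time, up to the burst capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, size):
        """Transmit 'size' units if the cap allows; otherwise hold the frame."""
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

# Cap a tenant VM at 100 Mb/s with a 10 Mb burst allowance.
cap = TokenBucket(rate=100, burst=10)
sent = sum(cap.try_send(1) for _ in range(50))   # 50 x 1 Mb frames at once
print(sent)   # → 10  (only the burst allowance gets through instantly)
```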
44. IPsec Task Offload
IPsec is CPU intensive => offload to NIC.
In demand due to compliance (SOX, HIPAA, etc.); IPsec is required & needed for secure operations.
Only available to host/parent workloads in W2K8R2; now extended to virtual machines.
Managed by the Hyper-V switch.
45. Port ACL
Allow/Deny/Counter
MAC, IPv4 or IPv6 addresses
Wildcards allowed in IP addresses
Note: Counters are implemented as ACLs
Counts packets to address/range
Read via WMI/PowerShell
Counters are tied into the resource metering you can do for charge/show back, planning etc.
ACLs are the basic building blocks of virtual switch security functions.
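The Allow/Deny/Counter model above can be sketched as follows. This is an illustrative evaluation model only: the rule format, matching order, and use of CIDR prefixes in place of the actual wildcard syntax are all assumptions, not Hyper-V's real implementation:

```python
# Conceptual port-ACL sketch: each rule carries an address range, an action
# (Allow/Deny) and a packet counter, so the same rule list provides both
# filtering and the metering read out via WMI/PowerShell.
import ipaddress

class PortAcl:
    def __init__(self):
        self.rules = []   # list of dicts: {"net", "action", "count"}

    def add(self, cidr, action):
        self.rules.append({"net": ipaddress.ip_network(cidr),
                           "action": action, "count": 0})

    def check(self, ip):
        for rule in self.rules:                # first matching rule wins
            if ipaddress.ip_address(ip) in rule["net"]:
                rule["count"] += 1             # metering for charge/show back
                return rule["action"]
        return "Deny"                          # default when nothing matches

acl = PortAcl()
acl.add("10.0.0.0/24", "Allow")   # prefix stands in for a wildcarded range
acl.add("0.0.0.0/0", "Deny")
print(acl.check("10.0.0.5"), acl.check("192.168.1.1"))   # → Allow Deny
```

Keeping the counter on the rule itself is what the slide means by "counters are implemented as ACLs": metering falls out of the same match that enforces the policy.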