Microsoft Virtual Academy
Part 1 | Windows Server 2012 Hyper-V & VMware vSphere 5.1
Part 2 | System Center 2012 SP1 & VMware vSphere 5.1
Benefits
• Layer 2 virtual interface
• Managed programmatically
• Extensible by partners or customers
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Extensible vSwitch | Yes | No | Replaceable¹
• Reduces latency of network path
• Reduces CPU utilization for processing network traffic
• Increases throughput
[Diagram: SR-IOV packet path. The virtual machine's network stack uses a software NIC until IOV is enabled (VM NIC property); a Virtual Function is then "Assigned", with automatic teaming for failover]
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
Dynamic Virtual Machine Queue | Yes | NetQueue¹ | NetQueue¹
Improvements
• Faster and simultaneous migration
• Live migration outside a clustered environment
• Store virtual machines on SMB file shares
[Diagram: virtual machine mobility between a source device and a target device on a computer running Hyper-V]
Benefits
• Manage storage in a cloud environment
[Diagram: live migration from source Hyper-V to destination Hyper-V over an IP connection, transferring the virtual machine's configuration between source and target devices]
[Diagram: VLAN-tagged network with VMs, top-of-rack (ToR) and aggregation switches. Topology limits VM placement and requires reconfiguration of production switches (Blue VM, Red VM)]
[Diagram: network virtualization. Blue and Red networks run on one physical server and physical network; virtualization policy is managed by System Center. In the Customer Address (CA) space, Blue1/Blue2 and Red1/Red2 reuse the same IP addresses (10.0.0.5, 10.0.0.7)]
[Diagram: Blue Corp and Red Corp subnets (Blue Subnet1–5, Red Subnet1–2) with overlapping customer addresses 10.0.0.5 and 10.0.0.7 mapped to provider addresses 192.168.2.22 and 192.168.5.55, separated by GRE Key 5001]
[Diagram: datacenter with Host 1 and Host 2. Each VM (VM1, VM2, VMX, VMY) has a customer address (CA) mapped to a provider address (PA), with the mapping policy (e.g., VM1: MAC1, CA1, PA1) maintained by System Center]
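The customer-address/provider-address mapping in the diagram can be sketched as a simple lookup table keyed by the tenant's virtual subnet ID (the GRE key in NVGRE). This is a toy illustration of the idea, not Hyper-V's actual implementation; the class and method names are invented.

```python
# Toy sketch of Hyper-V Network Virtualization address mapping:
# each tenant's customer addresses (CA) map to provider addresses (PA),
# keyed by the tenant's virtual subnet ID (the GRE key in NVGRE).
# All names here are illustrative, not a real API.

class VirtualizationPolicy:
    def __init__(self):
        self._map = {}  # (vsid, customer_ip) -> provider_ip

    def add(self, vsid, customer_ip, provider_ip):
        self._map[(vsid, customer_ip)] = provider_ip

    def lookup(self, vsid, customer_ip):
        """Return the PA to encapsulate toward, or None if no policy exists."""
        return self._map.get((vsid, customer_ip))

policy = VirtualizationPolicy()
# Blue and Red tenants reuse the same CA space (10.0.0.x) without conflict,
# because the virtual subnet ID keeps their mappings separate.
policy.add(5001, "10.0.0.5", "192.168.2.22")  # Blue
policy.add(6001, "10.0.0.5", "192.168.5.55")  # Red

print(policy.lookup(5001, "10.0.0.5"))  # Blue's 10.0.0.5 -> 192.168.2.22
print(policy.lookup(6001, "10.0.0.5"))  # Red's 10.0.0.5 -> 192.168.5.55
```

Because the VSID is part of the key, two tenants can carry identical customer IP addresses on the same physical network without any coordination between them.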
[Diagram: packet flow on host 192.168.4.11. Blue1 and Red1 VMs connect to the Hyper-V Switch, which applies VSID ACL enforcement, routing, and IP virtualization policy enforcement before the NIC (Network Virtualization)]
Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
VM Live Migration | Yes | No¹ | Yes²
1GB Simultaneous L...
©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other pro...
VMWARE Professionals -  Security, Multitenancy and Flexibility
  • Virtualized data centers are becoming more popular and practical every day. IT organizations and hosting providers have begun offering infrastructure as a service (IaaS), which provides more flexible, virtualized infrastructures to customers—"server instances on-demand." Because of this trend, IT organizations and hosting providers must offer customers enhanced security and isolation from one another. If a service provider's infrastructure is hosting two companies, the IT Admin must help ensure that each company is provided its own privacy and security. Before Windows Server 2012, server virtualization provided isolation between virtual machines, but the network layer of the data center was still not fully isolated, and implied layer-2 connectivity between different workloads running over the same infrastructure.

For the hosting provider, isolation in the virtualized environment must be equal to isolation in the physical data center, to meet customer expectations and not be a barrier to cloud adoption. Isolation is almost as important in an enterprise environment. Although all internal departments belong to the same organization, certain workloads and environments (such as finance and human resource systems) must still be isolated from each other. IT departments that offer private clouds and move to an IaaS operational mode must consider this requirement and provide a way to isolate such highly sensitive workloads.

Windows Server 2012 Hyper-V and Hyper-V Server 2012 contain new security and isolation capabilities through the Hyper-V Extensible Switch.
  • Windows Server 2012 contains new security and isolation capabilities through the Hyper-V Extensible Switch: a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network, with policy enforcement for security and isolation.

The figure shows a network with the Hyper-V Extensible Switch. With Windows Server 2012, you can configure Hyper-V servers to enforce network isolation among any set of arbitrary isolation groups, which are typically defined for individual customers or sets of workloads. Windows Server 2012 provides the isolation and security capabilities for multitenancy by offering the new features presented in the next slides.
  • With Windows Server 2012 and Hyper-V Server 2012, you can configure Hyper-V servers to enforce network isolation among any set of arbitrary isolation groups, which are typically defined for individual customers or sets of workloads. Windows Server 2012 Hyper-V and Hyper-V Server 2012 provide the isolation and security capabilities for multitenancy by offering the following new features:

Virtual machine isolation with PVLANs. VLAN technology is traditionally used to subdivide a network and provide isolation for individual groups that share a single physical infrastructure. Windows Server 2012 introduces support for PVLANs, a technique used with VLANs that can provide isolation between two virtual machines on the same VLAN. When a virtual machine doesn't need to communicate with other virtual machines, you can use PVLANs to isolate it from other virtual machines in your data center. By assigning each virtual machine in a PVLAN one primary VLAN ID and one or more secondary VLAN IDs, you can put the secondary PVLANs into one of three modes. These PVLAN modes determine which other virtual machines on the PVLAN a virtual machine can talk to. If you want to isolate a virtual machine, put it in isolated mode.
Isolated – Isolated ports cannot exchange packets with each other at layer 2.
Promiscuous – Promiscuous ports can exchange packets with any other port on the same primary VLAN ID.
Community – Community ports on the same VLAN ID can exchange packets with each other at layer 2.

ARP/ND poisoning and spoofing protection. The Hyper-V Extensible Switch provides protection against a malicious virtual machine stealing IP addresses from other virtual machines through ARP spoofing (also known as ARP poisoning in IPv4). With this type of man-in-the-middle attack, a malicious virtual machine sends a fake ARP message, which associates its own MAC address with an IP address that it doesn't own. Unsuspecting virtual machines then send the network traffic targeted to that IP address to the MAC address of the malicious virtual machine instead of the intended destination. For IPv6, Windows Server 2012 and Hyper-V Server 2012 provide equivalent protection for ND spoofing.

DHCP Guard protection. In a DHCP environment, a rogue DHCP server could intercept client DHCP requests and provide incorrect address information, causing traffic to be routed to a malicious intermediary that sniffs all traffic before forwarding it to the legitimate destination. To protect against this man-in-the-middle attack, the Hyper-V administrator can designate which Hyper-V Extensible Switch ports can have DHCP servers connected to them; DHCP server traffic from other Hyper-V Extensible Switch ports is automatically dropped. The Hyper-V Extensible Switch thus protects against a rogue DHCP server attempting to provide IP addresses that would cause traffic to be rerouted.

Virtual port ACLs for network isolation and metering. Port ACLs provide a mechanism for isolating networks and metering network traffic for a virtual port on the Hyper-V Extensible Switch. By using port ACLs, you can control which IP addresses or MAC addresses can (or can't) communicate with a virtual machine. For example, you can use port ACLs to enforce isolation of a virtual machine by letting it talk only to the Internet, or communicate only with a predefined set of addresses. By using the metering capability, you can measure network traffic going to or from a specific IP address or MAC address, which lets you report on traffic sent to or received from the Internet or from network storage arrays. You can configure multiple port ACLs for a virtual port; each port ACL consists of a source or destination network address and a permit, deny, or meter action. The metering capability also supplies information about the number of instances where traffic was attempted to or from a virtual machine from a restricted ("deny") address.

Trunk mode to virtual machines. A VLAN makes a set of host machines or virtual machines appear to be on the same local LAN, independent of their actual physical locations. With Hyper-V Extensible Switch trunk mode, traffic from multiple VLANs can now be directed to a single network adapter in a virtual machine that could previously receive traffic from only one VLAN. As a result, traffic from different VLANs is consolidated, and a virtual machine can listen in on multiple VLANs. This feature can help you shape network traffic and enforce multitenant security in your data center.

Monitoring. Many physical switches can monitor the traffic from specific ports flowing through specific virtual machines on the switch. The Hyper-V Extensible Switch also provides this port mirroring: you can designate which virtual ports should be monitored and to which virtual port the monitored traffic should be delivered for further processing. For example, a security monitoring virtual machine can look for anomalous patterns in the traffic that flows through other specific virtual machines on the switch. In addition, you can diagnose network connectivity issues by monitoring traffic bound for a particular virtual switch port.

Windows PowerShell and WMI. Windows Server 2012 now provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that let you build command-line tools or automated scripts for setup, configuration, monitoring, and troubleshooting. These cmdlets can be run remotely. Windows PowerShell also enables third parties to build their own tools to manage the Hyper-V Extensible Switch.
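The three PVLAN port modes described above boil down to a small reachability rule. The sketch below models that rule for two ports on the same primary VLAN; it is a simplified illustration of the semantics, not Hyper-V code.

```python
# Sketch of the three PVLAN secondary-port modes: given two ports on the
# same primary VLAN, can_talk() says whether they may exchange packets at
# layer 2. Simplified illustration of the rules, not a real implementation.

def can_talk(a_mode, a_secondary, b_mode, b_secondary):
    # Promiscuous ports talk to any port on the same primary VLAN.
    if a_mode == "promiscuous" or b_mode == "promiscuous":
        return True
    # Community ports talk to other ports in the same community
    # (i.e., with the same secondary VLAN ID).
    if a_mode == "community" and b_mode == "community":
        return a_secondary == b_secondary
    # Isolated ports talk to nobody except promiscuous ports.
    return False

# An isolated VM reaches the promiscuous uplink, but not its neighbor.
assert can_talk("isolated", 201, "promiscuous", 200)
assert not can_talk("isolated", 201, "isolated", 201)
# Community VMs reach each other only within the same community.
assert can_talk("community", 202, "community", 202)
assert not can_talk("community", 202, "community", 203)
```

To isolate a virtual machine as the notes suggest, its port is simply placed in isolated mode: every pairing except "isolated with promiscuous" then evaluates to no connectivity.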
  • Many enterprises need the ability to extend virtual switch features with their own plug-ins to suit their virtual environment. If you're in charge of making IT purchasing decisions at your company, you want to know that the virtualization platform you choose won't lock you in to a small set of compatible features, devices, or technologies. In Windows Server 2012 and Hyper-V Server 2012, the Hyper-V Extensible Switch provides new extensibility features. The Hyper-V Extensible Switch is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. It is an open platform that lets multiple vendors provide extensions written to standard Windows API frameworks. The reliability of extensions is strengthened through the Windows standard framework, the reduction of required third-party code for functions, and the backing of the Windows Hardware Quality Labs (WHQL) certification program. The IT Admin can manage the Hyper-V Extensible Switch and its extensions by using Windows PowerShell, programmatically with WMI, or through the Hyper-V Manager user interface.

Several Partners have already announced extensions for the Hyper-V Extensible Switch, including:
Cisco – Nexus 1000V Series Switches & UCS Virtual Machine Fabric Extender (VM-FEX)
NEC – OpenFlow
5nine – Security Manager
InMon – sFlow

These extensions span packet inspection, packet filtering, network forwarding, and intrusion detection, ensuring comprehensive levels of granularity and control for specific customer scenarios.
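The extensibility model above, where vendor extensions sit in the packet path and can inspect, filter, or forward traffic, can be sketched as a chain of callbacks. This toy model (invented names; not the actual NDIS filter-driver API) also shows how a policy such as DHCP Guard fits naturally as one filter in the chain:

```python
# Toy model of an extensible layer-2 switch: extensions register callbacks
# that can inspect, rewrite, or drop packets before forwarding. This mimics
# the idea of capture/filtering/forwarding extensions, not the real NDIS API.

class ExtensibleSwitch:
    def __init__(self):
        self.extensions = []   # ordered filter callbacks
        self.delivered = []    # packets that made it through the chain

    def register(self, ext):
        self.extensions.append(ext)

    def send(self, packet):
        for ext in self.extensions:
            packet = ext(packet)
            if packet is None:      # extension dropped the packet
                return False
        self.delivered.append(packet)
        return True

def dhcp_guard(packet):
    # Drop DHCP-server traffic from ports not designated to host a DHCP server.
    if packet.get("dhcp_server") and not packet.get("port_authorized"):
        return None
    return packet

sw = ExtensibleSwitch()
sw.register(dhcp_guard)
assert sw.send({"src": "vm1", "payload": "web"})                     # forwarded
assert not sw.send({"src": "rogue", "dhcp_server": True,
                    "port_authorized": False})                       # dropped
```

The point of the design is that the switch's forwarding core stays fixed while independently written extensions compose in order, which is how multiple vendors can add inspection, filtering, and intrusion detection to the same switch.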
  • What about VMware? Whilst VMware offer an advanced distributed network switch, it is unfortunately only available in the Enterprise Plus edition of vSphere 5.1; customers wishing to take advantage of the increased granularity, management capability, and control have to upgrade to the highest edition, at substantial cost. The free VMware vSphere Hypervisor unfortunately doesn't provide this capability at all. A key point to note, however, is that the vSphere vSwitch isn't open and extensible, but closed and replaceable. Until recently, Cisco was the only vendor to offer an alternative to the VMware vSphere Distributed Switch; IBM has recently released an alternative as well. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, there is already commitment from four Partners – Cisco, NEC, 5nine, and InMon – to deliver extended functionality across a variety of different extension types, from packet inspection and filtering through to forwarding and intrusion detection, offering customers a greater set of choices for their specific environment. It's also important to note that so far, the approach from VMware's Partners has been more about replacement than integration, with the Cisco Nexus 1000V and the IBM System Networking Distributed Virtual Switch 5000V both effectively replacing the inbox vSphere Distributed Switch.

Many of the more advanced networking capabilities within Windows Server 2012 Hyper-V and Hyper-V Server 2012 are unfortunately not present within the free vSphere Hypervisor, and even with vSphere, key security protection capabilities such as ARP and ND Spoofing Protection, DHCP Snooping Protection and DHCP Guard, along with Virtual Port Access Control Lists, are only available through the purchase of additional technologies on top of vSphere 5.1: either the App component of the vCloud Networking & Security (vCNS) product or the network switch technologies from vendors such as Cisco. This means that, again, customers have to add additional, costly technologies in order to protect against these threats.

With Hyper-V Extensible Switch trunk mode, traffic from multiple VLANs can be directed to a single network adapter in a virtual machine that could previously receive traffic from only one VLAN. As a result, traffic from different VLANs is consolidated, and a virtual machine can listen in on multiple VLANs. This feature can help the IT Admin shape network traffic and enforce multitenant security in the data center. Unfortunately, this feature isn't available in the vSphere Hypervisor; it requires the vSphere Distributed Switch, available only in the Enterprise Plus edition of vSphere 5.1.

Finally, the Hyper-V Extensible Switch provides organizations with the ability not only to monitor individual ports within a vSwitch, but also to mirror the passing traffic to an alternative location for further analysis. With the VMware vSphere Hypervisor, however, all traffic on a Port Group or vSwitch on which 'Promiscuous Mode' is enabled is exposed, posing a potential risk to the security of that network. This lack of granularity restricts its usage in real-world environments, and means that customers who require this level of protection have to upgrade to vSphere 5.1 Enterprise Plus, whose Distributed Switch technology provides the capability through features such as NetFlow and Port Mirroring.
  • Windows Server 2012 Hyper-V and Hyper-V Server 2012 also include a number of performance enhancements within the networking stack to help customers virtualize their most intensive workloads. Virtual Machine Queue, introduced in Windows Server 2008 R2 Hyper-V, enables, when combined with VMQ-capable network hardware, a more streamlined and efficient delivery of packets from the external network to the virtual machine, reducing the overhead on the host operating system. In Windows Server 2012, this has been streamlined and improved considerably: Dynamic Virtual Machine Queue spreads the processing of network traffic more intelligently across CPUs in the host, resulting in higher networking performance.

When it comes to security, many customers are familiar with IPsec. IPsec protects network communication by authenticating and encrypting some or all of the contents of network packets. IPsec Task Offload in Windows Server 2012 leverages the hardware capabilities of server NICs to offload IPsec processing, reducing the CPU overhead of IPsec encryption and decryption significantly. In Windows Server 2012, IPsec Task Offload is extended to virtual machines as well. Customers using VMs who want to protect their network traffic with IPsec can take advantage of the IPsec hardware offload capability available in server NICs, freeing up CPU cycles to perform more application-level work and leaving the per-packet encryption/decryption to hardware.

Finally, when it comes to virtual networking, a primary goal is native I/O throughput. Windows Server 2012 Hyper-V and Hyper-V Server 2012 add the ability to assign SR-IOV functionality from physical devices directly to virtual machines. This gives VMs the ability to bypass the software-based Hyper-V Virtual Switch and directly address the NIC. As a result, CPU overhead and latency are reduced, with a corresponding rise in throughput. This is all available without sacrificing key Hyper-V features such as virtual machine Live Migration.
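The core idea behind Dynamic Virtual Machine Queue, spreading per-VM queue processing across host CPUs rather than pinning it statically, can be illustrated with a small load-balancing sketch. This is a scheduling toy under invented load figures, not Hyper-V's actual algorithm:

```python
# Illustrative sketch of the idea behind Dynamic VMQ: per-VM receive-queue
# processing is spread across host CPUs according to load, rather than being
# statically pinned. A greedy-balancing toy, not Hyper-V's real scheduler.

def assign_queues(queue_load, num_cpus):
    """Greedily place each VM queue on the currently least-loaded CPU."""
    cpus = [0.0] * num_cpus          # accumulated load per CPU
    placement = {}
    # Place heaviest queues first for a better greedy balance.
    for name, load in sorted(queue_load.items(), key=lambda kv: -kv[1]):
        target = min(range(num_cpus), key=lambda c: cpus[c])
        cpus[target] += load
        placement[name] = target
    return placement, cpus

# Invented per-VM traffic loads (fraction of a CPU each queue consumes).
placement, cpus = assign_queues(
    {"vm1": 0.9, "vm2": 0.5, "vm3": 0.4, "vm4": 0.2}, num_cpus=2)
print(placement, cpus)
```

In a static scheme all queues might land on one CPU and saturate it; rebalancing as loads change is what lets the host sustain higher aggregate throughput.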
  • When it comes to deployment of virtualization technologies, many are within secure datacenter environments, but what about those that aren't? Satellite offices, remote sites, home offices, and retail stores are all examples of environments that may not have the same levels of physical security as the enterprise datacenter, yet may still have physical servers with virtualization technologies present. If the physical hosts were compromised, there could be very serious repercussions for the business. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, BitLocker Drive Encryption is included to solve that very problem, allowing customers to encrypt all data stored on the host operating system volume and configured data volumes, along with any Failover Cluster disks, including Cluster Shared Volumes. This ensures that environments large and small, implemented in less physically secure locations, can have the highest levels of data protection for their key workloads, at no additional cost.
  • Now, whilst VMware provide a capability known as NetQueue, VMware's own documentation, 'Performance Best Practices for VMware vSphere 5.1', notes that "On some 10 Gigabit Ethernet hardware network adapters, ESXi supports NetQueue, a technology that significantly improves performance of 10 Gigabit Ethernet network adapters in virtualized environments". What does this mean for customers whose servers don't have 10 GigE? With Windows Server 2012 Hyper-V and Hyper-V Server 2012, and D-VMQ, customers with existing 1 gigabit and 10 gigabit Ethernet adaptors can flexibly utilize these advanced capabilities to improve performance and throughput, whilst reducing the CPU burden on their Hyper-V hosts.

When it comes to network security, specifically IPsec, VMware offers no offloading capabilities from the virtual machine through to the physical network interface, so in a densely populated environment, valuable host CPU cycles will be lost to maintain the desired security level. With Windows Server 2012 Hyper-V, the IPsec Task Offload capability moves this workload to a dedicated processor on the network adaptor, enabling customers to make dramatically better use of the resources and bandwidth available.

As stated earlier, when it comes to virtual networking, a primary goal is native I/O. With SR-IOV, customers have the ability to directly address the physical network interface card from within the virtual machine, reducing CPU overhead and latency whilst increasing throughput. In vSphere 5.1, VMware have introduced SR-IOV support; however, it requires the vSphere Distributed Switch – a feature only found in the highest vSphere edition – meaning customers have to upgrade to take advantage of these higher levels of performance. Also, VMware's implementation of SR-IOV unfortunately doesn't support other features such as vMotion, High Availability, and Fault Tolerance, meaning customers who wish to take advantage of higher levels of performance must sacrifice agility and resiliency.

Prior to vSphere 5.1, VMware provided a feature that offered a similar capability to SR-IOV, and continues to offer it in 5.1. DirectPath I/O, a technology which binds a physical network card to a virtual machine, offers the same enhancement, to near-native performance; however, unlike SR-IOV in Windows Server 2012 Hyper-V and Hyper-V Server 2012, a virtual machine with DirectPath I/O enabled is restricted to that particular host, unless the customer is running a certain configuration of Cisco UCS. Other caveats include:
Very small hardware compatibility list
No memory overcommit
No vMotion (unless running certain configurations of Cisco UCS)
No Fault Tolerance
No Network I/O Control
No VM snapshots (unless running certain configurations of Cisco UCS)
No suspend/resume (unless running certain configurations of Cisco UCS)
No VMsafe/Endpoint Security support

Whilst DirectPath I/O may be attractive to customers from a performance perspective, VMware ask customers to sacrifice agility, losing vMotion in most cases, and scale, having to disable memory overcommit, along with a number of other vSphere features. No such restrictions are imposed when using SR-IOV with Windows Server 2012 Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile, scalable infrastructure.

Finally, when it comes to physical security, VMware has no capability within the vSphere Hypervisor or vSphere 5.1 to encrypt either VMFS or the VMDK files themselves, relying instead on hardware-based or in-guest alternatives, which add cost, management overhead, and additional resource usage.
  • To maintain optimal use of physical resources and to be able to easily add new virtual machines, IT must be able to move virtual machines whenever necessary without disrupting the business. The ability to move virtual machines across Hyper-V hosts is available in Windows Server 2008 R2, with a feature known as Live Migration. Windows Server 2012 Hyper-V and Hyper-V Server 2012 build on that feature and enhance the ability to migrate virtual machines with support for simultaneous live migrations – the ability to move several virtual machines at the same time – enabling a more agile, responsive infrastructure and more optimal usage of network bandwidth during the migration process.

In addition, Windows Server 2012 Hyper-V and Hyper-V Server 2012 introduce Live Storage Migration, which lets the IT Admin move virtual hard disks that are attached to a running virtual machine. Through this feature, IT can transfer virtual hard disks, with no downtime, to a new location for upgrading or migrating storage, performing backend storage maintenance, or redistributing the storage load. The IT Admin can perform this operation by using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network (SAN)-based and file-based storage.

With Windows Server 2012 Hyper-V and Hyper-V Server 2012, live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries. For example, a developer could work on a virtualized web server on a local Windows Server 2012 Hyper-V host and, once testing is complete, migrate this workload live, with no interruption, from the developer's individual host system, where the virtual machine resides on locally attached storage, to the production cluster, where the virtual machine will reside on high-performance SAN storage. With Shared-Nothing Live Migration, this migration is seamless, with no interruption or downtime.
  • NOTE: This slide is animated and has 5 clicks.

To maintain optimal use of physical resources and to add new virtual machines easily, you must be able to move virtual machines whenever necessary – without disrupting your business. Windows Server 2008 R2 introduced live migration, which made it possible to move a running virtual machine from one physical computer to another with no downtime and no service interruption. However, this assumed that the virtual hard disk for the virtual machine remained consistent on a shared storage device such as a Fibre Channel or iSCSI SAN. In Windows Server 2012, live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries, including to any Hyper-V host server in your environment. Hyper-V builds on this feature, adding support for simultaneous live migrations, enabling you to move several virtual machines at the same time. When combined with features such as Network Virtualization, this even allows virtual machines to be moved between local and cloud hosts with ease.

In this example, we are going to show how live migration works when connected to an SMB file share. With Windows Server 2012 and SMB3, you can store your virtual machine hard disk files and configuration files on an SMB share and live migrate the VM to another host, whether that host is part of a cluster or not.

[Click] Live migration setup: During the live migration setup stage, the source host creates a TCP connection with the destination host. This connection transfers the virtual machine configuration data to the destination host. A skeleton virtual machine is set up on the destination host, and memory is allocated to the destination virtual machine.

[Click] Memory page transfer: In the second stage of an SMB-based live migration, the memory that is assigned to the migrating virtual machine is copied over the network from the source host to the destination host. This memory is referred to as the "working set" of the migrating virtual machine. A page of memory is 4 KB. During this phase of the migration, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, with each iteration requiring a smaller number of modified pages to be copied. After the working set is copied to the destination host, the next stage of the live migration begins.

[Click] Memory page copy process: This stage is a memory copy process that duplicates the remaining modified memory pages for "Test VM" to the destination host. The source host transfers the CPU and device state of the virtual machine to the destination host. During this stage, the available network bandwidth between the source and destination hosts is critical to the speed of the live migration; use of a 1-gigabit Ethernet (GbE) or faster connection is important. The faster the source host transfers the modified pages from the migrating virtual machine's working set, the more quickly live migration is completed. The number of pages transferred in this stage is determined by how actively the virtual machine accesses and modifies the memory pages: the more modified pages, the longer it takes to transfer all pages to the destination host.

[Click] Moving the storage handle from source to destination: During this stage of a live migration, control of the storage that is associated with "Test VM", such as any virtual hard disk files or physical storage attached through a virtual Fibre Channel adapter, is transferred to the destination host. (Virtual Fibre Channel is also a new feature of Hyper-V. For more information, see "Virtual Fibre Channel in Hyper-V".) The following figure shows this stage.

[Click] Bringing the virtual machine online on the destination server: In this stage of a live migration, the destination server has the up-to-date working set for the virtual machine and access to any storage that the VM uses. At this point, the VM resumes operation.

Network cleanup: In the final stage of a live migration, the migrated virtual machine runs on the destination server. A message is sent to the network switch, which causes the switch to obtain the new MAC addresses of the migrated virtual machine so that network traffic to and from the VM can use the correct switch port. The live migration process completes in less time than the TCP time-out interval for the virtual machine that is being migrated; TCP time-out intervals vary based on network topology and other factors.
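The iterative memory-copy behaviour described in the stages above can be sketched as a small model: copy the working set, then repeatedly re-copy the pages the still-running VM dirtied during the previous pass, until the remainder is small enough to send during the brief switchover. All numbers below are invented for illustration; this is not Hyper-V's actual convergence logic.

```python
# Sketch of iterative pre-copy during live migration: each pass transfers
# the pages dirtied during the previous pass, so faster links converge in
# fewer passes. A simplified model with invented thresholds and rates.

def live_migrate(working_set_pages, dirty_rate, bandwidth_pages, max_passes=10):
    """Return (passes, pages_left_for_switchover). Simplified model."""
    to_copy = working_set_pages
    for passes in range(1, max_passes + 1):
        # While this pass transfers, the running VM keeps dirtying pages.
        seconds = to_copy / bandwidth_pages
        to_copy = int(seconds * dirty_rate)
        if to_copy < 1000:   # small enough to copy while the VM is paused
            return passes, to_copy
    return max_passes, to_copy

passes, remainder = live_migrate(
    working_set_pages=1_000_000,   # ~4 GB of 4 KB pages
    dirty_rate=50_000,             # pages dirtied per second (invented)
    bandwidth_pages=500_000)       # pages transferred per second (invented)
print(passes, remainder)           # -> 4 100
```

The model also shows why the notes stress network bandwidth: if `bandwidth_pages` does not comfortably exceed `dirty_rate`, each pass shrinks slowly (or not at all) and the migration takes far longer to reach the switchover point.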
  • NOTE: This slide is animated and has 3 clicks. Not only can we live migrate a virtual machine between two physical hosts; Hyper‑V in Windows Server 2012 also introduces live storage migration, which lets you move virtual hard disks that are attached to a running virtual machine without downtime. Through this feature, you can transfer virtual hard disks to a new location for upgrading or migrating storage, performing backend storage maintenance, or redistributing your storage load. You can perform this operation by using a new wizard in Hyper‑V Manager or the new Hyper‑V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network (SAN)-based and file-based storage. When you move a running virtual machine’s virtual hard disks, Hyper‑V performs the following steps to move storage: Throughout most of the move operation, disk reads and writes go to the source virtual hard disk. [Click] After live storage migration is initiated, a new virtual hard disk is created on the target storage device. While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk. [Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated. [Click] After the source and destination virtual hard disks are synchronized, the virtual machine switches over to using the destination virtual hard disk, and the source virtual hard disk is deleted. Just as virtual machines might need to be dynamically moved in a cloud data center, allocated storage for running virtual hard disks might sometimes need to be moved for storage load distribution, storage device servicing, or other reasons. [Additional information] Updating the physical storage that is available to Hyper‑V is the most common reason for moving a virtual machine’s storage.
You also may want to move virtual machine storage between physical storage devices, at runtime, to take advantage of new, lower-cost storage that is supported in this version of Hyper‑V, such as SMB-based storage, or to respond to reduced performance that can result from bottlenecks in storage throughput. Windows Server 2012 provides the flexibility to move virtual hard disks both on shared storage subsystems and on non-shared storage, as long as a Windows Server 2012 SMB3 network shared folder is visible to both Hyper‑V hosts. You can add physical storage to either a stand-alone system or a Hyper‑V cluster and then move the virtual machine’s virtual hard disks to the new physical storage while the virtual machines continue to run. Storage migration, combined with live migration, also lets you move a virtual machine between hosts on different servers that are not using the same storage. For example, if two Hyper‑V servers are each configured to use different storage devices and a virtual machine must be migrated between them, you can use storage migration to move the virtual hard disks to a shared folder on a file server that is accessible to both servers, migrate the virtual machine between the servers (because both have access to that share), and then use another storage migration to move the virtual hard disk to the storage that is allocated for the target server. You can easily perform live storage migration by using a wizard in Hyper‑V Manager or the Hyper‑V cmdlets for Windows PowerShell.
Benefits: Hyper‑V in Windows Server 2012 lets you manage the storage of your cloud environment with greater flexibility and control while avoiding disruption of user productivity. Storage migration gives you the flexibility to perform maintenance on storage subsystems, upgrade storage appliance firmware and software, and balance loads as capacity is used, all without shutting down virtual machines.
Requirements for live storage migration: Windows Server 2012; the Hyper‑V role; virtual machines configured to use virtual hard disks for storage.
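The copy-then-mirror-then-switch sequence described in these notes can be sketched as a small simulation. This is an illustrative model only, not Hyper-V code: disks are modeled as block maps, and the overlap between copying and live writes is simplified into distinct phases.

```python
# Illustrative simulation of the live storage migration phases described in
# the notes: copy the source disk while it stays live, mirror writes that
# arrive after the initial copy, then switch over once synchronized.
# This is NOT Hyper-V code; disks are modeled as {block: data} dicts.

def live_storage_migrate(source, writes_after_copy):
    dest = {}
    # Phase 1: initial copy. Reads and writes still go to the source VHD.
    for block, data in source.items():
        dest[block] = data
    # Phase 2: new writes are mirrored to BOTH disks until they converge.
    for block, data in writes_after_copy:
        source[block] = data
        dest[block] = data
    # Phase 3: disks are synchronized; the VM switches to the destination.
    assert dest == source
    return dest

src = {0: "boot", 1: "data"}
new_disk = live_storage_migrate(src, [(1, "data-v2"), (2, "appended")])
print(new_disk)  # destination now holds the fully synchronized contents
```

In the real feature the source VHD is deleted after the switch-over; here the caller would simply discard `src`.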
  • NOTE: This slide is animated and has 4 clicks. With Windows Server 2012 Hyper-V, you can also perform a “Shared Nothing” Live Migration, moving a virtual machine, live, from one physical system to another even if they don’t have connectivity to the same shared storage. This is useful, for example, in a branch office where you may be storing the virtual machines on local disk and want to move a VM from one node to another. It is also especially useful when you have two independent clusters and want to move a virtual machine, live, between them without having to expose their shared storage to one another. You can also use “Shared Nothing” Live Migration to migrate a virtual machine from one datacenter to another, provided your bandwidth is large enough to transfer all of the data between the datacenters. As you can see in the animation, when you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine’s storage by creating a virtual machine on the remote system and creating the virtual hard disk on the target storage device. [Click] While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk by transferring the contents of the VHD over the IP connection between the Hyper-V hosts. [Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated, again over the IP connection between the hosts. [Click] After the source and destination virtual hard disks are synchronized, the virtual machine live migration process is initiated, following the same process that was used for live migration with shared storage. After the virtual machine’s storage is migrated, the virtual machine migrates while it continues to run and provide network services. [Click] After the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
  • Isolating virtual machines of different departments or customers can be a challenge on a shared network. When these departments or customers must isolate entire networks of virtual machines, the challenge becomes even greater. Traditionally, VLANs are used to isolate networks, but VLANs are very complex to manage on a large scale. The primary drawbacks of VLANs are:
• Cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved, and the frequent reconfiguration of the physical network to add or modify VLANs increases the risk of an inadvertent outage.
• VLANs have limited scalability because typical switches support no more than 1,000 VLAN IDs (with a maximum of 4,095).
• VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and restricts the placement of virtual machines based on physical location.
In addition to the drawbacks of VLANs, virtual machine IP address assignment presents other key issues when organizations move to the cloud:
• Required renumbering of service workloads.
• Policies that are tied to IP addresses.
• Physical locations that determine virtual machine IP addresses.
• Topological dependency of virtual machine deployment and traffic isolation.
The IP address is the fundamental address used for layer‑3 network communication, because most network traffic is TCP/IP. Unfortunately, when IP addresses are moved to the cloud, the addresses must be changed to accommodate the physical and topological restrictions of the data center. Renumbering IP addresses is cumbersome because all associated policies that are based on IP addresses must also be updated. The physical layout of a data center influences the permissible IP addresses for virtual machines that run on a specific server or blade connected to a specific rack in the data center. A virtual machine that is provisioned and placed in the data center must adhere to the choices and restrictions regarding its IP address. Therefore, the typical result is that data center administrators assign IP addresses to the virtual machines and force virtual machine owners to adjust all their policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged. Hyper‑V Network Virtualization solves these problems. With this feature, IT can isolate network traffic from different business units or customers on a shared infrastructure without being required to use VLANs. Hyper‑V Network Virtualization also lets IT move virtual machines as needed within the virtual infrastructure while preserving their virtual network assignments. Finally, IT can even use Hyper‑V Network Virtualization to transparently integrate these private networks into a preexisting infrastructure on another site.
  • Network Virtualization does for networks what Hyper-V did for servers: with NV, we virtualize entire networks. The Blue Network and Red Network are each a collection of subnets. We take each of those subnets, virtualize them, and run them on a common shared physical network. Each network thinks it is unique, with its own IP address ranges, routing policies, and internet connection, but in reality they are all running on the shared network. Advantages:
• You can move subnets into the shared environment without needing to re-assign IP addresses.
• You can have overlapping IP address subnets running on the shared physical fabric, without VLANs.
  • So, how does this work? Let’s assume we have a red network and a blue network. VMs in each of these networks are in the 10.x space, and they overlap. Normally, this would be a problem unless we used VLANs. With NV, we virtualize those networks and place the VMs on hosts. Each host has a physical address; this is the address that is connected to the datacenter fabric. We maintain a centralized NV mapping table, which maps the IP address of the VM to the IP address of the host that the VM is currently running on. CA = the VM’s IP address. PA = the address of the host on which the VM is currently running. System Center 2012 VMM SP1 constructs and manages the mapping table between CA and PA. That policy is deployed to the host, which holds a segment of the mapping table; we don’t load the whole policy table onto the host, we demand-load just the bits each host needs.
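The CA-to-PA mapping table described here can be pictured as a simple lookup keyed by customer network and customer address. The addresses below are the slide’s examples; the structure is an illustrative sketch, not the actual VMM policy format.

```python
# Minimal sketch of the NV mapping table: (customer network, CA) -> PA,
# i.e. the provider address of the host the VM currently runs on.
# Addresses come from the slide's example; this is not the VMM format.

mapping = {
    ("Blue", "10.0.0.5"): "192.168.4.11",
    ("Blue", "10.0.0.7"): "192.168.4.22",
    ("Red",  "10.0.0.5"): "192.168.4.11",
    ("Red",  "10.0.0.7"): "192.168.4.22",
}

def lookup_pa(customer, ca):
    """Return the provider address for a customer address, or None."""
    return mapping.get((customer, ca))

print(lookup_pa("Blue", "10.0.0.7"))  # 192.168.4.22
```

Because the key includes the customer network, Blue’s 10.0.0.5 and Red’s 10.0.0.5 can coexist without conflict, which is exactly why overlapping subnets work without VLANs.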
  • A customer network represents an isolation boundary. A virtual subnet is a classic subnet, or a broadcast domain: if a VM sends a broadcast, every VM on that subnet should receive it. Every virtual subnet in the datacenter is assigned a unique number, a VSID, that identifies that subnet to Hyper-V Network Virtualization. When you create a new Hyper-V virtual subnet, a VSID is generated.
  • We use tunneling. NVGRE is based on the GRE protocol, which has been around for a number of years. In this example, we have two customers, with infrastructures running on hosts that are on two different subnets in the datacenter. Suppose these VMs need to send a packet. Each VM sends a packet with a source IP and a destination IP; the VM has no idea that it is running in a virtual network. We encapsulate that packet and move it across the network. The packet from blue 10.0.0.5 to 10.0.0.7 has a source and destination MAC address. We wrap it in a tunnel: the VSID is included in the packet, just before the source and destination MAC addresses, and the outer header carries the physical IP addresses that the datacenter network uses to move the packet. The outer source address is the physical source address, and the outer destination address is the physical destination address. On the source side, we also make sure that VMs can only send traffic to VSIDs that they are authorized to send traffic to. So the blue VM will be blocked from sending packets to the red VMs; we enforce that it can only talk to VSIDs on its ACLs.
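The encapsulation step just described can be sketched as follows. This is a hedged model, not the NVGRE wire format: the inner packet keeps the customer (CA) addresses, the VSID rides in the GRE key, the outer header carries the provider (PA) addresses, and a source-side VSID ACL check blocks unauthorized traffic. Field names and the dict representation are illustrative.

```python
# Illustrative sketch of NVGRE encapsulation: inner packet keeps CA
# addresses, the VSID goes in the GRE key, outer header uses PA addresses.
# Field names are made up for clarity; this is not the wire format.

def encapsulate(inner, vsid, allowed_vsids, ca_to_pa):
    if vsid not in allowed_vsids:          # source-side VSID ACL check
        raise PermissionError("VM may not send on VSID %d" % vsid)
    return {
        "outer_src": ca_to_pa[inner["src"]],   # physical source address
        "outer_dst": ca_to_pa[inner["dst"]],   # physical destination
        "gre_key": vsid,                       # VSID before inner MACs
        "inner": inner,                        # untouched customer packet
    }

ca_to_pa = {"10.0.0.5": "192.168.2.22", "10.0.0.7": "192.168.5.55"}
pkt = {"src": "10.0.0.5", "dst": "10.0.0.7", "payload": b"hello"}
frame = encapsulate(pkt, 5001, {5001}, ca_to_pa)
print(frame["outer_dst"], frame["gre_key"])  # 192.168.5.55 5001
```

Attempting the same send on a VSID outside the VM’s ACL set raises an error, modeling the blue-to-red isolation the notes describe.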
  • Let’s talk about the physical architecture of the host and how this actually works. On the physical host, we have a collection of VMs connected to a Hyper-V virtual switch, which in 2012 supports ACLs and switch extensibility. When a packet comes into the switch, it can be checked against ACLs, have QoS policies applied, and pass through various switch extensions. As soon as the packet enters the switch, we determine which VSID the packet came from, and we carry that VSID information along with the packet. Once the packet leaves the switch, it enters the Network Virtualization module. Here, we do an ACL check to see if the source VM can talk to the destination VM, and we encapsulate for GRE. The Network Virtualization module performs the mapping from the Customer Address to the Provider Address.
  • We’ll start with the same scenario: suppose 10.0.0.5 wants to talk to .7. The first thing it does is send an ARP packet to find the MAC address of the destination VM. The Hyper-V Switch broadcasts the ARP to check whether any local VMs on VSID 5001 are 10.0.0.7. The Hyper-V Switch then forwards that broadcast toward the rest of the network; however, in this case the packet is intercepted by the NV filter, which checks its mapping table and, if it finds a match, responds with the MAC address. If it doesn’t know, it checks with VMM 2012 SP1 and pulls down the updated table, and subsequent traffic is then encapsulated and sent onto the physical network.
  • Now that we know the destination MAC, the NV filter sends an ARP response back into the Hyper-V Switch and back to the source VM. ARPs never leave the host.
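The ARP interception described above can be sketched as a policy-table lookup: the NV filter answers locally from its (VSID, CA) → MAC table, so ARPs never leave the host; a miss means the host must pull an updated table from VMM. The table contents and MAC labels below are illustrative.

```python
# Sketch of local ARP answering by the NV filter. The policy table maps
# (VSID, customer IP) -> tenant MAC. Table entries and MAC names are
# illustrative, taken loosely from the slide's Blue2 example.

arp_policy = {(5001, "10.0.0.7"): "MACB2"}

def answer_arp(vsid, target_ip, policy=arp_policy):
    """Return the MAC for an ARP request, or None if the host must
    fetch an updated policy table from VMM."""
    return policy.get((vsid, target_ip))

print(answer_arp(5001, "10.0.0.7"))  # MACB2
print(answer_arp(6001, "10.0.0.9"))  # None -> pull updated table from VMM
```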
  • Now that the source VM knows where to send the packet, it can construct the packet it wants to send: source and destination MAC, source and destination IP address, and payload. It sends that into the Hyper-V Switch, where the VSID is attached. The NV filter checks whether the VM can contact the other VM and, if so, constructs the GRE packet.
  • On the receiving host, we do the reverse. The NV filter extracts the inner packet from the GRE-encapsulated packet and pulls the VSID info out of the packet. It passes that to the switch, the switch extensions are processed, the VSID is dropped off, and the packet is deposited into the VM. Any VM, any OS; transparent to the network.
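The receive path described here can be sketched as the mirror of encapsulation: strip the outer header, recover the VSID, and deliver the inner packet only to a local VM whose virtual subnet and customer address both match. The frame representation and VM records are illustrative, not real data structures.

```python
# Sketch of the receiving host's decapsulation step: recover the VSID
# from the (modeled) GRE key and deliver the inner packet only to a VM
# on that virtual subnet. Frame/VM representations are illustrative.

def decapsulate(frame, local_vms):
    vsid = frame["gre_key"]
    inner = frame["inner"]
    # Deliver only where BOTH the VSID and the customer address match,
    # so Blue's 10.0.0.7 and Red's 10.0.0.7 are never confused.
    for vm in local_vms:
        if vm["vsid"] == vsid and vm["ca"] == inner["dst"]:
            return vm["name"], inner
    return None, None

vms = [{"name": "Blue2", "vsid": 5001, "ca": "10.0.0.7"},
       {"name": "Red2",  "vsid": 6001, "ca": "10.0.0.7"}]
frame = {"gre_key": 5001, "inner": {"src": "10.0.0.5", "dst": "10.0.0.7"}}
print(decapsulate(frame, vms)[0])  # Blue2
```

Note that both VMs hold the same customer address 10.0.0.7; the VSID alone disambiguates them, which is the point of the overlapping-subnet design.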
  • As shown in the table, the flexibility and agility provided by the inbox features of Windows Server 2012 Hyper-V and Hyper-V Server 2012 are simply unmatched by VMware. The VMware vSphere Hypervisor supports none of the capabilities required for an agile infrastructure today, meaning customers have to purchase a more expensive vSphere 5.1 edition. The vSphere 5.1 Essentials Plus edition, and higher, now supports vMotion (virtual machine live migration), yet on 1GigE networks VMware restricts the number of simultaneous vMotions to 4, and on 10GigE to 8. With Windows Server 2012 Hyper-V and Hyper-V Server 2012, Microsoft supports an unlimited number of simultaneous live migrations, within the confines of what the networking hardware will support, with the process utilizing 100% of the available, dedicated live migration network to complete as quickly and efficiently as possible, with no interruption to the running virtual machines. Just like virtual machine vMotion, Storage vMotion is unavailable in the VMware vSphere Hypervisor and is restricted to the Standard, Enterprise, and Enterprise Plus editions of vSphere 5.1, available at considerable cost. In vSphere 5.1, VMware also introduced a feature known as Enhanced vMotion that enables the migration of a virtual machine between two hosts without shared storage; this capability was already available in all editions of Hyper-V, in the form of Shared-Nothing Live Migration. Finally, with Hyper‑V Network Virtualization, network traffic from different business units or customers can be isolated, even on a shared infrastructure, without the need to use VLANs. Hyper‑V Network Virtualization also lets IT admins move virtual machines as needed within the virtual infrastructure while preserving their virtual network assignments. IT admins can even use Hyper‑V Network Virtualization to transparently integrate these private networks into a preexisting infrastructure on other sites. With VMware, to obtain functionality similar to what Network Virtualization can deliver, customers must first purchase the vCloud Networking & Security product, of which VXLAN is a component; and because VXLAN requires the vSphere Distributed Switch, customers must also upgrade to the Enterprise Plus edition of vSphere 5.1 to take advantage of it. Network Virtualization has some significant advantages over VXLAN, one in particular being better integration with existing hardware and software stacks, which matters most when VMs need to communicate out of the ESXi hosts and into the physical network infrastructure: not all switches are VXLAN aware, meaning that traffic cannot always be handled effectively.
  • VMware Professionals - Security, Multi-tenancy and Flexibility

    1. Microsoft Virtual Academy
    2. Microsoft Virtual Academy Part 1 | Windows Server 2012 Hyper-V & VMware vSphere 5.1 Part 2 | System Center 2012 SP1 & VMware’s Private Cloud (01) Introduction & Scalability (05) Introduction & Overview of System Center 2012 (02) Storage & Resource Management (06) Application Management (03) Security, Multi-tenancy & Flexibility (07) Cross-Platform Management (04) High-Availability & Resiliency (08) Foundation, Hybrid Clouds & Costs ** MEAL BREAK **
    3. ISOLATION AND MULTITENANCY — new feature: the Hyper-V Extensible Switch handles network traffic among virtual machines, the external network, and the host operating system. Benefits: • Layer 2 virtual interface • Managed programmatically • Extensible by partners or customers. (Diagram: virtual machines with virtual network adapters connect through the Hyper-V Extensible Switch on the Hyper-V host to the physical network adapter and physical switch.)
    6. Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
       Extensible vSwitch | Yes | No | Replaceable1
       Confirmed Partner Extensions | 5 | No | 2
       Private Virtual LAN (PVLAN) | Yes | No | Yes1
       ARP Spoofing Protection | Yes | No | vCNS/Partner2
       DHCP Snooping Protection | Yes | No | vCNS/Partner2
       Virtual Port ACLs | Yes | No | vCNS/Partner2
       Trunk Mode to Virtual Machines | Yes | No | Yes3
       Port Monitoring | Yes | Per Port Group | Yes3
       Port Mirroring | Yes | Per Port Group | Yes3
       1 The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.1 and is replaceable (by partners such as Cisco/IBM) rather than extensible. 2 ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the App component of the VMware vCloud Network & Security (vCNS) product or a partner solution, all of which are additional purchases. 3 Trunking VLANs to individual vNICs, and port monitoring and mirroring at a granular level, require the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.1. vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf, http://www.vmware.com/products/vshield-app/features.html and http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html
    7. SR-IOV benefits: • Reduces latency of the network path • Reduces CPU utilization for processing network traffic • Increases throughput • Supports Live Migration. (Diagram: without SR-IOV, the network I/O path runs through the root partition's Hyper-V Switch — routing, VLAN filtering, data copy — to the virtual NIC; with SR-IOV, the virtual machine uses a Virtual Function of the SR-IOV physical NIC directly.)
    8. SR-IOV with Live Migration. Turn on IOV (a VM NIC property): the Virtual Function is “assigned”, a team is automatically created between the software NIC and the Virtual Function, and traffic flows through the VF (the software path is not used). Live Migration: break the team, reassign the Virtual Function (assuming resources are available), and migrate as normal. Post-migration: remove the VF from the VM. The VM keeps connectivity even if the switch is not in IOV mode, the IOV physical NIC is not present, or the NIC vendor or firmware differs.
    9. Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
       Dynamic Virtual Machine Queue | Yes | NetQueue1 | NetQueue1
       IPsec Task Offload | Yes | No | No
       SR-IOV with Live Migration | Yes | No2 | No2
       Storage Encryption | Yes | No | No
       1 VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue). 2 VMware’s SR-IOV implementation does not support vMotion, HA or Fault Tolerance. DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices, with network cards being a good example. Whilst on the surface this will boost VM networking performance and reduce the burden on host CPU cycles, in reality there are a number of caveats in using DirectPath I/O: • Very small Hardware Compatibility List • No Memory Overcommit • No vMotion (unless running certain configurations of Cisco UCS) • No Fault Tolerance • No Network I/O Control • No VM Snapshots (unless running certain configurations of Cisco UCS) • No Suspend/Resume (unless running certain configurations of Cisco UCS) • No VMsafe/Endpoint Security support. SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure. vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf
    10. VIRTUAL MACHINE MOBILITY — live migration based on a server message block (SMB) share. Improvements: • Faster and simultaneous migration • Live migration outside a clustered environment • Store virtual machines on a file share. (Diagram: after live migration setup over an IP connection, configuration data and memory content are transferred, then modified memory pages, and finally the storage handle is moved to the SMB network storage.)
    11. VIRTUAL MACHINE MOBILITY — live migration of storage: move virtual hard disks attached to a running virtual machine. Benefits: • Manage storage in a cloud environment with greater flexibility and control • Move storage with no downtime • Update physical storage available to a virtual machine (such as SMB-based storage) • Windows PowerShell cmdlets. (Diagram: reads and writes go to the source VHD; disk contents are copied to the new destination VHD; disk writes are mirrored while outstanding changes are replicated; then reads and writes go to the new destination VHD.)
    12. VIRTUAL MACHINE MOBILITY — shared-nothing live migration. Benefits: • Increase flexibility of virtual machine placement • Increase administrator efficiency • Reduce downtime for migrations across cluster boundaries. (Diagram: over an IP connection between the source and destination Hyper-V hosts, reads and writes go to the source VHD while disk contents are copied to the new destination VHD; disk writes are mirrored while outstanding changes are replicated; then configuration data, memory content, and modified memory pages are transferred as the live migration begins, continues, and completes.)
    13. VLAN tags on ToR and aggregation switches: the topology limits VM placement and requires reconfiguration of production switches.
    14. (Diagram: a Blue VM and a Red VM share one physical server through virtualization; the Blue Network and Red Network share one physical network through network virtualization.)
    15. Virtualization policy (System Center) maps the Customer Address Space (CA) to the Provider Address Space (PA). Blue1/Blue2 and Red1/Red2 use the overlapping customer addresses 10.0.0.5 and 10.0.0.7; Host 1 (192.168.4.11) and Host 2 (192.168.4.22) are the provider addresses on the datacenter network. Mapping for both Blue Corp and Red Corp: 10.0.0.5 → 192.168.4.11, 10.0.0.7 → 192.168.4.22.
    16. A hoster datacenter hosts customer networks composed of virtual subnets: Blue Corp (Blue R&D Net and Blue Sales Net, spanning Blue Subnet1 through Blue Subnet5) and Red Corp (Red HR Net, with Red Subnet1 and Red Subnet2).
    17. NVGRE on the wire: the inner packet from 10.0.0.5 to 10.0.0.7 keeps its customer addresses and MACs; it is encapsulated with GRE Key 5001 (Blue) or GRE Key 6001 (Red) and an outer header carrying the provider addresses 192.168.2.22 → 192.168.5.55.
    18. (Diagram: datacenter hosts run Windows Server 2012 with a System Center host agent. Each host's network stack includes the Hyper-V Switch with VSID ACL isolation and switch extensions, plus a Network Virtualization module providing IP virtualization, policy enforcement, and routing. System Center holds the data center policy, e.g. Blue: VM1 = MAC1, CA1, PA1; Red: VM1 = MACX, CA1, PA2; separate NICs serve management, cluster, storage, and live migration traffic.)
    19. ARP walkthrough, step 1: Blue1 (10.0.0.5, VSID 5001) on host 192.168.4.11 asks “where is 10.0.0.7?”. The Hyper-V Switch broadcasts the ARP to (1) all local VMs on VSID 5001 and (2) the Network Virtualization filter. The filter responds to the ARP for 10.0.0.7 on VSID 5001 with Blue2’s MAC.
    20. ARP walkthrough, step 2: the Network Virtualization filter answers “use MACB2 for 10.0.0.7” back through the Hyper-V Switch, and Blue1 learns the MAC of Blue2.
    21. Send path: Blue1 sends MACB1 → MACB2, 10.0.0.5 → 10.0.0.7. In the Hyper-V Switch the packet carries OOB VSID 5001; the Network Virtualization filter encapsulates it, and NVGRE on the wire reads MACPA1 → MACPA2, 192.168.4.11 → 192.168.4.22, GRE key 5001, with the inner frame unchanged.
    22. Receive path: the NVGRE frame (MACPA1 → MACPA2, 192.168.4.11 → 192.168.4.22, GRE key 5001) arrives at host 192.168.4.22; the Network Virtualization filter strips the outer header, recovers OOB VSID 5001, and the Hyper-V Switch delivers the inner frame (MACB1 → MACB2, 10.0.0.5 → 10.0.0.7) to Blue2.
    23. Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Enterprise Plus
        VM Live Migration | Yes | No1 | Yes2
        1GB Simultaneous Live Migrations | Unlimited3 | N/A | 4
        10GB Simultaneous Live Migrations | Unlimited3 | N/A | 8
        Live Storage Migration | Yes | No4 | Yes5
        Shared Nothing Live Migration | Yes | No | Yes5
        Network Virtualization | Yes | No | VXLAN6
        1 Live Migration (vMotion) is unavailable in the vSphere Hypervisor – vSphere 5.1 required. 2 Live Migration (vMotion) and Shared Nothing Live Migration (Enhanced vMotion) are available in Essentials Plus & higher editions of vSphere 5.1. 3 Within the technical capabilities of the networking hardware. 4 Live Storage Migration (Storage vMotion) is unavailable in the vSphere Hypervisor. 5 Live Storage Migration (Storage vMotion) is available in Standard, Enterprise & Enterprise Plus editions of vSphere 5.1. 6 VXLAN is a feature of the vCloud Networking & Security product, which is available at additional cost to vSphere 5.1; in addition, it requires the vSphere Distributed Switch, only available in vSphere 5.1 Enterprise Plus. vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/files/pdf/products/vcns/vCloud-Networking-and-Security-Overview-Whitepaper.pdf, http://www.vmware.com/products/datacenter-virtualization/vcloud-network-security/features.html#vxlan
    24. ©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
