Tudor Damian - Comparing Microsoft Cloud with VMware Cloud
The session plans to review the key capabilities of the latest release of Hyper-V and see how they match with the latest release of VMware vSphere across four key areas: scalability and performance, security and multi-tenant environments.

Published in: Technology, Business
  • 1. Tudor Damian – IT Solutions Specialist, Virtual Machine MVP
  • 2. The Good
      • You have an API set in here that vendors can program against
      • Antivirus can run at this level, and you can use that to scan all virtual machines
      • You can run on CPUs that don't have virtualization extensions
      • Only 144 MB of code vs. the competition's 5 GB
    The Not as Good
      • You have an API set in there that hackers can program against
      • Antivirus has access to all VMs – so would an exploited AV
      • You have 144 MB of code running at Ring –1
      • Drivers must be written for this hypervisor, so supported hardware is somewhat limited
  • 3. The Good
      • No 3rd party APIs for hackers to code against in the hypervisor
      • No global AV option that could compromise all VMs
      • Lots of hardware choices, because it relies on the Windows drivers
      • 1.4MB hypervisor running in Ring –1 vs. 144 MB in vSphere 5.1
    The Not as Good
      • No APIs for third parties to add value in the hypervisor
      • No option to run antivirus in the hypervisor
      • Requires hardware with CPU virtualization extensions
      • Requires the Windows management partition for the drivers
  • 4.
  • 5. Source: Kevin Turner (Microsoft COO) @ WPC 2013, based on IDC reports
  • 6. VMware / Microsoft product mapping:
      • Self-Service: vCloud Director / App Controller
      • Service Mgmt.: vCloud Automation Center / Service Manager
      • Protection: vSphere Data Protection / Data Protection Manager
      • Automation: vCenter Orchestrator / Orchestrator
      • Monitoring: vCenter Ops Mgmt. Suite / Operations Manager
      • VM Management: vCenter Server + vFabric Application Director / Virtual Machine Manager
      • Hypervisor: vSphere Hypervisor / Hyper-V
  • 7. Virtual Machine Manager & vSphere – Day-to-day VM management with Virtual Machine Manager
      • VMM integrates with vCenter 4.1/5.0/5.1 for managing ESX/ESXi 4.1/5.0/5.1
      • Aimed at providing the day-to-day management of VMware VMs: Create, Manage, Store, Deploy
      • More advanced tasks still use vCenter: vDS, FT VMs, Update Management
      • VMM supports managing existing, and creating new, vSphere VM & Service templates
      • Supports key vSphere features such as vMotion, Storage vMotion, PVSCSI, Thin Provisioning, Hot-Add, and adds its own capabilities on top: DO, PO, PRO, intelligent placement, Private Clouds, etc.
  • 8. App Controller & vSphere – Self-service access to VMs running on vSphere
      • App Controller integrates with VMM and provides access to any VMM clouds
      • VMM clouds can consist of capacity from Hyper-V, vSphere, XenServer, or a combination
      • Users & groups can be delegated access to these vSphere-based clouds with individual-level capacity limits
      • Users can deploy vSphere-based VM & Service Templates to vSphere hosts
      • Users can also have access to Windows Azure for deploying VMs & applications
  • 9. Operations Manager & vSphere – Partnering with Veeam to deliver deep vSphere insight
      • Veeam MP for VMware provides OpsMgr admins with granular insight into their vSphere infrastructure
      • Agentless collection, providing end-to-end visibility from the physical server, to the hypervisor, to the virtual machines hosting your critical applications and services
      • Full System Center functionality, including alerts, diagrams, dashboards, reporting, auditing, notifications, responses and automation for all VMware components
      • Powerful reports for capacity planning, failure modelling, cluster capacity and more
      • Rich topology views for Storage, Compute & Networking
  • 10. Orchestrator & vSphere – Automating key tasks within the vSphere environment
      • The vSphere Integration Pack contains a large number of out-of-the-box activities for automating vSphere
      • The administrator connects Orchestrator to vCenter, or to ESXi directly
      • Allows the administrator to automate vSphere tasks in isolation, or combine vSphere activities into broader runbooks, connected with other systems
      • If the Integration Pack doesn't contain the desired task, admins can add their own IP through scripts or PowerCLI
    [Diagram: vSphere Integration Pack activities]
  • 11. Constructing, Delivering & Consuming Apps | Maintaining, Managing & Monitoring Apps | Protection of Key Applications & Workloads
  • 12. Construction, Delivery & Consumption
      • Standardized VM Templates: Roles & Features, Application Layers
      • VM Templates 2.0: Service Templates
      • Deployment into clouds
      • Role-based Self-Service
      • Controlled Consumption
  • 13. Application Construction, Delivery & Consumption

    Capability                               | Microsoft | VMware
    Request Private Cloud Resources          | Yes       | Yes (1)
    Role-Based Self-Service                  | Yes       | Yes
    Standardized Templates                   | Yes       | Yes (2)
    Template Granularity: Roles / Features   | Yes       | No
    Template Granularity: Application Layer  | Yes       | Yes (3)
    Service/Multi-Tier Templates             | Yes       | Yes (3)
    Deployment Across Heterogeneous Clouds   | Yes       | Yes (4)

    1. vCloud Automation Center allows for the requesting of private cloud resources, but lacks a true CMDB capability in box.
    2. Each VMware VM template will have its own VMDK, even if the template varies only slightly in its configuration options.
    3. No alternative to Server Application Virtualization (App-V), thus relying on regular installation methods or inflexible scripts.
    4. vCloud Automation Center allows deployment onto non-VMware infrastructure at a cost of $400 per managed machine + S&S; however, once deployed, it cannot be managed from vCloud Director along with other VMware-based VMs.
  • 14. Maintenance, Management & Monitoring
      • Centralized Maintenance
      • Extends beyond the private cloud
      • Integrated Service Management
      • Powerful, relevant automation
      • Deep application insight
      • Connecting Dev-Ops
  • 15. Application Maintenance, Management & Monitoring

    Capability                                | Microsoft | VMware
    Centralized Patching & Maintenance        | Yes       | Yes
    Non-Virtualized Infrastructure Management | Yes       | Yes (1)
    Integrated Service Management             | Yes       | Lacks CMDB (2)
    Heterogeneous Automation                  | Yes       | VMware-centric (3)
    Deep Application Insight                  | Yes       | Yes (4)
    Integrated Dev-Ops                        | Yes       | No (5)

    1. Would require purchases outside of the vCloud Suite, including vCloud Automation Center, vFabric Hyperic, and vCenter Operations Management Suite Enterprise Edition.
    2. vCloud Automation Center enables application owners or administrators to request infrastructure, but vCAC lacks any form of true CMDB for complete ITIL/MOF IT Service Management.
    3. VMware's vCenter Orchestrator has a limited set of plug-ins, of which the vast majority are VMware-centric; no mention of plug-ins for other enterprise management systems and tools such as those from HP, IBM, BMC, etc.
    4. Remediation is limited to VMware best practices, thus lacking application-specific remediation guidance.
    5. Lab Manager is deprecated, with customers expected to upgrade to vCloud Director, which has no connections with a development IDE.
  • 16. Protection of Key Applications & Workloads
      • Granular Workload Protection
      • Physical or Virtual
      • Generic Data Source Protection
      • Centralized, Role-Based Management
      • Backup to Tape
      • Low-Cost Disaster Recovery
  • 17. Protection of Key Applications & Workloads

    Capability                        | Microsoft | VMware
    Granular Workload Protection      | Yes       | No (1)
    Physical & Virtual Protection     | Yes       | No (1)
    3rd Party Integration             | Yes       | No (2)
    Centralized Role-Based Management | Yes       | Yes (3)
    Tape Backup                       | Yes       | No (4)
    Integrated Disaster Recovery      | Yes       | Yes

    1. VMware Data Protection offers no protection for the workloads within the virtual machine, focusing simply on the VM itself as the protection unit, and offers no protection of physical machines.
    2. VMware Data Protection is not extensible by 3rd parties.
    3. VMware Data Protection is capped at 10 appliances per vCenter, with a maximum storage of 2TB/100 VMs per appliance.
    4. VMware Data Protection offers no protection to tape media; disk only.
  • 18. [Diagram: layered stack – Application Frameworks | Management | OS | Hypervisor | Fabric]
  • 19. Cross-Platform Infrastructure Management

    Capability                       | Microsoft | VMware
    Multi-Hypervisor Management      | Yes       | Limited (1)
    Comprehensive Guest OS Support   | Yes       | Yes (2)
    3rd Party Management Integration | Yes       | Limited (3)
    Multiple Application Frameworks  | Yes       | Yes (4)

    1. vCloud Automation Center focuses on provisioning VMs to alternative hypervisors, whilst the Multi-Hypervisor Manager plug-in for vCenter offers only very basic capabilities.
    2. VMware do not produce any operating systems, and support is therefore focused not on the guest operating system itself but on the VM Tools and hardware.
    3. vCenter Orchestrator has a limited number of 3rd party plug-ins, and vCenter Operations Management Suite requires the purchase of 3rd party adaptors to integrate.
    4. Monitoring capabilities do extend to multiple frameworks, but support for many frameworks is out of date (.NET 3.0 is the latest, for instance); also, the monitoring is not connected to any true DevOps capability, and lacks remediation guidance around detected issues.
  • 20. [Diagram: VMware vCloud – vCloud Service & vCloud Providers, vCloud Automation Center, with vCloud Connector 2.0 linking vCloud On-Premise (w/ Director), vCloud Hoster (w/ Director), and Amazon, Hyper-V, Xen]
  • 21. Scalability & Performance | Security & Multitenancy | Flexible Infrastructure | High Availability & Resiliency
  • 22. Scalability, Performance & Density
  • 23.

    System  | Resource              | Hyper-V (2008 R2) | Hyper-V (2012 R2) | Improvement Factor
    Host    | Logical Processors    | 64                | 320               | 5×
    Host    | Physical Memory       | 1TB               | 4TB               | 4×
    Host    | Virtual CPUs per Host | 512               | 2,048             | 4×
    VM      | Virtual CPUs per VM   | 4                 | 64                | 16×
    VM      | Memory per VM         | 64GB              | 1TB               | 16×
    VM      | Active VMs per Host   | 384               | 1,024             | 2.7×
    VM      | Guest NUMA            | No                | Yes               | -
    Cluster | Maximum Nodes         | 16                | 64                | 4×
    Cluster | Maximum VMs           | 8,000 (from 1,000)| 8,000             | 8×
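The improvement factors in the table above are simple ratios of the two columns; a quick Python check reproduces them (the 2.7× for active VMs is 1,024/384 rounded to one decimal):

```python
# Hyper-V limits as listed in the table: (2008 R2 value, 2012 R2 value).
limits = {
    "Logical Processors":    (64, 320),
    "Physical Memory (GB)":  (1024, 4096),   # 1TB -> 4TB
    "Virtual CPUs per Host": (512, 2048),
    "Virtual CPUs per VM":   (4, 64),
    "Memory per VM (GB)":    (64, 1024),     # 64GB -> 1TB
    "Active VMs per Host":   (384, 1024),
    "Cluster Nodes":         (16, 64),
    "VMs per Cluster":       (1000, 8000),
}

for resource, (old, new) in limits.items():
    print(f"{resource}: {new / old:.1f}x")   # e.g. "Logical Processors: 5.0x"
```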
  • 24.

    System  | Resource              | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ | vSphere 5.5 Ent+
    Host    | Logical Processors    | 320               | 160                | 160              | 320
    Host    | Physical Memory       | 4TB               | 32GB (1)           | 2TB              | 4TB
    Host    | Virtual CPUs per Host | 2,048             | 2,048              | 2,048            | 4,096
    VM      | Virtual CPUs per VM   | 64                | 8                  | 64 (2)           | 64 (2)
    VM      | Memory per VM         | 1TB               | 32GB (1)           | 1TB              | 1TB
    VM      | Active VMs per Host   | 1,024             | 512                | 512              | 512
    VM      | Guest NUMA            | Yes               | Yes                | Yes              | Yes
    Cluster | Maximum Nodes         | 64                | N/A (3)            | 32               | 32
    Cluster | Maximum VMs           | 8,000             | N/A (3)            | 4,000            | 4,000

    1. Host physical memory is capped at 32GB, thus maximum VM memory is also restricted to 32GB usage.
    2. vSphere 5.x Enterprise Plus is the only vSphere edition that supports 64 vCPUs; Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
    3. For clustering/high availability, customers must purchase vSphere.
  • 25.
      • Virtual Fibre Channel: connect a VM directly to FC SAN without sacrificing features
      • Native 4K Disk Support: take advantage of enhanced density and reliability
      • 64TB Virtual Hard Disks: increased capacity, protection & alignment optimization
      • Online VHDX Resize: increased flexibility for virtual disks, with support for grow & shrink operations
  • 26.
      • Boot from USB Disk: flexible deployment option for diskless servers (Hyper-V Server)
      • Offloaded Data Transfer: offloads storage-intensive tasks to the SAN
      • Storage Spaces: storage resiliency, availability & performance with commodity hardware
  • 27.

    Capability                     | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
    Virtual Fiber Channel          | Yes               | Yes                | Yes
    3rd Party Multipathing (MPIO)  | Yes               | No                 | Yes (VAMP) (1)
    Native 4-KB Disk Support       | Yes               | No                 | No
    Maximum Virtual Disk Size      | 64TB VHDX         | 62TB (2)           | 62TB (2)
    Online Virtual Disk Resize     | Yes               | Grow Only          | Grow Only
    Maximum Pass-Through Disk Size | 256TB+ (3)        | 64TB               | 64TB
    Offloaded Data Transfer        | Yes               | No                 | Yes (VAAI) (4)
    Boot from USB                  | Yes               | Yes                | Yes
    Tiered Storage Pooling         | Yes               | No                 | No

    1. vStorage API for Multipathing (VAMP) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1 and above.
    2. vSphere 5.5 support for 62TB VMDK files is limited to VMFS5 and NFS datastores only; VMFS3 datastores are still limited to 2TB VMDK files. Also, Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.
    3. The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and the chosen file system within the guest; more recent Windows Server operating systems support disks in excess of 256TB in size.
    4. vStorage API for Array Integration (VAAI) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1 and above.
  • 28.
      • Dynamic Memory: increased control for greater virtual machine consolidation
      • Resource Metering: track historical data for virtual machine usage
      • Network QoS: consistent level of network performance based on SLAs
      • Storage QoS: control allocation of storage IOPS between VM disks
  • 29.

    Capability        | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
    Dynamic Memory    | Yes               | Yes                | Yes
    Resource Metering | Yes               | Yes (1)            | Yes
    Network QoS       | Yes               | No (2)             | Yes (2)
    Storage QoS       | Yes               | No (2)             | Yes (2)

    1. Without vCenter, Resource Metering in the vSphere Hypervisor is only available on an individual host-by-host basis.
    2. Quality of Service (QoS) is only available in the Enterprise Plus edition of vSphere 5.5.
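Storage QoS of the kind compared above is, at its core, an IOPS cap per virtual disk. A minimal token-bucket sketch of that mechanism (a hypothetical illustration, not Hyper-V's or vSphere's actual implementation; time is passed in explicitly to keep the example deterministic):

```python
class IopsCap:
    """Token bucket capping a virtual disk at max_iops I/O operations per second."""

    def __init__(self, max_iops: float, now: float = 0.0):
        self.rate = max_iops      # refill rate, tokens per second
        self.tokens = max_iops    # start with a full bucket
        self.last = now

    def allow(self, now: float) -> bool:
        """Return True if one I/O may proceed at time `now` (in seconds)."""
        elapsed = now - self.last
        self.tokens = min(self.rate, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

cap = IopsCap(max_iops=100.0)
granted = sum(cap.allow(0.0) for _ in range(150))
print(granted)        # 100 – the burst at t=0 is capped at the bucket size
print(cap.allow(0.5)) # True – half a second refills 50 tokens
```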
  • 30. Security & Multitenancy
  • 31. Layer-2 Network Switch for Virtual Machine Connectivity
    Granular in-box capabilities:
      • ARP/ND Poisoning (spoofing) protection
      • DHCP Guard protection
      • Virtual Port ACLs
      • Trunk Mode to VMs
      • Network Traffic Monitoring
      • Isolated (Private) VLANs (PVLANs)
      • PowerShell & WMI interfaces for extensibility
    [Diagram: virtual machines with virtual network adapters connected through the Hyper-V Extensible Switch on the Hyper-V host to the physical network adapter and physical switch]
  • 32. Build Extensions for Capturing, Filtering & Forwarding
    Many key features:
      • Extension monitoring & uniqueness
      • Extensions that learn VM life cycle
      • Extensions that can veto state changes
      • Multiple extensions on the same switch
    Several partner solutions available:
      • Cisco – Nexus 1000V & UCS-VMFEX
      • NEC – ProgrammableFlow PF1000
      • 5nine – Security Manager
      • InMon – sFlow
    [Diagram: Hyper-V Extensible Switch architecture, with capture, filtering and forwarding extensions layered between the VM NICs and the physical NIC]
  • 33.

    Capability                     | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
    Extensible vSwitch             | Yes               | No                 | Replaceable (1)
    Confirmed Partner Extensions   | 5                 | N/A                | 2
    Private Virtual LAN (PVLAN)    | Yes               | No                 | Yes (1)
    ARP Spoofing Protection        | Yes               | No                 | vCNS/Partner (2)
    DHCP Snooping Protection       | Yes               | No                 | vCNS/Partner (2)
    Virtual Port ACLs              | Yes               | No                 | vCNS/Partner (2)
    Trunk Mode to Virtual Machines | Yes               | No                 | Yes (3)
    Port Monitoring                | Yes               | Per Port Group     | Yes (3)
    Port Mirroring                 | Yes               | Per Port Group     | Yes (3)

    1. The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.x, and is replaceable (by partners such as Cisco/IBM) rather than extensible.
    2. ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the App component of the VMware vCloud Networking & Security (vCNS) product, or a partner solution, all of which are additional purchases.
    3. Trunking VLANs to individual vNICs, and Port Monitoring and Mirroring at a granular level, require the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.1.
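Private VLANs, referenced in both columns above, partition a single VLAN into port types with fixed reachability rules. A small sketch of those rules (a hypothetical helper for illustration, not either vendor's API):

```python
# PVLAN port types: promiscuous ports (e.g. the router uplink) reach
# everything; community ports reach their own community plus promiscuous
# ports; isolated ports reach only promiscuous ports.
def pvlan_allows(a: tuple, b: tuple) -> bool:
    """a, b: (port_type, community_id or None) within one primary VLAN."""
    type_a, community_a = a
    type_b, community_b = b
    if "promiscuous" in (type_a, type_b):
        return True
    if type_a == type_b == "community":
        return community_a == community_b
    return False  # isolated-isolated, isolated-community, cross-community

router = ("promiscuous", None)
tenant_a = ("community", 10)
tenant_b = ("community", 20)
quarantined = ("isolated", None)

print(pvlan_allows(tenant_a, router))       # True
print(pvlan_allows(tenant_a, tenant_b))     # False
print(pvlan_allows(quarantined, tenant_a))  # False
```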
  • 34.
      • Dynamic VMq: dynamically span multiple CPUs when processing virtual machine network traffic
      • IPsec Task Offload: offload IPsec processing from within the virtual machine to the physical network adaptor, enhancing performance
      • Virtual Receive Side Scaling: scale a VM's send & receive side traffic to multiple virtual processors, increasing performance whilst reducing bottlenecks
      • SR-IOV Support: map a virtual function of an SR-IOV capable physical network adaptor directly to a virtual machine
  • 35. SR-IOV – integrated with NIC hardware for increased performance
      • A standard that allows PCI Express devices to be shared by multiple VMs
      • Reduces network latency and CPU utilization for processing traffic, and increases throughput
      • More direct hardware path for I/O: SR-IOV capable physical NICs contain virtual functions that are securely mapped to the VM, bypassing the Hyper-V Extensible Switch
      • Full support for Live Migration
    [Diagram: VM network stack reaching the SR-IOV NIC either via a synthetic NIC through the Hyper-V Extensible Switch, or directly via a mapped virtual function]
  • 36. In-box Disk Encryption to Protect Sensitive Data
    Data protection, built in:
      • Supports Used Disk Space Only encryption
      • Integrates with TPM chip
      • Network Unlock & AD integration
    Multiple disk type support:
      • Direct Attached Storage (DAS)
      • Traditional SAN LUN
      • Cluster Shared Volumes
      • Windows Server 2012 File Server Share
    [Diagram: VHDX files on a traditional LUN (E:\VM2), on DAS (F:\VM1), on Cluster Shared Volumes (C:\ClusterStorage\Volume1\VM4), and on a file server share (\\FileServer\VM3)]
  • 37.

    Capability                    | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
    Dynamic Virtual Machine Queue | Yes               | NetQueue (1)       | NetQueue (1)
    IPsec Task Offload            | Yes               | No                 | No
    Virtual Receive Side Scaling  | Yes               | Yes (VMXNet3)      | Yes (VMXNet3)
    SR-IOV with Live Migration    | Yes               | No (2)             | No (2)
    Storage Encryption            | Yes               | No                 | No

    1. VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue).
    2. VMware's SR-IOV implementation does not support vMotion, HA or Fault Tolerance. DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices, with network cards being a good example. Whilst on the surface this will boost VM networking performance and reduce the burden on host CPU cycles, in reality there are a number of caveats to using DirectPath I/O:
       • Small hardware compatibility list
       • No memory overcommit
       • No vMotion (unless running certain configurations of Cisco UCS)
       • No Fault Tolerance
       • No Network I/O Control
       • No VM snapshots (unless running certain configurations of Cisco UCS)
       • No suspend/resume (unless running certain configurations of Cisco UCS)
       • No VMsafe/Endpoint Security support
       SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure.
  • 38. Flexible Infrastructure
  • 39. Comprehensive feature support for virtualized Linux
    Significant improvements in interoperability:
      • Multiple supported Linux distributions and versions on Hyper-V
      • Includes Red Hat, SUSE, OpenSUSE, CentOS, and Ubuntu
    Comprehensive feature support:
      • 64 vCPU SMP
      • Virtual SCSI, Hot-Add & Online Resize
      • Full Dynamic Memory support
      • Live backup
      • Deeper Integration Services support
    [Diagram: Hyper-V architecture – configuration store, worker processes, WMI provider and management service above the Windows kernel and Virtual Service Provider, with independent hardware vendor drivers on the Hyper-V server hardware]
  • 40. Duplication of a Virtual Machine whilst Running
    Export a clone of a running VM:
      • Point-in-time image of a running VM exported to an alternate location
      • Useful for troubleshooting a VM without downtime for the primary VM
    Export from an existing checkpoint:
      • Export a full cloned virtual machine from an existing, point-in-time checkpoint of a virtual machine
      • Checkpoints automatically merged into a single virtual disk
  • 41. Live Migration Live Storage Migration Shared-Nothing Live Migration
  • 42. Hyper-V Cluster Upgrade without Downtime – simplified upgrade process from 2012 to 2012 R2
      • Customers can upgrade from Windows Server 2012 Hyper-V to Windows Server 2012 R2 Hyper-V with no VM downtime
      • Supports Shared-Nothing Live Migration for migration when changing storage locations
      • If using an SMB share, migration transfers only the VM running state for faster completion
      • Automated with PowerShell
      • One-way migration only
    [Diagram: 2012 cluster nodes migrating VMs to 2012 R2 cluster nodes]
  • 43. Network Isolation & Flexibility without VLAN Complexity
      • Secure isolation for traffic segregation, without VLANs
      • VM migration flexibility & seamless integration
    Key concepts:
      • Provider Address – unique IP addresses routable on the physical network
      • VM Networks – the boundary of isolation between different sets of VMs
      • Customer Address – VM guest OS IP addresses within the VM Networks
      • Policy table – maintains the relationship between the different addresses & networks
    [Diagram: Blue (VSID 5001) and Red (VSID 6001) VM networks, each mapping provider addresses to customer addresses]
  • 44. Network Isolation & Flexibility without VLAN Complexity
      • Network Virtualization using Generic Routing Encapsulation (NVGRE) uses encapsulation & tunneling
      • Standard proposed by Microsoft, Intel, Arista Networks, HP, Dell & Emulex
      • VM traffic within the same VSID is routable over different physical subnets
      • The VM's packet is encapsulated for transmission over the physical network, with the GRE key carrying the VSID (e.g. 5001)
      • Network Virtualization is part of the Hyper-V Switch
    [Diagram: the same customer network & VSID spanning different physical subnets]
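The encapsulation step described above places the VSID in the GRE key. A minimal sketch of the 8-byte NVGRE header (per the NVGRE specification, later published as RFC 7637: key-present flag 0x2000, protocol type 0x6558 for Transparent Ethernet Bridging, and a 32-bit key holding the 24-bit VSID plus an 8-bit flow ID); the outer Ethernet/IP headers and the inner frame are omitted:

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the GRE header NVGRE inserts between the outer IP header
    and the encapsulated (inner) Ethernet frame."""
    if not 0 <= vsid < 1 << 24:
        raise ValueError("VSID must fit in 24 bits")
    # flags/version (only the Key Present bit set) + protocol type
    flags_and_proto = struct.pack("!HH", 0x2000, 0x6558)
    # 32-bit key field: 24-bit Virtual Subnet ID, then 8-bit flow ID
    key = struct.pack("!I", (vsid << 8) | (flow_id & 0xFF))
    return flags_and_proto + key

# The "Blue" network from the diagram, VSID 5001:
print(nvgre_header(5001).hex())  # 2000655800138900
```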
  • 45. Bridge Between VM Networks & Physical Networks
      • Multi-tenant VPN gateway in Windows Server 2012 R2
      • Integral multitenant edge gateway for seamless connectivity
      • Guest clustering for high availability
      • BGP for dynamic route updates
      • Encapsulates & de-encapsulates NVGRE packets
      • Multitenant-aware NAT for Internet access
  • 46.

    Capability                         | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
    VM Live Migration                  | Yes                 | No (1)             | Yes (2)
    VM Live Migration with Compression | Yes (R2)            | No                 | No
    VM Live Migration over RDMA        | Yes (R2)            | No                 | No
    1GB Simultaneous Live Migrations   | Unlimited (3)       | N/A                | 4
    10GB Simultaneous Live Migrations  | Unlimited (3)       | N/A                | 8
    Live Storage Migration             | Yes                 | No (4)             | Yes (5)
    Shared-Nothing Live Migration      | Yes                 | No                 | Yes (5)
    Live Migration Upgrades            | Yes (R2)            | N/A                | Yes
    VM Live Cloning                    | Yes (R2)            | No                 | Yes (6)
  • 47. High Availability & Resiliency
  • 48. Integrated Solution for Resilient Virtual Machines
      • Massive scalability, with support for 64 physical nodes & 8,000 VMs
      • VMs automatically fail over & restart on physical host outage
      • Enhanced Cluster Shared Volumes
      • Cluster VMs on SMB 3.0 storage
      • Dynamic Quorum & Witness
      • Reduced AD dependencies
      • Drain Roles – maintenance mode
      • VM Drain on Shutdown
      • VM Network Health Detection
      • Enhanced Cluster Dashboard
    [Diagram: cluster Dynamic Quorum configuration]
  • 49. Complete Flexibility for Deploying App-Level HA
      • Full support for running clustered workloads on a Hyper-V host cluster
      • Guest clusters that require shared storage can utilize software iSCSI, Virtual FC or SMB
      • Full support for Live Migration of guest cluster nodes
      • Full support for Dynamic Memory of guest cluster nodes
      • Restart Priority, Possible & Preferred Ownership, and AntiAffinityClassNames help ensure optimal operation
    [Diagram: a guest cluster running on a Hyper-V cluster, supported with Live Migration; guest cluster nodes restart on physical host failure]
  • 50. Guest Clustering No Longer Bound to Storage Topology
      • VHDX files can be presented to multiple VMs simultaneously as shared storage
      • The VM sees a shared virtual SAS disk
      • An unrestricted number of VMs can connect to a shared VHDX file
      • Utilizes SCSI persistent reservations
      • The VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
      • Supports both Dynamic and Fixed VHDX
    [Diagram: flexible choices for placement of the shared VHDX]
  • 51. Ensure Optimal VM Placement and Restart Operations
      • Failover Priority ensures certain VMs start before others on the cluster
      • Affinity rules allow VMs to reside on certain hosts in the cluster
      • AntiAffinityClassNames helps to keep virtual machines apart on separate physical cluster nodes
      • AntiAffinityClassNames is exposed through VMM as an Availability Set
    [Diagram: a Hyper-V cluster where, upon failover, VMs restart in priority order and Anti-Affinity keeps related VMs apart on separate nodes]
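AntiAffinityClassNames is essentially a placement constraint: two VMs carrying the same class name should not land on the same node. A toy greedy placement illustrating the idea (a hypothetical helper, not the actual failover clustering algorithm, which treats the constraint as a soft preference):

```python
def place_vms(vms: dict, hosts: list) -> dict:
    """vms maps VM name -> anti-affinity class (or None for unconstrained).
    Returns VM name -> host, keeping same-class VMs on separate hosts."""
    classes_on = {host: set() for host in hosts}
    placement = {}
    for vm, aa_class in vms.items():
        for host in hosts:
            if aa_class is None or aa_class not in classes_on[host]:
                placement[vm] = host
                if aa_class is not None:
                    classes_on[host].add(aa_class)
                break
        else:  # real clusters would place the VM anyway; we just report it
            raise RuntimeError(f"anti-affinity cannot be satisfied for {vm}")
    return placement

vms = {"SQL-1": "SQLGuestCluster", "SQL-2": "SQLGuestCluster", "WEB-1": None}
result = place_vms(vms, ["Node1", "Node2"])
print(result["SQL-1"] != result["SQL-2"])  # True – guest cluster nodes kept apart
```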
  • 52.

    Capability                      | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
    Integrated High Availability    | Yes                 | No (1)             | Yes (2)
    Maximum Cluster Size            | 64 Nodes            | N/A                | 32 Nodes
    Maximum VMs per Cluster         | 8,000               | N/A                | 4,000
    Failover Prioritization         | Yes                 | N/A                | Yes (4)
    Affinity Rules                  | Yes                 | N/A                | Yes (4)
    Guest OS Application Monitoring | Yes                 | N/A                | Yes (3)
    Cluster-Aware Updating          | Yes                 | N/A                | Yes (4)
  • 53.

    Capability                                   | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Ent+
    Nodes per Cluster                            | 64                  | N/A (1)            | 32
    VMs per Cluster                              | 8,000               | N/A (1)            | 4,000
    Max Size Guest Cluster (iSCSI)               | 64 Nodes            | 5 Nodes (1)        | 5 Nodes (1)
    Max Size Guest Cluster (Fiber)               | 64 Nodes            | 5 Nodes (2)        | 5 Nodes (2)
    Max Size Guest Cluster (File Based)          | 64 Nodes            | 5 Nodes (1)        | 5 Nodes (1)
    Guest Clustering with Shared Virtual Disk    | Yes                 | Yes (6)            | Yes (6)
    Guest Clustering with Live Migration Support | Yes                 | N/A (3)            | No (4)
    Guest Clustering with DM Support             | Yes                 | No (5)             | No (5)
  • 54. Replicate Hyper-V VMs from a Primary to a Replica Site
      • Affordable in-box business continuity and disaster recovery
      • Configurable replication frequencies of 30 seconds, 5 minutes and 15 minutes
      • Secure replication across the network
      • Agnostic of hardware on either site
      • No need for other virtual machine replication technologies
      • Automatic handling of live migration
      • Simple configuration and management
    [Diagram: once Hyper-V Replica is enabled, chosen VMs begin replication; once replicated, changes are replicated on the chosen frequency; upon site failure, VMs can be started on the secondary site]
  • 55. Replicate to a 3rd Location for an Extra Level of Resiliency
      • Once a VM has been successfully replicated to the replica site, the replica can be replicated to a 3rd location
      • Chained replication
      • Extended Replica contents match the original replication contents
      • Extended Replica replication frequencies can differ from the original replica
      • Useful for scenarios such as SMB -> Service Provider -> Service Provider DR Site
    [Diagram: replication is enabled from primary to secondary, then configured from the 1st replica to a 3rd site]
  • 56.

    Capability           | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
    Incremental Backup   | Yes                 | No (1)             | Yes (1)
    Inbox VM Replication | Yes                 | No (1)             | Yes (1)

    Replication Capability  | Hyper-V Replica       | vSphere Replication
    Architecture            | Inbox with Hypervisor | Virtual Appliance
    Replication Type        | Asynchronous          | Asynchronous
    RTO                     | 30s, 5m, 15m          | 15 minutes to 24 hours
    Replication             | Tertiary (R2)         | Secondary
    Planned Failover        | Yes                   | No
    Unplanned Failover      | Yes                   | Yes
    Test Failover           | Yes                   | No
    Simple Failback Process | Yes                   | No
    Automatic Re-IP Address | Yes                   | No
    Point-in-Time Recovery  | Yes, 15 points        | No
    Orchestration           | Yes, PowerShell, HVRM | No, SRM
  • 57. Hyper-V: A More Complete Virtualization Platform
      • Scalability, Performance & Density: Host: 320 LP | 4TB; Host: 1,024 VMs; VM: 64 vCPU | 1TB; VM: 64TB VHDX; Cluster: 64 | 8,000; Virtual Fiber Channel; 4K Disk Support; ODX; QoS
      • Security & Multitenancy: Extensible Switch; PVLANs; ARP/ND Spoofing protection; DHCP Guard; Monitoring; Mirroring; DVMQ | SR-IOV; IPsec Task Offload; BitLocker
      • Flexible Infrastructure: Live Migration; Storage Migration; Shared-Nothing LM; Network Virtualization; NIC Teaming
      • High Availability & Resiliency: Incremental Backup; Hyper-V Replica; Cluster: 64 | 8,000; Secure Cluster Storage; Enhanced CSV; 3-Level Availability; Priority & Affinity
  • 58.