Tudor Damian - Comparing MS Cloud with VMware Cloud

One of my sessions at Microsoft Summit 2013

  1. Microsoft Summit 2013 - the ultimate business and technology conference - Public & Private Cloud Track
  2. Tudor Damian, IT Solutions Specialist, Virtual Machine MVP - tudy.tel
  3. The Good:
• An API set in the hypervisor that vendors can program against
• Antivirus can run at this level and scan all virtual machines
• Can run on CPUs that don't have virtualization extensions
• Only 144MB of code vs. the competition's 5GB
The Not as Good:
• The same API set is available for hackers to program against
• Antivirus has access to all VMs, and so would an exploited AV engine
• 144MB of code running at Ring -1
• Drivers must be written specifically for this hypervisor, so supported hardware is somewhat limited
  4. The Good:
• No 3rd party APIs in the hypervisor for hackers to code against
• No global AV option that could compromise all VMs
• Lots of hardware choices, because it relies on Windows drivers
• A 1.4MB hypervisor running at Ring -1 vs. 144MB in vSphere 5.1
The Not as Good:
• No APIs for third parties to add value in the hypervisor
• No option to run antivirus in the hypervisor
• Requires hardware with CPU virtualization extensions
• Requires a Windows management partition for the drivers
  5. http://blogs.technet.com/b/keithmayer/archive/2013/10/15/vmware-or-microsoft-comparing-vsphere-5-5-and-windows-server-2012-r2-at-a-glance.aspx
http://www.virtualizationmatrix.com/matrix.php?category_search=all
https://channel9.msdn.com/Events/TechEd/Europe/2013/MDC-B353
https://channel9.msdn.com/Events/TechEd/Europe/2013/MDC-B352
  6. Source: Kevin Turner (Microsoft COO) @ WPC 2013, based on IDC reports
  7. Scalability & Performance | Security & Multitenancy | Flexible Infrastructure | High Availability & Resiliency
  8. Scalability, Performance & Density
  9. System Resource | Hyper-V (2008 R2) | Hyper-V (2012 R2) | Improvement Factor
Host - Logical Processors | 64 | 320 | 5×
Host - Physical Memory | 1TB | 4TB | 4×
Host - Virtual CPUs per Host | 512 | 2,048 | 4×
VM - Virtual CPUs per VM | 4 | 64 | 16×
VM - Memory per VM | 64GB | 1TB | 16×
VM - Active VMs per Host | 384 | 1,024 | 2.7×
VM - Guest NUMA | No | Yes | -
Cluster - Maximum Nodes | 16 | 64 | 4×
Cluster - Maximum VMs | 1,000 | 8,000 | 8×
  10. System Resource | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ | vSphere 5.5 Ent+
Host - Logical Processors | 320 | 160 | 160 | 320
Host - Physical Memory | 4TB | 32GB (1) | 2TB | 4TB
Host - Virtual CPUs per Host | 2,048 | 2,048 | 2,048 | 4,096
VM - Virtual CPUs per VM | 64 | 8 | 64 (2) | 64 (2)
VM - Memory per VM | 1TB | 32GB (1) | 1TB | 1TB
VM - Active VMs per Host | 1,024 | 512 | 512 | 512
VM - Guest NUMA | Yes | Yes | Yes | Yes
Cluster - Maximum Nodes | 64 | N/A (3) | 32 | 32
Cluster - Maximum VMs | 8,000 | N/A (3) | 4,000 | 4,000
(1) Host physical memory is capped at 32GB, thus maximum VM memory is also restricted to 32GB.
(2) vSphere 5.1 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. The Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
(3) For clustering/high availability, customers must purchase vSphere.
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf, https://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Platform-Technical-Whitepaper.pdf, http://www.vmware.com/products/vsphere-hypervisor/faq.html
  11. • Virtual Fibre Channel - connect a VM directly to a FC SAN without sacrificing features
• 64TB virtual hard disks - increased capacity, protection & alignment optimization
• Native 4K disk support - take advantage of enhanced density and reliability
• Online VHDX resize - increased flexibility for virtual disks, with support for grow & shrink operations
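As a rough illustration of how two of the storage features above are driven from the in-box Hyper-V PowerShell module, here is a minimal sketch; the path and sizes below are invented for the example.

```powershell
# Create a dynamic VHDX with a 4KB logical sector size (path and size are examples)
New-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 1TB -Dynamic -LogicalSectorSizeBytes 4096

# Grow the same VHDX while the owning VM keeps running
# (online resize requires the disk to be attached to a virtual SCSI controller)
Resize-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 2TB
```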
  12. • Boot from USB disk - flexible deployment option for diskless servers (Hyper-V Server)
• Offloaded Data Transfer - offloads storage-intensive tasks to the SAN
• Storage Spaces - storage resiliency, availability & performance with commodity hardware
  13. Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
Virtual Fibre Channel | Yes | Yes | Yes
3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP) (1)
Native 4-KB Disk Support | Yes | No | No
Maximum Virtual Disk Size | 64TB VHDX | 62TB (2) | 62TB (2)
Online Virtual Disk Resize | Yes | Grow Only | Grow Only
Maximum Pass-Through Disk Size | 256TB+ (3) | 64TB | 64TB
Offloaded Data Transfer | Yes | No | Yes (VAAI) (4)
Boot from USB | Yes | Yes | Yes
Tiered Storage Pooling | Yes | No | No
(1) vStorage API for Multipathing (VAMP) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1 and above.
(2) vSphere 5.5 support for 62TB VMDK files is limited to VMFS5 and NFS datastores; VMFS3 datastores are still limited to 2TB VMDK files. Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.
(3) The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and the chosen file system within the guest. Recent Windows Server operating systems support disks in excess of 256TB.
(4) vStorage API for Array Integration (VAAI) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1 and above.
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-BF2C8E24-B530-4C94-85F6-09E5AE781466.html&resultof=%2262tb%22%20
  14. • Dynamic Memory - increased control for greater virtual machine consolidation
• Resource Metering - track historical data for virtual machine usage
• Network QoS - a consistent level of network performance based on SLAs
• Storage QoS - control the allocation of storage IOPS between VM disks
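A minimal PowerShell sketch of how these four controls are typically configured with the in-box Hyper-V module on 2012 R2; the VM name and the limit values are examples.

```powershell
# Dynamic Memory with example startup/minimum/maximum values
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB

# Resource Metering: start collecting, then read the accumulated usage data
Enable-VMResourceMetering -VMName "Web01"
Get-VM -Name "Web01" | Measure-VM

# Network QoS: cap the VM's virtual NIC at roughly 100 Mbps (value is in bits per second)
Set-VMNetworkAdapter -VMName "Web01" -MaximumBandwidth 100000000

# Storage QoS (2012 R2): cap the first SCSI-attached virtual disk at 500 IOPS
Set-VMHardDiskDrive -VMName "Web01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500
```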
  15. Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
Dynamic Memory | Yes | Yes | Yes
Resource Metering | Yes | Yes (1) | Yes
Network QoS | Yes | No (2) | Yes (2)
Storage QoS | Yes | No (2) | Yes (2)
(1) Without vCenter, Resource Metering in the vSphere Hypervisor is only available on an individual host-by-host basis.
(2) Quality of Service (QoS) is only available in the Enterprise Plus edition of vSphere 5.5.
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/buy/editions_comparison.html
  16. Security & Multitenancy
  17. Hyper-V Extensible Switch - a layer-2 network switch for virtual machine connectivity. Granular in-box capabilities:
• ARP/ND Poisoning (spoofing) protection
• DHCP Guard protection
• Virtual Port ACLs
• Trunk Mode to VMs
• Network Traffic Monitoring
• Isolated (Private) VLANs (PVLANs)
• PowerShell & WMI interfaces for extensibility
[Diagram: virtual machines with virtual network adapters connected through the Hyper-V Extensible Switch to the physical network adapter and physical switch]
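These in-box switch capabilities are exposed through the Hyper-V PowerShell cmdlets; a hedged sketch follows, with the VM name, subnet and VLAN IDs invented for illustration.

```powershell
# DHCP Guard and router advertisement guard on the tenant VM's network adapter
Set-VMNetworkAdapter -VMName "Tenant01" -DhcpGuard On -RouterGuard On

# Port ACLs: allow a specific subnet, deny everything else (the more specific match wins)
Add-VMNetworkAdapterAcl -VMName "Tenant01" -RemoteIPAddress "10.0.1.0/24" -Direction Both -Action Allow
Add-VMNetworkAdapterAcl -VMName "Tenant01" -RemoteIPAddress "0.0.0.0/0" -Direction Both -Action Deny

# Put the VM into an isolated PVLAN (primary/secondary VLAN IDs are examples)
Set-VMNetworkAdapterVlan -VMName "Tenant01" -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200
```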
  18. Build extensions for capturing, filtering & forwarding.
Many key features:
• Extension monitoring & uniqueness
• Extensions that learn the VM life cycle
• Extensions that can veto state changes
• Multiple extensions on the same switch
Several partner solutions available:
• Cisco - Nexus 1000V & UCS-VMFEX
• NEC - ProgrammableFlow PF1000
• 5nine - Security Manager
• InMon - sFlow
[Diagram: Hyper-V Extensible Switch architecture showing capture, filtering and forwarding extensions between the extension protocol and extension miniport layers in the parent partition]
  19. Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
Extensible vSwitch | Yes | No | Replaceable (1)
Confirmed Partner Extensions | 5 | N/A | 2
Private Virtual LAN (PVLAN) | Yes | No | Yes (1)
ARP Spoofing Protection | Yes | No | vCNS/Partner (2)
DHCP Snooping Protection | Yes | No | vCNS/Partner (2)
Virtual Port ACLs | Yes | No | vCNS/Partner (2)
Trunk Mode to Virtual Machines | Yes | No | Yes (3)
Port Monitoring | Yes | Per Port Group | Yes (3)
Port Mirroring | Yes | Per Port Group | Yes (3)
(1) The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.1 and is replaceable (by partners such as Cisco/IBM) rather than extensible.
(2) ARP Spoofing Protection, DHCP Snooping Protection & Virtual Port ACLs require the App component of the VMware vCloud Networking & Security (vCNS) product or a partner solution, all of which are additional purchases.
(3) Trunking VLANs to individual vNICs, plus Port Monitoring and Mirroring at a granular level, requires the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.1.
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf, http://www.vmware.com/products/vshield-app/features.html, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html
  20. • Dynamic VMQ - dynamically span multiple CPUs when processing virtual machine network traffic
• IPsec Task Offload - offload IPsec processing from within the virtual machine to the physical network adapter, enhancing performance
• SR-IOV Support - map a virtual function of an SR-IOV-capable physical network adapter directly to a virtual machine
• Virtual Receive Side Scaling - scale a VM's send & receive traffic across multiple virtual processors, increasing performance whilst reducing bottlenecks
  21. SR-IOV - integrated with NIC hardware for increased performance:
• A standard that allows PCI Express devices to be shared by multiple VMs
• More direct hardware path for I/O
• Reduces network latency and the CPU cost of processing traffic, and increases throughput
• SR-IOV-capable physical NICs contain virtual functions that are securely mapped to the VM, bypassing the Hyper-V Extensible Switch
• Full support for Live Migration
[Diagram: VM network stack mapped to a virtual function of an SR-IOV NIC, alongside the synthetic NIC path through the Hyper-V Extensible Switch]
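A short sketch of enabling SR-IOV end to end with the Hyper-V cmdlets; the switch, physical adapter and VM names are examples, and the NIC, firmware and platform must all support SR-IOV.

```powershell
# SR-IOV must be enabled when the external switch is created
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "NIC1" -EnableIov $true

# Connect the VM and request a virtual function for its adapter (IovWeight 1-100 enables VF use)
Connect-VMNetworkAdapter -VMName "LowLatencyVM" -SwitchName "SRIOV-Switch"
Set-VMNetworkAdapter -VMName "LowLatencyVM" -IovWeight 100
```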
  22. In-box disk encryption (BitLocker) to protect sensitive data.
Data protection, built in:
• Supports Used Disk Space Only encryption
• Integrates with the TPM chip
• Network Unlock & AD integration
Multiple disk type support:
• Direct Attached Storage (DAS)
• Traditional SAN LUN
• Cluster Shared Volumes
• Windows Server 2012 File Server share
[Diagram: VHDX files placed on DAS (F:\VM1), a traditional LUN (E:\VM2), a file server share (\\FileServer\VM3) and a Cluster Shared Volume (C:\ClusterStorage\Volume1\VM4)]
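A hedged sketch of encrypting a data volume that holds VHDX files with BitLocker; the drive letter and protector choice are examples, and Used Disk Space Only encryption keeps the initial pass short.

```powershell
# Encrypt only the used space on the E: volume and add a recovery password protector
Enable-BitLocker -MountPoint "E:" -EncryptionMethod Aes256 -UsedSpaceOnly -RecoveryPasswordProtector

# Optionally let the encrypted OS volume unlock this data volume automatically
Enable-BitLockerAutoUnlock -MountPoint "E:"
```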
  23. Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Ent+
Dynamic Virtual Machine Queue | Yes | NetQueue (1) | NetQueue (1)
IPsec Task Offload | Yes | No | No
Virtual Receive Side Scaling | Yes | Yes (VMXNet3) | Yes (VMXNet3)
SR-IOV with Live Migration | Yes | No (2) | No (2)
Storage Encryption | Yes | No | No
(1) VMware vSphere and the vSphere Hypervisor support VMQ only (NetQueue).
(2) VMware's SR-IOV implementation does not support vMotion, HA or Fault Tolerance. DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices such as network cards. Although this boosts VM networking performance and reduces the burden on host CPU cycles, there are a number of caveats to using DirectPath I/O: a small hardware compatibility list; no memory overcommit; no vMotion, VM snapshots or suspend/resume (unless running certain configurations of Cisco UCS); no Fault Tolerance; no Network I/O Control; and no VMsafe/Endpoint Security support. SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, so customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure.
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf
  24. Flexible Infrastructure
  25. Comprehensive feature support for virtualized Linux.
Significant improvements in interoperability:
• Multiple supported Linux distributions and versions on Hyper-V
• Includes Red Hat, SUSE, openSUSE, CentOS and Ubuntu
Comprehensive feature support:
• 64 vCPU SMP
• Virtual SCSI, hot-add & online resize
• Full Dynamic Memory support
• Live backup
• Deeper Integration Services support
[Diagram: Hyper-V parent partition architecture - Windows kernel, independent hardware vendor drivers, Virtual Service Provider, configuration store, worker processes, management service and WMI provider]
  26. Duplication of a virtual machine whilst running.
Export a clone of a running VM:
• A point-in-time image of the running VM is exported to an alternate location
• Useful for troubleshooting a VM without downtime for the primary VM
Export from an existing checkpoint:
• Export a full cloned virtual machine from a point-in-time, existing checkpoint of a virtual machine
• Checkpoints are automatically merged into a single virtual disk
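Both export paths map to single cmdlets in the Hyper-V module; a small sketch with hypothetical VM, checkpoint and path names.

```powershell
# Export a point-in-time copy of a VM while it keeps running (2012 R2)
Export-VM -Name "SQL01" -Path "E:\Exports"

# Export a full clone from an existing checkpoint; its checkpoints are merged into a single disk
Get-VMSnapshot -VMName "SQL01" -Name "Before-Patch" | Export-VMSnapshot -Path "E:\Exports"
```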
  27. Live Migration | Live Storage Migration | Shared-Nothing Live Migration
  28. Hyper-V cluster upgrade without downtime - a simplified upgrade process from 2012 to 2012 R2:
• Customers can upgrade from Windows Server 2012 Hyper-V to Windows Server 2012 R2 Hyper-V with no VM downtime
• Supports Shared-Nothing Live Migration when the storage location also changes
• If using an SMB share, the migration transfers only the VM running state, for faster completion
• Can be automated with PowerShell
• One-way migration only
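The cross-version move described above is an ordinary (shared-nothing) live migration driven from PowerShell; a hedged sketch with example host, VM and path names, using CredSSP authentication for simplicity.

```powershell
# On both hosts: allow incoming/outgoing live migrations (CredSSP shown; Kerberos needs constrained delegation)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP

# Shared-nothing live migration of the VM and its storage from a 2012 host to a 2012 R2 host
Move-VM -Name "App01" -DestinationHost "HV-R2-Node1" -IncludeStorage `
    -DestinationStoragePath "C:\ClusterStorage\Volume1\App01"
```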
  29. Network isolation & flexibility without VLAN complexity:
• Secure isolation for traffic segregation, without VLANs
• VM migration flexibility & seamless integration
Key concepts:
• Provider Address - unique IP addresses routable on the physical network
• VM Networks - the boundary of isolation between different sets of VMs
• Customer Address - VM guest OS IP addresses within the VM Networks
• Policy Table - maintains the relationship between the different addresses & networks
Example policy table:
Network/VSID | Provider Address | Customer Address
Blue (5001) | 192.168.2.10 | 10.10.10.10
Blue (5001) | 192.168.2.10 | 10.10.10.11
Blue (5001) | 192.168.2.12 | 10.10.10.12
Red (6001) | 192.168.2.13 | 10.10.10.10
Red (6001) | 192.168.2.14 | 10.10.10.11
Red (6001) | 192.168.2.12 | 10.10.10.12
  30. Network isolation & flexibility without VLAN complexity:
• Network Virtualization using Generic Routing Encapsulation (NVGRE) uses encapsulation & tunneling
• A standard proposed by Microsoft, Intel, Arista Networks, HP, Dell & Emulex
• VM traffic within the same VSID is routable over different physical subnets
• The VM's packet is encapsulated for transmission over the physical network
• Network Virtualization is part of the Hyper-V Extensible Switch
[Diagram: two hosts on different physical subnets (192.168.2.10 and 192.168.5.12) carrying traffic between 10.10.10.10 and 10.10.10.11 on the same customer network, encapsulated with GRE key 5001]
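A rough sketch of how the policy table from the previous slide maps to PowerShell, assuming the in-box Hyper-V and network virtualization cmdlets of Windows Server 2012 R2; the VM name and MAC address are invented, the IPs and VSID are taken from the slide.

```powershell
# Tag the VM's network adapter with the Blue virtual subnet (VSID 5001)
Set-VMNetworkAdapter -VMName "Blue-VM1" -VirtualSubnetId 5001

# Publish one row of the policy table: CA 10.10.10.10 sits behind PA 192.168.2.10 in VSID 5001
New-NetVirtualizationLookupRecord -CustomerAddress "10.10.10.10" -ProviderAddress "192.168.2.10" `
    -VirtualSubnetID 5001 -MACAddress "00155D010A01" -Rule "TranslationMethodEncap"
```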
  31. Bridge between VM networks & physical networks:
• Multi-tenant VPN gateway in Windows Server 2012 R2
• Integral multi-tenant edge gateway for seamless connectivity
• Guest clustering for high availability
• BGP for dynamic route updates
• Encapsulates & de-encapsulates NVGRE packets
• Multi-tenant-aware NAT for Internet access
  32. Capability | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
VM Live Migration | Yes | No (1) | Yes (2)
VM Live Migration with Compression | Yes (R2) | No | No
VM Live Migration over RDMA | Yes (R2) | No | No
Simultaneous Live Migrations (1GbE) | Unlimited (3) | N/A | 4
Simultaneous Live Migrations (10GbE) | Unlimited (3) | N/A | 8
Live Storage Migration | Yes | No (4) | Yes (5)
Shared-Nothing Live Migration | Yes | No | Yes (5)
Live Migration Upgrades | Yes (R2) | N/A | Yes
VM Live Cloning | Yes (R2) | No | Yes (6)
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/compare.html
  33. High Availability & Resiliency
  34. Integrated solution for resilient virtual machines:
• Massive scalability, with support for 64 physical nodes & 8,000 VMs
• VMs automatically fail over & restart on physical host outage
• Enhanced Cluster Shared Volumes
• Cluster VMs on SMB 3.0 storage
• Dynamic Quorum & Witness
• Reduced AD dependencies
• Drain Roles - maintenance mode
• VM drain on shutdown
• VM network health detection
• Enhanced cluster dashboard
[Diagram: cluster with dynamic quorum configuration]
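The Drain Roles / maintenance mode item above corresponds to the failover clustering cmdlets; a small sketch with a hypothetical node name.

```powershell
# Drain all roles (VMs) off a node before maintenance, then bring it back afterwards
Suspend-ClusterNode -Name "HV-Node2" -Drain
Resume-ClusterNode -Name "HV-Node2" -Failback Immediate
```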
  35. Complete flexibility for deploying app-level HA:
• Full support for running clustered workloads on a Hyper-V host cluster
• Guest clusters that require shared storage can utilize software iSCSI, Virtual Fibre Channel or SMB
• Full support for Live Migration of guest cluster nodes
• Full support for Dynamic Memory on guest cluster nodes
• Restart Priority, Possible & Preferred Ownership, and AntiAffinityClassNames help ensure optimal operation
[Diagram: guest cluster running on a Hyper-V cluster; a guest cluster node restarts on physical host failure, and guest cluster nodes are supported with Live Migration]
  36. Guest clustering no longer bound to the storage topology:
• VHDX files can be presented to multiple VMs simultaneously as shared storage
• The VM sees a shared virtual SAS disk
• An unrestricted number of VMs can connect to a shared VHDX file
• Utilizes SCSI persistent reservations
• The VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
• Supports both dynamic and fixed VHDX
Flexible choices for placement of the shared VHDX.
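A hedged sketch of creating a shared VHDX and attaching it to two guest cluster nodes, assuming the Windows Server 2012 R2 persistent-reservations switch on Add-VMHardDiskDrive; the VM names, path and size are examples.

```powershell
# Create the shared data disk on a Cluster Shared Volume
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -SizeBytes 200GB -Dynamic

# Attach it to both guest cluster nodes with persistent reservations enabled (shared VHDX)
Add-VMHardDiskDrive -VMName "FS-Node1" -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "FS-Node2" -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -SupportPersistentReservations
```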
  37. Ensure optimal VM placement and restart operations:
• Failover Priority ensures certain VMs start before others on the cluster
• Affinity rules allow VMs to reside on certain hosts in the cluster
• AntiAffinityClassNames helps keep virtual machines apart, on separate physical cluster nodes
• AntiAffinityClassNames is exposed through VMM as an Availability Set
[Diagram: Hyper-V cluster with VMs on each node; upon failover, VMs restart in priority order and anti-affinity keeps related VMs apart]
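Priority and anti-affinity are properties of the VMs' cluster groups; a sketch using the FailoverClusters module, with the VM/group names and the class name invented for illustration.

```powershell
Import-Module FailoverClusters

# Give the VM's cluster group High restart priority (3000 = High, 2000 = Medium, 1000 = Low)
(Get-ClusterGroup -Name "SQL-VM1").Priority = 3000

# Keep the two guest cluster nodes on separate physical hosts via a shared anti-affinity class name
$names = New-Object System.Collections.Specialized.StringCollection
$names.Add("SQLGuestCluster") | Out-Null
(Get-ClusterGroup -Name "SQL-VM1").AntiAffinityClassNames = $names
(Get-ClusterGroup -Name "SQL-VM2").AntiAffinityClassNames = $names
```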
  38. Capability | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Integrated High Availability | Yes | No (1) | Yes (2)
Maximum Cluster Size | 64 Nodes | N/A | 32 Nodes
Maximum VMs per Cluster | 8,000 | N/A | 4,000
Failover Prioritization | Yes | N/A | Yes (4)
Affinity Rules | Yes | N/A | Yes (4)
Guest OS Application Monitoring | Yes | N/A | Yes (3)
Cluster-Aware Updating | Yes | N/A | Yes (4)
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/products/vsphere/compare.html, http://www.yellow-bricks.com/2011/08/11/vsphere-5-0-ha-application-monitoring-intro/, http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/features/application-HA.html
  39. Capability | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Ent+
Nodes per Cluster | 64 | N/A (1) | 32
VMs per Cluster | 8,000 | N/A (1) | 4,000
Max Size Guest Cluster (iSCSI) | 64 Nodes | 5 Nodes (1) | 5 Nodes (1)
Max Size Guest Cluster (Fibre Channel) | 64 Nodes | 5 Nodes (2) | 5 Nodes (2)
Max Size Guest Cluster (File Based) | 64 Nodes | 5 Nodes (1) | 5 Nodes (1)
Guest Clustering with Shared Virtual Disk | Yes | Yes (6) | Yes (6)
Guest Clustering with Live Migration Support | Yes | N/A (3) | No (4)
Guest Clustering with Dynamic Memory Support | Yes | No (5) | No (5)
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.mscs.doc%2FGUID-6BD834AE-69BB-4D0E-B0B6-7E176907E0C7.html, http://kb.vmware.com/kb/1037959
  40. Replicate Hyper-V VMs from a primary to a replica site:
• Affordable, in-box business continuity and disaster recovery
• Configurable replication frequencies of 30 seconds, 5 minutes and 15 minutes
• Secure replication across the network
• Agnostic of the hardware on either site
• No need for other virtual machine replication technologies
• Automatic handling of live migration
• Simple configuration and management
Once Hyper-V Replica is enabled, VMs begin replication; after the initial copy, changes are replicated on the chosen frequency, and upon site failure VMs can be started on the secondary site.
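A minimal sketch of setting up Hyper-V Replica from PowerShell; the server names, storage path and VM name are examples, and the -ReplicationFrequencySec parameter assumes Windows Server 2012 R2 (2012 is fixed at 5 minutes).

```powershell
# On the replica server: accept incoming replication over Kerberos/HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"

# On the primary server: replicate the VM every 5 minutes, then send the initial copy
Enable-VMReplication -VMName "App01" -ReplicaServerName "hv-replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "App01"
```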
  41. Replicate to a third location for an extra level of resiliency:
• Once a VM has been successfully replicated to the replica site, the replica can itself be replicated to a third location (chained replication)
• The extended replica's contents match the original replica's contents
• The extended replica's replication frequency can differ from the original replica's
• Useful for scenarios such as SMB -> Service Provider -> Service Provider DR site
[Diagram: replication configured from the primary to the secondary site, then enabled on the first replica towards a third site]
  42. Capability | Hyper-V (2012 & R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Incremental Backup | Yes | No (1) | Yes (1)
Inbox VM Replication | Yes | No (1) | Yes (1)
Replication Capability | Hyper-V Replica | vSphere Replication
Architecture | Inbox with hypervisor | Virtual appliance
Replication Type | Asynchronous | Asynchronous
RTO | 30s, 5m, 15m | 15 minutes - 24 hours
Replication | Tertiary (R2) | Secondary
Planned Failover | Yes | No
Unplanned Failover | Yes | Yes
Test Failover | Yes | No
Simple Failback Process | Yes | No
Automatic Re-IP Address | Yes | No
Point-in-Time Recovery | Yes, 15 points | No
Orchestration | Yes, PowerShell, HVRM | No, SRM
vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/products/vsphere/compare.html, http://www.vmware.com/products/vsphere/features/replication.html, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Replication-Overview.pdf
  43. Hyper-V: a more complete virtualization platform.
Scalability, Performance & Density: Host: 320 LP | 4TB; Host: 1,024 VMs; VM: 64 vCPU | 1TB; VM: 64TB VHDX; Cluster: 64 nodes | 8,000 VMs; Virtual Fibre Channel; 4K Disk Support; ODX; QoS
Security & Multitenancy: Extensible Switch; PVLANs; ARP/ND Spoofing protection; DHCP Guard; Monitoring; Mirroring; DVMQ | SR-IOV; IPsec Task Offload; BitLocker
Flexible Infrastructure: Live Migration; Storage Migration; Shared-Nothing LM; Network Virtualization; Incremental Backup; Hyper-V Replica; NIC Teaming
High Availability & Resiliency: Cluster: 64 nodes | 8,000 VMs; Secure Cluster Storage; Enhanced CSV; 3-Level Availability; Priority & Affinity
  44. Management stack, layer by layer (Hypervisor, VM Management, Automation, Service Mgmt., Self-Service, Monitoring, Protection):
• VMware: vSphere Hypervisor, vCenter Server, vFabric Application Director, vCenter Orchestrator, vCloud Automation Center, vCloud Director, vCenter Ops Mgmt. Suite, vSphere Data Protection
• Microsoft: Hyper-V, Virtual Machine Manager, Orchestrator, Service Manager, App Controller, Operations Manager, Data Protection Manager
  45. Day-to-day VM management with Virtual Machine Manager:
• VMM integrates with vCenter 4.1/5.0/5.1 for managing ESX/ESXi 4.1/5.0/5.1
• Aimed at the day-to-day management of VMware VMs: create, manage, store, deploy
• More advanced tasks still use vCenter: vDS, FT VMs, update management
• VMM supports managing existing, and creating new, vSphere VM & service templates
• Supports key vSphere features such as vMotion, Storage vMotion, PVSCSI, thin provisioning and hot-add, and adds its own capabilities on top: Dynamic Optimization, Power Optimization, PRO, intelligent placement, private clouds, etc.
  46. Self-service access to VMs running on vSphere:
• App Controller integrates with VMM and provides access to any VMM clouds
• VMM clouds can consist of capacity from Hyper-V, vSphere, XenServer, or a combination
• Users & groups can be delegated access to these vSphere-based clouds, with individual-level capacity limits
• Users can deploy vSphere-based VM & service templates to vSphere hosts
• Users can also have access to Windows Azure for deploying VMs & applications
  47. Partnering with Veeam to deliver deep vSphere insight:
• The Veeam MP for VMware provides OpsMgr admins with granular insight into their vSphere infrastructure
• Agentless collection providing end-to-end visibility from the physical server, to the hypervisor, to the virtual machines hosting your critical applications and services
• Full System Center functionality, including alerts, diagrams, dashboards, reporting, auditing, notifications, responses and automation for all VMware components
• Powerful reports for capacity planning, failure modelling, cluster capacity and more
• Rich topology views for storage, compute & networking
  48. Automating key tasks within the vSphere environment:
• The vSphere Integration Pack contains a large number of out-of-the-box activities for automating vSphere
• The administrator connects Orchestrator to vCenter, or to ESXi directly
• Allows the administrator to automate vSphere tasks in isolation, or to combine vSphere activities into broader runbooks connected with other systems
• If the Integration Pack doesn't contain the desired task, admins can add their own through scripts or PowerCLI
  49. Constructing, Delivering & Consuming Apps | Maintaining, Managing & Monitoring Apps | Protection of Key Applications & Workloads
  50. Construction, Delivery & Consumption:
• Standardized VM templates - roles & features, application layers
• VM Templates 2.0: service templates
• Deployment into clouds
• Role-based self-service
• Controlled consumption
  51. Application Construction, Delivery & Consumption
Capability | Microsoft | VMware
Request Private Cloud Resources | Yes | Yes (1)
Role-Based Self-Service | Yes | Yes
Standardized Templates | Yes | Yes (2)
Template Granularity: Roles / Features | Yes | No
Template Granularity: Application Layer | Yes | Yes (3)
Service/Multi-Tier Templates | Yes | Yes (3)
Deployment Across Heterogeneous Clouds | Yes | Yes (4)
(1) vCloud Automation Center allows for the requesting of private cloud resources but lacks a true CMDB capability in-box.
(2) Each VMware VM template has its own VMDK, even if the template varies only slightly in its configuration options.
(3) No alternative to Server Application Virtualization (App-V), thus relying on regular installation methods or inflexible scripts.
(4) vCloud Automation Center allows deployment onto non-VMware infrastructure at a cost of $400 per managed machine + S&S; however, once deployed, it could not be managed from vCloud Director along with other VMware-based VMs.
VMware information: http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/features.html, http://www.vmware.com/files/pdf/management/vmw-vcloud-automation-center-faq.pdf
  52. Maintenance, Management & Monitoring:
• Centralized maintenance - extends beyond the private cloud
• Integrated service management
• Powerful, relevant automation
• Deep application insight
• Connecting Dev-Ops
  53. Application Maintenance, Management & Monitoring
Capability | Microsoft | VMware
Centralized Patching & Maintenance | Yes | Yes
Non-Virtualized Infrastructure Management | Yes | Yes (1)
Integrated Service Management | Yes | Lacks CMDB (2)
Heterogeneous Automation | Yes | VMware-centric (3)
Deep Application Insight | Yes | Yes (4)
Integrated Dev-Ops | Yes | No (5)
(1) Would require purchases outside of the vCloud Suite, including vCloud Automation Center, vFabric Hyperic and vCenter Operations Management Suite Enterprise Edition.
(2) vCloud Automation Center enables application owners or administrators to request infrastructure, but vCAC lacks any form of true CMDB for complete ITIL/MOF IT service management.
(3) VMware's vCenter Orchestrator has a limited set of plug-ins, the vast majority of which are VMware-centric. No mention of plug-ins for other enterprise management systems and tools such as those from HP, IBM, BMC, etc.
(4) Remediation is limited to VMware best practices, thus lacking application-specific remediation guidance.
(5) Lab Manager is deprecated, with customers expected to upgrade to vCloud Director, which has no connection to a development IDE.
VMware information: http://www.vmware.com/products/datacenter-virtualization/vcloud-suite/compare.html, http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/overview.html, http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/buy.html, http://www.vmware.com/products/application-platform/vfabric-hyperic/buy.html, https://solutionexchange.vmware.com/store/categories/21/view_all, http://www.vmware.com/products/labmanager/overview.html
  54. Protection of Key Applications & Workloads:
• Granular workload protection - physical or virtual
• Generic data source protection
• Centralized, role-based management
• Backup to tape
• Low-cost disaster recovery
  55. Protection of Key Applications & Workloads
Capability | Microsoft | VMware
Granular Workload Protection | Yes | No (1)
Physical & Virtual Protection | Yes | No (1)
3rd Party Integration | Yes | No (2)
Centralized Role-Based Management | Yes | Yes (3)
Tape Backup | Yes | No (4)
Integrated Disaster Recovery | Yes | Yes
(1) VMware Data Protection offers no protection for the workloads within the virtual machine, focusing simply on the VM itself as the protection unit, and offers no protection of physical machines.
(2) VMware Data Protection is not extensible by 3rd parties.
(3) VMware Data Protection is capped at 10 appliances per vCenter, with a maximum of 2TB of storage / 100 VMs per appliance.
(4) VMware Data Protection offers no protection to tape media; disk only.
VMware information: http://www.vmware.com/files/pdf/techpaper/Introduction-to-Data-Protection.pdf, http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vmware-data-protection-administration-guide-51.pdf
  56. Fabric | Hypervisor | OS Management | Application Frameworks
  57. Cross-Platform Infrastructure Management
Capability | Microsoft | VMware
Multi-Hypervisor Management | Yes | Limited (1)
Comprehensive Guest OS Support | Yes | Yes (2)
3rd Party Management Integration | Yes | Limited (3)
Multiple Application Frameworks | Yes | Yes (4)
(1) vCloud Automation Center focuses on provisioning VMs to alternative hypervisors, whilst the Multi-Hypervisor Manager plug-in for vCenter offers only very basic capabilities.
(2) VMware does not produce any operating systems, and support is therefore focused not on the guest operating system itself but on the VM Tools and hardware.
(3) vCenter Orchestrator has a limited number of 3rd party plug-ins, and vCenter Operations Management Suite requires the purchase of 3rd party adaptors to integrate.
(4) Monitoring capabilities do extend to multiple frameworks, but support for many frameworks is out of date (.NET 3.0 is the latest, for instance). Also, the monitoring is not connected to any true DevOps capability and lacks remediation guidance around detected issues.
VMware information: http://www.vmware.com/support/mhm/doc/vcenter-multi-hypervisor-manager-10-release-notes.html, http://partnerweb.vmware.com/GOSIG/home.html
  58. [Diagram: vCloud Connector 2.0 linking vCloud on-premise (with Director), vCloud at a hoster (with Director), vCloud Automation Center (Amazon, Hyper-V, Xen) and the VMware vCloud Service / vCloud Providers]
  59. http://blogs.technet.com/b/keithmayer/archive/2013/10/15/vmware-or-microsoft-comparing-vsphere-5-5-and-windows-server-2012-r2-at-a-glance.aspx
http://www.virtualizationmatrix.com/matrix.php?category_search=all
https://channel9.msdn.com/Events/TechEd/Europe/2013/MDC-B353
https://channel9.msdn.com/Events/TechEd/Europe/2013/MDC-B352
http://www.datacentertcotool.com/
