For more information on Virtualization Manager visit: http://www.solarwinds.com/virtualization-manager.aspx
Watch this webcast: http://www.solarwinds.com/resources/webcasts/hyper-v-vs-vsphere-understanding-the-differences.html
Watch this webinar with Scott Lowe, Founder and Managing Consultant at The 1610 Group, and SolarWinds virtualization expert Jonathan Reeve, in which they discuss “Hyper-V vs. vSphere: Understanding the differences.”
The virtualization market is abuzz with talk of different hypervisors – most prominently VMware ESX® versus Microsoft Hyper-V®, which together own over 90% of the market. Small and medium businesses are already moving quickly toward Hyper-V, and a growing number of larger organizations are beginning to put plans in place to transition some portion of their environment from ESX to Hyper-V.
In this webcast we explore the reasons for these changes and the ecosystems for these two platforms both now and in the future. We also take a look ahead to what is known about Hyper-V 3.0 and why it warrants an even deeper look when evaluating hypervisors for your future virtualization deployments.
2. About the Speakers
Scott Lowe
• 18 years of experience in the IT industry
• Prolific author of thousands of articles and 3 books
• Top virtualization blogger
• Founder and Managing Consultant, The 1610 Group
Follow me on Twitter @otherscottlowe
Jonathan Reeve
• SolarWinds, Senior Director of Product Management
• Previously ran product management at Hyper9™
• Multiple successful start-ups in the IT space
3. Agenda
Why Should You Learn About Hyper-V™?
Hypervisor Types and Footprints
Kernel Variances
A Similarity: CPU Scheduling Controls
vSphere Memory Handling
Hyper-V™ Dynamic Memory
Product Storage Options
vSphere™ Storage Capabilities
Networking
Workload Migrations
4. Why Should You Learn About Hyper-V?
You may not always be working with VMware®
With Windows® 8, Microsoft® will release a new version of Hyper-V with new features
For many organizations, Hyper-V has proven to be “good enough” for their needs
For those with existing Microsoft infrastructures, Hyper-V may be the best fit
5. Hypervisor Types and Footprints
Common misunderstanding
Both vSphere and Hyper-V are Type 1 hypervisors
vSphere has a much smaller footprint than Hyper-V
vSphere: 144 MB
Hyper-V: Minimum of 10 GB
Hyper-V requires a full (or core) Windows Server installation
Hyper-V also requires the use of a “root partition” for operations
General purpose Windows = greater hardware compatibility
6. Kernel Variances
vSphere
Monolithic kernel
vSphere’s architecture revolves around a more monolithic core which includes many shared drivers as well as the virtualization stack
Hyper-V
Microkernelized
Lends flexibility and security to the hypervisor model by isolating the virtual machines from one another with little shared code, such as drivers
More synthetic drivers are used, which can boost overall service performance
7. High Level Overview
Operating system support
vSphere enjoys far broader operating system support
Licensing limitations
vSphere imposes stricter hardware-based licensing limits
Hyper-V provides significant Windows licensing benefits
Scalability
vSphere scales far beyond Hyper-V
vSphere vCPU per VM: 32
Hyper-V vCPU per VM: 4
8. A Similarity: CPU Scheduling Controls
vSphere
Shares. If a VM has a share value that is half of another, it’s entitled to only half the CPU resources.
Reservation. A guarantee that a virtual machine will receive at least some level of resourcing.
Limit. Caps the resources a virtual machine can consume, preventing it from using unlimited CPU.
vSphere has a powerful CPU scheduling mechanism in place that ensures that virtual machines receive attention from the system. VMware has produced a white paper that goes into great technical depth on how this scheduling is achieved.
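To make the shares/reservation/limit interplay concrete, here is a minimal sketch of how a proportional-share entitlement could be reasoned about. The host capacity, share values, and VM names are illustrative assumptions; this is not VMware's scheduler.

    # Minimal sketch, not VMware's implementation: proportional-share CPU
    # entitlement clamped by reservation and limit. All numbers are made up.
    def entitlements(host_mhz, vms):
        total_shares = sum(vm["shares"] for vm in vms)
        result = {}
        for vm in vms:
            proportional = host_mhz * vm["shares"] / total_shares
            floor = vm.get("reservation", 0)           # guaranteed minimum (MHz)
            ceiling = vm.get("limit", float("inf"))    # hard cap (MHz)
            result[vm["name"]] = min(max(proportional, floor), ceiling)
        return result

    vms = [
        {"name": "vm-a", "shares": 2000, "reservation": 500},
        {"name": "vm-b", "shares": 1000, "limit": 1500},  # half of vm-a's shares
    ]
    print(entitlements(10000, vms))  # vm-a is entitled to roughly twice vm-b's CPU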
9. A Similarity: CPU Scheduling Controls
Hyper-V
Virtual machine reserve (percentage). Allows the reservation of a portion of the server’s total processing resources for this virtual machine.
Virtual machine limit (percentage). Limits how much of a host’s processing resources can be consumed by a single virtual machine.
Relative weight. Allows the weighting of this virtual machine against others.
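Because Hyper-V expresses reserve and limit as percentages rather than MHz, a quick sketch can help relate them to host capacity. The host size, vCPU count, and percentages below are assumptions for illustration, and the conversion shown is a simplification rather than Hyper-V's exact accounting.

    # Rough approximation, not Hyper-V's exact math: convert per-VM percentage
    # settings into a share of the host's logical processors.
    HOST_LOGICAL_PROCESSORS = 16   # assumed host size

    def vm_cpu_bounds(vcpus, reserve_pct, limit_pct):
        vm_fraction_of_host = vcpus / HOST_LOGICAL_PROCESSORS
        guaranteed = vm_fraction_of_host * (reserve_pct / 100)  # reserved share of host
        maximum = vm_fraction_of_host * (limit_pct / 100)       # capped share of host
        return guaranteed, maximum

    guaranteed, maximum = vm_cpu_bounds(vcpus=4, reserve_pct=25, limit_pct=75)
    print(f"guaranteed {guaranteed:.0%} of host CPU, capped at {maximum:.0%}")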
10. Automated Resource Scheduling
vSphere
Distributed Resource Scheduler
Aggregates cluster resources into a single resource pool
Provides both initial placement services and continuous optimization
Enables affinity rules to ensure that workload placement meets business and availability rules
Supports clusters of up to 32 hosts and 1,280 virtual machines
Hyper-V
» Resource placement
• Current VMM provides initial placement services only (a one-off service)
» Dynamic Optimization
• VMM 2012 will provide cluster-level workload balancing for VMs
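As a rough illustration of what initial placement involves, in the spirit of DRS or VMM placement but not either product's actual algorithm, the sketch below greedily picks the least-loaded host while honoring a simple anti-affinity rule. Host names, load figures, and the rule are made-up assumptions.

    # Illustrative sketch only: greedy initial placement with one anti-affinity rule.
    hosts = {"esx-01": 40, "esx-02": 65, "esx-03": 30}   # current CPU load (%)
    anti_affinity = [{"web-01", "web-02"}]                # keep these VMs apart
    placement = {"web-01": "esx-03"}

    def place(vm, vm_cost):
        for host in sorted(hosts, key=hosts.get):         # least loaded first
            violates = any(
                {vm, other} in anti_affinity and placement.get(other) == host
                for other in placement
            )
            if not violates:
                hosts[host] += vm_cost
                placement[vm] = host
                return host
        raise RuntimeError("no host satisfies the placement rules")

    print(place("web-02", vm_cost=10))   # lands on esx-01, away from web-01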
11. vSphere Memory Handling
VMware Oversubscription/Overcommit. Allows administrators to assign more aggregate RAM to virtual machines than is actually physically available in the server.
Transparent Page Sharing. This is basically a deduplication method applied to RAM rather than storage.
Guest Ballooning. A method by which virtual machines can borrow memory from one another.
Memory compression. A technique used to prevent the hypervisor from needing to swap memory pages to disk when RAM becomes limited.
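To make Transparent Page Sharing's deduplication idea concrete, here is a minimal sketch that detects identical memory pages by hashing them and keeps a single backing copy. It illustrates the concept only; it is not VMware's implementation, and the page contents are made up.

    # Minimal dedup sketch: identical pages share one backing copy.
    import hashlib

    def share_pages(pages):
        store = {}        # content hash -> single shared copy
        mapping = []      # per guest page, which shared copy backs it
        for page in pages:
            digest = hashlib.sha256(page).hexdigest()
            store.setdefault(digest, page)
            mapping.append(digest)
        return store, mapping

    pages = [b"\x00" * 4096, b"\x00" * 4096, b"kernel text" + b"\x00" * 4085]
    store, mapping = share_pages(pages)
    print(f"{len(pages)} guest pages backed by {len(store)} physical copies")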
12. Hyper-V Dynamic Memory
Dynamic Memory relies primarily on a process similar to vSphere’s Guest Ballooning feature.
To prevent a virtual machine from having RAM reduced to dangerous levels, Hyper-V provides a default buffer of 20% of unused memory.
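A back-of-the-envelope sketch of how that buffer plays out: the host aims to keep the VM's assigned memory a cushion above what the guest is actually demanding. The 20% figure is the default mentioned above; the demand value and the simple formula are illustrative assumptions rather than Hyper-V's exact accounting.

    # Rough approximation of the Dynamic Memory buffer, not Hyper-V's exact math.
    def dynamic_memory_target(demand_mb, buffer_pct=20):
        # keep roughly buffer_pct of headroom above current guest demand
        return demand_mb * (1 + buffer_pct / 100)

    print(dynamic_memory_target(4096))   # ~4915 MB assigned for 4096 MB of demand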
13. Product Storage Options
Technology   Description                     vSphere   Hyper-V
DAS          Directly attached storage       Yes       Yes
NAS          Network attached storage        Yes       --
FC           Fibre Channel                   Yes       Yes
iSCSI        Internet SCSI                   Yes       Yes
FCoE         Fibre Channel over Ethernet     Yes       --
14. Supported Storage Features
Technology          Description                                                          vSphere   Hyper-V
Thin Provisioning   Allows administrators to allocate the space they believe they may
                    ultimately need for a service without actually having to dedicate
                    the space right now                                                  Yes       Yes
Linked Images       Link base virtual hard drive images to one another so that there
                    is less repetition of data                                           --        --
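The thin-provisioning idea maps closely to sparse files, which can be demonstrated in a few lines. This is an analogy, not how VMDK or VHD files are implemented; the file name and sizes are arbitrary, and the block count shown requires a POSIX filesystem that supports sparse files.

    # Sparse-file analogy for thin provisioning (POSIX; sizes are arbitrary).
    import os

    path = "thin_disk.img"
    with open(path, "wb") as disk:
        disk.truncate(10 * 1024**3)        # advertise a 10 GB virtual disk
        disk.seek(0)
        disk.write(b"guest data" * 1000)   # only written regions consume blocks

    info = os.stat(path)
    print(f"apparent size: {info.st_size} bytes")
    print(f"blocks actually allocated: {info.st_blocks * 512} bytes")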
15. VMFS vs. VHD
Both VMware and Microsoft provide clustering mechanisms
VHD relies on Microsoft Cluster Shared Volumes (CSV)
Much more complicated than vSphere’s clustering
Both Microsoft and VMware provide direct access to storage
vSphere: Raw Device Mapping (RDM)
Hyper-V: Pass-through disks
16. vSphere Storage Capabilities
Centralized management of datastores. A single location in which all datastores can be managed in order to provide more visibility into the environment.
Storage Management Initiative Specification (SMI-S) support. Standardized monitoring of storage.
Caching. Improves performance.
Storage DRS. A way to automatically place VMs to load balance storage I/O demands.
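To illustrate the kind of decision Storage DRS automates, here is a toy placement pass that chooses a datastore by free capacity and observed latency. The datastore names, figures, and the scoring rule are assumptions for the example, not VMware's algorithm.

    # Toy initial-placement pass in the spirit of Storage DRS; data is made up.
    datastores = {
        "ds-ssd-01": {"free_gb": 400, "latency_ms": 2.1},
        "ds-sas-01": {"free_gb": 900, "latency_ms": 9.5},
        "ds-sas-02": {"free_gb": 650, "latency_ms": 4.0},
    }

    def pick_datastore(required_gb):
        candidates = [n for n, d in datastores.items() if d["free_gb"] >= required_gb]
        # prefer the lowest-latency datastore; break ties with more free space
        return min(candidates, key=lambda n: (datastores[n]["latency_ms"],
                                              -datastores[n]["free_gb"]))

    print(pick_datastore(300))   # -> ds-ssd-01 with these made-up numbers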
18. vSphere Network Features
vSphere
TCP Segmentation Offload. The TCP/IP stack can submit frames of up to 64 KB to the NIC; the NIC then repackages these frames into sizes that fit inside the network’s maximum transmission unit (MTU).
NetQueue. Enables the system to process multiple network receive requests simultaneously across multiple CPUs.
iSCSI. iSCSI traffic results in a “double hit” from a CPU overhead perspective.
Distributed Virtual Switch. A virtual device that spans multiple vSphere hosts.
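The saving from TCP Segmentation Offload is easy to picture: without it, the CPU has to split every large send into MTU-sized pieces itself. The sketch below performs that split in software; with TSO the NIC does it instead. The buffer size and MTU are typical but illustrative values.

    # What TSO spares the CPU: chopping a 64 KB send into MTU-sized segments.
    MTU = 1500   # typical Ethernet payload size in bytes

    def segment(buffer, mtu=MTU):
        return [buffer[i:i + mtu] for i in range(0, len(buffer), mtu)]

    large_send = b"x" * 64 * 1024
    print(f"{len(large_send)} bytes -> {len(segment(large_send))} wire-sized segments")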
19. Hyper-V Network Features
Chimney (TCP offload). Offloads to the NIC significant portions of the CPU workload normally associated with TCP/IP functionality.
Large Send Offload (LSO). Provides Hyper-V hosts with the ability to submit larger frames, in this case up to 256 KB in size, to the network adapter for further processing.
Virtual Machine Queue (VMQ). Creates multiple virtual network queues, one for each virtual machine. Network packets destined for these virtual machines are then sent directly to the VM, reducing some overhead.
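As a rough picture of the VMQ idea, the sketch below steers incoming packets into a per-VM receive queue keyed by destination MAC address so each queue can be drained independently. The MAC addresses, VM names, and packet format are made-up illustrations, not Hyper-V internals.

    # Toy model of per-VM receive queues keyed by destination MAC; data is made up.
    from collections import defaultdict

    vm_queues = defaultdict(list)
    mac_to_vm = {"00:15:5d:01:02:03": "vm-web", "00:15:5d:01:02:04": "vm-sql"}

    def receive(packet):
        vm = mac_to_vm.get(packet["dst_mac"])
        if vm is not None:
            vm_queues[vm].append(packet)   # delivered straight to that VM's queue
        # unknown destinations would fall back to a default queue

    receive({"dst_mac": "00:15:5d:01:02:03", "payload": b"GET / HTTP/1.1"})
    print({vm: len(queue) for vm, queue in vm_queues.items()})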
20. Workload Migration
vSphere
vMotion is one of VMware’s claims to fame, and for good reason
Zero downtime migrations
Multiple network adapter use
Metro vMotion
Hyper-V
Live Migration in the shipping version is “vMotion™ Lite”
Requires Microsoft Failover Clustering
More complex environment
21. Storage Migration
vSphere
Storage vMotion is another of VMware’s claims to fame
Zero downtime migrations
Thick to thin
Raw Device Mapping disk (RDM) to VMDK
Across protocols
Hyper-V
Quick Storage Migration in the shipping version is not as robust
Not fully transparent to end user
Requires short period of downtime
22. Availability
vSphere
VMware High Availability
Monitors virtual machines to detect operating system and hardware failures and restarts workloads on other hosts
VMware Fault Tolerance
Continuous protection for mission critical workloads by running a shadow copy of a protected VM
Hyper-V
Much more complex
Relies on MSCS
23. High Level Features
Feature                             VMware Std   VMware Ent   VMware Ent. Plus   Hyper-V Std   Hyper-V Ent.   Hyper-V DC
Max host processors                 160          160          160                4             8              64
Max virtual SMP (guest)             8            8            32                 4             4              4
Max host RAM (GB)                   2048         2048         2048               32            2048           2048
Max RAM per VM (GB)                 255          255          255                64            64             64
Failover nodes                      32           32           32                 --            16             16
Memory overcommit/dynamic mem.
Transparent page sharing
Live workload migration
Live storage migration
Max guests per host                 512          512          512                384           384            384
Distributed Resource Scheduler
Distributed switch
Virtual instance rights (Windows)   0            0            0                  1             4              No limit
Hypervisor licensing model          per proc     per proc     per proc           per host      per host       per proc
24. A cost comparison scenario
Impossible to do a 1:1 comparison for every scenario
Pricing Assumptions
25. A cost comparison scenario
Environmental assumptions
» This example will assume a need for 150 virtual machines
» Consolidation ratio: 15 to 1 = 10 hosts
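The sizing arithmetic above is straightforward to work through; the sketch below reproduces it and shows where licensing counts would plug in. Only the VM count and consolidation ratio come from the slide; the socket count per host and any prices are placeholder assumptions that would need the webinar's pricing figures.

    # Worked sizing arithmetic; only the 150 VMs and 15:1 ratio come from the slide.
    import math

    required_vms = 150
    consolidation_ratio = 15                 # VMs per host
    hosts_needed = math.ceil(required_vms / consolidation_ratio)
    print(f"{required_vms} VMs at {consolidation_ratio}:1 -> {hosts_needed} hosts")

    sockets_per_host = 2                     # placeholder assumption, not from the slide
    # per-processor licensing scales with sockets, per-host licensing with hosts_needed
    print(f"processor licenses: {hosts_needed * sockets_per_host}, host licenses: {hosts_needed}")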
26. Registration Survey Response #1
*Based on 330 responses to the registration survey for this webinar
27. Registration Survey Response #2
*Based on 330 responses to the registration survey for this webinar
28. The Future of Hyper-V
Hyper-V 3.0 will bring a lot to the table
Fast provisioning of virtual machines.
V2V conversion of VMware-based virtual machines to Hyper-V.
Conversion of physical servers to virtual ones (P2V).
Template-based virtual machine creation.
Automatic placement of new virtual machines to aid in load balancing.
Centralized management of multiple Hyper-V hosts.
29. Summary
VMware remains significantly in front of Microsoft on a feature-by-feature basis
For mission critical needs, vSphere is still the obvious choice
Microsoft’s Hyper-V does, in fact, provide a good enough solution for many
Hyper-V 3.0 will bring a lot more to the table and give VMware a true challenge
30. The SolarWinds Story
Easy to find
» www.solarwinds.com
» Partner websites, and Internet Search
Easy to buy
» Downloadable from the website for evaluation and purchase
» Affordable price points
Easy to install
» Products can be downloaded, installed, and configured in less than 1 hour
» No Professional Services needed for deployment
Easy to use
» Windows-based products
» Intuitive user interfaces and graphical tools