Comparative study of VMware ESX with other similar products
Monolithic vs. Microlithic
• Hypervisors can be classified as monolithic or microlithic based on how the kernel is organized.
• Microlithic (microkernel-based):
  • Xen
  • Hyper-V
  • KVM (controversial; architecturally closer to monolithic)
    • I/O path: VM -> Linux kernel space -> Linux user space (e.g. QEMU) -> Linux kernel space -> VM
VMware ESXi
• Type 1 monolithic hypervisor
• The VMkernel is responsible for virtualization
• The VMkernel provides support for:
  • Resource scheduling
  • File system
  • Users and groups
    • Users: Virtual Infrastructure Client, Remote Command Line Interface, VIM API
    • Groups: combine multiple users so privileges can be assigned as a group
VMware ESXi
[figure omitted]
Type 1 Hypervisors
• VMware - Proprietary
• Xen Hypervisor - Open source
• KVM - Open source (free)
• Microsoft Hyper-V - Proprietary
• Wind River - Proprietary
• LynxSecure Separation Kernel Hypervisor - Proprietary
• Proxmox VE - Open source (free)
• nuxi - Open source (free)
• RTS Real-Time Hypervisor - Proprietary
Microsoft Hyper-V
• Hypervisor-based virtualization technology for x64-based Windows Server
• Two variants:
  • Standalone
  • Role-based
• Requires a processor with hardware-assisted virtualization support (a detection sketch follows this list)
• Implements isolation using partitions
• The parent partition manages resources and creates child partitions
• Child partitions do not have direct access to hardware resources
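Whether a processor offers this hardware assist can be checked from software. The sketch below is a minimal example, assuming a GCC or Clang toolchain on an x86-64 host; it reads the architecturally documented CPUID feature bits for Intel VT-x (VMX) and AMD-V (SVM).

```c
/* Minimal sketch: detect hardware-assisted virtualization on x86.
 * Assumes GCC/Clang on an x86-64 host (<cpuid.h> is compiler-provided). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 5 = VMX (Intel VT-x). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) supported");

    /* CPUID extended leaf 0x80000001: ECX bit 2 = SVM (AMD-V). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) supported");

    return 0;
}
```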
Microsoft Hyper-V
• Guest (child partition) types:
  • Enlightened (aware of the hypervisor; see the detection sketch after this list)
  • Unenlightened
• The parent partition runs Virtualization Service Providers (VSPs)
• Child partitions run Virtualization Service Clients (VSCs)
• Communication between partitions takes place over the VMBus (a logical channel)
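An enlightened guest first has to discover the hypervisor it is running on. One documented mechanism is the hypervisor CPUID range: leaf 1 advertises a "hypervisor present" bit, and leaf 0x40000000 returns the vendor signature, which is "Microsoft Hv" under Hyper-V. A minimal sketch, again assuming GCC/Clang on x86-64:

```c
/* Minimal sketch: how a guest can detect that it runs under Hyper-V. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char sig[13] = { 0 };

    /* CPUID leaf 1, ECX bit 31: "hypervisor present" flag. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    if (!(ecx & (1u << 31))) {
        puts("no hypervisor detected");
        return 0;
    }

    /* Leaf 0x40000000: hypervisor vendor signature in EBX, ECX, EDX. */
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    printf("hypervisor signature: %s\n", sig); /* "Microsoft Hv" on Hyper-V */
    return 0;
}
```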
Xen Hypervisor
• Developed at the University of Cambridge
• The only bare-metal open-source hypervisor available
• Components:
  • Domain 0 (Dom0)
  • Domain U (DomU)
• Lightweight, because management is delegated to a privileged guest
• Dom0 is a Linux kernel with special privileges
• DomU domains do not have direct access to physical resources
• Uses the Credit scheduler with round-robin scheduling of VCPUs (a conceptual sketch follows this list)
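The Credit scheduler's core idea is proportional share: each VCPU earns credits in proportion to its domain's weight, burns credits while it runs, and runnable VCPUs holding credit are dispatched round-robin. The toy simulation below is a conceptual sketch of that idea only, not Xen code; the constants are made up for illustration (256 happens to be Xen's default domain weight).

```c
/* Toy credit-style scheduler: proportional credit allocation plus
 * round-robin dispatch (conceptual sketch, not Xen source). */
#include <stdio.h>

#define NVCPU 3
#define CREDITS_PER_PERIOD 300  /* credits handed out per accounting period */
#define TICK_COST 10            /* credits burned per scheduling tick */

struct vcpu { const char *name; int weight; int credit; };

int main(void)
{
    struct vcpu v[NVCPU] = {
        { "dom1-vcpu0", 256, 0 },  /* Xen's default weight */
        { "dom2-vcpu0", 512, 0 },  /* should get twice dom1's CPU share */
        { "dom3-vcpu0", 128, 0 },
    };
    int total_weight = 0;

    for (int i = 0; i < NVCPU; i++)
        total_weight += v[i].weight;

    /* Accounting: credits are distributed in proportion to weight. */
    for (int i = 0; i < NVCPU; i++)
        v[i].credit = CREDITS_PER_PERIOD * v[i].weight / total_weight;

    /* Dispatch: round-robin over VCPUs that still hold credit. */
    for (int tick = 0; tick < 40; tick++) {
        struct vcpu *next = NULL;
        for (int i = 0; i < NVCPU && !next; i++)
            if (v[(tick + i) % NVCPU].credit >= TICK_COST)
                next = &v[(tick + i) % NVCPU];
        if (!next)
            break;  /* everyone is out of credit until the next period */
        next->credit -= TICK_COST;
        printf("tick %2d: run %s (credit left %3d)\n",
               tick, next->name, next->credit);
    }
    return 0;
}
```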
Xen Hypervisor
• Uses emulation to provide support for disk, network, motherboard, and PCI devices
• Uses shadow code to virtualize page tables (see the sketch after this list)
• Uses emulated interrupt controllers
• Integrates a ROMBIOS to provide a virtual BIOS to the guest
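Shadow paging composes two mappings: the guest's own page tables translate guest-virtual to guest-physical addresses, and the hypervisor's physical-to-machine table translates guest-physical to host-physical frames. The hardware MMU only ever walks the composed "shadow" table, which the hypervisor keeps in sync by trapping guest page-table writes. The toy translation below illustrates the composition with made-up frame numbers; it is not Xen's shadow code.

```c
/* Toy illustration of shadow paging: shadow = p2m composed with guest_pt. */
#include <stdio.h>

#define NPAGES 4

int main(void)
{
    /* Guest page table: guest-virtual page -> guest-physical frame. */
    int guest_pt[NPAGES] = { 2, 0, 3, 1 };
    /* Hypervisor's physical-to-machine map: guest-physical -> host-physical. */
    int p2m[NPAGES]      = { 7, 5, 6, 4 };
    /* Shadow table the MMU actually walks: guest-virtual -> host-physical. */
    int shadow[NPAGES];

    for (int gva = 0; gva < NPAGES; gva++) {
        shadow[gva] = p2m[guest_pt[gva]];
        printf("GVA page %d -> GPA frame %d -> HPA frame %d\n",
               gva, guest_pt[gva], shadow[gva]);
    }
    /* Real shadow code must also trap guest writes to guest_pt and
     * update shadow[] to match. */
    return 0;
}
```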
KVM Hypervisor
• Type 1 hypervisor
• Uses hardware virtualization support (Intel VT-x / AMD-V) for full virtualization
• Uses shadow page tables for memory management
• The guest sees CPU, RAM, disk, etc. as on a real machine
• Ordinary processes can create virtual machines (see the /dev/kvm sketch after this list)
• Guest physical memory is part of the creating process's address space
• Proximity of the guest and the user-space hypervisor
• Massive reuse of the Linux kernel
• Pass-through of PCI adapters, disks, etc. is also possible
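The process-related points follow directly from KVM's userspace API: any process that can open /dev/kvm may create a VM and donate part of its own address space as guest "physical" RAM. A minimal sketch for Linux (the memory size and slot number are arbitrary illustrative choices):

```c
/* Minimal sketch of the KVM userspace API on Linux. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("/dev/kvm"); return 1; }

    /* The API version has been stable at 12 for a long time. */
    if (ioctl(kvm, KVM_GET_API_VERSION, 0) != 12) {
        fprintf(stderr, "unexpected KVM API version\n");
        return 1;
    }

    /* An ordinary process asks the kernel for a new VM... */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

    /* ...and backs guest "physical" RAM with its own anonymous mapping,
     * so guest memory is simply part of this process's address space. */
    size_t mem_size = 0x10000; /* 64 KiB of guest RAM, chosen arbitrarily */
    void *mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = mem_size,
        .userspace_addr = (unsigned long)mem,
    };
    if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
        perror("KVM_SET_USER_MEMORY_REGION");
        return 1;
    }

    /* A VCPU would come from ioctl(vm, KVM_CREATE_VCPU, 0), and its
     * register state and run loop from further ioctls. */
    printf("VM created; guest RAM lives at %p in this process\n", mem);
    close(vm);
    close(kvm);
    return 0;
}
```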
Feature-wise comparison
[Comparison table omitted; source: 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom) [3]]
Feature-wise comparison
Source: https://www.proxmox.com/en/proxmox-ve/comparison
Comparison based on various tests
• Components to be tested
• CPU
• Disk I/O
• Memory
• Network I/O
Performance comparison
• CPU test (Bytemark)
  • All hypervisors performed very similarly
  • CPU operations need no help from the hypervisor
• Memory test (ramsmp)
  • With a single VCPU, results are similar
  • With all VCPUs in use, Xen and KVM show a performance drop
  • KVM and Xen use heavier memory-management machinery
Source: 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013) [2]
Performance comparison
• Disk I/O test (Bonnie++ & FileBench)
  • Xen performs poorly
  • KVM outperforms the others
  • Hyper-V experiences thrashing with multiple VCPUs
• Network test (Netperf)
  • Xen performs poorly because transmission goes through its network back-end driver
  • More indirection on the network path
Source: 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013) [2]
References
1. H. Fayyad-Kazan, L. Perneel, and M. Timmerman, "Benchmarking the Performance of Microsoft Hyper-V Server, VMware ESXi and Xen Hypervisors," Journal of Emerging Trends in Computing and Information Sciences, vol. 4, no. 12, December 2013.
2. J. Hwang, S. Zeng, F. Y. Wu, and T. Wood, "A Component-Based Performance Comparison of Four Hypervisors," 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013).
3. V. K. Manik and D. Arora, "Performance Comparison of Commercial VMM: ESXI, XEN, HYPER-V & KVM," 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom).
4. https://wiki.xen.org/wiki/Virtualization_Spectrum#Full_virtualization
5. https://technet.microsoft.com/library/hh831531.aspx
6. https://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.bsa.doc_40/vc_admin_guide/migrating_virtual_machines/c_nx_xd_considerations.html
7. http://wiki.prgmr.com/mediawiki/index.php/Chapter_12:_HVM:_Beyond_Paravirtualization
8. http://www.stratoscale.com/blog/compute/hypervisor-comparison-of-io-virtualization-models/
9. http://www-archive.xenproject.org/files/Marketing/HowDoesXenWork.pdf
