
Comparison of Open Source Virtualization Technology

Results of a comparison of Open Source Virtualization Technologies

Comments:
  • Thanks for the slides. Thank you!
  • Comparison.
  • Fantastic primer. Many thanks.
  • This presentation has a really substantial set of benchmarks, nice work! It may be aging, however. I doubt the KVM benchmarks were produced with the current 'virtio' network and block drivers (paravirtualized), which should put KVM much closer to Xen in terms of network and disk I/O bandwidth. If KVM has to emulate I/O (which it would in the absence of virtio), it will be slow indeed.
  • It's a great comparison, and a good document format.

Comparison of Open Source Virtualization Technology

  1. Comparative study of Open Source virtualization & contextualization technologies. Fernando Laudares Camargos, Gabriel Girard, Benoit des Ligneris, Ph.D. [email_address]
  2. Context (1)
     • Introduction
     • Why virtualize the server infrastructure
     • Virtualization technologies
     • The experiments
     • Explanations & anomalies
     • Which technology is best for ... you?
  3. Context (2)
     • Research carried out by Fernando L. Camargos in pursuit of his Master's degree in Computer Science, under the direction of Gabriel Girard (Université de Sherbrooke) and Benoît des Ligneris (Révolution Linux)
     • This being research work, some questions remain open... maybe you can help!
  4. Why virtualize the server infrastructure (1)? Server consolidation is the most frequently cited argument.
  5. Why virtualize the server infrastructure (2)?
     • Reduction of purchase and maintenance costs
     • Compatibility with legacy applications and OSs
     • Security: an environment in which to execute untrusted applications
     • Low-cost environment for software development
     • Centralized control/management
     • Easy backup/restore procedures
     • Live migration
     • Quick server fail-over
     • High availability
     • Virtual appliances
     • Controlled sharing of resources
     • Cloud computing
     • Hardware abstraction
     • It's ... cool!
  6. Virtualization technologies (1)
     • Full virtualization
     • Para-virtualization
     • OS-level virtualization (contextualization)
     • Hardware emulation
     • Binary translation
     • Classic virtualization
  7. Virtualization technologies (2)
     • Full virtualization
     • Para-virtualization
     • OS-level virtualization (contextualization/containers)
     • Hardware emulation
     • Binary translation
     • Classic virtualization
     Technologies studied: Xen, Linux-VServer, OpenVZ, KVM, VirtualBox, KQEMU
  8. Virtualization technologies (3)
     • virtualization != emulation
     • QEMU is an emulator
  9. Virtualization technologies (4)
     • Partial emulation: KQEMU, KVM, VirtualBox
     • No emulation: OpenVZ, Xen (Linux), Linux-VServer
  10. Virtualization technologies (5): 2 types of hypervisors
     • Type I hypervisors: KVM, Xen
     • Type II hypervisors: VirtualBox, KQEMU
  11. The experiments (1): the virtualization layer adds overhead, but how much? To find out, we need to measure the efficiency of the virtualization technologies, where: efficiency = performance + scalability
  12. The experiments (2): 2 types of experiments
     • Performance (overhead): one virtual machine only
     • Scalability: several virtual machines
  13. The experiments (3): virtualization solutions evaluated in this study
     • Chosen OSs: host: Ubuntu 7.10; VMs: Ubuntu 6.06
     • Test bed: Intel Core 2 Duo 6300, 1.86 GHz (x86_64 / VT-x), 4 GB memory, 80 GB SATA hard drive
  14. The experiments (4): bits & bytes & VMs
     • 64-bit kernel for all technologies
     • Use of the VT extensions for KVM and Xen
     • 32-bit VMs for VirtualBox
     • Identical memory allocation per VM for every technology except VServer: 2039 MB
  15. The experiments – Performance (1): methodology (see the sketch below)
     • 7 benchmarks (different workloads)
        • Reference: executed on the Linux host (scale = 1)
        • Then executed inside the virtual machines
     • 4 execution sets
        • Result = the average of the last 3 sets, normalized by the result obtained on the Linux host
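     A minimal sketch of that normalization step, assuming each benchmark's four raw results were collected into plain-text files with one number per line (the file names and helper script are illustrative, not from the original study):

        #!/bin/sh
        # normalize.sh -- average the last 3 of 4 runs, then normalize by the host.
        # Usage: normalize.sh host_results.txt vm_results.txt
        avg_last3() {
            # skip the first run, average the remaining three
            tail -n +2 "$1" | awk '{ sum += $1 } END { printf "%.4f\n", sum / NR }'
        }
        host=$(avg_last3 "$1")
        vm=$(avg_last3 "$2")
        # 1.0 means native performance; the gap from 1.0 is the virtualization overhead
        awk -v h="$host" -v v="$vm" 'BEGIN { printf "normalized score: %.3f\n", v / h }'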
  16. The experiments – Performance (2): compilation of the Linux kernel
     • A balanced workload: a little bit of everything, without stressing any one resource too much
     • Metric: time taken to complete the compilation

     tar xvzf linux-XXX.tar.gz
     cd linux-XXX
     make defconfig   # ("New config with default answer to all options")
     date +%s.%N && make && date +%s.%N
     # then, 3x:
     make clean
     date +%s.%N && make && date +%s.%N
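     The slides bracket each run with date +%s.%N and subtract the two timestamps by hand; a small wrapper that prints the elapsed time directly could look like this (a sketch of the same pattern, not the study's actual harness):

        #!/bin/sh
        # time_run.sh -- print the wall-clock time of an arbitrary command.
        # Usage: time_run.sh make
        start=$(date +%s.%N)
        "$@"
        end=$(date +%s.%N)
        echo "$start $end" | awk '{ printf "elapsed: %.2f s\n", $2 - $1 }'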
  17. The experiments – Performance (3)
  18. The experiments – Performance (4): Bzip2
     • File-compression software
     • Uses the option that yields maximal compression (-9), which considerably increases the memory utilisation per process
     • Metric: time taken to complete the compression

     cd /var/tmp
     cp /home/fernando/Iso/ubuntu-6.06.1-server-i386.iso .
     date +%s.%N && bzip2 -9 ubuntu-6.06.1-server-i386.iso && date +%s.%N
     rm ubuntu-6.06.1-server-i386.iso.bz2
     # 4x
  19. The experiments – Performance (5)
  20. The experiments – Performance (6): Dbench
     • Derived from the Netbench benchmark
     • Emulates the load imposed on a file server by n Windows 95 clients
     • n (number of clients) = 100, t (time) = 300 s
     • Metric: throughput (MB/sec)

     /usr/local/bin/dbench -t 300 -D /var/tmp 100   # 4x
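     For averaging, the throughput figure can be scraped from dbench's final "Throughput ... MB/sec" line; a sketch, assuming the output format of dbench 3.x:

        for i in 1 2 3 4; do
            /usr/local/bin/dbench -t 300 -D /var/tmp 100 \
                | awk '/Throughput/ { print $2 }' >> dbench_results.txt
        done
        # average the last three runs, as per the methodology above
        tail -n +2 dbench_results.txt | awk '{ sum += $1 } END { print sum / NR }'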
  21. The experiments – Performance (7). (* no results for VirtualBox)
  22. The experiments – Performance (8): dd
     • Application for low-level (bit-by-bit) data copying
     • Measures the performance of the I/O system (hard drive access)
     • 2 tests:
        • copy of a single big file
        • copy of 60 GB from /dev/zero to /dev/null (with dd's default 512-byte blocks, count=117187560 blocks is roughly 60 GB)
     • Metric: throughput

     dd if=/opt/iso/ubuntu-6.06.1-server-i386.iso of=/var/tmp/out.iso
     dd if=/dev/zero of=/dev/null count=117187560   # 117187560 * 512 bytes = ~60 GB
     rm -fr /var/tmp/*   # between execution sets
  23. The experiments – Performance (9). (* no results for KQEMU or VirtualBox)
  24. The experiments – Performance (10). (* no results for KQEMU or VirtualBox)
  25. The experiments – Performance (11): Netperf
     • A benchmark that can measure several aspects of network performance
     • TCP stream test: measures the speed of TCP packet exchange over the network (10 sec.)
     • Metric: throughput (bits/sec)

     netserver                  # on the server
     netperf -H <server name>   # on the client, 4x
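     Spelled out with explicit options rather than netperf's defaults, the same test reads as follows (TCP_STREAM is netperf's default test and -l sets the duration in seconds; the host name is a placeholder):

        netserver                                      # on the machine under test
        netperf -H <server name> -t TCP_STREAM -l 10   # on the client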
  26. The experiments – Performance (12)
  27. The experiments – Performance (13): Rsync
     • Similar to Netperf's TCP stream test; measures the performance of file exchange over the network
     • 2 tests:
        • ISO file: 1 big file (433 MB)
        • Linux kernel tree: several small files (294 MB)
     • Metric: time (sec.)

     date +%s.%N && rsync -av <server>::kernel /var/tmp && date +%s.%N
     date +%s.%N && rsync -av <server>::iso /var/tmp && date +%s.%N
     rm -fr /var/tmp/*   # between execution sets
  28. The experiments – Performance (14)
  29. The experiments – Performance (15)
  30. The experiments – Performance (16): Sysbench (OLTP: On-Line Transaction Processing)
     • Measures the performance of a DB server
     • Workload centered on I/O operations in the file system
     • Metric: throughput (transactions/sec)

     sysbench --test=oltp --mysql-user=root --mysql-host=localhost --debug=off prepare   # (1x)
     sysbench --test=oltp --mysql-user=root --mysql-host=localhost --debug=off run       # (4x)

     Sample output:
     OLTP test statistics:
         queries performed:
             read:                140000
             write:                50000
             other:                20000
             total:               210000
         transactions:            10000  (376.70 per sec.)
         deadlocks:               0      (0.00 per sec.)
         read/write requests:     190000 (7157.28 per sec.)
         other operations:        20000  (753.40 per sec.)
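     The transactions-per-second figure can be pulled straight out of that output for the four timed runs; a sketch matching the sample above (the results file name is illustrative):

        for i in 1 2 3 4; do
            sysbench --test=oltp --mysql-user=root --mysql-host=localhost --debug=off run \
                | awk '/transactions:/ { gsub(/\(/, "", $3); print $3 }' >> sysbench_tps.txt
        done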
  31. The experiments – Performance (17)
  32. The experiments – Performance (18): conclusion
     • Linux-VServer: excellent performance; it showed minimal to no overhead compared to native Linux.
     • Xen: performed very well in everything but the Dbench benchmark (an I/O-bound benchmark).
     • KVM: performance was fairly good for a full virtualization solution, but it should be avoided for applications that rely heavily on I/O.
  33. The experiments – Performance (19): conclusion (cont.)
     • OpenVZ showed highly variable performance (from weak to excellent), most likely because of its resource accounting for I/O, and because of network optimizations that helped on the network-related tests.
     • VirtualBox performed well on the file-compression and network-based benchmarks, and poorly in all other situations.
     • KQEMU performed poorly on every benchmark. This virtualization solution clearly does not make good use of the available resources, and it should be avoided on production servers.
  34. The experiments – Scalability (1): methodology (see the sketch below)
     • 1 benchmark (Sysbench or the kernel compilation) executed by n VMs concurrently
     • n = 1, 2, 4, 8, 16 and 32
     • 4 execution sets; result = average of the last three execution sets
     • Memory allocation per VM (* 1536 MB for KQEMU)
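     A minimal sketch of driving one benchmark in n guests at once, assuming the guests are reachable over ssh as vm1..vmN and each already has a run_benchmark.sh in place (both names are illustrative):

        #!/bin/sh
        # run_concurrent.sh -- start the same benchmark in n guests, wait for all.
        # Usage: run_concurrent.sh <n>
        n=$1
        for i in $(seq 1 "$n"); do
            ssh "vm$i" '/usr/local/bin/run_benchmark.sh' > "result_vm$i.txt" &
        done
        wait   # block until every guest has finished its run
        echo "all $n guests done"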
  35. The experiments – Scalability (2)
  36. The experiments – Scalability (3)
  37. The experiments – Scalability (4)
  38. The experiments – Scalability (5)
  39. Conclusion – Scalability (1)
     • The efficiency of a virtualization solution is strongly related to the number of VMs executing concurrently (scalability).
     • Most of the time, one additional VM helps get the maximum performance out of a given server (linked to the number of CPUs).
     • Beyond that, performance decreases as a bottleneck takes over (the number of CPUs/cores matters!).
     • Linux-VServer showed the best overall performance for up to 5-7 VMs.
     • Xen proved to be the best full-virtualization-based solution.
  40. Conclusion – Scalability (2)
     • KVM showed reasonable performance for a full virtualization solution.
     • OpenVZ's performance was not what we would expect of a contextualization solution. Our hypothesis is that its resource accounting (beancounters) is the root cause of the overhead.
     • VirtualBox showed impressive scaling: total throughput more than doubled when the number of VMs went from 8 to 16. However, we were unable to run this experiment with 32 VirtualBox VMs executing concurrently.
     • KQEMU's performance was weak compared to all the other solutions, regardless of the number of VMs in execution.
  41. Which technology to use in each case? A technological point of view...
     • OpenVZ: purely network-bound applications, thanks to the optimizations done in its network layer. Not indicated for I/O-heavy applications.
     • Linux-VServer: all kinds of situations (a priori).
     • Xen: can also be used in all kinds of situations, but requires substantial modifications to the guest OS kernels OR the use of the VT virtualization instructions.
     • KVM and VirtualBox: have proven to be good options for development environments.
     • KQEMU: has shown weak performance; indicated for development only.
  42. Conclusion: which technology to use in each case?
     • Results differ widely from one benchmark/technology to the next, so benchmark the technology you plan to use with your own mission-critical application BEFORE virtualizing your servers (e.g. a file-server benchmark).
     • Only Xen is currently supported by the industry (Red Hat, SuSE, Mandrake, IBM, etc.).
     • KVM is available in the standard Linux kernel: yeah! But poor performance overall ;-(
     • Linux-VServer and OpenVZ both need a modified kernel that is not officially supported by the aforementioned giants of the industry, but...
  43. Future / contextualization
     • Since the last OLS, key players like IBM, Intel, and Google have been working hard to include a container-based technology in the Linux kernel.
     • Many OpenVZ patches have recently been integrated into the kernel, and everyone expects that Really Soon Now we will have contextualization in the Linux kernel without the need for any kernel hacking.
     • We strongly believe that the integration of a contextualization/container solution is the best way to go for Linux-on-Linux virtualization needs.
     • It will offer VMware a very strong and completely open-source competitor.
