The talk is about the operating system virtualization technology known as OpenVZ, an effective way of partitioning a Linux machine into multiple isolated Linux containers. All containers run on top of a single Linux kernel, which results in excellent density, performance, and manageability. The talk gives an overall description of OpenVZ building blocks, such as namespaces, cgroups, and various resource controllers. A few features, notably live migration and virtual swap, are described in greater detail. Results of performance measurements against VMware, Xen, and KVM are given. Finally, we provide a status update on merging bits and pieces of the OpenVZ kernel into the upstream Linux kernel, and share our plans for the future.
OpenVZ Linux Containers
1. OpenVZ
Linux Containers
Kir Kolyshkin
<kir@openvz.org>
OpenStack Design Summit
16 April 2012, San Francisco
2. Agenda
● Virtualization approaches
● Containers versus hypervisors
● OpenVZ internals
● New features
3. What is virtualization?
Virtualization is a technique for deploying technologies. Virtualization
creates a level of indirection or an abstraction layer between a
physical object and the managing or using application.
http://www.aarohi.net/info/glossary.html
Virtualization is a framework or methodology of dividing the
resources of a computer into multiple execution environments...
http://www.kernelthread.com/publications/virtualization/
A key benefit of the virtualization is the ability to run multiple
operating systems on a single physical server and share the
underlying hardware resources – known as partitioning.
http://www.vmware.com/pdf/virtualization.pdf
7. Comparison
Hypervisor (VM)
● One real HW, many virtual HWs, many OSs
● High versatility – can run different OSs
● Lower density, performance, scalability
– the «lowers» are mitigated by new hardware features (such as VT-d)
Containers (CT)
● One real HW (no virtual HW), one kernel, many userspace instances
● Higher density, natural page sharing
● Dynamic resource allocation
● Native performance: [almost] no overhead
8. Containers are thinner and more performant than hypervisors
● Containers
– Share the host OS and drivers
– Have a small virtualization layer
– Naturally share pages
● Hypervisors
– Run a separate OS per guest, plus virtual hardware
– Hardware emulation requires VMM state
– Have trouble sharing guest OS pages
● Containers are more elastic than hypervisors
● Container slicing of the OS is ideally suited to cloud slicing
● Hypervisors' only advantage in IaaS is support for different OS families on one server
9. Density
(number of instances hosted before the admin control panel response time exceeds 4 seconds)
● Hypervisor: 41 VMs
● Containers without page sharing: 57 CTs
● Containers with page sharing: 112 CTs
13. OpenVZ vs. Xen, from HP Labs
● For all the configurations and workloads we have tested, Xen incurs higher virtualization overhead than OpenVZ does
● For all the cases tested, the virtualization overhead observed in OpenVZ is limited, and can be neglected in many scenarios
● The Xen system becomes overloaded when hosting four instances of RUBiS, while the OpenVZ system should be able to host at least six without being overloaded
14. Evolution of Operating Systems
● Multitask – many processes
● Multiuser – many users
● Multicontainer – many containers (CTs, VEs, VPSs, guests, partitions...)
15. OpenVZ: components
Kernel
– Namespaces: virtualization and isolation
– cgroups: resource management
– Checkpoint/restart (live migration)
Tools
– vzctl: container control utility
Templates
– pre-created images for fast container creation
16. Kernel: Virtualization & Isolation
Each container has its own
● Files: chroot()
System libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
● Process tree (PID namespace)
Featuring virtualized PIDs, so that the init PID is 1
● Network (net namespace)
Virtual network device, its own IP addresses, set of netfilter and routing rules
● Devices
If needed, a CT can be granted access to real devices such as network interfaces, serial ports, disk partitions, etc.
● IPC objects (IPC namespace)
shared memory, semaphores, messages
● …
(see the shell sketch below)
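To make these building blocks concrete, here is a minimal shell sketch of the same namespaces using stock tools rather than the OpenVZ stack itself; it assumes a reasonably recent util-linux (for unshare) and iproute2, and the namespace name demo is illustrative:

# PID namespace: a shell whose child init gets PID 1, with its own /proc
unshare --pid --fork --mount-proc /bin/bash

# Net namespace: separate devices, addresses, routing and netfilter rules
ip netns add demo                  # create a named network namespace
ip netns exec demo ip link show    # only a (down) loopback device inside

# IPC namespace: separate shared memory, semaphores and message queues
unshare --ipc /bin/bash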
17. Kernel: Resource Management
Managed resource sharing and limiting:
● User Beancounters: per-CT resource counters, limits, and guarantees (kernel memory, network buffers, physical pages, etc.)
● Fair CPU scheduler (with shares and hard limits)
● Two-level disk quota (first level: per-CT quota; second level: ordinary user/group quota inside a CT)
● Disk I/O priority (also per-CT)
(see the sketch below)
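As a rough sketch of how these knobs are driven in practice with standard vzctl options (the CT ID and the values are illustrative):

# Inspect per-CT beancounters: current use, maxheld, limits, fail counters
cat /proc/user_beancounters

# CPU: relative share under contention, plus a hard cap (50% of one CPU)
vzctl set 101 --cpuunits 2000 --cpulimit 50 --save

# First-level disk quota: 10 GB soft / 11 GB hard limit for the whole CT
vzctl set 101 --diskspace 10G:11G --save

# Per-CT disk I/O priority (0 = lowest, 7 = highest)
vzctl set 101 --ioprio 4 --save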
18. Kernel: Checkpointing/Migration
● Complete CT state can be saved in a file
– running processes
– open files
– network connections, buffers, backlogs, etc.
– memory segments
● CT state can be restored later
● CT can be restored on a different server (live migration; see the sketch below)
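A minimal command-line sketch of checkpoint, restore, and live migration (the CT ID, dump file path, and destination host are illustrative):

# Freeze CT 101 and dump its complete state to a file
vzctl chkpnt 101 --dumpfile /vz/dump/Dump.101

# Restore it later, on this or another server
vzctl restore 101 --dumpfile /vz/dump/Dump.101

# Or do both in one step: live-migrate CT 101 to another host
vzmigrate --online dest.example.com 101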
19. Tools: CT control
# vzctl create 101 --ostemplate fedora-15
# vzctl set 101 --ipadd 20.21.22.23/24 --save
# vzctl start 101
# vzctl exec 101 ps axf
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 init
11830 ? Ss 0:00 syslogd -m 0
11897 ? Ss 0:00 /usr/sbin/sshd
11943 ? Ss 0:00 xinetd -stayalive -pidfile ...
12218 ? Ss 0:00 sendmail: accepting connections
12265 ? Ss 0:00 sendmail: Queue runner@01:00:00
13362 ? Ss 0:00 /usr/sbin/httpd
13363 ? S 0:00 \_ /usr/sbin/httpd
..............................................
13373 ? S 0:00 \_ /usr/sbin/httpd
6416 ? Rs 0:00 ps axf
# vzctl enter 101
bash# logout
# vzctl stop 101
# vzctl destroy 101
20. Feature: VSwap
● A new approach to memory management: only two parameters to configure, RAM and swap
● Appeared in the RHEL6-based OpenVZ kernel
● Swap is virtual; no actual I/O is performed
● The container is slowed down, to emulate real swap I/O delays
● Only when an actual global RAM shortage occurs does virtual swap go to the real swap
(see the sketch below)
21. Feature: ploop
● Reimplementation of the Linux loop device
● Modular architecture
● Support for different file formats (“plain”, QCOW2, etc.)
● Network storage is supported (NFS)
● Snapshots and fast provisioning via stacked images
● Write tracker for faster live migration
(see the sketch below)
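A sketch of driving ploop through vzctl (assumes a ploop-enabled vzctl and kernel; the CT ID, template name, and size are illustrative):

# Create a CT backed by a ploop image rather than a plain directory tree
vzctl create 101 --ostemplate centos-6-x86_64 --layout ploop --diskspace 10G

# Take a snapshot: a new stacked image, cheap to create and easy to roll back
vzctl snapshot 101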
22. Feature: CRIU
● Checkpoint/Restore (mostly) In Userspace
● The kernel manages processes, so it knows everything about them
● All efforts to merge in-kernel checkpointing (CPT) into the upstream Linux kernel have failed
● Solution: let's do it in userspace!
● Minimal kernel intervention
● The first set of patches was recently accepted
● See http://criu.org/
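For illustration, dumping and restoring a process tree with the criu tool looks roughly like this (the PID and image directory are illustrative; --shell-job is needed for tasks started from a terminal):

# Dump the process tree rooted at PID 1234 into an image directory
criu dump -t 1234 -D /tmp/ct-images --shell-job

# Later (possibly on another machine), restore it from those images
criu restore -D /tmp/ct-images --shell-job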
23. CRIU merge comment from akpm
Checkpoint/restart feature work.
A note on this: this is a project by various mad Russians to
perform c/r mainly from userspace, with various oddball
helper code added into the kernel where the need is
demonstrated.
So rather than some large central lump of code, what we
have is little bits and pieces popping up in various places
which either expose something new or which permit
something which is normally kernel-private to be modified.
The overall project is an ongoing thing. I've judged that the
size and scope of the thing means that we're more likely to be
successful with it if we integrate the support into mainline
piecemeal rather than allowing it all to develop out-of-tree.
24. Development
● Most of the magic is in the kernel
● We use RHEL kernels as a base
● We support each kernel for many years (RHEL4-based kernels from 2006 are still supported)
● We merge bits and pieces upstream, then reuse them
● The plan is to get most of the kernel functionality upstream by the RHEL7 timeframe
25. LXC vs OpenVZ
● Historically, OpenVZ was developed off-mainline
– in development since 2000
● We are working on merging bits and pieces upstream, with more than 1500 patches in mainline already
● Code in mainline is used by OpenVZ
● OpenVZ is production-ready and stable
● LXC is a work in progress
– not yet a ready replacement for OpenVZ
● We will keep maintaining OpenVZ for a while
26. To mainline we go
● Parallels is the driving force behind Linux containers
● We are collaborating with the Linux community to move OpenVZ upstream into the Linux kernel
● This ensures that Linux delivers a single, consistent container technology
● vzctl will support mainline containers
● No rerun of Xen/KVM wars!
27. To sum it up
● Platform-independent
– as long as Linux supports it, we support it
● No problems with scalability or disk I/O
– lots of memory, lots of CPUs – no problem
– native I/O speed
● Unbeatable density and performance
● Reliable, supported, free
● Plays well with others
The goal of the Intel vConsolidate benchmark is to measure aggregated server performance in a consolidation scenario: different (and, unlike in the LAMP benchmark, unrelated) applications run on the same box, each in its own virtual environment (virtual machine or container). The vConsolidate benchmark was developed by Intel in cooperation with other vendors. It runs separate Java (SPECjbb), mail, web, and database workloads concurrently; each set of such instances is called a CSU (Consolidation Stack Unit). The performance metric is the geometric mean of the throughput of each workload type: transactions/sec for the database, requests/sec for the web server, and Java operations/sec for Java. The same type of metric is commonly used in other benchmarks, such as those from SPEC. vConsolidate is similar to other virtualization-specific benchmarks such as VMmark from VMware and SPECvirt, but the latter are more enterprise-oriented and generate load that requires fast SAN storage and high-end hardware. Since we believe container benefits are even more prominent on commodity hardware, we use vConsolidate to demonstrate that. OpenVZ demonstrates outstanding performance benefits over hypervisors in multi-processor environments and with multi-threaded workloads: almost 3x better overall performance with a single set of workloads (1 CSU), and 2.1–2.2x better overall performance at 5 and 10 CSUs.
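For instance, with made-up numbers purely to illustrate the metric: a run achieving 100 transactions/sec on the database, 400 requests/sec on the web server, and 2000 Java operations/sec would score (100 × 400 × 2000)^(1/3) ≈ 431.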
Multi-server virtualization is actually not about virtualization; it is more about grid computing and clustering, so I am not going to cover it.
Low manageability: many OSs to manage, you must log in to each, and mass management is just as difficult as with multiple physical servers. Low performance is, or will be, mitigated by Intel VT and AMD-V, so it is not really an issue. After all, the emulation approach looks strange: why do we run an OS on top of another OS? An OS is designed to run on hardware, not on something virtual.
LAMP (an acronym for Linux, Apache, MySQL, PHP) is a software stack widely used for building modern web sites on Linux. So it is quite natural to want to understand how well this type of workload runs in a virtualized environment, and how many LAMP stacks virtualization can bear on a single piece of hardware. Two important metrics are presented in this report: the total number of serviced requests/sec and the average response time. To measure LAMP stack performance and density we use the DVD Store e-commerce benchmark developed by Dell. OpenVZ shows the best performance of the solutions tested: 38% faster than XenServer and more than 2x faster than Hyper-V and ESXi under high load.
OpenVZ also shows the best response time of the solutions tested: 33% better response time than ESXi, and 2x better response time than XenServer and Hyper-V.
This is a natural step in the evolution of operating systems, and Linux is the first to take it. Virtualization is really needed by everyone, and it will become part of every OS kernel.
Resource management is a very important and very complex thing. Consider the story of CPU scheduler development: (1) the need for CPU shares, to balance CPU power between CTs, so that every CT has a guaranteed minimum share but can use up to all of the CPU power when it is available. (1a) BUT as the number of CTs per server grows, the guaranteed minimum remains while the actual power per CT decreases, so people complain; hence (2) the need for an upper CPU limit, which hard-limits the maximum CPU power a CT can use, even if more is available. This solves problem 1a but introduces another: power is left unused even when it is available. Hence (3) the need for a burstable CPU limit (not yet implemented): a CT can get up to all of the CPU power, but not all of the time; say, limited per month or so. A sketch of the first two knobs follows below.
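As a sketch of (1) shares and (2) the hard limit, using standard vzctl options (CT IDs and values are illustrative):

# (1) Shares: relative weight under contention; CT 101 gets twice the
# CPU of CT 102 when both are busy, and all of it when running alone
vzctl set 101 --cpuunits 2000 --save
vzctl set 102 --cpuunits 1000 --save

# (2) Hard limit: cap CT 102 at 50% of one CPU, even when capacity is idle
vzctl set 102 --cpulimit 50 --save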
I will actually show a two-minute live demo in a green-on-black terminal instead of this page.