Introduction to Virtualization, and Video Playback during Live Migration of a Virtual Machine Hosting the Server, with Time Analysis.
OS: Ubuntu
Hypervisor: KVM
4. virtualization
● Virtualization is a framework or methodology for dividing the resources of a computer into multiple execution environments.
● The available resources such as CPU, memory, storage and I/O devices are dynamically partitioned and shared.
● Approaches to x86 virtualization:
● A hosted architecture runs the virtualization layer as an application on top of an OS.
● A hypervisor (bare-metal) architecture has the virtualization layer installed directly on a fresh x86-based system.
5. components of virtualization
1. Hypervisor: Also termed the virtualization layer, this is the software layer that manages and hosts the VMs.
● Type 1: A native or bare-metal hypervisor that runs directly on the host hardware. It thus has direct access to the hardware resources and also handles the allocation of resources to guests.
● Type 2: Also called a hosted hypervisor, as it is installed and run on top of a hosting OS. The host OS is responsible for interfacing with the hardware.
2. Guest: It is a virtualized environment with its own OS and
applications. It runs on top of the hypervisor.
6. techniques of virtualization
Full Virtualization:
● Full virtualization is virtualization in which the guest operating system is unaware that it is in a virtualized environment. The hardware is therefore virtualized by the host operating system, so that the guest can issue commands to what it thinks is actual hardware, when these are really just simulated hardware devices created by the host.
7. techniques of virtualization
Paravirtualization:
● Paravirtualization is virtualization in which the guest operating system (the one being virtualized) is aware that it is a guest, and accordingly has drivers that, instead of issuing hardware commands, simply issue commands directly to the host operating system. This includes memory and thread management, which usually require privileged processor instructions that are unavailable to the guest.
8. techniques of virtualization
Hardware Assisted Virtualization:
● Hardware-assisted virtualization is a type of full virtualization where the microprocessor architecture has special instructions to aid the virtualization of hardware. These instructions can allow a virtual context to be set up so that the guest can execute privileged instructions directly on the processor without affecting the host.
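On Linux, whether the processor exposes these extensions can be checked from user space. A minimal sketch, assuming a standard /proc/cpuinfo layout, that looks for the Intel VT-x ("vmx") or AMD-V ("svm") CPU flags:

```python
# Minimal sketch: detect hardware virtualization support on Linux by
# scanning the CPU flags for Intel VT-x ("vmx") or AMD-V ("svm").
def hw_virt_supported(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware virtualization available:", hw_virt_supported())
```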
9. live vm migration
● Live VM migration can be performed in two basic network scenarios:
a Local Area Network (LAN) and a Wide Area Network (WAN).
● Live migration over a LAN is easier for two reasons:
1. The high-speed, low-latency links in the LAN make migration comparatively quicker.
2. The VM can retain its IP address(es) after migration, since the hosts share the same IP address space.
10. benefits of live vm migration
● Consolidation: Several underutilized small data centers can be replaced with a few larger ones.
● Load balancing: This requires the transfer of VMs from an overloaded host to a lightly loaded one.
● Scaling: Multiple sites need to be created at different geographical locations to scale up as the cloud grows.
● Disaster recovery and reliability: In times of catastrophe or any fault occurrence, VMs can be migrated to mirrored sites across the cloud with minimal downtime.
● Maintenance: Applications and data can be migrated to another machine to free up the hardware for maintenance.
12. qemu
QEMU (short for Quick Emulator) is a free and open-source hosted hypervisor that performs hardware virtualization (not to be confused with hardware-assisted virtualization).
QEMU is a hosted virtual machine monitor: it emulates CPUs through dynamic binary translation and provides a set of device models, enabling it to run a variety of unmodified guest operating systems. It can also be used together with KVM to run virtual machines at near-native speed (requiring hardware virtualization extensions on x86 machines). QEMU can also do CPU emulation for user-level processes, allowing applications compiled for one architecture to run on another.
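As an illustration, KVM acceleration is requested from QEMU at launch time. A minimal sketch launching a guest from Python, where the image path, memory size and vCPU count are placeholder values:

```python
import subprocess

# Launch a KVM-accelerated QEMU guest; "guest.img" is a hypothetical disk image.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",         # use KVM instead of pure dynamic binary translation
    "-m", "2048",          # guest RAM in MiB
    "-smp", "2",           # number of virtual CPUs
    "-hda", "guest.img",   # guest disk image (placeholder path)
])
```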
13. kvm
Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. It was merged into the Linux kernel mainline in kernel version 2.6.20, which was released on February 5, 2007. KVM requires a processor with hardware virtualization extensions. KVM has also been ported to FreeBSD and illumos in the form of loadable kernel modules.
14. libvirt
libvirt is an open-source API, daemon and management tool for managing platform virtualization. It can be used to manage KVM, Xen, VMware ESX, QEMU and other virtualization technologies. These APIs are widely used in the orchestration layer of hypervisors in the development of cloud-based solutions.
Internals: libvirt itself is a C library, but it has bindings in other languages, notably in Python, Perl, OCaml, Ruby, Java, JavaScript (via Node.js) and PHP. libvirt for these programming languages is composed of wrappers around another class/package called libvirtmod. libvirtmod's implementation is closely associated with its counterpart in C/C++ in syntax and functionality.
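For example, the Python binding can connect to a local QEMU/KVM hypervisor and enumerate its domains. A minimal sketch, assuming the libvirt-python package is installed and the user may access qemu:///system:

```python
import libvirt

# Connect to the local QEMU/KVM hypervisor and list all defined domains.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(dom.name(), "-", state)
conn.close()
```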
15. virtual machine manager
In computing, the Red Hat Virtual Machine Manager, also known as
virt-manager, is a desktop-driven virtual machine manager with
which users can manage virtual machines (VMs).
Features: Virtual Machine Manager allows users to:
● create, edit, start and stop VMs
● view and control each VM's console
● see performance and utilization statistics for each VM
● view all running VMs and hosts, and their live performance or resource utilization statistics
● use KVM, Xen or QEMU virtual machines, running either locally or remotely
● use LXC containers
18. design considerations
At a high level we can consider a virtual machine to encapsulate access to a set of physical resources. Providing live migration of these VMs in a clustered server environment leads us to focus on the physical resources used in such environments, specifically on:
● Memory
● Network
● Disk
19. migrating memory
When a VM is running a live service, it is important that memory transfer occurs in a manner that balances the requirements of minimizing both downtime and total migration time.
It is easiest to consider the trade-offs between these requirements by generalizing memory transfer into three phases:
1. Push phase: The source VM continues running while certain pages are pushed across the network to the new destination. To ensure consistency, pages modified during this process must be re-sent.
2. Stop-and-copy phase: The source VM is stopped, pages are copied across to the destination VM, then the new VM is started.
3. Pull phase: The new VM executes and, if it accesses a page that has not yet been copied, this page is faulted in ("pulled") across the network from the source VM.
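A toy sketch of how the push and stop-and-copy phases combine in a pre-copy loop, assuming hypothetical get_dirty() and send() helpers that report the dirty set and transfer pages:

```python
def precopy_migrate(num_pages, get_dirty, send, max_rounds=29, stop_limit=64):
    """Toy pre-copy loop: repeated push rounds, then a short stop-and-copy.

    get_dirty() -> set of page numbers dirtied since last call (hypothetical)
    send(pages) -> transfer the given pages to the destination (hypothetical)
    """
    send(set(range(num_pages)))        # round 1: push every page
    for _ in range(max_rounds):
        dirty = get_dirty()            # pages modified while we were pushing
        if len(dirty) <= stop_limit:   # residual set small enough to stop
            break
        send(dirty)                    # re-send pages dirtied this round
    # Stop-and-copy: pause the VM, transfer the final dirty set, then
    # resume execution on the destination.
    send(get_dirty())
```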
21. local resources
A key challenge in managing the migration of OS instances is what to
do about resources that are associated with the physical machine
that they are migrating away from. While memory can be copied
directly to the new host, connections to local devices such as disks
and network interfaces demand additional consideration.
Two key problems:
● Network resources
● Local storage
22. network resources
Requirement:
For network resources, a migrated OS should be able to maintain all open network connections without relying on forwarding mechanisms on the original host (which may be shut down following migration), or on support from mobility or redirection mechanisms that are not already present.
A migrating VM will include all protocol state (e.g. TCP PCBs), and will carry its IP address with it.
Solution:
The way to manage migration with respect to the network in this environment is to generate an unsolicited ARP reply from the migrated host, advertising that the IP has moved to a new location. This will reconfigure peers to send packets to the new physical address, and while a very small number of in-flight packets may be lost, the migrated domain will be able to continue using open connections with almost no observable interference.
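A minimal sketch of such an unsolicited (gratuitous) ARP reply, using scapy and assuming hypothetical values for the VM's IP, MAC and bridge interface (requires root privileges):

```python
from scapy.all import ARP, Ether, sendp

# Hypothetical addresses for the migrated VM and the host's bridge interface.
vm_ip, vm_mac, iface = "192.168.1.50", "52:54:00:12:34:56", "br0"

# Gratuitous ARP reply ("vm_ip is-at vm_mac"), broadcast so that all peers
# update their ARP caches to point at the VM's new physical location.
pkt = Ether(dst="ff:ff:ff:ff:ff:ff", src=vm_mac) / ARP(
    op=2, psrc=vm_ip, pdst=vm_ip, hwsrc=vm_mac, hwdst="ff:ff:ff:ff:ff:ff"
)
sendp(pkt, iface=iface)
```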
23. local resources
In the cluster, the migration of storage may be similarly addressed:
Most modern data centers consolidate their storage requirements
using a network-attached storage (NAS) device, in preference to
using local disks in individual servers. NAS has many advantages in
this environment, including simple centralised administration,
widespread vendor support, and reliance on fewer spindles leading
to a reduced failure rate. A further advantage for migration is that it
obviates the need to migrate disk storage, as the NAS is uniformly
accessible from all host machines in the cluster.
26. how pre-copy works
● To log pages that are dirtied, shadow page tables are inserted underneath the running OS.
● The shadow tables are populated on demand by translating sections of the guest page tables.
● Translation for dirty logging:
● All page-table entries (PTEs) are initially read-only mappings in the shadow tables, regardless of what is permitted by the guest tables.
● If the guest tries to modify a page of memory, the resulting page fault is trapped.
● If write access is permitted by the relevant guest PTE, then this permission is extended to the shadow PTE.
● At the same time, the appropriate bit in the VM's dirty bitmap is set.
● When the bitmap is copied to the control software at the start of each pre-copying round, the bitmap is cleared and the shadow page tables are destroyed and recreated as the migrating OS continues to run.
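A toy model of this dirty-logging mechanism, with write-protected shadow entries, a fault handler, and a per-round bitmap harvest; all names here are illustrative, not KVM or Xen internals:

```python
class DirtyLogger:
    """Toy model of dirty-page logging via write-protected shadow PTEs."""

    def __init__(self, num_pages):
        self.shadow_writable = [False] * num_pages  # shadow PTEs start read-only
        self.dirty_bitmap = [False] * num_pages

    def on_write_fault(self, page, guest_pte_allows_write):
        # Trap: the guest wrote to a page mapped read-only in the shadow tables.
        if guest_pte_allows_write:
            self.shadow_writable[page] = True   # extend permission to shadow PTE
            self.dirty_bitmap[page] = True      # record the page as dirty

    def harvest(self):
        # Start of a pre-copy round: hand the bitmap to the control software,
        # clear it, and "destroy and recreate" the shadow tables.
        snapshot = [i for i, d in enumerate(self.dirty_bitmap) if d]
        self.dirty_bitmap = [False] * len(self.dirty_bitmap)
        self.shadow_writable = [False] * len(self.shadow_writable)
        return snapshot
```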
28. test bed
Our laboratory testbed was set up to test the efficacy of KVM live migration within the same subnet in maintaining network connectivity during the VM migration.
KVM actively works to transfer the CPU, memory and network states of the VM from the source to the destination host. A shared storage medium (NFS) is used to store the VM's disk state.
The testbed was used to evaluate the performance of a video streaming website hosted in the VM's bridged guest OS. All the systems run the latest Ubuntu 16.04 LTS with the latest kernel.
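Such a migration can be driven through the libvirt Python binding. A minimal sketch, assuming hypothetical host URIs and VM name, and assuming the disk image already lives on the shared NFS store so that only CPU, memory and network state travel:

```python
import libvirt

# Hypothetical source/destination URIs and domain name.
src = libvirt.open("qemu+ssh://source-host/system")
dst = libvirt.open("qemu+ssh://dest-host/system")
dom = src.lookupByName("video-server-vm")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is pre-copied;
# the disk needs no transfer because it sits on shared NFS storage.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```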
30. use of shared storage
The Network File System (NFS) is a client/server application that lets a computer user view and optionally store and update files on a remote computer as though they were on the user's own computer. The NFS protocol is one of several distributed file system standards for network-attached storage (NAS).
NFS allows the user or system administrator to mount (designate as accessible) all or a portion of a file system on a server. The portion of the file system that is mounted can be accessed by clients with whatever privileges are assigned to each file (read-only or read-write). NFS uses Remote Procedure Calls (RPC) to route requests between clients and servers.
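In the testbed, both hosts would mount the same export so that VM disk images are visible everywhere. A minimal sketch, with a hypothetical export path and the default libvirt image directory as the mount point:

```python
import subprocess

# Mount a hypothetical NFS export at libvirt's default image directory,
# so both source and destination hosts see the same VM disk images.
subprocess.run(
    ["mount", "-t", "nfs",
     "storage-host:/export/vm-images",   # hypothetical NFS export
     "/var/lib/libvirt/images"],         # default libvirt image pool path
    check=True,
)
```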
31. advantages of nfs over iscsi for vm storage
● Simplified operational model
NFS offers a greatly simplified operational model versus traditional block storage. Resizing LUNs can sometimes be problematic; resizing NFS filesystems is generally much easier.
● Larger datastores
While VMFS LUNs top out just shy of 2 TB in size, NFS has no such limits; some arrays go as high as 16 TB.
● Advanced functionality via filesystems
NFS can offer advanced functionality above what a traditional block device can offer, because the storage device has control of the filesystem.
● Open access
VMFS is a bit specific to VMware environments. NFS, on the other hand, is a mature cross-platform specification that makes it much easier to provide access to virtual machines for backup, replication or other purposes.
35. The server was migrated while a client was accessing it continuously, and the network state was analyzed both while simple ping requests were being sent to the server and while live video was being streamed at the client's terminal. The video streamed flawlessly and no visible packet loss was detected by the client, since the percentage of packets lost during the process was very low.
Total migration time = 18.563 min
Packet loss = 4%
RTT: average = 1.113 ms, min = 0.268 ms, max = 2.396 ms
37. wan-based live vm migration
One of the key challenges in WAN-based live VM migration is maintaining network connectivity and preserving open connections during and after the migration.
When a node (i.e., a VM) transits between different networks, its IP address changes, because the address identifies the node's location in the network topology; as a result, communication with other nodes is interrupted until they become aware of the new address.
Approaches to achieve WAN-based live VM migration:
● Mobile IP or Proxy Mobile IPv6
● Some form of tunneling that bridges remote sites over the Internet.
38. challenges
● Network State Migration
Data centers interconnected over a WAN tend to use different IP address spaces. Therefore, a VM migrated between hosts over a WAN in different DCs requires a new IP address to be assigned by the network on the destination host. In addition, the old connections established with the old VM IP address space would be discarded.
● Disk State Migration
Migration across a LAN generally transfers only the memory, CPU and network states when shared storage is used. Nonetheless, shared storage might not always be available across a WAN's high-latency, low-speed links.
39. proposed solutions
● Network State Migration
1. Mobile IP or Proxy Mobile IPv6
The Mobile IPv6 (MIPv6) protocol can be used to support migration of VMs across WANs. MIPv6 has two advantages:
● The VM retains its original IP address, hence DNS updates are not needed to locate services on the VM.
● MIPv6 provides the ability to use route optimization, which reduces the propagation delay of packets to and from the VM.
The main problem with this approach is that it requires the VM to have a modified protocol stack.
2. VXLAN
VXLAN is an IP multicast application and uses IP multicast to isolate network traffic. The VXLAN controller is the management layer of VXLAN and manages VTEPs. The controller configures a mapping between the VXLAN VNI and the IP multicast group, and also configures the VTEP to join the IP multicast group when a VM is moved to that VTEP.
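On a Linux VTEP, such a VNI-to-multicast-group mapping corresponds to creating a VXLAN interface. A minimal sketch wrapping the standard iproute2 command, with a hypothetical VNI, multicast group and underlay interface:

```python
import subprocess

# Create a VXLAN interface: VNI 100 is mapped to multicast group 239.1.1.1,
# carried over the (hypothetical) underlay interface eth0 on the
# IANA-assigned VXLAN UDP port 4789.
subprocess.run(
    ["ip", "link", "add", "vxlan100", "type", "vxlan",
     "id", "100",
     "group", "239.1.1.1",
     "dev", "eth0",
     "dstport", "4789"],
    check=True,
)
subprocess.run(["ip", "link", "set", "vxlan100", "up"], check=True)
```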
40. vxlan approach
● Open vSwitch
Open vSwitch is a software-based virtual switch that resides within the hypervisor or the management domain (e.g., Dom0 in Xen). Open vSwitch provides the connectivity between the virtual machines and the physical interfaces.
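A minimal sketch of wiring a VM into such a switch with the standard ovs-vsctl tool, using hypothetical bridge, uplink and tap-device names:

```python
import subprocess

# Build an Open vSwitch bridge and attach a physical uplink plus a VM's
# tap device to it (all names are hypothetical).
for cmd in (
    ["ovs-vsctl", "add-br", "br0"],            # create the bridge
    ["ovs-vsctl", "add-port", "br0", "eth0"],  # physical uplink
    ["ovs-vsctl", "add-port", "br0", "vnet0"], # VM tap interface
):
    subprocess.run(cmd, check=True)
```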