CP-40 was the first operating system to implement complete virtualization: it provided a virtual machine environment supporting all aspects of its target computer system (an S/360-40), such that other S/360 operating systems could be installed, tested, and used as if on a stand-alone machine. CP-40 supported fourteen simultaneous virtual machines. Each virtual machine ran in "problem state" – privileged instructions such as I/O operations caused exceptions, which were then caught by the control program and simulated. Similarly, references to virtual memory locations not present in main memory caused page faults, which again were handled by the control program rather than reflected to the virtual machine. Further details on this implementation are found in CP/CMS (architecture). The basic architecture and user interface of CP-40 were carried forward into CP-67/CMS, which evolved to become IBM's current VM product line.
NTVDM stands for "NT Virtual DOS Machine". WOW stands for "Windows on Windows". They are both names for the same Win16 subsystem that runs under Windows NT, 2000, and XP. The Win16 subsystem is an emulated DOS subsystem that runs under NT-based Windows operating systems. It allows 16-bit applications to run as if they were being executed on a DOS computer, with that computer's multitasking and segmented memory model. The subsystem is preemptively multitasked, so that 16-bit DOS and Windows applications cannot crash the operating system. Within the subsystem, however, applications behave exactly as they do on a DOS/Win 3.x computer, so 16-bit applications within a Win16 subsystem can crash one another or the Win16 subsystem itself. When a DOS program running inside a VDM needs to access a peripheral, Windows will either allow this directly (rarely), or will present the DOS program with a Virtual Device Driver (VxD for short) which emulates the hardware using operating system functions.
Traditionally, every application requires its own server. Application vendors are often unwilling to share physical servers and operating systems with other applications. This leads to a situation whereby a new server is bought for every new application being deployed. The practical result of this is that the application becomes closely coupled with the hardware, leading to underutilisation of resources and increased cost of ownership.
In the virtualised model, a thin virtualisation layer, or hypervisor, is introduced which abstracts the physical hardware layer and presents a standardised subset of hardware to the virtual machines running on the system. Virtual machines are isolated from each other and run as processes on the host system. We have now broken the dependencies between applications and physical hardware and provided a safe and supportable mechanism for running multiple applications on a single hardware platform, thus optimising utilisation levels and reducing ownership costs.
Most servers operate at very low levels of CPU utilisation, typically around 10%. However, CPU efficiency remains roughly constant up to about 85% utilisation, so the unused CPU cycles between 10% and 85% should be exploited. One way of doing this is to time-divide CPU cycles across multiple workloads, in this case by running multiple virtual machines on a single CPU.
When purchasing new hardware for applications, organisations will often overprovision, that is, they will purchase more memory and CPU resources than will actually be used by the application. In a typical situation, the vast majority of servers in an organisation will use less than 10% of CPU resources during normal business hours. As idle hardware is a cost both in terms of acquisition and power consumption, our aim should be to drive up utilisation levels.
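A back-of-envelope sketch of this reasoning: the 10% average and 85% ceiling come from the figures above, while the 5% hypervisor overhead is an assumption for illustration only.

```python
# Back-of-envelope consolidation estimate. Works in whole percentage
# points to keep the arithmetic exact.

def consolidation_ratio(avg_util_pct: int, ceiling_pct: int,
                        overhead_pct: int = 5) -> int:
    """How many lightly loaded workloads can time-share one CPU
    while keeping total utilisation under the ceiling."""
    usable = ceiling_pct - overhead_pct   # headroom left for guests
    return usable // avg_util_pct

print(consolidation_ratio(10, 85))  # 8 virtual machines per host
```

With these illustrative numbers, eight 10%-utilised workloads fit comfortably on a single host while staying below the efficiency ceiling.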
Server virtualisation helps us to realise the concept of a “virtual infrastructure”. We no longer think of providing extra resources, be they storage, network, memory or CPU resources, to a single application. Resources now become part of a pool that virtual machines can draw upon, using as much or as little of each resource as they each need. Available resources can be taken from the pool and returned to the pool as required. This leads to a more efficient use of the IT infrastructure.
A hypervisor (or virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time. The term usually refers to an implementation using full virtualization. The term hypervisor apparently originated in IBM's CP-370, the reimplementation of CP-67 for the System/370, released in 1972 as VM/370.
A Type 1 hypervisor (or Type 1 virtual machine monitor) is software that runs directly on a given hardware platform (as an operating system control program). A "guest" operating system thus runs at the second level above the hardware. The classic Type 1 hypervisor was CP/CMS, developed at IBM in the 1960s and the ancestor of IBM's current z/VM. More recent examples are Xen, VMware's ESX Server, and Sun's Hypervisor (released in 2005).
A Type 2 hypervisor (or Type 2 virtual machine monitor) is software that runs within an operating system environment. A "guest" operating system thus runs at the third level above the hardware. Examples include VMware Server and Microsoft Virtual Server.
Microsoft Virtual Server and Virtual PC are referred to by Microsoft as "hybrid VMMs". They are clearly not Type 1, as they rely on a hosting operating system for much of their functionality. However, neither are they Type 2, as they run their virtual machines directly on the hardware whenever possible. In a hybrid VMM architecture, a small hypervisor kernel, sometimes referred to as a µ-hypervisor, controls CPU and memory resources, but I/O resources are programmed by device drivers that run in a deprivileged service OS. While a hybrid VMM architecture offers the promise of retaining the best characteristics of Type 1 and Type 2 VMMs, it does introduce new challenges, including new performance overheads due to frequent privilege-level transitions between the guest OS and the service OS through the µ-hypervisor.
Here, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. This system call to the hypervisor is called a "hypercall" in Xen, Parallels Workstation and Enomalism; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM (which was the origin of the term hypervisor). Examples include Win4Lin 9x, Sun's Logical Domains, and z/VM.
Full virtualization is a virtualization technique used to implement a certain kind of virtual machine environment: one that provides a complete simulation of the underlying hardware. The result is a system in which all software capable of execution on the raw hardware can be run in the virtual machine, including, in particular, all operating systems. Binary translation is the emulation of one instruction set by another through translation of code: sequences of instructions are translated from the source to the target instruction set.
Intel VT is available on most Pentium 4 6x2, Pentium D 9x0, Xeon 3xxx/5xxx/7xxx, Core Duo (excluding T2300E) and Core 2 Duo processors (excluding the T5200, T5500, E4x00). AMD processors using Socket AM2, Socket S1, and Socket F include AMD Virtualization support. In May 2006, AMD introduced such versions of the Athlon 64, Turion 64, and 64-bit Sempron processors. AMD Virtualization is also supported by release two (x2xx series) of the Opteron processors.
As industry standard servers evolve, more systems meet the criteria for native virtualisation. At the time of writing, the servers listed above all meet the minimum specifications required.
A virtual machine should contain all the essential hardware elements found in a physical machine, but represented by virtual rather than real devices. Standard components will include:
- 1 or more network cards
- 1 or more storage controllers
- 1 or more virtual CPUs
- Virtual memory
- A mouse
- A keyboard
- Floppy and CD/DVD drives
In special cases, a virtual machine can also be given access to the following physical devices on the host machine:
- Parallel ports
- Serial ports
- Sound cards
At its most basic, a virtual machine will consist of a virtual BIOS (Basic Input/Output System). The BIOS will load from a file rather than from a BIOS chip but will have all the functionality found in a normal BIOS, and is usually based on an actual commercially available BIOS, such as the PhoenixBIOS. The BIOS can be used to control the boot order of the virtual machine by allowing you to specify whether to boot from the virtual hard disk, a CD-ROM or even over the network via PXE.
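The boot-order behaviour described above can be sketched as a simple first-match search over the configured devices. The function and device names here are illustrative only, not any vendor's firmware interface.

```python
# Sketch of BIOS boot-order selection: try each configured device in
# priority order until one holds a bootable image.

def select_boot_device(boot_order, bootable_devices):
    """boot_order: devices in configured priority, e.g.
    ["cdrom", "disk", "network"].
    bootable_devices: the set of devices that currently hold a
    bootable image."""
    for device in boot_order:
        if device in bootable_devices:
            return device
    return None  # nothing bootable: the BIOS reports a boot failure

print(select_boot_device(["cdrom", "disk", "network"], {"disk"}))  # disk
```

Putting the CD-ROM first in the order, as here, is what allows an OS install disc to take precedence over an already-installed virtual hard disk.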
Most virtualisation platforms allow you to define complex networking infrastructures within the virtualisation layer. Virtual switches can be created which provide network connectivity between virtual machines and also to external physical networks. This is usually achieved by linking the virtual machines to the virtual switches by means of virtual NICs. These NICs have all the features of physical NICs, such as MAC addresses and bound IP addresses. Virtual switches can then be connected to the physical network via uplink connections to the physical NICs on the host. These physical NICs normally operate as simple bridges and do not have associated IP addresses.
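A toy model of this arrangement: virtual NICs with MAC addresses attach to a virtual switch, which delivers known unicast frames directly and floods unknown destinations, much as a physical learning switch would. All names here are invented for illustration and are not any vendor's API.

```python
# Toy virtual switch: forwards frames between virtual NICs by MAC.

class VirtualNIC:
    def __init__(self, mac: str):
        self.mac = mac
        self.received = []           # frames delivered to this NIC

class VirtualSwitch:
    def __init__(self):
        self.ports = {}              # MAC address -> attached VirtualNIC

    def connect(self, nic: VirtualNIC):
        self.ports[nic.mac] = nic

    def send(self, src_mac: str, dst_mac: str, payload: str):
        frame = (src_mac, dst_mac, payload)
        if dst_mac in self.ports:                      # known unicast
            self.ports[dst_mac].received.append(frame)
        else:                                          # flood to all other ports
            for mac, nic in self.ports.items():
                if mac != src_mac:
                    nic.received.append(frame)

vswitch = VirtualSwitch()
web = VirtualNIC("00:50:56:00:00:01")   # web front-end VM
db = VirtualNIC("00:50:56:00:00:02")    # database VM
vswitch.connect(web)
vswitch.connect(db)
vswitch.send(web.mac, db.mac, "SQL query")
print(db.received)  # [('00:50:56:00:00:01', '00:50:56:00:00:02', 'SQL query')]
```

An uplink to a physical NIC would simply be another port on the switch, bridging frames out to the physical network.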
By combining internal and external virtual switches, more complex network environments can be created within the virtualisation layer. In the example above, two back-end systems provide database and application functionality to a web front-end server. The back-end systems cannot be accessed directly by users and are thus protected by the firewall functionality of the web server.
Virtual machines will see their allocated disk storage as storage devices attached to a storage adapter. In actuality, the storage devices will be monolithic files stored on a central file system. In the case of VMware, these will be VMDK files stored on a VMFS volume; Microsoft Virtual Server uses VHD files stored on an NTFS volume. The virtual machine will use these virtual disks as it needs. On a virtual machine running Windows Server 2003, they could be formatted as NTFS and used for standard file storage. On a virtual machine running Linux, they could be formatted as ext3 file systems or used as swap devices, for instance. Physically, most standard storage systems are supported, such as direct attached SCSI, storage area networks, iSCSI and network attached storage.
On the host platform, one must consider how the operating system on the host is to be licensed and how the virtualisation software is to be licensed. In the case of Type 1 VMMs such as VMware ESX and Xen, where the virtualisation layer runs directly on the physical hardware, there are no associated host operating system licensing considerations. However, the virtualisation technology itself will generally be licensed on a per CPU socket basis. In the case of Type 2 VMMs, where the virtualisation software runs as an application within an operating system environment, there may be a licensing requirement for the host OS. There may or may not be a licensing requirement for the virtualisation technology itself. For instance, VMware Server, which runs on Windows or Linux, is available as a free download. Microsoft Virtualisation, a hybrid VMM that will be part of the Longhorn release of Windows Server, will be a free add-on.

On the guest, operating system licensing is generally the same as it is on physical servers. An exception is when running Microsoft Virtual Server: when the host OS is Windows 2003 R2 Enterprise Edition, you are licensed for the host OS and 4 virtual machines running Windows 2003 R2 Enterprise Edition; if the host OS is Windows 2003 R2 DataCenter, you are licensed for the host OS and an unlimited number of virtual machines running any version of Windows 2003 R2.

Guest applications may be licensed on a per CPU basis. In some cases, this may be restricted to the number of virtual CPUs presented to the virtual machine. In other cases, the software vendor may insist on licensing based on the number of physical CPUs in the host server.
Careful consideration must be given to application support issues when considering deployment of a virtual server solution. In some cases, although the application may run correctly and efficiently on a virtual machine, the software vendor may not be prepared to support such a deployment.

Microsoft Virtual Server support policy: http://support.microsoft.com/kb/897613
Support policy for Microsoft software running in non-Microsoft hardware virtualization software: http://support.microsoft.com/kb/897615/

The following Windows Server System software is not supported within a Microsoft Virtual Server environment:
- Microsoft Speech Server
- Microsoft ISA Server 2000
- Microsoft SharePoint Portal Server (support within Virtual Server is expected in a future release)
- Microsoft Identity Integration Server 2003
- Microsoft Identity Integration Feature Pack
Note: Microsoft ISA Server 2006 is supported within a Microsoft Virtual Server 2005 R2 environment.
There are many reasons why we may choose to deploy servers as virtual machines. We will examine each in turn.
Gartner defines three types of consolidation with progressively greater operational savings, return on investment and end user benefits:
- Logical: implement common processes and enable standard systems management procedures across the server applications.
- Physical: co-location of multiple platforms at fewer locations, i.e. reduction in the number of data centres without altering the number of actual servers.
- Rational: implementing multiple applications on fewer, more powerful platforms.
Perhaps the most obvious use of virtualisation is to consolidate existing server workloads onto fewer, more powerful systems. Often the first candidates selected for consolidation are those that could be categorised as infrastructure applications. These will include Active Directory domain controllers, DHCP servers, DNS servers, and file and print servers. These are often identified as having very low utilisation workloads. However, many datacenter workloads can also be good candidates for virtualisation. Another group of systems often targeted for virtualisation are those that run legacy operating systems such as Windows NT 4 Server. As the range of available hardware systems that can run these operating systems diminishes, it makes sense to move them to a platform that will ensure easy recoverability in the event of a hardware failure. The ability of most virtualisation platforms to partition CPU and memory resources means that virtual machines can be limited in the amount of these resources that they can use, ensuring that multiple workloads can be run side by side without any one application monopolising available computing power.
Many organisations find traditional business continuity and disaster recovery strategies difficult to implement. Bare metal restore tools often require that systems are recovered onto identical hardware and require a one-to-one correspondence between production and BC/DR systems. Virtualisation techniques can assist by making it easier to create and maintain “virtual” versions of existing production systems that can be restored rapidly onto any system supported by the preferred virtualisation platform. “Physical-to-Virtual” (P2V) techniques can be used to create these virtual machines, which can then be either maintained or archived for later restore. Virtualisation can also make OS and application patching less risky: copies of production virtual machines can be made, or snapshots taken, prior to patching, making it easier to roll back in the event of issues resulting from the patching process. Virtual networking makes it simple to isolate virtual machines, which may be required to prevent malicious code such as viruses, adware and Trojan horses from infecting other systems.
The concept of the dynamic datacenter envisages the ability to move virtual machines based on the changing requirements of individual workloads. Depending on the capabilities of the virtualisation platform in question, this could be either an automated process or a manual task. A typical scenario would be where an application requires an increased share of the computing resources available on a server. Other virtual machines running on the same server could be dynamically migrated to other systems, thus freeing up resources for use by the more demanding application.
- Rapid provisioning of virtual machines
- Provide multiple VMs for testing quickly
- Use saved state to start up quickly
- Maintain libraries of test systems
- Create arbitrary test scenarios
- Recreate reported issues
- Avoid use of the production network
- Use snapshots to roll back to a known state
- Wider test range for niche scenarios
- Provision multiple VMs with variations
- Simulate complex environments
Virtual machines are, in essence, represented by a collection of files. These will typically consist of files representing:
- Virtual hardware configuration
- Virtual BIOS settings
- Virtual disks
- Other aspects such as snapshots, saved states, etc.
This means that virtual servers can be treated in a similar way to other types of files. Provisioning a new virtual server can now be as easy as copying a set of template files. Migrating a server to a new location, such as a DR site, can be as easy as a file backup and restore or a network copy. We can clone or copy entire servers using file management techniques, save versions of virtual machines as they change and develop, and keep archives of servers for restore purposes. Using software and hardware based replication techniques, we can also create remote mirrors of running virtual machines for DR purposes.
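Because a virtual machine is just a collection of files, provisioning from a template can be sketched as copying a directory. The directory layout and file names here are hypothetical, not any specific product's format.

```python
# Sketch: provision a new virtual server by copying a template VM's
# files (configuration, BIOS settings, virtual disks) to a new
# directory. Paths shown are illustrative.
import shutil
from pathlib import Path

def provision_from_template(template_dir: str, new_vm_dir: str) -> Path:
    """Clone a template VM by copying its files wholesale."""
    dest = Path(new_vm_dir)
    shutil.copytree(template_dir, dest)   # copies config and disk files
    return dest

# e.g. provision_from_template("/vmfs/templates/w2k3-base",
#                              "/vmfs/volumes/vol1/web01")
```

The same file-level view covers cloning, archiving and DR copies: each is just a copy, backup or replication of the VM's directory.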
Shared storage, in the form of Fibre Channel-based storage area networks (SAN), network attached storage (NAS) or iSCSI, plays a crucial role in the maintenance of a highly available virtual infrastructure. The files representing a virtual machine will ideally be located entirely in a central location that is accessible by a number of systems operating as a cluster. This allows us to move virtual machines as running processes from one physical host to another without having to move the files themselves. Many shared storage solutions also include snapshot, clone and replication technologies that can be leveraged by the virtual infrastructure.
Live migration is the process of moving virtual machines from one host to another without interrupting processing or user connectivity. This is accomplished through a combination of centralised shared storage and high speed copying of memory contents. Because the virtual machine's files are centrally stored and accessible by every host taking part in the live migration, there is no need to move files at any time. Only the memory contents of the virtual machine are moved. This is usually accomplished by having a dedicated gigabit Ethernet link between the physical hosts. Virtual machine memory contents are copied across this link from one host to another. When all memory contents have been copied, updates to the memory are temporarily suspended while the running process representing the virtual machine is stopped on one host and restarted on another. Live migration can be used to achieve zero downtime maintenance of the host servers and to dynamically balance resource utilisation across the infrastructure.
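The copy-while-running behaviour described above can be sketched as an iterative "pre-copy" loop: pages are copied while the guest keeps running (and keeps dirtying pages), and once the remaining dirty set is small enough the guest is briefly paused for the final copy. The page granularity and the stop threshold here are simplifications for illustration.

```python
# Simplified pre-copy live migration loop.

def live_migrate(source_memory, read_dirty_pages, stop_threshold=2):
    """source_memory: page_id -> contents on the source host.
    read_dirty_pages(): returns the set of pages the guest wrote
    since the previous copy round (empty when quiescent)."""
    target_memory = dict(source_memory)       # round 1: copy everything
    while True:
        dirty = read_dirty_pages()
        for page in dirty:                    # re-copy what changed
            target_memory[page] = source_memory[page]
        if len(dirty) <= stop_threshold:      # small enough: pause the
            return target_memory              # guest, finish, resume on target

memory = {"p1": "a", "p2": "b", "p3": "c"}
rounds = iter([{"p1", "p2", "p3"}, {"p1"}])   # guest dirties fewer pages each round
print(live_migrate(memory, lambda: next(rounds, set())) == memory)  # True
```

The brief suspension mentioned in the text corresponds to the final round here: it only has to cover the last few dirty pages, which is what keeps the observable downtime negligible.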
The debate as to whether it is better to scale up or to scale out when designing a virtualisation infrastructure continues. While both approaches have advantages and disadvantages, scaling out is generally agreed to be the best overall approach, as it is the most cost-effective solution while providing better utilisation levels.
Most of the leading virtualisation platforms provide all of the features outlined above. As new solutions appear on the market, they should be carefully evaluated before being implemented.
For most people, VMware Workstation was their first experience of i386-based virtualisation. It is an example of a Type 2 VMM in that it runs as an application on a host operating system. The product is aimed at those wishing to run multiple operating systems on a single desktop PC and will run on either Windows or Linux. This could be for software test and development or for training purposes. Supported guest operating systems include Windows, Linux, NetWare and Solaris.
VMware Server is VMware’s free virtualisation platform. Again, it is a Type 2 VMM and uses much of the same code as VMware Workstation. The product runs on any x86 hardware so there is no necessity to refer to any hardware compatibility list when installing it. VMware Server is intended as a “step up” product, allowing users to experience virtualisation before moving on to Type 1 hypervisor products.
VMware Infrastructure 3 represents VMware’s flagship virtualisation product and consists of two distinct elements: a Type 1 VMM platform called VMware ESX 3, and a management application, VMware VirtualCenter 2. 4-way virtual SMP is provided for multi-threaded applications and up to 16GB of RAM can be allocated to any individual virtual machine. Live migration is implemented via a licensed feature called VMotion. Also individually licensed are:
- VMware HA, a high availability technology that provides failover for virtual machines in the event of a hardware failure of a cluster node
- VMware Distributed Resource Scheduling, which can be used to ensure uniform utilisation of resources across the infrastructure
- VMware Consolidated Backup, a technology that leverages the SAN infrastructure to provide high speed virtual machine backups
VMware Infrastructure 3 provides the ability to add additional processing resources to a cluster with zero downtime and without disrupting any business processes. Additional servers can be introduced and workloads can then be redistributed via live migrations of virtual machines.
The DRS feature improves resource allocation across all hosts in a VMware cluster. DRS collects resource usage information for all hosts and virtual machines in the cluster and generates recommendations for virtual machine placement. These recommendations can be applied automatically. Depending on the configured DRS automation level, DRS displays or applies the following recommendations:
- Initial placement: when you first power on a virtual machine in the cluster, DRS either places the virtual machine or makes a recommendation.
- Load balancing: at runtime, DRS tries to improve resource utilisation across the cluster, either by performing automatic migrations of virtual machines (VMotion) or by providing recommendations for virtual machine migrations.
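The shape of a load-balancing recommendation can be sketched as a greedy heuristic: find the busiest and least busy hosts, and if the imbalance exceeds a threshold, suggest moving a VM between them. Real DRS weighs many more factors (reservations, affinity rules, migration cost), so this is only an illustration of the idea, with invented names throughout.

```python
# Greedy sketch of a load-balancing recommendation.

def recommend_migration(hosts, threshold=20.0):
    """hosts: host_name -> {vm_name: cpu_load_percent}.
    Returns (vm, source_host, target_host), or None if balanced."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if load[busiest] - load[idlest] < threshold:
        return None                           # cluster already balanced
    # recommend moving the lightest VM on the busiest host
    vm = min(hosts[busiest], key=hosts[busiest].get)
    return vm, busiest, idlest

cluster = {"esx1": {"web": 40.0, "db": 35.0}, "esx2": {"dns": 5.0}}
print(recommend_migration(cluster))  # ('db', 'esx1', 'esx2')
```

In a fully automated DRS mode the recommendation would be applied directly via VMotion; in manual mode it would simply be displayed to the administrator.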
In a VMware HA solution, a set of ESX Server hosts is combined into a cluster with a shared pool of resources. VirtualCenter monitors all hosts in the cluster. If one of the hosts fails, VirtualCenter immediately responds by restarting each associated virtual machine on a different host. VMware HA is fully integrated with DRS: if a host has failed and virtual machines have been restarted on other hosts, DRS can provide migration recommendations or migrate virtual machines for balanced resource allocation. If one or both of the source and target hosts of a migration fail, HA can help recover from that failure.
VMware Consolidated Backup (VCB) provides a fast and efficient method of backing up virtual machines by leveraging high speed fiber channel SAN data transfer and the virtual machine snapshot technology of VMware ESX. A Windows Server 2003 proxy server is attached to the SAN used by the VMware servers and is presented with the LUNs used by VMware to store virtual machine files. A VCB pre-backup script runs on the proxy which creates a snapshot of the virtual machine to be backed up and mounts it on the proxy. A third-party backup tool can then be used to move the data to disk and/or tape storage.
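The VCB workflow just described can be sketched as a sequence of steps. Every function here is a hypothetical stand-in injected by the caller, not VCB's actual interface; the point is only the ordering of snapshot, mount, backup and cleanup.

```python
# Sketch of the snapshot-mount-backup-cleanup sequence used by a
# SAN-attached backup proxy. All callables are hypothetical stand-ins.

def backup_virtual_machine(vm, snapshot, mount_on_proxy, run_backup, unmount):
    snap = snapshot(vm)                 # quiesce and snapshot the running VM
    mountpoint = mount_on_proxy(snap)   # present the snapshot to the proxy via the SAN
    try:
        run_backup(mountpoint)          # third-party tool moves data to disk/tape
    finally:
        unmount(mountpoint)             # post-backup cleanup always runs
```

Running the heavy copy against a snapshot on the proxy is what keeps the backup load off the production ESX hosts.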
Microsoft has been providing virtualisation products since 2004, when it released Virtual PC. This was soon followed by Microsoft Virtual Server. Traditionally, Microsoft virtualisation solutions have been managed via a web interface. However, this will change with the introduction of Virtual Machine Manager, which functions as an MMC snap-in. The next generation of Microsoft virtualisation will be called Windows Virtualization and is scheduled to be released within 18 months of the release of the next version of Windows Server, codenamed “Longhorn”.
Virtual PC is a product aimed at desktop virtualisation, and as such is not considered a suitable product for production environments. However, since it shares the same disk format as Virtual Server, it can be used as a development platform for virtual machines which will eventually run on Microsoft Virtual Server.
- Support for multiple CPUs: 4 with Standard edition, 32 with Enterprise; guests are limited to a single CPU.
- Support for Intel VT and AMD Virtualization: new processor designs from AMD and Intel built specifically with virtualisation in mind, providing enhanced performance. This will not speed up Windows guests (as long as VM additions are installed) but will improve performance for non-Windows guests, and will improve install times for Windows.
Microsoft Virtual Server 2005 supports clustering at both the guest and the host level.
- Virtual machine clustering: guests can fail over to another guest on the same machine; uses shared SCSI on the guests.
- Virtual host clustering: the host server can be clustered, enabling planned or unplanned moves to another server; requires use of the havm.vbs script (available from Microsoft).
Windows Server virtualization is a hypervisor-based technology that is part of Windows Server “Longhorn”. The Windows hypervisor is a thin layer of software running directly on the hardware which works in conjunction with an optimized instance of Windows Server “Longhorn” to allow multiple operating system instances to run on a physical server simultaneously. It leverages the powerful enhancements of modern processors to provide customers with a scalable, reliable, secure and highly available virtualization platform.
System Center Virtual Machine Manager, an MMC snap-in for managing Microsoft virtual environments, is currently in beta; release is set for October 2007. Its key features will be:
- Managing and planning deployment of virtual servers; works with both Virtual Server and Windows Virtualisation.
- Monitoring existing physical servers to identify candidates for virtualisation, and managing load on virtual host servers. Identification of migration candidates is based on an analysis of both peaks and averages; the selection parameters are user definable.
- Performing physical-to-virtual migrations, using VSS to copy data across. The process can be either wizard based or scripted through PowerShell, and supports Windows 2000 and Windows 2003 Servers.
System Center Virtual Machine Manager will feature a full set of preconfigured reports that integrate fully with the System Center database.
Virtual infrastructure is commonly used in test and development environments where there is constant provisioning and teardown of virtual machines for testing purposes. With Virtual Machine Manager, administrators can selectively extend self-provisioning capabilities to user groups and define quotas. The automated provisioning tool will manage virtual machines through their lifecycles, including teardown. It will allow self-service provisioning of virtual servers using a website and a central library of templates; the administrator defines which templates are available to the user.
The forthcoming Windows Server Virtualisation will aim to build on the functionality of Virtual Server 2005. Key improvements will include:
- Support for 64-bit operating systems
- Virtual SMP for guests
- Hot add of memory
- Hot add of processors
- Hot add of storage
- Hot add of networking
- Transition from web-based management to an MMC snap-in
Xen is an open-source hypervisor solution released under the GNU General Public License. It installs on bare metal and so requires no host operating system. Several products based on the Xen paravirtualisation code have started appearing on the market, and many vendors, including Novell, Red Hat and Sun, are including Xen in their OS offerings. The forthcoming Windows Server Virtualisation will share the same core architectural design as Xen.
XenSource offers a solution based on the Xen paravirtualisation approach. With hardware emulation based products such as VMware, each virtual machine is presented with an emulated hardware layer that offers the guest operating system the illusion of a standard server with well-known hardware devices. In XenSource products, by contrast, guests interface with the hypervisor via an efficient, low-level API known as the hypercall API, rather than through hardware emulation. This allows the hypervisor and operating system to cooperate to optimally virtualize the underlying hardware and schedule guest CPU and I/O, resulting in significant performance, security and portability advantages.
XenSource offers a range of products aimed at different levels of usage. XenSource does not offer support for older versions of Windows, such as NT 4. Live migration and shared storage support are promised for mid-2007.
The Xen open source hypervisor provides the foundation for Virtual Iron. It leverages the hardware-assisted virtualization capabilities built into the latest microprocessors to create an abstraction layer between physical hardware and virtual resources. Virtual Iron supports 32-bit Windows and 32- and 64-bit Linux operating systems, up to 8 CPUs per guest operating system, 32 CPUs per physical server, 96 GB of memory, and multiple network and storage adaptors.
Virtual Iron’s Virtualization Manager provides a central place to control and automate virtual resources. It is a Java application with a client-server architecture and a high performance distributed object-oriented database. The user interface uses a transactional, job-based model to provide fault tolerant workflows with rollback. While the hypervisor and service partition components of Virtual Iron are covered under the GNU public license, the Virtualization Manager is the commercial property of Virtual Iron and provides many of the advanced features of the product.
Virtualization Manager’s built-in policy engine and event monitor allow users to customize the environment to optimize application performance, ensure availability, and simplify resource management. A remote virtual desktop provides graphical console, keyboard and mouse without client or server-side additions. Virtualization Manager provides the following capabilities:
- Physical infrastructure: physical hardware discovery, bare metal provisioning, configuration, control, and monitoring
- Virtual infrastructure: virtual environment creation and hierarchy, visual status dashboards, access controls
- Virtual servers: create, manage, stop, start, migrate, LiveMigrate
- Policy-based automation: LiveCapacity, LiveRecovery, LiveMaintenance, rules engine, statistics, event monitor, custom policies
- Reports: resource utilization, system events
- LiveMigration moves a running virtual server from one physical server to another without pausing or impacting running applications.
- LiveCapacity monitors virtual server CPU utilisation or other application needs to determine when a workload needs additional capacity. When a user-defined threshold is met, the virtual server is LiveMigrated to a physical server that has the necessary resources.
- LiveRecovery monitors the status of physical resources and moves virtual servers to maintain uptime in the event of a hardware failure.
- LiveMaintenance moves virtual servers to alternative locations without downtime when a physical server is taken offline for maintenance.
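A LiveCapacity-style rule can be sketched as a threshold check followed by target selection. Function and parameter names here are invented for illustration, not Virtual Iron's API.

```python
# Sketch of a capacity policy: if a VM's load crosses a user-defined
# threshold, pick the physical server with the most spare headroom
# that can absorb the workload.

def live_capacity_check(vm_load, threshold, spare_capacity):
    """vm_load / threshold: CPU percentages.
    spare_capacity: physical_server -> free CPU percent.
    Returns the chosen target server, or None if no action is needed."""
    if vm_load < threshold:
        return None                             # within capacity, do nothing
    candidates = {s: free for s, free in spare_capacity.items()
                  if free >= vm_load}
    if not candidates:
        return None                             # nowhere suitable to migrate
    return max(candidates, key=candidates.get)  # most headroom wins

print(live_capacity_check(80.0, 75.0, {"host2": 90.0, "host3": 50.0}))  # host2
```

A LiveRecovery rule would have the same shape, with a host-failure event rather than a utilisation threshold as the trigger.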
The Virtual Iron Platform requires a set of x86 servers linked via a standard Ethernet network. Virtualization Manager is installed on one server that is networked to the other servers. Operating system images, including installed enterprise applications, are stored in a network-accessible location using NAS or SAN, to be deployed on virtual servers. Alternatively, deployment applications can be used to install an operating system directly onto a virtual server. The Virtual Iron Virtualization Manager automatically inventories the physical infrastructure and presents it through the management user interface. When an administrator creates and configures virtual servers, the Virtualization Manager reserves the physical resources, configures the virtual server resources, deploys the operating system, and starts the virtual server.
As with XenSource, older versions of the Windows operating system are not supported by Virtual Iron. Virtual Iron runs only on systems with the newer range of Intel and AMD processors that include virtualisation extensions (Intel VT and AMD-V).
Virtuozzo is an operating system-level server virtualisation solution. It creates isolated partitions, or virtual environments (VEs), on a single physical server and OS instance so that hardware, software, data centre and management effort are used with maximum efficiency. Virtuozzo adds a thin, portable layer to an existing operating system, within which each VE appears as a dynamic partition residing on the common OS. This single layer introduces only a small percentage of overhead and allows hundreds of VEs to run on one physical server.
Parallels Workstation is the first desktop virtualization solution to include a lightweight hypervisor that directly controls some of the host computer’s hardware resources. It has strong OS support for guest virtual machines, including:
- The entire Windows family: 3.1, 3.11, 95, 98, Me, 2000, XP and 2003
- Linux distributions from popular distributors such as Red Hat, SuSE, Mandriva, Debian and Fedora Core
- FreeBSD
- “Legacy” operating systems such as OS/2, eComStation and MS-DOS
Parallels Workstation also supports next-generation CPUs built on Intel’s VT architecture, and will support AMD’s Pacifica architecture when it is released to the general public.
The HP Virtual Server Environment encompasses a number of fully integrated, complementary components that enhance the functionality and flexibility of a server environment. Control starts with HP Systems Insight Manager (SIM), a unified infrastructure management platform that provides common fault, configuration, performance, and asset management across all HP Integrity, HP 9000, and HP ProLiant servers and HP StorageWorks storage. The VSE management tools plug into SIM to give a central point of administration for consistency and enhanced efficiency. Next comes ongoing management and configuration: HP Integrity Essentials Virtualization Manager is the first comprehensive, easy-to-use virtualization management tool. It allows you to instantly see the relationship between physical and virtual resources and easily perform configuration management tasks for all your VSE resources. Finally, server resources can be directed to the highest business priorities via intelligent policy engines that monitor service levels in real time and automatically adjust server resource allocation when needed. To help increase flexibility and systems administration efficiency, the HP VSE supports all operating systems that run on the HP Integrity server platform: HP-UX 11i, Microsoft® Windows® Server 2003, Linux, and OpenVMS. To allow you to choose the operating system that best meets your business needs, many of the components of the HP VSE, such as HP Systems Insight Manager and HP Integrity Virtual Machines, were designed specifically for a multi-OS environment.
Often, the most time-consuming and difficult aspect of a server consolidation project is the process of converting physical servers into virtual machines, often referred to as P2V. There are several commercial products available that can be used to accomplish this task. Converting servers can, in some cases, be performed while the server is “live”, but this will often require that applications are quiesced in order to maintain data consistency. Often, it is better to perform “cold” conversions; some operating systems, moreover, will not support live transfers of data. The P2V process will usually consist of the following steps:
1. Analysis of the source server to determine memory requirements, data capacities, etc.
2. Creation of a target virtual machine
3. Transfer of data from the physical source to the virtual target
4. Transformation of the virtual machine to replace drivers, disable unwanted services, reset IP addresses, etc.
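The four steps above can be sketched as a simple pipeline. The data classes and helper names below are hypothetical illustrations, not any vendor's actual P2V API:

```python
# Illustrative sketch of a P2V workflow; the classes, fields and the
# p2v() helper are invented for illustration, not a vendor API.
from dataclasses import dataclass, field

@dataclass
class PhysicalServer:
    name: str
    memory_mb: int
    disk_gb: int
    services: list = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    memory_mb: int = 0
    disk_gb: int = 0
    services: list = field(default_factory=list)
    drivers: list = field(default_factory=list)

def p2v(source: PhysicalServer, unwanted: set) -> VirtualMachine:
    # Step 1: analyse the source to size the target.
    vm = VirtualMachine(name=source.name + "-vm")
    # Step 2: create the target with matching capacity.
    vm.memory_mb, vm.disk_gb = source.memory_mb, source.disk_gb
    # Step 3: transfer data (represented here by copying the service list).
    vm.services = list(source.services)
    # Step 4: transform - swap in virtual drivers, drop unwanted services
    # (e.g. hardware monitoring agents that make no sense in a VM).
    vm.drivers = ["virtual-nic", "virtual-scsi"]
    vm.services = [s for s in vm.services if s not in unwanted]
    return vm

src = PhysicalServer("web01", 4096, 120, ["iis", "hw-monitor"])
vm = p2v(src, unwanted={"hw-monitor"})
print(vm.name, vm.memory_mb, vm.services)  # web01-vm 4096 ['iis']
```

Real tools do the transfer at the disk-sector or file level, but the analyse/create/transfer/transform sequence is the same.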
VMware Converter is managed through a simple, task-based user interface that enables users to convert physical machines, third-party virtual machines and disk image formats to VMware virtual machines in three easy steps:
- Step 1: Specify the source physical server, virtual machine or third-party format to convert.
- Step 2: Specify the destination format, virtual machine name, and location for the new virtual machine to be created.
- Step 3: Create/convert to the destination virtual machine and configure it.
VMware Converter achieves faster conversions through the use of sector-based copying (vs. file-level copying in other products). VMware Converter first takes a snapshot of the source machine before migrating the data, resulting in fewer failed conversions and no downtime on the source server. VMware Converter communicates directly with the guest OS running on the source physical machine for hot cloning these machines without any downtime, and as such has no direct hardware-level dependencies.
Platespin PowerConvert is marketed as an “Anywhere-to-Anywhere” conversion tool. The range of sources includes physical servers, virtual machines and system images. Targets can be physical servers, virtual machines or system images. So, as well as performing traditional P2V-type operations, one can also perform physical-to-physical (P2P) operations for hardware upgrades, physical-to-image (P2I) and virtual-to-image (V2I) operations for DR purposes, and numerous other types of transformation.
Platespin PowerConvert features a simple drag-and-drop interface that can be used to automatically create conversion jobs, which can then be edited as necessary.
For customers wishing to convert Novell NetWare servers into virtual machines, there are no products that have been expressly designed to perform P2V operations. However, Portlock Storage Manager has been successfully used to copy entire volumes from a physical server to a virtual machine. The virtual machine can then be easily transformed to create a fully functional server.
Before consolidating servers onto a virtualised platform, it is important to gather as much information as possible about the existing environment to assist with sizing and planning. For this reason, several products have been developed which gather inventory and performance data and use these data to help with what is known as capacity planning. Commonly, these products report on which servers are good candidates for virtualisation and which should be excluded from the virtualisation process, and also help you to size the virtualised environment. What-if scenarios can usually be run to examine the effects of using different types of target hardware, or of excluding or including different types of servers.
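At its core, this kind of capacity planning is a sizing estimate: flag which servers are reasonable consolidation candidates and work out how many target hosts their combined demand requires. A minimal sketch, in which the thresholds, headroom factor and inventory figures are illustrative assumptions rather than any product's defaults:

```python
# Illustrative capacity-planning sketch: flag virtualisation candidates
# and estimate how many target hosts are needed. Thresholds, headroom
# and the sample inventory are assumptions, not product defaults.
def plan(servers, host_cpu_ghz=16.0, headroom=0.7, exclude_above=0.75):
    """servers: list of (name, avg_cpu_demand_ghz, peak_utilisation 0..1)."""
    candidates, excluded = [], []
    for name, avg_ghz, peak in servers:
        # Heavily loaded servers are commonly excluded from consolidation.
        (excluded if peak > exclude_above else candidates).append((name, avg_ghz))
    usable = host_cpu_ghz * headroom            # leave headroom on each host
    demand = sum(ghz for _, ghz in candidates)  # combined candidate demand
    hosts = max(1, -(-demand // usable)) if candidates else 0  # ceiling div
    return candidates, excluded, int(hosts)

inventory = [("web01", 1.2, 0.30), ("db01", 6.0, 0.90), ("file01", 0.8, 0.15)]
cands, excl, hosts = plan(inventory)
print([n for n, _ in cands], [n for n, _ in excl], hosts)
```

Changing `host_cpu_ghz` or `exclude_above` and re-running is exactly the kind of what-if scenario these tools automate.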
The VMware Capacity Planner Data Collector is installed on a Windows system at the company site and uses WMI and registry calls to gather server information without requiring the use of agents. The VMware Capacity Planner Data Analyzer serves as the core analytical engine that performs all of the analysis required for intelligent capacity planning. The VMware Capacity Planner Dashboard is the front-end, Web-based user interface to the Information Warehouse and Data Analyzer. The Capacity Planner Dashboard delivers all the key capacity planning analysis and decision support to end-users in a secure and organized manner.
In contrast to VMware Capacity Planner, Platespin PowerRecon collects and stores all data onsite and analyses it locally. Again, data is gathered without the use of agents. PowerRecon can be used to create and save P2V jobs that can then be used by Platespin PowerConvert.
VMware Lab Manager automates the setup, capture, storage and sharing of multi-machine software configurations. Development and test teams can access them on demand through a self-service portal. Features include:
- Automatically and rapidly set up and tear down complex, multi-machine software configurations for use in development and test activities
- Give every developer or test engineer the equivalent of their own fully-equipped data center with dedicated provisioning staff
- Maintain a comprehensive library of customer and production system environments
With VMware ACE, security administrators package an IT-managed PC within a secured virtual machine and deploy it to an unmanaged physical PC. Features:
- Provision secured, IT-managed endpoints on unmanaged PCs
- Secure confidential data on endpoint PCs
- Run multiple secure PC environments on a single PC
- Dramatically lower the cost of business continuity
Companies can host individual desktops inside virtual machines that are running in their data center. Users access these desktops remotely from a PC or a thin client using a remote display protocol. Since applications are managed centrally at the corporate data center, organizations gain better control over their desktops. Installations, upgrades, patches and backups can be done with more confidence without user intervention.
Policy-based virtual service life cycle management:
- Provisioning of virtual servers
- Disaster recovery
- Virtual server backup/archiving
- Data centre daily operations automation
- Virtualised environment optimisation
- High availability of virtual servers
- Change management tracking
- Implementation of best practices and business policies
Backing up and replicating virtual machines for the purposes of rapid recovery and DR has traditionally been a challenge. Vizioncore have produced two applications that solve these issues by leveraging the VMware software development kit (SDK), which VMware makes publicly available. esxRanger Professional creates backups of running virtual machines on VMware ESX servers by taking advantage of the ability to take point-in-time snapshots of virtual machines. Backups can occur over a LAN connection or, by leveraging VMware Consolidated Backup techniques, over a high-speed SAN connection. esxRanger maintains a catalogue of backup images which can be rotated in the same way as standard backup media. esxReplicator creates replicas of running virtual machines on a secondary VMware ESX platform for the purposes of business continuity and disaster recovery. It uses the same snapshotting technology as esxRanger to achieve this.
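Snapshot-based backup of a running VM follows a simple pattern: take a point-in-time snapshot, copy the frozen image while the VM keeps running, then release the snapshot. A minimal sketch, in which the VM class and its methods are hypothetical stand-ins, not the VMware SDK:

```python
# Minimal sketch of snapshot-based backup of a running VM. The VM class
# and its methods are hypothetical stand-ins, not the VMware SDK.
import os
import shutil
import tempfile

class VM:
    def __init__(self, disk_path):
        self.disk_path = disk_path
    def snapshot(self):
        # A real hypervisor freezes the disk image and redirects new
        # writes to a delta file; here we just return the frozen path.
        return self.disk_path
    def release_snapshot(self):
        pass  # a real hypervisor merges the delta back into the base disk

def backup(vm, dest_dir):
    frozen = vm.snapshot()              # point-in-time view of the disk
    try:
        dest = os.path.join(dest_dir, os.path.basename(frozen) + ".bak")
        shutil.copyfile(frozen, dest)   # copy while the VM keeps running
    finally:
        vm.release_snapshot()           # always merge/release the snapshot
    return dest

with tempfile.TemporaryDirectory() as d:
    disk = os.path.join(d, "vm01.vmdk")
    with open(disk, "w") as f:
        f.write("disk-image-contents")
    copy = backup(VM(disk), d)
    print(os.path.basename(copy))  # vm01.vmdk.bak
```

The `try`/`finally` matters: a snapshot left open indefinitely grows its delta file and degrades guest I/O, so it must be released even if the copy fails.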
esXpress and Virtual Solution Box both provide online backups of virtual machines and are implemented as virtual appliances. That is to say, the application is downloaded and installed as a prebuilt, preconfigured virtual machine which is then customised as necessary by the end user.
esxCharter is a Vizioncore product which allows you to view key performance metrics on a VMware ESX platform in an easy-to-use dashboard presentation. Key features include:
- Provides a top-down, multi-level view that allows administrators to easily identify bottlenecks or other problems
- Provides both “real-time” and historical monitoring of hosts and virtual machines
- Allows shares adjustment for both individual and groups of virtual machines
- Offers threshold alerting
- Supports VMware Infrastructure 3
For customers that have already deployed VMware ESX 2.x and wish to upgrade to VMware ESX 3.x, migrating virtual machines to the new platform can be very time consuming and difficult. esxMigrator allows users to perform the migration with the minimum amount of downtime while always maintaining a failback position.
If you haven’t begun consolidating on virtualized servers, now is the time to explore your options. The benefits are proven, and virtualization is seeing exceptionally rapid adoption.
- Evaluate your applications for potential consolidation. Legacy applications running on older operating systems are a common first target, but virtualization and consolidation are moving rapidly toward the mid-tier and back-end of the datacenter.
- Understand the differences between various virtualization solutions. Cost, functionality and performance vary considerably, and value will differ depending on your IT and business environment.
- Look closely at the licensing and support policies of your software vendors. Licensing and support policies are in flux with respect to both virtualization and multi-core processors, and both issues can strongly impact the ROI of a consolidation project.
- Start small. Virtualization and consolidation involve new products, technologies, usage models and IT procedures, so a small pilot deployment is recommended before consolidating on a broad scale.
- Work with business units to manage expectations. Business decision-makers may be reluctant to run their applications on shared physical servers until they understand the benefits and safeguards.
- Beware of “virtual sprawl.” Virtual servers are extremely easy to deploy and provision, but don’t abandon all restraint. Every new virtual server introduces new OS and application instances, which can increase licensing, patching and general management costs.
- Consider blades as a complementary consolidation strategy. Blade servers help to consolidate and optimize physical infrastructure, while virtualization software optimizes the use of those physical resources. The combination can be especially effective.
- Integrate server consolidation with a broader consolidation strategy. Consolidation of facilities, storage and data is equally important, and provides a solid foundation for application consolidation on virtual servers.
- Develop a framework for continuous consolidation. Products and technologies are changing rapidly. Success will require careful planning, a long-term strategy, and an approach that comprehends benefits, risks and costs.
An Introduction to Server Virtualisation
Alan McSweeney
Virtualisation is a framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.
Typically, in order to virtualize, you would use a layer of software that provides the illusion of a "real" machine to multiple instances of "virtual machines". This layer is traditionally called the Virtual Machine Monitor (VMM) or “hypervisor”.
The hypervisor could run directly on the real hardware or it could run as an application on top of a host operating system.
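The trap-and-emulate mechanism underlying a VMM can be sketched as a dispatch loop: unprivileged instructions run directly, while privileged instructions (I/O, timers) trap to the VMM, which simulates them on the guest's behalf. A toy sketch with an invented instruction set, purely for illustration:

```python
# Toy trap-and-emulate sketch of a VMM dispatch loop. The instruction
# set and guest state are invented for illustration only.
PRIVILEGED = {"IO_WRITE", "SET_TIMER"}

class GuestVM:
    def __init__(self, name):
        self.name = name
        self.registers = {"acc": 0}
        self.io_log = []  # stands in for an emulated I/O device

def vmm_run(vm, program):
    for op, arg in program:
        if op in PRIVILEGED:
            # The guest runs in problem state, so a privileged
            # instruction traps; the VMM catches and simulates it.
            if op == "IO_WRITE":
                vm.io_log.append(arg)   # emulated device write
        elif op == "ADD":
            vm.registers["acc"] += arg  # unprivileged: runs directly
    return vm

vm = vmm_run(GuestVM("guest0"), [("ADD", 2), ("IO_WRITE", "hello"), ("ADD", 3)])
print(vm.registers["acc"], vm.io_log)  # 5 ['hello']
```

On real hardware the "trap" is a CPU exception rather than an `if` test, but the division of labour is the same: the guest never touches the device, only the VMM's emulation of it.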
Type 1 VMM (runs directly on the hardware): IBM CP/CMS, VMware ESX, Windows Virtualisation (2008), Xen, Virtual Iron. Stack: Hardware → VMM → Guest VMs.
Type 2 VMM (runs on a host OS): VMware Server. Stack: Hardware → Host OS → VMM → Guest VMs.
Hybrid VMM (VMM on the hardware, host OS in a privileged VM): MS Virtual Server, MS Virtual PC. Stack: Hardware → VMM → Host VM + Guest VMs.
Centralized Management:
- Reports: full set of reports, integration with the MOM database
- Actions: one click away in the context-sensitive Actions Pane
Self-Service Portal:
- Ability to control owned virtual machines
- Thumbnails of all owned virtual machines
Self-Service Portal provisioning:
- User selects from the list of templates the administrator has associated with that user
- New virtual machine ready for use; Terminal Services connection information automatically emailed to the user
Virtual Server 2005 vs Windows Server Virtualization

Feature | Virtual Server 2005 R2 | Windows Server Virtualization
32-bit VMs? | Yes | Yes
64-bit VMs? | No | Yes
Multi-processor VMs? | No | Yes, up to 8-processor VMs
VM memory support | 3.6 GB per VM | More than 32 GB per VM
Hot add memory/processors? | No | Yes
Hot add storage/networking? | No | Yes
Manageable by System Center Virtual Machine Manager? | Yes | Yes
Microsoft Cluster support? | Yes | Yes
Scriptable/extensible? | Yes, COM | Yes, WMI
Number of running VMs | 64 | More than 64; as many as hardware will allow
User interface | Web interface | MMC 3.0 interface
XenSource Products

Feature | Product 1 | Product 2 | Product 3
User profile | Developers, testers, support, IT enthusiasts | Windows IT professionals | Enterprise IT, system integrators
Windows guest support | Windows Server 2003; Windows XP; Windows 2000 Server | Windows Server 2003; Windows XP; Windows 2000 Server | Windows Server 2003; Windows XP; Windows 2000 Server
Linux guest support | Red Hat EL 3.6, 3.7, 3.8, 4.1, 4.2, 4.3, 4.4, 5.0; SUSE SLES 9.2, 9.3, 10.1; Debian Sarge | N/A (Windows guest support only) | Red Hat EL 3.6, 3.7, 3.8, 4.1, 4.2, 4.3, 4.4, 5.0; SUSE SLES 9.2, 9.3, 10.1; Debian Sarge
Live Migration | N/A | N/A | Mid-2007
Shared storage | N/A | N/A | Mid-2007
An enterprise ready native virtualisation platform
Uses hardware-assisted virtualisation technologies of Intel VT and AMD-V processors
Based on an open source hypervisor derived from the Xen open source project
No software need be installed on physical hardware
Virtual Iron Components

Component | Function | License
Hypervisor | First software loaded when the physical server boots; manages all hardware resources | GPL
Service Partition | Second software loaded when the physical server boots; manages virtual server creation and configuration and all I/O | GPL
Virtualisation Manager | Controls virtual servers through an agent in the service partition | Commercial
Guest operating systems | Operating systems that are fully virtualised on a physical server | Varies
Supported Configurations:
- Up to 16 virtual disks per virtual server
- Up to 5 virtual NIC adapters per virtual server
- Up to 5 virtual servers per physical server CPU
- Up to 96 GB RAM per physical server
- Up to 8 processors per virtual server
- Virtualised nodes: 100s per virtual data centre
- Processors: Intel Xeon with Intel VT; AMD Opteron with AMD-V
- Operating systems: 32- and 64-bit Red Hat Enterprise Linux 4; 32- and 64-bit SUSE Linux Enterprise Server 9; 32-bit Windows XP; 32-bit Windows 2003