Institute of Technology Tallaght,
Department of Computing
BACHELOR OF SCIENCE (Hons) in IT MANAGEMENT
Semester 8
Module: PROJECT
Project Title:
Benefits of switching server infrastructure to a private cloud
or virtualised environment for SMEs
Contents
Chapter 1 Project Proposal
  Introduction
Chapter 2 Literature Review
  Introduction
  Technology Description
  Virtualisation Products
  Potential Benefits
  Case Studies
  Conclusions
Chapter 3 Company Case Study
  Introduction
  Background
    Server Setups
    SAN Configuration
    Network Configuration
  Server Testing
    Server Scans
    Server Performance
    Hypervisor Overhead
    File Transfers
    Benchmarking Software Comparisons
    SiSoft's SANDRA 2013
    AIDA64 Extreme Edition
    NovaBench
    iSCSI Performance
    SQL Performance
  Pricing
  Conclusions
Chapter 4 Final Project Review
  Introduction
  Implementation and Testing
    Power Comparisons
    Issues Encountered
    Demonstrations
    Implementation
  Conclusion and Future Development
References and Bibliography
  Virtualisation Products
Appendices
Chapter 1 Project Proposal
Introduction
Company currently maintains 34 servers in 3 communications rooms at its main site
and an additional 11 servers located in its Disaster Recovery facility. These servers
are a combination of data, application and backup servers.
The following are the main reasons for considering an implementation of a
virtualised server environment.
• The primary reason is flexibility, both in server rollout and in providing server failover.
• The secondary reason is reducing communications-room power consumption.
• Company has an issue with rack space, as it has 3 full server racks with very little room for manoeuvre.
• Company also wishes to investigate any additional cost savings from a licensing viewpoint.
Please note that Company does not wish to migrate all of its existing servers to a
virtual platform. It does, however, wish to implement a virtual environment which
will work in tandem with standalone servers.
Company has traditionally implemented server builds on a server-by-server basis on
individual hardware platforms. This has resulted in a very stable server environment,
with each server performing its allotted role. As a strong emphasis is placed on
server failover redundancy, this has led to Company accumulating a large number of
production servers, which creates the issues outlined above.
Current status, gaps in understanding
Company currently possesses only limited knowledge of virtualisation throughout the
organisation.
Migrating to a virtualised environment is a major step. Anecdotal evidence suggests
that the virtual world offers far more flexibility and reliability than the hardware
world. Company needs to gain knowledge of, and exposure to, virtualisation. It is
expected that this would be fulfilled through books, training courses and practical
exposure to the technology.
Opportunity of new ideas to address gaps
Virtualisation is seen as a way of overcoming some of the problems facing Company,
but it is also in many ways a leap of faith. It can be difficult for experienced staff
who are used to a certain way of operating to change to a completely new
environment. Again, training, knowledge and exposure will help build up confidence
with the new environment. Documentation will also be vital as a way of keeping a
physical handle on virtual servers. Company may even need to introduce a different
naming convention for virtual servers to aid in identifying their location. It is
also envisaged that some form of visual aid will be required to allow Company to
visualise the virtual servers.
Specific statement of objectives
The objective of the project is to determine the tangible benefits of server
virtualisation by means of a case study. The four main objectives would be:
• Identify Virtualisation platform.
There are 2 virtualisation platforms which Company will be evaluating:
• VMware's vSphere 5 ESXi
• Microsoft's Hyper-V 2008
Citrix's XenServer, which is also a recognised virtualisation platform, will not be
considered due to a senior management decision to concentrate on the two main
providers in this field.
There will be a brief synopsis of the evaluation process of the two contenders,
along with the reasons behind Company's decision.
• Evaluation of Virtualisation Platforms
Company will use the following criteria in their virtualisation platform
selection process:
Performance
Cost
Ease of Use
Ease of Server Rollout
Ease of Adoption
The initial testing and comparisons will be carried out between Microsoft
Hyper-V 2008 and VMware's ESXi 5.0. Both hypervisors will be installed onto
HP ProLiant DL360 servers, each with 12GB of RAM and 4 x 146GB SAS disks in a
RAID 5 configuration. Virtual machines will be configured and hosted under both
platforms. The testing will consist of determining which hosting platform is better
from a performance viewpoint and also from a server management perspective.
Virtual machine performance will be evaluated under both environments. Stage 2 of
the implementation would then involve the installation of a basic iSCSI SAN
environment in conjunction with the chosen virtualisation software.
• Implement Virtualisation infrastructure consisting of redundant servers and SAN.
The expected outcome of the project is a cluster of virtual server hosts configured
in a failover arrangement. Both servers would have an iSCSI connection to a SAN.
The SAN is envisaged as a high-end storage device which would be RAID 5/6 based
and have redundant power supplies and redundant array controllers. This would
provide a sufficient level of redundancy to meet Company's needs. At a later stage
of the project, it is envisaged that a similar arrangement would be installed in the
Disaster Recovery Site. This configuration may utilise some of the older hardware
made redundant in the main office, giving Company the option of implementing
redundancy or replication between the SANs.
• Migrate application and data servers to the virtualised environment.
This would involve the selection of several of the existing physical servers which are
reaching end of life and provisioning them in a virtual form.
Company has identified its main data file server and a SQL server as the initial
targets for replacement. A second Exchange 2007 server is also expected to be
hosted in virtual form.
• Quantify the benefits of the change in terms of cost savings and flexibility.
This will include quantifying cost savings in relation to electricity and software
licences. It is also intended to gauge the effect of virtualisation on IT staff in
terms of exposure to new leading-edge technology. Company also expects to have
increased options with regard to server contingency: failover for the servers
themselves, plus additional redundant hosts on which Company can implement server
failover procedures.
Statement of Proposed Work
The proposed scope of work involves the installation of virtualisation software on a
new cluster of servers linked to a standalone storage device.
Company will then migrate selected applications and business functions to the new
virtualised environment. The sequence of tasks needed to achieve this goal is listed
below.
Sequence of tasks needed to achieve results
• Identify Virtualisation platform.
• Implement Virtualisation infrastructure consisting of redundant servers and
SAN.
• Migrate application and data servers to the virtualised environment.
• Determine any reduction in power consumption and any other tangible
benefits.
Identify Virtualisation platform.
There are 2 virtualisation platforms which we will be evaluating:
VMware's vSphere 5 and Microsoft's Hyper-V.
Microsoft’s Hyper-V
The Hyper-V role will be enabled on a Windows 2008 R2 64-bit platform. The server
specification was an Intel Pentium III Xeon 2GHz processor, 12GB RAM and 4 x 146GB
disk drives in a RAID 5 configuration.
Hyper-V requires an x64-based processor plus hardware-assisted virtualisation (HAV).
This is a BIOS-enabled feature which is available in newer servers.
Microsoft provides a Hardware Assisted Virtualisation detection tool which will check
whether your processor supports HAV and whether it is enabled.
This feature is available in the HP ProLiant G5 and G6 ranges of servers;
it is not available in the older ProLiant G4 range, for example.
To install Hyper-V, follow the instructions from the Microsoft Web site:
To install Hyper-V on a full installation of Windows Server 2008
1. Click Start, and then click Server Manager.
2. In the Roles Summary area of the Server Manager main window, click Add Roles.
3. On the Select Server Roles page, click Hyper-V.
4. On the Create Virtual Networks page, click one or more network adapters if you want to make their network
connection available to virtual machines.
5. On the Confirm Installation Selections page, click Install.
6. The computer must be restarted to complete the installation. Click Close to finish the wizard, and then click Yes to
restart the computer.
7. After you restart the computer, log on with the same account you used to install the role. After the Resume
Configuration Wizard completes the installation, click Close to finish the wizard.
Once installed, create your virtual machines. Again the following instruction is from
Microsoft:
To create and set up a virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. From the New Virtual Machine Wizard, click Next.
4. On the Specify Name and Location page, specify what you want to name the virtual machine and where you want to
store it.
5. On the Memory page, specify enough memory to run the guest operating system you want to use on the virtual
machine.
6. On the Networking page, connect the network adapter to an existing virtual network if you want to establish network
connectivity at this point.
Note
If you want to use a remote image server to install an operating system on your test virtual machine, select the
external network.
7. On the Connect Virtual Hard Disk page, specify a name, location, and size to create a virtual hard disk so you can
install an operating system on it.
8. On the Installation Options page, choose the method you want to use to install the operating system:
• Install an operating system from a boot CD/DVD-ROM. You can use either physical media or an image file
(.iso file).
• Install an operating system from a boot floppy disk.
• Install an operating system from a network-based installation server. To use this option, you must configure
the virtual machine with a legacy network adapter connected to an external virtual network. The external
virtual network must have access to the same network as the image server.
9. Click Finish.
VMware's vSphere 5
An HP ProLiant DL360 G6 was used to install the free version of VMware's vSphere ESXi
5.0 hypervisor. The server specification was an Intel Pentium III Xeon 2GHz processor,
12GB RAM and 4 x 146GB disk drives in a RAID 5 configuration. The installation process
for vSphere is to register for and download an ISO image of vSphere 5.0, create a
bootable CD and follow the setup prompts from the installation procedure. Once
installed, the server is accessible by its IP address. When you connect, you are
prompted to install the vSphere Client on your local machine; this is the main
administration console for vSphere.
• Implement Virtualisation infrastructure consisting of redundant servers and
SAN.
The planned implementation is for a cluster of virtual server hosts configured
in a failover arrangement. Both servers would have an iSCSI connection to a SAN. The
SAN is envisaged as an EMC device which would be RAID 5/6 based and have redundant
power supplies and redundant array controllers. This would provide a sufficient level
of redundancy to meet our needs, but a second stage of the project would see a
similar arrangement installed in our DRS. This would allow us the option of
implementing redundancy or replication between the SANs.
• Migrate application and data servers to the virtualised environment.
This would involve the selection of several of the existing physical servers which are
reaching end of life and provisioning them in a virtual form.
We have identified the main data file server and a SQL server as the initial targets
for replacement. A second Exchange 2007 server is also expected to be hosted in
virtual form.
• Determine any reduction in power consumption and any other tangible
benefits.
Techniques and data to be used
It is intended to document the installation process at each stage of the project.
Data on power usage before and after the implementation will be collected to gauge
some of the project's benefits.
Measure of success
Two measures of success would be improved server rollout times and reduced
server power consumption.
• Server RollOut.
Currently, if a new server is required, the following procedure must be followed.
Capital expenditure approval must be sought in order to purchase a new server.
Once approved, the server must be ordered via a hardware supplier; part of the
ordering process involves raising purchase orders. Once the hardware reseller has
obtained the server, additional memory, disks and power supplies, it is shipped to
our stores department. When the shipment arrives, it is assembled, a server image
is applied and the server is joined to our network. SQL and any additional software
are installed and the server is put into production. This is quite a lot of work,
but the main drawback is waiting for parts of the server to be ready; invariably
there are delays with some component of the order. In most environments, servers
are requested from the IT department by development staff who expect the server to
arrive the next day. The benefit of a virtualised system here is that it allows the
IT department to provision servers at very short notice.
• Power Consumption.
It is quite difficult to ascertain the amount of power being used by a server. Using
utilities provided by HP and a cost formula provided by the ESB, it was possible to
determine the approximate power consumption of the following types of server:
HP DL320G3, HP DL380G4, HP DL380G5
The ESB provided this Formula for Estimating Energy Consumption:
(Wattage × Hours Used Per Day) ÷ 1000 = Daily Kilowatt-hour (kWh) consumption
1 kilowatt (kW) = 1,000 Watts
HP provided a power calculation utility which allowed the calculation of the power
usage of the 3 servers as follows:
DL320 G3 consumes 170W of power
DL380 G4 consumes 400W of power
DL380 G5 consumes 300W of power
Putting these figures into the formula for the DL380 G4:
400W x 365 (days) x 24 (hours) / 1,000 x €0.20 (price per kilowatt-hour) = approx. €700
That is approximately €700 per DL380 G4 per year in electricity costs.
Performance
It is anticipated that once both Hyper-V and VMware are running, application
servers will be run under the hosts. Performance testing of the client machines will
be undertaken and comparisons made. Client virtual machines running MS SQL 2005/8
will be compared under both hosts. Performance measurements will also be taken of
the host machines while they run their client machines. Performance comparisons
will also be carried out between the virtualisation platforms and the existing
physical servers. These tests will include network file transfers and MS SQL
performance tests. It is hoped that this data will provide useful information with a
view to building a business case for implementing a private cloud infrastructure.
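As an illustration of how the MS SQL comparison could be timed consistently on the physical and virtual servers, the following is a minimal Python sketch using the pyodbc library; the server name, database and query are hypothetical placeholders and would be substituted for the actual test environment.

    import time
    import pyodbc  # third-party ODBC library; assumes a SQL Server ODBC driver is installed

    # Hypothetical connection details: substitute the physical or virtual server under test.
    CONN_STR = "DRIVER={SQL Server};SERVER=TESTSQL01;DATABASE=TestDB;Trusted_Connection=yes"
    QUERY = "SELECT COUNT(*) FROM dbo.TestTable"   # hypothetical test query

    def time_query(conn_str, query, runs=10):
        """Run the query several times and return the average elapsed seconds."""
        conn = pyodbc.connect(conn_str)
        cursor = conn.cursor()
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            cursor.execute(query).fetchall()
            timings.append(time.perf_counter() - start)
        conn.close()
        return sum(timings) / len(timings)

    if __name__ == "__main__":
        print(f"average query time: {time_query(CONN_STR, QUERY):.3f}s")

Running the same script against each host in turn gives directly comparable averages for the business case.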
Ease of Use and Migration
Several factors will be considered in this section: server installation under both
virtualisation platforms; ease of management and accessibility; server backup and
failover options; ease of server modification; and migration of virtual clients
between the two applications.
Adoption by Staff
Company's physical servers predominantly run Windows operating systems.
No virtualisation systems have previously been operational within Company. No
formal virtualisation training is planned initially. IT administrators will be provided
with the new virtualisation infrastructure details and will be asked for feedback in
relation to their choice of environment.
Conclusion
The initial project implementation is to configure introductory versions of the
virtualisation platforms from both Microsoft and VMware on identical HP ProLiant
hardware platforms. Comparisons between the two implementations will be used as a
basis for deciding which platform to progress further, and for gaining exposure to
the virtual server environment. This data will be used to convince senior management
of the viability of a private cloud environment in Company.
Chapter 2 Literature Review
Introduction
For the purpose of this assignment, virtualisation will be explained in detail. The
basic components of a virtualisation system will be examined with a view to
explaining its operation. Products from several vendors will be examined to see how
they compare, their main features will be explained and the reasons for product
selection will be outlined. The anticipated benefits that virtualisation could bring
to the server infrastructure of a small or medium-sized enterprise will be developed,
and a description of the testing environment will also be given.
Technology Description
Virtualisation has developed as a mainstream technology due to limitations of the
architecture of current microprocessors. The x86 computer architecture derives from
the original 8086 CPU. Operating systems running on the x86 architecture use a series
of rings to gain access to hardware resources. This design was meant to run a single
operating system and its associated applications on a single hardware platform, as
shown in Figure 2.1:
Figure 2.1 x86 Architecture Layout
In practice, this would mean each individual application or function being installed
on a separate server in order to avoid application conflicts: SQL Server would run
on one physical server, a web server on a second and file and print services on
another. The net result of this mode of operation is a proliferation of servers for
organisations, and these servers are underutilised. Studies by IDC Research have
shown that most servers are idle for 80% of their time (VMware 2012), and a
Microsoft technical case study from June 2008 determined that the average processor
utilisation figure for their data centre servers was 9.75% in September 2005.
Virtualisation allows operating systems to derive the maximum benefit from x86-based
hardware using a combination of technologies built into the latest processors, such
as Intel's Hardware Assisted Virtualisation (Intel 2012), and software virtualisation
products. This allows the creation of several virtual machines which run on and
share access to the underlying hardware. This offers the advantage of consolidating
individual server roles onto a single physical server, thus maximising use of that
server. The reduction in physical servers also provides cost savings with respect to
energy consumption in server rooms. Virtualisation is becoming a more mainstream
technology and is being widely adopted, with growth rates expected to continue over
the coming years (Computer Weekly, 2012).
Virtualisation is achieved by the addition of a layer of software called a
hypervisor, which runs directly on top of the hardware. The virtual machines run on
top of the hypervisor and have no direct access to the system hardware. The
hypervisor is responsible for providing access to the hardware and ensuring that no
conflicts occur. Each virtual machine has access to its own virtual processor,
memory, disk and I/O devices, and a virtual machine manager manages the concurrent
operation of the virtual machines, ensuring that conflicts do not occur.
The hypervisor act’s as the interface between hardware and the virtualisation
software. It achieves this by utilising the ring structure of the microprocessor. The
ring structure is implemented in a microprocessor in order to facilitate access to
system hardware and also to provide different levels of system access (Virtualisation,
2012). Ring0 is the inner most ring with direct access to hardware. This ring provides
the highest level of system access and is also known as kernel mode. Rings 1 and 2
X00077979 BSc (Hons) IT Management Project
Tuesday, 06 August 2013 Page 13 of 57
are typically used by the guest OS and device drivers and ring3 is used by
applications. A hypervisor runs in Ring0 as it requires full access to system hardware.
As the hypervisor has direct access to system hardware, it must be very efficient and
provide very good performance. Device driver’s use rings 1 and 2 to make system
kernel level calls to the hypervisor to request access to hardware. Applications run in
the outer layer 3 and make system requests through device drivers. There are two
different types of hypervisors. A type 1 hypervisor runs directly on the hardware of
the system. A type 2 hypervisor runs on top of an operating system which it requires
in order to run. Type1 hypervisors are used by main stream virtualisation
applications.
Currently there is no defined open standard for virtualisation. The Open
Virtualization Alliance promotes an open standard for virtualised machines using
kernel-based solutions; this is a Linux-based approach, and KVM is open-source
software included as part of the Linux kernel. The main virtualisation market
leaders develop their products using their own proprietary methods. There are three
different techniques for handling access to the processor kernel during
virtualisation:
• Virtualisation using Binary Translation
• Paravirtualisation
• Hardware Assisted Virtualisation
Binary Translation.
There are two aspects to binary translation. Firstly, calls to the kernel from guest
operating systems are translated at a binary level into virtualisation calls. Some
system calls from the guest operating systems cannot be virtualised, so they must be
intercepted and translated using binary translation; any system call from a guest
operating system that is directed at accessing an I/O device is redirected using
binary translation to avoid conflicts between guest operating systems. In order to
minimise the effect this translation has on system performance, the virtualisation
software inspects groups of instructions rather than individual instructions.
Secondly, user requests from applications running in the guest operating systems are
executed directly on the processor, and the net result is very high performance. The
guest operating systems are unaware that they are running in a virtualised
environment.
Paravirtualisation
This approach adds an additional virtualisation layer between the guest OS and the
system hardware. This is a paravirtualisation management module, and it operates in
conjunction with an operating system which has been modified slightly to work in a
virtual environment. The guest operating system makes calls to the virtualisation
layer via an API, which in turn communicates with the hardware. Because the entire
hardware is not being virtualised, paravirtualisation results in better overall
performance. The disadvantage of paravirtualisation is that some operating systems
may not be amenable to the modification required to run in a virtualised format.
Hardware Assisted Virtualisation
As early as 1974 it was recognised that hardware-assisted virtualisation techniques
offered the optimum means of implementing virtualisation on single servers. Computer
scientists Gerald Popek and Robert Goldberg produced an article in that year entitled
"Formal Requirements for Virtualizable Third Generation Architectures", in which they
define three properties for a hypervisor: efficiency, resource control and
equivalence (Popek-Goldberg, 1974). Since 2006 the latest processors from both Intel
and AMD have offered capabilities aimed specifically at virtualisation. These
processor enhancements allow for improved performance from virtualisation software
without modification of the guest operating system. This technology (Intel's
Virtualisation Technology (VT-x) and AMD's AMD-V) allows the processors to provide a
new root mode of operation which runs below ring 0. The advantage of this is that
system calls from the guest OS go directly to the hypervisor, removing the need to
perform either binary translation or paravirtualisation.
For the purposes of this discussion we will describe the operation of VMware,
Hyper-V and Citrix XenServer. For the purposes of the case study we will concentrate
on the VMware and Microsoft offerings, due to a constraint on hardware resources. We
will then look at where these products compare and where they differ.
Virtualisation Products
According to the Gartner Magic Quadrant for x86 Server Virtualisation Infrastructure
report (June 2011), 40% of x86 architecture workloads have been implemented in a
virtualised environment, and growth is expected to increase fivefold between 2010
and 2015. According to Gartner's report, the 3 main virtualisation products in
today's market are:
• VMware vSphere
• Citrix XenServer
• Microsoft Hyper-V
The Gartner figures for the past two years are reproduced here in Figures 2.2 and
2.3.
Figure 2.2 Magic Quadrant for x86 Server Virtualisation Infrastructure 2011
Figure 2.3 Magic Quadrant for x86 Server Virtualisation Infrastructure 2012
Over this two-year period, VMware remains the market leader, with Microsoft
increasing its market share. Oracle appears to be gaining the most momentum, moving
into the 'challenger' position. A key development in this market is that
virtualisation is now being adopted by SMEs as a viable and affordable option.
The three virtualisation products all use type 1 hypervisors, which run directly on
the underlying hardware.
VMware vSphere.
VMware is the current market leader in the virtualisation arena with approximately
80% market share, although it faces increasing competition, especially from
Microsoft, which is gaining market share on an annual basis (Market Share 2012).
VMware was acquired by EMC in 2004, which now provides seamless integration with the
EMC range of SAN storage devices (SAN 2012). It is a very mature technology in 2012
and has provided many innovations in the virtualisation area. The main drawback of
VMware products is their licensing fees, which are the highest among virtualisation
vendors.
VMware virtualisation allows you to run multiple virtual machines on a single
physical machine, with each virtual machine sharing the resources of that one
physical computer. This changes the traditional model of a single server running a
single application to a single physical server running three separate virtual
machines, as shown in figure 2.4:
Figure 2.4 VMware Architecture Layout
Each virtual machine can run its own separate applications.
Virtual machines can run a variety of different operating systems, e.g. Windows 2003,
Windows 2008 64-bit and Linux, and each of these virtual machines can in turn run its
own separate applications. The latest version of VMware's virtualisation software is
vSphere 5.
vSphere creates a fully functional virtual machine complete with its own separate
CPU, RAM, disk and network controller. This is achieved by an additional technology
layer called a hypervisor. The hypervisor is a virtual machine manager which
allocates hardware resources dynamically and transparently. The vSphere hypervisor
runs directly on the computer hardware without the need for an additional underlying
operating system; it was the first x86 hypervisor to do so. VMware vSphere 5 is the
latest incarnation of the vSphere product line, and its hypervisor, ESXi, is now in
its fifth generation. Previous versions of vSphere required a Linux-based Service
Console as an underlying element of the hypervisor. ESXi does not require this
service console and as a result has a very small footprint of around 100MB. The value
of this small footprint is that it reduces the attack surface for external threats
and lowers the number of patches required, resulting in a more stable environment.
ESXi replaces the service console with a small kernel, the VMkernel, which removes
the need for Linux skills to manage the server. On initial server startup, the
VMkernel is loaded and becomes the first virtual machine running under VMware. The
VMkernel manages the virtual machines' access to physical hardware: CPU, memory,
disk and I/O.
A significant difference between vSphere and its main rivals (Hyper-V and XenServer)
is in how vSphere handles I/O: the VMkernel handles I/O directly in vSphere, while
Hyper-V and Xen handle I/O indirectly through the parent partition. The advantage of
handling I/O directly within the hypervisor is greater throughput and lower overhead;
the disadvantage is more limited hardware compatibility. For example, once Windows
2008 supports a particular hardware type, that hardware is also supported by Hyper-V,
whereas for additional hardware support to be added to vSphere the hypervisor itself
must be updated, because this is where the I/O stack and device drivers reside. The
advantages of direct I/O far outweigh this. An indirect I/O model directs all virtual
machine network and storage traffic through the underlying OS, which can lead to all
of the virtual machines contending for the same OS drivers. It may also mean that
these drivers are not optimised for a virtualised environment. Figure 2.5 below
(reproduced from VMware) helps illustrate this difference.
Figure 2.5 Virtual driver comparison.
Having the drivers in the hypervisor allows for optimum performance, as any CPU or
memory resources required by virtual machines can be optimised by ESXi, and virtual
machines talk directly with external devices through the drivers. vSphere also allows
virtual machines to run different operating systems, and its memory management
techniques (which allow memory allocations to be changed) can be applied to all host
machines.
vSphere provides networking support to its virtual machine clients via virtual
networking, using virtual Ethernet adapters and virtual switches. Each virtual
machine is configured with at least one virtual Ethernet adapter. These operate in a
similar fashion to their physical equivalents: the virtual switch allows the virtual
machines connected to it to communicate with each other.
The structure of a VMware environment is a modular one which allows for a great deal
of flexibility. Figure 2.6, reproduced from VMware, represents this:
Figure 2.6 Virtual Environment.
This format allows for connection from a physical network card through to the virtual
network cards of each virtual machine. The virtual network cards are emulated
versions of an AMD Lance PCnet32 adapter (vLance) or a virtual device (e1000) that
emulates an Intel E1000 adapter.
The virtual switch is a key networking component in ESXi. Each ESXi server can host
248 virtual switches. There are three basic components to a virtual switch:
• Layer 2 forwarding unit
• VLAN unit
• layer 2 security unit
The virtual switch operates in the same functional manner as a physical switch.
vSphere’s storage virtualisation feature is an interface to physical storage devices
from the virtual machines. One way of gaining additional flexibility from a VSphere
environment is by providing shared storage on a SAN. A SAN is a dedicated network
which provides access to unified storage devices located on that network. A unified
storage device combines both file based and block based access in a single device.
Block level storage is a disk volume which an operating system connects to and uses
as a disk. File level storage is a method of sharing files with users. Using a SAN in
conjunction with vSphere allows the option of storing the virtual machine files in a
central location and for building in server redundancy to the VSphere hosts. SAN
storage is block based storage and vSphere makes this accessible using its VMware
Virtual Machine File System. This is a high performance file system which is designed
with multi server environments in mind. It is also designed for a virtual server
environment. vSphere provides various feature in relation to data storage
management.
vSphere manages the amount of space assigned to virtual disks using a mechanism
called thin provisioning. Thin-provisioned disks are virtual disks which start small
and grow as data is written to them; a thick disk, by comparison, has all of its
space allocated when it is created. This allows for more efficient storage management
by the host server. iSCSI support and performance have also improved in vSphere 5,
which now includes CHAP authentication between client and server. Existing virtual
disks can be increased on the fly as long as the guest OS supports this. Virtual
disks are stored as VMDK files. Site Recovery Manager is a very important feature of
vSphere 5: it uses a set of APIs which integrate SAN-based storage so that it can
handle virtual machine, host and storage failover, and it also manages array-based
replication. A separate set of APIs deals with data protection. vSphere 5 also
supports Fibre Channel over Ethernet, in which Fibre Channel frames are encapsulated
in Ethernet frames, allowing Fibre Channel traffic to use 10Gb Ethernet networks.
vSphere 5 requires a separate licence per system processor.
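As a toy illustration of the thin-provisioning idea described above, the following Python sketch creates a sparse file whose logical size is far larger than the space actually allocated on disk, loosely analogous to a thin-provisioned virtual disk that grows as data is written. It assumes a Unix-like filesystem that allocates blocks lazily and is purely illustrative; it is not how ESXi implements VMDK files.

    import os

    path = "thin_disk.img"
    with open(path, "wb") as f:
        f.seek(10 * 1024**3 - 1)   # seek to just under 10 GB
        f.write(b"\0")             # logical size becomes 10 GB, but almost nothing is allocated

    st = os.stat(path)
    print("logical size (GB):", st.st_size / 1024**3)
    # st_blocks is only reported on Unix-like systems; each block is 512 bytes
    print("allocated on disk (MB):", st.st_blocks * 512 / 1024**2)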
Citrix Xen Server.
Citrix Systems' original product lines were based around client-server remote desktop
offerings. Recognising that virtualisation was an area of opportunity, Citrix
acquired XenSource Inc in 2007 and incorporated the XenSource products into the
Citrix range of products. XenServer is the virtualisation offering from Citrix and is
based on the Xen open-source hypervisor. Xen originated from a Cambridge University
research project which resulted in the formation of XenSource Inc.
XenServer 6.0 is the current release. XenServer runs on a version of Linux which has
been modified by Citrix. In a Xen-based system, the hypervisor has the most
privileged access to the system hardware. On boot-up the hypervisor launches an
initial guest operating system called Domain 0, which has full access to all system
hardware. Any virtual machines which run under Xen are referred to as user domains.
User domains can run unmodified copies of an operating system if the underlying
processor supports Intel VT-x or AMD-V; if the processor does not have this
functionality, Xen runs a paravirtualised version of the operating system.
XenServer allows the creation of virtual machines and the running of these on a
single host server. The XenMotion feature allows live migration of virtual machines
between XenServer hosts to facilitate maintenance or to address performance issues.
Citrix offers very competitive pricing for XenServer while it provides much of the
functionality of VMware. There is also a free version that provides a virtualisation
platform with a reasonably large amount of functionality.
Hyper-V.
Microsoft has been gaining market share over the past two years according to the
Gartner Magic Quadrant report, and it is currently placing a strong emphasis on
Hyper-V and cloud computing. Hyper-V is Microsoft's hypervisor and is a component of
Windows Server 2008. It is only available in 64-bit versions of Windows 2008, but it
does allow the creation of both 32-bit and 64-bit virtual machines. Each Hyper-V
virtual machine is independent and isolated from the others; virtual machines may
also have multiple processors assigned to them and up to 64GB of RAM. Hyper-V is a
type 1 hypervisor and runs directly on top of the underlying hardware. A parent
partition runs the Windows 2008 server component, which contains the management
software for managing virtual machines; these machines run in separate child
partitions, as displayed in figure 2.7:
Figure 2.7 Hyper-V Operation
Virtual machines are called partitions in the Hyper-V environment. The initial
partition, the parent partition, must run Windows Server 2008. Subsequent partitions
can run other supported operating systems (Windows 2003, Windows 2008, NT and Linux).
The hypervisor does not contain any drivers; these are hosted in the parent
partition, the aim being to provide a more secure environment. An MMC snap-in running
in the parent partition allows administration of the partitions. Microsoft defined
the virtual hard disk (VHD) format to provide storage for virtual machines; VHDs are
files, but virtual machines see them as physical disks. On systems running Hyper-V it
is recommended to run Windows 2008 in its Server Core mode, which offers a smaller
set of system functions and therefore a reduced attack surface. Using the Virtual
Desktop Infrastructure, Hyper-V can be used to provide a Windows client desktop to
any client that supports RDP. This can provide a very flexible means of providing
secure desktops to clients which run different functions.
The network architecture of Hyper-V is similar in format to ESXi and is displayed in
Figure 2.8
Figure 2.8 Hyper-V network architecture (from Virtuatopia).
The above diagram displays the path from the external physical network through to
the virtual network adapters in the client virtual machines. The parent partition
hosts Windows 2008 server which in turn runs a Virtual Network switch. Each virtual
machine running in a separate child partition connects to the virtual network switch
via its virtual network adapter.
Hyper-V has several key features which are worth mentioning. Dynamic Memory allows
memory to be dynamically reallocated between different virtual machines depending on
their workload. Live Migration allows virtual machines to be moved to a different
host which can offer them better performance; virtual machines can also be moved for
maintenance purposes. Virtual machines running under Hyper-V are also designed to
consume less power than their physical counterparts, which is achieved by cooperation
between Hyper-V and features built into later-model processors. Hyper-V also utilises
Second Level Address Translation, a feature which reduces memory and CPU utilisation
for virtual machines. Networking support has also been improved through support for
jumbo frames, meaning that network throughput is increased for virtual machines as
frames of up to 9014 bytes can be processed.
Potential Benefits
There are several potential benefits which virtualisation can bring to an
organisation. Server rollout times can be measured in hours rather than weeks.
Without virtualisation, server provisioning is a complicated process involving
raising POs, ordering hardware, assembling hardware, racking and cabling the server
and then installing an OS along with patches; such a timeframe can extend to weeks.
With virtualisation, server templates can be maintained which include the OS and a
database application, and these can be rolled out in a timeframe of hours.
Once servers are in production, vSphere and Hyper-V can provide a level of security
in relation to application updates. Prior to any application update being applied to
a production server, a pre-installation snapshot of the production server can be
taken, allowing you to roll back to a known working environment in the event of any
application problems.
Other benefits lie in the area of cost savings. Reducing a company's server count
leads to lower electricity consumption, lower cooling requirements and less money
spent on new server provisioning. According to the VMware website (VMware 2012),
cost savings can be made as follows:
• Increase hardware utilization by 50-70%
• Decreased hardware and software capital costs by 40%
• Decreased operating costs by 50-70%
VMware also estimates that the server administration burden is reduced, which frees
up time for administrators to perform more productive work. Another area of potential
benefit is the provisioning of older OS implementations. If a company has migrated to
Windows 7 as its standard desktop platform, there may be situations of backward
hardware compatibility which require an older OS (XP or NT).
In this case, an XP image can be implemented on the VMware server and accessed as
a virtual host by the relevant users. Similarly, new operating systems like Windows 8
can be provisioned and made available across the network. This can be very beneficial
for developers, who can test software without having to wait for new tablet platforms
to arrive.
Benefits are also expected to accrue in the areas of energy saving and efficiency in
the temperature management of comms rooms. Temperature readings will be taken from
each comms room and, as servers are retired and replaced with virtual equivalents, a
measure of the saving may be gauged.
The initial testing environment will consist of two HP ProLiant DL360 G6 servers,
each with an Intel Xeon 2GHz E5504 processor, 12GB of RAM and 4 x 146GB SAS disk
drives in a RAID 5 configuration. One server will host VMware's ESXi software while
the second will host Windows 2008 R2 with the Hyper-V role enabled. It is hoped to
introduce a basic-level SAN into the testing environment at a later stage. Windows
2008 R2 clients running MS SQL 2008 will be deployed under both environments, and
comparisons will be made regarding SQL and server performance. Feedback from network
administrators regarding their virtualisation preferences will also be sought. It is
also hoped to introduce an iSCSI SAN storage appliance which will be used for storing
virtual machines and providing data access; the simplicity of access to this device
will be recorded.
Case Studies
There have been several case studies carried out in the area of server virtualisation
implementations (Northern Ireland Water, 2012 and Dell, 2012), and these claim great
savings and efficiencies for organisations. The aim of this study is to document the
implementation of the virtualisation system, to provide the reasons for the decision
on choosing a virtualisation platform, and to explore ease of use and integration
with a basic SAN.
Conclusions
The apparent benefits of virtualisation to SMEs seem to outweigh the risks associated
with upgrading a company's server infrastructure to a virtualised environment.
According to a survey from the Corporate IT Forum, virtualisation is now a mainstream
technology that people are comfortable with; it is hoped that the case study will
provide some evidence to confirm this. There is currently a strong business emphasis
on cost savings, and this is pushing virtualisation to the fore as a means of
reducing IT costs. This cost reduction also comes with the added benefit of
infrastructure modernisation, and the cost savings will become more apparent over the
years of the implementation. There are a variety of virtualisation options available
from many different vendors, with different pricing schedules for different needs. It
is recommended that IT departments investigate the entry-level virtualisation options
for their needs as a way of increasing their exposure to a virtualised environment.
Running a virtualised environment offers many challenges with regard to embracing new
technology and managing virtual servers; the ease of server rollout can lead to
server sprawl, and efficient use of virtual server management tools is vital to a
successful implementation of virtualisation.
Chapter 3 Company Case Study
Introduction
The purpose of this chapter is to compare and contrast the performance of Microsoft's
Hyper-V virtualisation platform with that of VMware's ESXi equivalent. Server
virtualisation appears to offer many advantages for businesses, which include the
following:
• Consolidation of servers
• Improved response time to business requirements
• Sharing of computing resources to improve utilization
• Improving Flexibility, High Availability and Performance
• Reduced power consumption
This chapter discusses the implementation and design of both virtualisation
platforms. It compares and contrasts both platforms with a view to identifying the
virtualisation application most suited to providing the platform on which to base a
virtualisation strategy.
Background
In order to evaluate which virtualisation application is best suited to Company, two
identical servers have had the virtualisation software from Microsoft and VMware
installed. The hardware specification of the servers is as follows:
• Server: HP ProLiant DL360 G6
• Processor: Intel Xeon 2GHz Quad Core E5504
• RAM: 12GB
• Network Cards: Built-in HP NC382i DP Multifunction Gigabit Server Adapter;
PCIe: HP NC382T PCIe DP Multifunction Gigabit Server Adapter
• Storage: Internal: 4 x 146GB SAS disks (RAID 5 configuration);
External: 1 x 16TB iSCSI Synology Storage Area Network
The basic network configuration used for this implementation is as shown in Figure 1:
Figure: 1 Network Configuration
This provides a layered network environment, with the storage device located on an
isolated SAN network and the virtualisation servers located on the production LAN.
Using this configuration, we will endeavour to configure both the ESXi server and
the Hyper-V server as similarly as possible. We will then run some comparative tests
to compare the performance of both systems. Some of the testing will include:
• Network file transfers
• Server scans
• Server start-up times
• SQL operations
The server testing inventory will consist of the following servers listed in Table1:
Name                      RAM    Role         OS                           Hosted On
Physical Server A (PSA)   4GB    File Server  Windows 2003 Server          n/a
Physical Server B (PSB)   4GB    SQL Server   Windows 2003 Server          n/a
Virtual Server A (VSA)    -      File Server  Windows 2008 Server R2 64B   Hyper-V
Virtual Server B (VSB)    -      SQL Server   Windows 2008 Server R2 64B   Hyper-V
Virtual Server C (VSC)    -      File Server  Windows 2008 Server R2 64B   ESXi
Virtual Server D (VSD)    -      SQL Server   Windows 2008 Server R2 64B   ESXi
Table 1: Server Descriptions
This should allow for a comprehensive range of data from both SQL and file servers
for comparison.
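As an indication of how the network file transfer tests could be timed and recorded, the following is a minimal Python sketch; the source file and the UNC path of the share on the server under test are hypothetical placeholders.

    import os
    import shutil
    import time

    SOURCE = r"C:\TestData\testfile.bin"     # hypothetical local test file
    DEST = r"\\VSA\TestShare\testfile.bin"   # hypothetical share on the server under test

    size_mb = os.path.getsize(SOURCE) / 1024**2
    start = time.perf_counter()
    shutil.copyfile(SOURCE, DEST)            # copy the test file across the network
    elapsed = time.perf_counter() - start
    print(f"copied {size_mb:.0f} MB in {elapsed:.1f}s ({size_mb / elapsed:.1f} MB/s)")

Repeating the same copy against each physical and virtual file server gives comparable throughput figures.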
Server Setups
To install Hyper-V on a server, you need to obtain and install the Windows 2008 R2
64-bit operating system. The system requirements for Hyper-V from Microsoft are as
follows:
• Processor: x64 compatible processor with Intel VT or AMD-V technology
enabled.
• Hardware Data Execution Prevention (DEP), specifically Intel XD bit (execute
disable bit) or AMD NX bit (no execute bit), must be available and enabled.
• Minimum CPU speed : 1.4 GHz; Recommended: 2 GHz or faster
• RAM : Minimum: 1 GB RAM; Recommended: 2 GB RAM or greater (additional
RAM is required for each running guest operating system); Maximum 1 TB
• Available disk space : Minimum: 8 GB; Recommended: 20 GB or greater
(additional disk space needed for each guest operating system)
• DVD ROM drive
• Display : Super VGA (800 × 600) or higher resolution monitor
• Other : Keyboard and Microsoft Mouse or compatible pointing device
It is important to note that a 64-bit processor with the virtualisation technology
enabled is required; Hyper-V only runs under a 64-bit operating system.
The server is configured using the HP SmartStart software, which allows configuration
of the disks into a RAID 5 disk array. Once Windows has been installed and updated
with the latest security fixes, you must enable the Hyper-V role. Installing this
role requires a server reboot. Upon reboot, the Hyper-V role is enabled and loads a
type 1 hypervisor before the Windows 2008 operating system, in true type 1 hypervisor
fashion. This allows Hyper-V to manage access to the underlying hardware for the
client virtual machines. Hyper-V functionality is based on the concept of a parent
partition and multiple child partitions, one for each individual virtual machine. The
existing Windows 2008 host operating system is placed into the parent partition, and
each of the host server's virtual machines is placed into a child partition. In order
to provide this functionality, the host OS has its physical NIC unbound from the
TCP/IP protocol. This provides an interface layer between the physical NIC and the
Hyper-V software, known as a virtual switch. This connection between the NIC and the
virtual switch is the only connection between the physical hardware and the
virtualisation software. The original NIC has only one network protocol bound to it,
the MS Virtualisation Network Switch Protocol. Each child partition has its own
individual virtual NIC, and these virtual NICs communicate with the parent partition
using the MS VNS protocol. The virtual switches operate in a similar fashion to a
physical Ethernet switch, providing connectivity between the virtual hosts on the
Hyper-V server, as shown in figure 2.
Figure 2: Hyper-V Network Architecture
In order to examine all of the NICs in the Hyper-V server, go to Start > Control
Panel > Network and Sharing Centre > Change Adapter Settings. This will display all
of the NICs in the server. If you examine the network properties of the original NIC,
you will see that it only has the MSVNS protocol bound to it, as shown in figure 3:
Figure 3: Network Bindings of Original NIC
Installing the Hyper-V role creates an additional NIC which inherits the original
NIC's protocol bindings.
To make the network card available to virtual machine clients, you need to configure
it on the Hyper-V server. This is done via Start > Administrative Tools > Hyper-V
Manager, then selecting Virtual Network Manager. From there you create a new virtual
network which is associated with a physical NIC; this also creates a Microsoft
virtual switch for each physical NIC in the server. This step is necessary in order
to provide access to a virtual NIC for any virtual machines created, and it allows
for great flexibility when setting up virtual machines, as they can be assigned to
different virtual networks. On the virtual machines, each virtual NIC connects into a
virtual switch, and the virtual switch you connect to determines the other machines
in your virtual network with which you can communicate. These NICs have the
functionality of a standard physical NIC and are identified as a Microsoft Virtual
Machine Bus Network Adapter. Virtual machines operate in the same manner as their
physical counterparts.
The Virtual Network Manager allows for the creation of three different categories of
virtual networks:
• External virtual network
• Internal virtual network
• Private virtual network
An external virtual network is used to allow hosted virtual machines to connect with
other network resources located on the Company network. Virtual server clients can
also communicate with other virtual clients on the host server.
An internal virtual network is an isolated network. It is used to allow virtual hosts
to communicate with each other on their own private network and is typically used as
a testing environment. This network is not bound to the host server's physical NIC,
thus providing isolation.
A Private virtual network is a network type that only allows communication between
virtual hosts on the same virtualisation server. It provides isolation from all other
network traffic on the host server, including any management traffic to the host
server.
Each virtual NIC on a server connects into a Virtual Switch on the host server. The
virtual switch is enabled once the Hyper-V role is installed and is a fully functional
layer 2 switch. The switch supports the NDIS specification and the Windows Filtering
Platform callout drivers, which allows software vendors to create plug-ins that
provide enhanced security and networking capabilities.
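For administrators who prefer to script this configuration, later Hyper-V releases (Windows Server 2012 onwards) expose the same functionality through the Hyper-V PowerShell module. The following sketch is illustrative only and assumes a physical adapter named 'Ethernet 1'; Windows 2008 R2 itself relies on the Virtual Network Manager GUI described above:
# Create an external switch bound to a physical NIC, sharing it with the parent partition
New-VMSwitch -Name "External-LAN" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
# Create an internal switch (virtual machines plus the host) and a private switch (virtual machines only)
New-VMSwitch -Name "Internal-Test" -SwitchType Internal
New-VMSwitch -Name "Private-Isolated" -SwitchType Private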
SAN Configuration
Virtual server images vary in size depending on the virtual server's configuration.
Most servers require a virtual hard disk on the order of gigabytes in size. In order to
provide sufficient storage capacity for these virtual machine images, it is necessary
to provide additional storage space. This can be implemented using a basic iSCSI
Storage Area Network. An iSCSI SAN offers many advantages over a traditional Fibre
Channel SAN implementation, including an easier installation process, simplified
maintenance and a more cost effective solution for SMEs. The Small Computer
System Interface (SCSI) is a means of attaching additional storage devices to server
hardware. It defines a set of commands which govern file transfer and the protocols
associated with that transfer. iSCSI is a mechanism for transferring these commands
over TCP/IP, thereby extending the range and flexibility of access to storage media.
A SAN is a means of providing extensible storage for servers while offloading some of
the processing associated with providing this storage from the server to the SAN.
Performance is also improved by providing a separate storage LAN for the storage
devices.
A SAN provides many benefits for the provisioning of shared storage. It can allow for
the consolidation of storage resources into a common pool. It is very scalable and
storage can be resized to meet specific requirements. The SAN can also be managed
remotely and provide a single point of control. Storage can also be shared among
different servers providing consolidation and also a reduction in backup time. Backup
traffic can also be confined to the SAN network. The SAN storage provided to remote
servers also appears as locally attached storage. The server considers the disk in the
same fashion it would a local drive. This allows for a lot more flexibility with regard
to disk management for the servers. SAN networks are traditionally implemented
using Fibre Channel, but this is considerably more expensive than the IP-based
solution implemented in our environment. It could be considered for implementation
at a later stage.
There are different types of storage media available for servers to access, ranging
from internal disks to external disk arrays. SCSI stands for Small Computer System
Interface and is a means of attaching additional storage to a computer. SCSI is the
main protocol used for communication between servers and storage, which is
normally internal to a server. The SCSI commands allow access to this storage and
operate at block level, which is low level disk access. As far as a server is
concerned, the disk appears as if it is located within the server rather than at a
remote location. iSCSI is a mechanism whereby the SCSI commands are encapsulated
within TCP/IP packets. This means that SCSI Command Descriptor Blocks (read/write
commands) are sent over the IP network in IP packets to their destination. This
allows servers to connect to SAN storage anywhere on the IP network, thus providing
great flexibility in SAN provisioning. The remote storage devices appear to the server
as internally attached devices.
The storage solution implemented by Company was a Synology Rackstation running
DSM 4.0. This is a rack mountable unit capable of holding 10 x 2TB disks. These are
configured on the device as a RAID6 array, which dedicates two disks' worth of
capacity to parity: (10 − 2) x 2TB gives 16TB of raw capacity, which, once binary
units and formatting overhead are accounted for, corresponds to the approximately
14TB usable volume configured on the unit. The RAID6 array tolerates the failure of
two physical disks. This is an important consideration given the large size of the
volume; in the event of a single disk failure, the time involved in rebuilding the
array could be long enough for a second disk failure to occur.
As the SAN will be storing virtual machines, maximum uptime is a requirement.
Apart from the RAID6 configuration, any SAN implementation should also contain a
dual PSU configuration for added resilience. Consideration should also be given to
SAN failover: a cheaper backup SAN device could be added at a future date, allowing
for the replication of server images between the two SAN devices.
The Synology Storage Manager is displayed in Figure 4 and it shows a number of
important SAN concepts. This is the tool used to create and manage storage areas on
the SAN device.
Figure 4: Storage Configuration
The iSCSI Target tab allows us to create individually named and accessible storage
blocks. The size of these units is specified when they are created. In this instance we
have created 3 x 500GB logical storage units which reside on the physical SAN. Each
of these logical devices is identified by a Logical Unit Number (LUN). The LUN is a
field within SCSI that identifies the logically addressable unit within a target SCSI
device. Each LUN represents an individually addressable storage area and can be up
to 64 bits in length. The iSCSI name is an independent identifier which remains
constant for the lifetime of the storage unit. The associated iSCSI address consists of
the iSCSI name, IP address and TCP port number. Depending on the iSCSI Target
software, security settings can specify which remote server IP addresses are allowed
to connect to each LUN. CHAP authentication may also be enabled, which requires
the remote host to supply a password in order to connect. The remote server
connects to the LUN using an iSCSI Initiator. This is a software client which runs on
servers and is built into Windows 2008 R2. The iSCSI Initiator uses the NIC and its
network stack to emulate SCSI devices. Once the iSCSI Initiator has connected to the
remote iSCSI LUN, it treats the storage area the same way it would a locally attached
disk and is able to format and manage the iSCSI LUNs. Once the iSCSI Target has
been connected to by the iSCSI Initiator software, the storage area appears in the
local disk management console, where it is identified under disk properties as an
'iSCSI Storage SCSI Disk Device'. There are a variety of iSCSI initiator
implementations available for a host of different operating systems.
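As a brief illustration, the same connection can be made from the Windows 2008 R2 command line using the built-in iscsicli.exe interface rather than the graphical iSCSI Initiator applet. The portal address and IQN below are placeholders rather than the values used in this environment:
iscsicli QAddTargetPortal 192.168.10.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.2000-01.com.synology:RackStation.San3
The QAddTargetPortal command registers the SAN as a discovery portal, ListTargets displays the IQNs it advertises, and QLoginTarget establishes the session so that the LUN appears in Disk Management.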
If we run the iSCSI Initiator software on the Hyper-V host server, we see the
associated iSCSI targets, as displayed in Figure 5:
Figure 5: iSCSI Targets
These targets correlate directly to the storage volumes created on the SAN device.
The status is also displayed, and we can see that 'San3' is connected. This provides
what appears to be a locally attached 1TB storage disk for the Hyper-V server. The
iSCSI clients are identified by an IQN, a unique string that identifies each iSCSI
initiator. The Discovery tab allows you to specify remote SAN devices on which
iSCSI Targets have been enabled for discovery. The iSCSI Target software on the
Synology unit allows a CHAP password to be set; this password is then required
before the remote iSCSI Initiator software can make a connection. Other iSCSI target
applications allow specific remote IP addresses to be nominated as clients. Both
of these options should be considered in order to provide a secure operating
environment.
The ESXi server also contains an iSCSI Initiator. This is found under the ‘Storage
Adapters’ tab in ESXi. The SAN storage device IP address can be added to the list of
iSCSI devices. This provides us with a listing of the same storage areas on our SAN as
shown in Figure 6:
Figure 6: ESXi iSCSI Targets
To add any of these devices, we need to select the 'Storage' tab, choose 'Add
Storage' – Disk/LUN, and select the LUN of the storage area we require. It will then
be displayed as a locally attached datastore under Storage. Virtual machines and
operating system source files may then be stored there.
Network Configuration
Once the SAN disk array has been created, it is necessary to make the SAN available
to network servers. For maximum efficiency it is best to utilise gigabit switches and
network cards. On the host servers, it is also advisable to enable NIC teaming, where
two individual NICs are combined into one virtual NIC using software from the NIC
vendor. The result is better performance and NIC resilience. On our Hyper-V server
this was achieved using the HP Network Configuration Utility. On the ESXi server it
was enabled by using the vSphere Client to connect to the ESXi server and then going
into Configuration – Networking. This allows you to view and configure networking
options for the server, as shown in Figure 7:
Figure 7: ESXi Network Configuration
To create a NIC team, select the properties of the vSwitch, then 'Edit', and then
select the 'NIC Teaming' tab. This creates a redundant network pair for the ESXi
vSwitch.
By selecting each switch, additional NICs can be added or removed as required. This
interface also displays, in a very clear fashion, the virtual switches which are
configured on the host server. In our configuration we have two virtual switches,
one for the production LAN and the second for the SAN network. The physical switch
serving the SAN network is currently only operating at 100Mbps, which is not ideal;
it is envisaged that it will be upgraded to a redundant gigabit switch at a later date.
In order to provide access between the SAN network and the LAN network, each
server will require an additional NIC which will connect into the separate SAN
switch. Ideally this will be a dual-port NIC. Both of our servers had an HP NC382T
PCIe DP Multifunction Gigabit Server Adapter added; this was teamed using the HP
software and connected to the SAN switch.
Another feature that should be enabled on the network is jumbo frames. The
standard Ethernet payload size is 1,500 bytes; enabling jumbo frames increases this
to 9,000 bytes. This should result in increased performance for Windows servers, as
an NTFS block of 8,192 bytes can then be carried within a single frame. Jumbo
frames offer lower overhead, better network performance and higher throughput. On
an HP ProLiant server, the jumbo frame size can be set using the HP Network
Configuration Utility. Be aware that your SAN device and any switches in the path
must also be configured for jumbo frames, or large frames will be dropped when
communicating with the host server.
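A simple way to confirm that jumbo frames are working end-to-end is to send a large packet with the 'don't fragment' flag set; a size of 8,972 bytes allows for the 28 bytes of IP and ICMP headers within a 9,000 byte frame. The address below is a placeholder for the SAN interface:
ping -f -l 8972 192.168.10.10
The equivalent test from the ESXi console uses vmkping:
vmkping -d -s 8972 192.168.10.10
If either test reports that the packet needs to be fragmented, a device in the path has not been configured for jumbo frames.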
It is normally considered best practice to keep iSCSI traffic on a separate network
from normal production data. There are several reasons for doing this. Having the
SAN devices on a separate network offers additional security options, both in terms
of restricting access to the SAN and in protecting the SAN devices from any malware
which may infect production machines.
To maximise performance for the SAN network, a gigabit Ethernet switch should be
deployed and configured in a redundant failover manner to maximise uptime. All
ports should be configured for full duplex operation.
Another reason for SAN network isolation is to segregate the iSCSI traffic itself. iSCSI
traffic is unicast in nature and has the potential to utilise the entire link.
Once the SAN is configured, a software service called an iSCSI Target is run on the
SAN. This allows the SAN storage devices to be identified by means of a Logical Unit
Number. The remote servers run the iSCSI Initiator software to locate and connect
to the storage areas on the SAN. iSCSI traffic is also generally sent in clear text
(CHAP provides authentication rather than encryption, and IPsec would be required
to encrypt the traffic), so having it on a separate network segment will enhance
security.
Server Testing
Server Scans
As part of the testing, comparisons were made against a production physical server,
PSA. This server runs the Windows 2003 32-bit operating system and is a DL380 G5
with 4GB of RAM. A malware scan was run on the physical server and completed in
1 hour 32 minutes, as shown in figure 8:
Figure 8 – Malware scan result on physical server
The data stores from this server were replicated to a virtual server (VSA) running on
the Hyper-V host. VSA was assigned 6GB of RAM and runs the Windows 2008 R2 64-bit
OS. The same scan was run on this virtual server and its scan time is shown in
figure 9:
Figure 9 – Malware scan result on virtual server
This is considerably faster than on its physical equivalent. We do, however, have to
take into consideration that the virtual server has 6GB of RAM compared to the
physical server's 4GB.
Server Performance
The following categories will be used for evaluating server performance.
• Hypervisor Overhead
• File Transfers
• Benchmarking Software Comparisons
• iSCSI Performance
• SAN Connection Performance
• SQL Performance
Hypervisor Overhead
One operational issue, flagged by a disk alert on the host Hyper-V server, was that
free disk space had dropped to 159MB on its D: drive. This drive is
where the virtual machines it hosted were stored. Each virtual machine has one or
more virtual hard disks associated with it, and the location of these files is specified
within Hyper-V Manager. The Virtual Machines folder also contains a 'Snapshots'
folder which holds the online backups associated with each virtual machine. As a
result of the lack of free disk space, the Hyper-V server was no longer able to run the
hosted virtual machines and placed them into a 'paused' state. These virtual hosts
were no longer accessible from the network. Once disk space was cleared on the
host server, they could be restarted and resumed operation.
Upon monitoring the folder which stores the snapshot data, it was found that this
folder contained several large .AVHD files which grew each evening as shown in
figure 10 below.
Figure 10: AVHD File Growth
You will note that the .VHD files remain static but the .AVHD files grow in size. This
is because the snapshot information relating to server changes is saved in .AVHD
format, which records the differences between the current state and the associated
.VHD file. Each .VHD file with a snapshot therefore has an Automatic Virtual Hard
Disk (AVHD) file associated with it. Microsoft advises not to delete the AVHD files
directly but to use Hyper-V Manager to delete the entire snapshot from the snapshot
tree. There do not appear to be many configurable options relating to managing
snapshots in the Hyper-V console, and the Microsoft website does not have a lot of
readily available information on the subject. In order to remove the AVHD files I
found I had to use the following procedure:
• Delete the Snapshot via Hyper-V Console
• Turn Off the virtual machine (not Shutdown) via Hyper-V Console
• Leave the virtual machine off long enough for the merge to complete. You need to scroll
across the Hyper-V Console in order for this status information to be displayed:
Figure 11: Snapshot Merge Process Status
Once this merge process completes, the AVHD file is removed from the disk, thus
freeing up disk space. The time involved for this operation relates to the size of the
virtual machine and will have to be factored in for production machines. It does again
highlight the need for providing sufficiently large storage areas for the virtual
machines at the setup phase.
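For reference, later Hyper-V releases (Windows Server 2012 onwards) expose snapshot management through the Hyper-V PowerShell module, where deleting a snapshot triggers the merge of the .AVHD files automatically. The virtual machine name below is illustrative:
# List the snapshots for a virtual machine, then delete them and merge their .AVHD files
Get-VMSnapshot -VMName "VSB"
Get-VMSnapshot -VMName "VSB" | Remove-VMSnapshot
On Windows 2008 R2 the manual procedure above remains necessary.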
The snapshot process for ESXi works in a similar fashion to Hyper-V: right-click the
server and select 'Take Snapshot'. The snapshot data is stored as three files – a
virtual disk, a virtual machine log file and a snapshot file – which are stored in
the same folder as the original virtual machine.
While not directly related to virtualisation, it was noted that on the Windows 2008
R2 64-bit OS, on both the client and the host, a Windows folder called 'winsxs' grows
to several gigabytes in size. In order to reduce its size, the following command was
run:
DISM.exe /online /Cleanup-Image /spsuperseded
This resulted in an increase in free disk space of 2.3GB on the C: partition of a host
machine. The winsxs folder contains various components of the OS, and the DISM
command removes the backup files created during service pack installation. As part
of routine server maintenance, running this command should help ensure that the
Windows 2008 R2 servers have sufficient free disk space to function correctly.
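As a quick check of how much space the component store is consuming before and after this cleanup, the folder can be measured from PowerShell. This is only an approximation, as it simply sums the file sizes it is permitted to read:
(Get-ChildItem C:\Windows\winsxs -Recurse -ErrorAction SilentlyContinue | Where-Object { -not $_.PSIsContainer } | Measure-Object -Property Length -Sum).Sum / 1GB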
File Transfers
To test network performance, a 2.93GB file was copied over the network to both
virtual clients and to the physical server. The results are listed below.
                   Time to Copy (min:sec:hundredths)
Hyper-V Client     03:41:09
ESXi Client        00:56:09
Physical Server    01:22:35
Inspection of the NICs on both servers showed that the ESXi server was running at
1Gbps, while the Hyper-V client reported that it was running at 10Gbps. Examination
of the host Hyper-V server also showed a reported speed of 10Gbps. This is explained
by the Hyper-V virtual network switch running at 10Gbps; further examination of the
physical NICs showed that they were running at the expected 1Gbps.
Upon enabling the jumbo frames option on the Hyper-V client virtual server, the
transfer time of this file dropped to 03:24:78. The jumbo frame option was then
enabled on the Hyper-V host server and the file transfer was tried again after both
servers were rebooted. The transfer this time took 03:45:60; it initially appeared to
copy faster but practically stopped after approximately two-thirds completion. The
same file was copied to another Hyper-V host in 01:20:08. After turning off jumbo
frames on the Hyper-V host, the file was copied again. This time it again completed
two-thirds of the file in about 50 seconds but took 03:26:23 to copy the entire file.
This behaviour was observed consistently on a number of occasions. Using the
RoboCopy utility produced much more even performance and copied the file in 02:31:11.
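For reference, the RoboCopy runs took the general form shown below. The source share, destination folder and file name are placeholders, and the /MT multi-threading switch (available from Windows Server 2008 R2 onwards) and log option are optional extras rather than necessarily the exact switches used:
robocopy \\SourceServer\TestShare D:\TestCopy testfile.iso /MT:8 /LOG:C:\robocopy.log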
Redoing the file copy testing with RoboCopy produced the following results.
                   Time to Copy    Speed (Bytes/sec)
Hyper-V Client     03:19:00        15803923
ESXi Client        02:45:00        19124740
Physical Server    11:44:00        4481021
A further copy comparison was carried out copying a compressed file between the
three clients. The results were as follows.
                   File Size (MB)   Time to Copy   Speed (Bytes/sec)
Hyper-V Client     263.7            00:00:21       12714995
ESXi Client        263.7            00:00:04       65217943
Physical Server    263.7            00:00:09       29553295

                   File Size (GB)   Time to Copy   Speed (Bytes/sec)
Hyper-V Client     2.9              00:07:58       6520640
ESXi Client        2.9              00:09:49       5287839
Physical Server    2.9              00:09:21       5553035
Taken together with the earlier RoboCopy results, these tests give an advantage to
the ESXi client in performing network operations, even over its older physical rival,
although the Hyper-V client was marginally faster on the larger 2.9GB copy.
Benchmarking Software Comparisons
We will utilise several benchmarking tools which will run tests on different aspects
of the server hardware. The following tools will be used to evaluate the various
hardware components of Company’s servers.
• CPU/Processor - SiSoft's SANDRA 2013
• GPU/Graphics Board - NovaBench
• Memory - AIDA64 Extreme Edition
Please note that it is important to read the documentation provided with these test
tools, as running them may affect data residing on the servers.
These tools were run on two virtual servers – VSD the ESXi client and VSB the Hyper-V
client. Both of these servers have similar capabilities as per table 1.
SiSoft's SANDRA 2013
The SiSoft Sandra 2013 was used to compare processor performance. Upon launching
the application, select Benchmarks – Processor – Processor Arithmetic. We also
tested Benchmarks – Processor – Processor Multi-core Efficiency. A sample of the
results from this test is displayed in Figure X below.
The results are very close but do appear to give a slight performance edge to
the Hyper-V client.
AIDA64 Extreme Edition
The AIDA64 Extreme Edition provides a disk and memory benchmark facility. Disk
tests were run against both virtual servers, and the results are displayed in Figure
12:
Figure 12: AIDA64 Results
These results indicate better disk performance from the VMware client.
AIDA64 was also used to benchmark cache and memory performance on both
servers. The results are displayed below.
Figure 13: ESXi Client Memory Benchmark
Figure 14: Hyper-V Client Memory Benchmark
Again with these tests, the data suggests that there is very little difference in
performance between the two platforms.
NovaBench
This is a simple tool to use and it provided some interesting timings. It was run on
two Windows 2008 R2 virtual servers and it tested processor and memory/disk
writing speeds. These results are displayed in Figure 15.
Figure 15: NovaBench Results
These results appear to indicate that the VMware client is providing faster processor
performance while the Hyper-V client is performing better at disk and memory
writing.
The results from the benchmarking tools provide evidence that both virtual servers
perform at similar operational levels. There is no clear, distinctive case to be made
for one platform over the other on the basis of these tests.
iSCSI Performance
To test iSCSI performance, a utility called IOMeter was installed and run on each
virtual machine. Please note that this utility creates a local test file (iobw.tst) as
part of the test process; this file can grow dynamically to over 4GB in size, so
attention needs to be paid to free disk space on the C: drive.
IOMeter can be configured to perform a series of different I/O operations on the
server disks. These are selected under the 'Access Specifications' tab. For our
testing we selected the same series of block sizes on both servers.
The results from IOMeter are displayed below in Figures 16 and 17.
Figure 16: Hyper-V Client IOMeter Result
Figure 17: ESXi Client IOMeter Result
The comparison shows that I/O performance is again better on the ESXi client.
SAN Connection Performance
SQL Performance
One of the tests performed on the virtual MSSQL servers was stress testing using
the Microsoft SQLIOSIM utility. To quote from Microsoft, this tool is used to
“perform reliability and integrity tests on disk subsystems. These tests simulate read, write,
checkpoint, backup, sort, and read-ahead activities for Microsoft SQL Server. However, if you have to
perform benchmark tests and to determine I/O capacity of the storage system, you should use the
SQLIO tool.”
The SQLIO tool is “a tool provided by Microsoft which can also be used to determine the I/O
capacity of a given configuration.”
According to the SQLIO documentation, it will provide performance statistics on
throughput, disk latency and the number of I/Os per second, i.e. it should provide
enough information to gain a basic understanding of the system's current I/O
performance. To configure SQLIO to test all disk partitions, ensure that the
param.txt file includes the C: and D: drives.
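As an illustration only (the exact file used is not reproduced here), each line of param.txt names a test file, the number of threads to use against it, a CPU affinity mask and the size of the test file in MB, so a two-drive configuration might look like the following:
c:\sqlio_testfile.dat 2 0x0 1024
d:\sqlio_testfile.dat 2 0x0 1024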
SQLIO has the following command line options:
Table 2 – Commonly used SQLIO.exe options
Option Description
-o Specify the number of outstanding I/O requests. Increasing the queue depth may
result in a higher total throughput. However, increasing this number too high may
result in problems (described in more detail below). Common values for this are 8,
32, and 64.
-LS Capture disk latency information. Capturing latency data is recommended when
testing a system.
-k Specify either R or W (read or write).
-s Duration of test (seconds). For initial tests, running for 5-10 minutes per I/O size
is recommended to get a good idea of I/O performance.
-b Size of the IO request in bytes.
-f Type of IO to issue. Either ‘random’ or ‘sequential’.
-F Name of the file which will contain the test files to run SQLIO against.
Using the SQLIO tool the following command was executed:
sqlio -kR -s360 -fsequential -o8 -b256 -LS -Fparam.txt
The results of this are displayed in table 3 below.
                   Hyper-V Client   ESXi Client
IOs/sec            419.34           806.84
MBs/sec            104.83           201.71
Min_Latency(ms)    14               0
Avg_Latency(ms)    75               39
Max_Latency(ms)    609              1304
The following command was also issued:
sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt
Its results were as per table 4:
                   Hyper-V Client   ESXi Client
IOs/sec            74.85            198.99
MBs/sec            0.58             1.55
Min_Latency(ms)    10               5
Avg_Latency(ms)    419              158
Max_Latency(ms)    914              2096
These results correlate with the network transfer results. The I/O of the ESXi client
is much more efficient than that of the Hyper-V client according to this data. The
average latency figures also indicate that the ESXi client has a shorter delay time
than its Hyper-V rival.
A comparison was also conducted on a replica of Company's SQL database. The
database was backed up to disk and the resultant .BAK file was compressed using the
7Z compression utility. This file was then copied over the network to both test hosts.
The file size was 293MB compressed and 3.22GB when extracted.
The data transfer rate of the file copy was as follows:
Hyper-V Client 2780.137 MegaBytes/min
ESXi Client 1195.283 MegaBytes/min
The time taken to extract the file on each host was as follows:
Hyper-V Client 1.22.28s
ESXi Client 2.47.32s
The time taken to restore the database from the local D: drive was as follows:
Hyper-V Client 08.43.88s
ESXi Client 05.57.66s
Once the database was functional on both servers, further tasks were run and
performance times noted.
Database backup to local D: drive:
Hyper-V Client 5.50.26s
ESXi Client 4.21.85s
Once the databases were operational on both servers, SQL queries were run on the
SQL databases to test response times. The queries are outlined in the appendix.
The results of this were as follows:
Client            Query #        Initial Run Time   Cached Time   Rows Processed
Hyper-V Client    1              00:00:19           00:00:07      153709
ESXi Client       1              00:00:16           00:00:06      153709
Hyper-V Client    2              00:00:00           00:00:00      443
ESXi Client       2              00:00:00           00:00:00      443
Hyper-V Client    3              00:00:15           00:00:03      32
ESXi Client       3              00:00:32           00:00:03      32
Hyper-V Client    1-3 Combined   00:00:10           00:00:10      154184
ESXi Client       1-3 Combined   00:00:11           00:00:10      154184
Once a SQL query has been run, its data and execution plan are cached in memory,
which reduces future run times. The ESXi client shows a slight performance
advantage over the Hyper-V client.
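To keep the 'initial run' and 'cached' timings comparable between test runs, the SQL Server caches can be cleared explicitly before each cold run. A hedged example using the sqlcmd utility, not necessarily the exact method used during this testing, is:
sqlcmd -S localhost -Q "CHECKPOINT; DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE;"
CHECKPOINT flushes dirty pages so that DBCC DROPCLEANBUFFERS can empty the buffer pool, and DBCC FREEPROCCACHE clears the cached query plans.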
Pricing
Microsoft has simplified the pricing structure for its Windows 2012 product line.
There are now just two editions, differentiated by the number of virtualised clients
they may run: the Standard Edition allows two virtual clients, while the Datacentre
Edition allows an unlimited number. The current costing for both versions is listed
below:
WinSvrStd 2012 SNGL OLP NL 2Proc € 910
WinSvrDataCtr 2012 SNGL OLP NL 2Proc Qlfd € 5,328
Each licence covers two processors per server; adding additional processors to a host
server requires an additional licence. The Datacentre pricing is very attractive
because any Windows server clients running under the host Datacentre licence are
covered by that licence. This could prove to be a very cost effective way of covering
Windows server licensing for many organisations, while at the same time realising
the benefits of virtualisation.
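As a rough worked example based on the list prices above, and assuming that Standard licences can be stacked to cover additional virtual clients on the same two-processor host, ten Windows virtual machines would require five Standard licences at approximately 5 x €910 = €4,550, which is already close to the €5,328 Datacentre price. Beyond roughly that level of consolidation, the Datacentre Edition becomes the more economical option for a single host.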
VMware have a different pricing structure in place whereby end users pay an annual
licence subscription as follows:
HP VMw vSphere Std 1P 1yr E-LTU € 1073
Each additional server processor will also require its own ESXi licence.
To assist with managing multiple ESXi servers, VMware provides a management
application, vCenter Server, which is also available as an annual subscription.
HP VMw vCntr Srv Std 1yr E-LTU €5105
This is possibly beyond the budget of most SMEs. The additional cost of the ESXi
licence needs to be incorporated into the annual budget; given that it is a recurring
expense, it may be treated as an operating expense as opposed to a fixed asset
charge.
Conclusions
From the testing and implementation work carried out for this project, the following
basic conclusions can be made: there is very little difference in actual server
processing performance, but a large difference in the input/output performance of
the two candidates. The other main difference relates to host server maintenance.
During the course of the testing, very little if any maintenance needed to be carried
out on the ESXi server. The Hyper-V server, on the other hand, encountered an issue
relating to disk space which caused a client server outage and required a solution to
be implemented. Another area of maintenance was the monthly application of the
latest Microsoft security fixes, which must be applied to the host Hyper-V server. As
a consequence, its client servers will need additional downtime, which will need to
be factored into the maintenance schedule for the Hyper-V clients.
I think the evaluation process showed that there is great potential, flexibility and
business value to be gained from a virtualisation implementation. During the
evaluation, four additional Windows 2008 R2 servers were added to the company
network. The speed and ease with which these servers were added was a vast
improvement on the process involved in obtaining and installing a physical server.
The decision regarding which platform to choose is difficult to make. The pricing of a
Windows Datacentre licence covering Hyper-V is a compelling argument in its favour
over ESXi, as it can allow an SME to consolidate its licensing. The familiarity of the
Hyper-V interface is also a great advantage. Against that are the additional
maintenance of security patches, the disk usage issues and the lower network
throughput. Is the cost factor the most compelling argument in the decision for
choosing Company's virtualisation platform?
Chapter 4 Final Project Review
Introduction
At the beginning of this project, Company had 45 physical servers in production.
Server room space was becoming an issue, along with concerns relating to server
power consumption and the related costs of server room cooling. The other major
driver behind this implementation was the lack of flexibility in providing new server
builds to the Business Development department. We will now examine the
implementation in order to determine the progress made regarding virtualisation.
Implementation and Testing
Prior to implementation, Company's server infrastructure consisted of a flat network
environment and physical servers. This infrastructure has been enhanced with
the addition of a fully functional VMware ESXi server hosting two clients and a fully
functional Windows 2008 Hyper-V server also hosting two clients. The network
infrastructure has also gained an additional network segment which hosts Company's
SAN in an isolated environment. The addition of a functional iSCSI-based Storage
Area Network is a major enhancement to Company's overall network infrastructure,
and a positive development in its own right before we even begin to evaluate the
effects of virtualisation on Company.
One of the main issues which arose during this implementation was that of server
identification and location. Virtual servers are quite easy to deploy and connect to
the host network, but knowing where these servers are hosted is essential for the
successful operation of Company's server infrastructure. One addition to the basic
virtual server configuration was an extra information item on the BGInfo background
display identifying the virtualisation host server, as shown in figure 1 below.
Figure 1: BGInfo Display Information
This provides the logged-on user with the name of the host server. The client
server also follows a different naming convention from its physical counterparts
through the addition of the letter 'v' in its hostname. These steps help identify that
a server is virtual and may be of benefit during troubleshooting. Further to this, a
spreadsheet is maintained by Company which identifies all hardware assets. An
additional Virtual Server section was added to it, which also includes details of any
iSCSI targets to which each virtual machine attaches.
Management of the SAN is also a very important element of this project. Figure 2
below displays the current iSCSI targets which have been configured on the Synology
Rackstation. These devices are configured with a standard naming convention in
keeping with Company’s policy.
Figure 2: iSCSI Target Configuration
Monitoring and management of this device is very important as it hosts the virtual
client VHD files. It also allows administrators to keep track of which servers are
connected to individual iSCSI targets. Running the iSCSI initiator software on a host
server allows confirmation that the connection has been made to the iSCSI Target.
Figure 3: iSCSI Initiator Report
It is extremely important that all iSCSI targets, and the servers which connect to
them, are documented upon creation. This is a process which will evolve over time.
SAN management is as important as virtual server management, and the management
of both of these components of the virtual environment is essential for the long-term
success of this virtualisation implementation. In the event of a server outage, having
the correct documentation will allow for an easier and quicker server restoration
process.
Another issue which came to light during the course of this exercise was the lack of
acceptance of the ESXi application by server administrators. Company's
administrators come primarily from a Windows-based skill set and, as a result, felt
more comfortable working in the familiar Windows tool set provided for Hyper-V
management. This resulted in more time being spent on the Hyper-V server than on
the ESXi server. The ESXi tools are installed locally after an HTTP connection is made
to the ESXi server, which prompts the user to install the vSphere Client locally for
managing both the ESXi server and its clients. The interface subsequently installed
provides this access in a very clear and efficient manner. Some of the configuration
diagrams are extremely clear and informative, but it is a very different interface
from the standard Windows tools. If choosing the VMware path, training and
familiarisation with the ESXi tools is a necessity.
Power Comparisons
Prior to this virtualisation work beginning, temperature graphs were taken in the
Comms Room using the ServersCheck software and its associated temperature
sensor. Figure 4 displays the resultant temperature graph for a 24-hour period, with
the temperature reading at approximately 17°C.
Figure 4: Temperature Graph Full Server Compliment
As a consequence of the virtualisation work being undertaken, two older ProLiant
DL380 G4 servers were decommissioned and powered off. They were replaced in the
Comms Room by the Synology Rackstation. After a settling-in period of two months,
a second temperature reading was taken, and this is displayed in figure 5.
The temperature in the Comms Room appears to have dropped closer to 16°C. While
not a scientific experiment, this exercise does give an indication of the benefits of
using virtualisation to reduce power consumption, and also highlights the difference
in power consumption between different generations of server.
Issues Encountered
During the testing phase, the main issue which came to light was the large amount
of server disk space consumed, on both the host servers and their clients. This can
be mitigated by the correct sizing of client servers. One of the advantages of
virtualisation is that, when additional resources are needed, the client server can be
powered off and the extra resources allocated. Once the client is back online, the
standard disk management tools within Windows allow the additional disk space to
be allocated to the relevant partition. Attention also needs to be paid to monitoring
the snapshot folder size, as this was an issue which came to light during the case
study. The addition of disk space alerts to network monitoring applications is
essential, and it is also very important to perform a weekly check of each
virtualisation host server to ensure that sufficient disk space is available. The server
snapshot facility is an extremely useful tool for administrators, but it does consume
large amounts of disk space for each server. For this reason, sufficient
administration resources and processes must be allocated to managing the host
servers.
Demonstrations
As part of the implementation process, a demonstration was arranged to display the
ESXi and Hyper-V platforms and to give an introduction to virtualisation. The
audience included Business Analysts, whose roles mainly cover database
administration, reporting and Business Intelligence functions; senior management
was also present. The general response was very favourable. One of the topics
covered was virtual appliances, and the fact that many organisations make their
applications available as virtual appliances which can be downloaded and installed in
a reasonably quick timeframe. The network monitoring application Nagios was
downloaded and its integration as a virtual machine client on the ESXi server was
demonstrated. The ease with which this application was up and running was clearly
evident to those present and prompted a lot of positive feedback. Another
noticeable observation from the demonstration was that the Business Analysts were
able to see that the flexibility of a virtual environment could give them the option
of additional servers to perform specific tasks, and discussions were started relating
to how additional servers could assist with reporting and scheduling tasks. Senior
management were also impressed and, subsequent to the demonstration, expressed
a desire to proceed to the next stage of the virtualisation implementation.
Implementation
During the course of the evaluation, Company's main data file server developed an
issue which caused it to become unreliable as a file server. As a result, a virtual
server was configured on the Hyper-V host and Company's data was copied to it.
Login scripts were changed for users and a server migration was carried out from the
unreliable physical host to the new virtual server. The nightly backup schedule was
altered to include the new virtual host. Thus far there has been no noticeable
Benefits of switching to a private cloud or virtual environment
Benefits of switching to a private cloud or virtual environment
Benefits of switching to a private cloud or virtual environment
Benefits of switching to a private cloud or virtual environment
Benefits of switching to a private cloud or virtual environment
Benefits of switching to a private cloud or virtual environment

More Related Content

What's hot

SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...Principled Technologies
 
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdidHitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdidChetan Gabhane
 
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SAN
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SANVMmark virtualization performance of Micron Enterprise PCIe SSD-based SAN
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SANPrincipled Technologies
 
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...Principled Technologies
 
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5 Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5 Principled Technologies
 
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...Principled Technologies
 
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackup
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackupDATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackup
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackupSymantec
 
Defining the Value of a Modular, Scale out Storage Architecture
Defining the Value of a Modular, Scale out Storage ArchitectureDefining the Value of a Modular, Scale out Storage Architecture
Defining the Value of a Modular, Scale out Storage ArchitectureNetApp
 
Benchmarking a Scalable and Highly Available Architecture for Virtual Desktops
Benchmarking a Scalable and Highly Available Architecture for Virtual DesktopsBenchmarking a Scalable and Highly Available Architecture for Virtual Desktops
Benchmarking a Scalable and Highly Available Architecture for Virtual DesktopsDataCore Software
 
-bheritas5fundamentals-
-bheritas5fundamentals--bheritas5fundamentals-
-bheritas5fundamentals-raryal
 
Grid rac preso 051007
Grid rac preso 051007Grid rac preso 051007
Grid rac preso 051007Sal Marcus
 
Dell PowerEdge VRTX and M-series compute nodes configuration study
Dell PowerEdge VRTX and M-series compute nodes configuration studyDell PowerEdge VRTX and M-series compute nodes configuration study
Dell PowerEdge VRTX and M-series compute nodes configuration studyPrincipled Technologies
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM India Smarter Computing
 
New Features For Your Software Defined Storage
New Features For Your Software Defined StorageNew Features For Your Software Defined Storage
New Features For Your Software Defined StorageDataCore Software
 
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...Symantec
 
V Mware Qualcomm Case1
V Mware Qualcomm Case1V Mware Qualcomm Case1
V Mware Qualcomm Case1davidbe
 
Virtualising your mission-critical applications
Virtualising your mission-critical applicationsVirtualising your mission-critical applications
Virtualising your mission-critical applicationsguest24ab95c
 
Configuring a failover cluster on a Dell PowerEdge VRTX
Configuring a failover cluster on a Dell PowerEdge VRTXConfiguring a failover cluster on a Dell PowerEdge VRTX
Configuring a failover cluster on a Dell PowerEdge VRTXPrincipled Technologies
 

What's hot (20)

Groupe Mutuel case study
Groupe Mutuel case studyGroupe Mutuel case study
Groupe Mutuel case study
 
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1...
 
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdidHitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
 
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SAN
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SANVMmark virtualization performance of Micron Enterprise PCIe SSD-based SAN
VMmark virtualization performance of Micron Enterprise PCIe SSD-based SAN
 
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...
Dell PowerEdge M820 blade storage solution: Software cost advantages with Mic...
 
Implementing hyperv
Implementing hypervImplementing hyperv
Implementing hyperv
 
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5 Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5
Simplified VDI: Dell PowerEdge VRTX & Citrix XenDesktop 7.5
 
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M...
 
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackup
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackupDATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackup
DATASHEET▶ Enterprise Cloud Backup & Recovery with Symantec NetBackup
 
Defining the Value of a Modular, Scale out Storage Architecture
Defining the Value of a Modular, Scale out Storage ArchitectureDefining the Value of a Modular, Scale out Storage Architecture
Defining the Value of a Modular, Scale out Storage Architecture
 
Benchmarking a Scalable and Highly Available Architecture for Virtual Desktops
Benchmarking a Scalable and Highly Available Architecture for Virtual DesktopsBenchmarking a Scalable and Highly Available Architecture for Virtual Desktops
Benchmarking a Scalable and Highly Available Architecture for Virtual Desktops
 
-bheritas5fundamentals-
-bheritas5fundamentals--bheritas5fundamentals-
-bheritas5fundamentals-
 
Grid rac preso 051007
Grid rac preso 051007Grid rac preso 051007
Grid rac preso 051007
 
Dell PowerEdge VRTX and M-series compute nodes configuration study
Dell PowerEdge VRTX and M-series compute nodes configuration studyDell PowerEdge VRTX and M-series compute nodes configuration study
Dell PowerEdge VRTX and M-series compute nodes configuration study
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architecture
 
New Features For Your Software Defined Storage
New Features For Your Software Defined StorageNew Features For Your Software Defined Storage
New Features For Your Software Defined Storage
 
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...
Physical to Virtual (P2V) & Backup to Virtual (B2V) Conversions with Backup E...
 
V Mware Qualcomm Case1
V Mware Qualcomm Case1V Mware Qualcomm Case1
V Mware Qualcomm Case1
 
Virtualising your mission-critical applications
Virtualising your mission-critical applicationsVirtualising your mission-critical applications
Virtualising your mission-critical applications
 
Configuring a failover cluster on a Dell PowerEdge VRTX
Configuring a failover cluster on a Dell PowerEdge VRTXConfiguring a failover cluster on a Dell PowerEdge VRTX
Configuring a failover cluster on a Dell PowerEdge VRTX
 

Similar to Benefits of switching to a private cloud or virtual environment

Whitepaper Server Virtualisation And Storage Management
Whitepaper   Server Virtualisation And Storage ManagementWhitepaper   Server Virtualisation And Storage Management
Whitepaper Server Virtualisation And Storage ManagementAlan McSweeney
 
Migrating Existing ASP.NET Web Applications to Microsoft Azure
Migrating Existing ASP.NET Web Applications to Microsoft AzureMigrating Existing ASP.NET Web Applications to Microsoft Azure
Migrating Existing ASP.NET Web Applications to Microsoft AzureIlyas F ☁☁☁
 
VMworld 2013: Architecting the Software-Defined Data Center
VMworld 2013: Architecting the Software-Defined Data Center VMworld 2013: Architecting the Software-Defined Data Center
VMworld 2013: Architecting the Software-Defined Data Center VMworld
 
V sphere 5 roadshow final
V sphere 5 roadshow finalV sphere 5 roadshow final
V sphere 5 roadshow finalbluechipper
 
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalsp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_final
sp_p_wp_2013_v1_vmware_technology_stack___opportunities_for_isv_s_finalKunal Khairnar
 
Virtualization In Software Testing
Virtualization In Software TestingVirtualization In Software Testing
Virtualization In Software TestingColloquium
 
ACIC Rome & Veritas: High-Availability and Disaster Recovery Scenarios
ACIC Rome & Veritas: High-Availability and Disaster Recovery ScenariosACIC Rome & Veritas: High-Availability and Disaster Recovery Scenarios
ACIC Rome & Veritas: High-Availability and Disaster Recovery ScenariosAccenture Italia
 
Axiom Virtualization Overview
Axiom Virtualization OverviewAxiom Virtualization Overview
Axiom Virtualization Overviewsmoots
 
USING SOFTWARE-DEFINED DATA CENTERS TO ENABLE CLOUD BUILDERS
USING SOFTWARE-DEFINED DATA CENTERS TO ENABLE CLOUD BUILDERSUSING SOFTWARE-DEFINED DATA CENTERS TO ENABLE CLOUD BUILDERS
USING SOFTWARE-DEFINED DATA CENTERS TO ENABLE CLOUD BUILDERSJuniper Networks
 
Removing Storage Related Barriers to Server and Desktop Virtualization
Removing Storage Related Barriers to Server and Desktop VirtualizationRemoving Storage Related Barriers to Server and Desktop Virtualization
Removing Storage Related Barriers to Server and Desktop VirtualizationDataCore Software
 
Azure DRaaS v0.7
Azure DRaaS v0.7Azure DRaaS v0.7
Azure DRaaS v0.7Luca Mauri
 
Managing the move to virtualization and cloud
Managing the move to virtualization and cloudManaging the move to virtualization and cloud
Managing the move to virtualization and cloudBhaskar Jayaraman
 
Unified Compute Platform Pro for VMware vSphere
Unified Compute Platform Pro for VMware vSphereUnified Compute Platform Pro for VMware vSphere
Unified Compute Platform Pro for VMware vSphereHitachi Vantara
 
VMworld 2015: Rethinking Enterprise Storage: Rise Of Hyper Converged Infrastr...
Chapter 1 Project Proposal

Introduction

Company currently maintains 34 servers in 3 communications rooms at its main site and an additional 11 servers located in its Disaster Recovery facility. These servers are a combination of data, application and backup servers. The following are the main reasons behind considering an implementation of a virtualised server environment:

• The primary one is flexibility in server rollout and also in providing server failover.
• The secondary reason is in reducing communications room power consumption.
• Company has an issue with consolidating rack space, as it has 3 full server racks with very little room for manoeuvring.
• Company also wishes to investigate any additional cost savings from a licensing viewpoint.

Please note that Company does not wish to completely migrate all of its existing servers to a virtual platform. It does, however, wish to implement a virtual environment which will work in tandem with standalone servers.

Company has traditionally implemented server builds on a server-by-server basis on individual hardware platforms. This has resulted in a very stable server environment, with each server performing its allotted role. As a strong emphasis is placed on server failover redundancy, this has led to Company accumulating a large number of production servers. This creates many of the issues outlined in the introduction.

Current status, gaps in understanding

Company currently possesses only limited knowledge of virtualisation throughout the organisation. It is a major step to migrate to a virtualised environment. Anecdotal evidence suggests that the virtual world offers far more flexibility and reliability than the hardware world. Company needs to gain knowledge of and exposure to virtualisation. It is expected that this would be fulfilled through knowledge from books, training courses and practical exposure to the technology.

Opportunity of new ideas to address gaps

Virtualisation is seen as a way of overcoming some of the problems facing Company, but it is also in many ways a leap of faith. It can be difficult for experienced staff who are used to a certain way of operating to change to a completely new environment. Again, training, knowledge and exposure will help build up confidence with the new environment. Documentation will also be vital as a way of having a physical handle on virtual servers. Company may even need to introduce a different naming nomenclature for virtual servers to aid with identifying their location. It is also envisaged that some form of visual aid will be required to allow Company to visualise the virtual servers.

Specific statement of objectives

The objective of the project is to determine the tangible benefits of server virtualisation by means of a case study. The four main objectives are:

• Identify a virtualisation platform. There are 2 virtualisation platforms which Company will be evaluating:
  • VMware's vSphere 5 ESXi
  • Microsoft's Hyper-V 2008
Citrix's XenServer, which is also a recognised virtualisation platform, will not be considered due to a senior management decision to concentrate on the two main providers in this field. There will be a brief synopsis of the evaluation process of the two contenders along with the reasons which explain Company's decision.

• Evaluate the virtualisation platforms. Company will use the following criteria in its virtualisation platform selection process:
  • Performance
  • Cost
  • Ease of Use
  • Ease of Server Rollout
  • Ease of Adoption

The initial testing and comparisons will be carried out between Microsoft Hyper-V 2008 and VMware's ESXi 5.0. Both of these hypervisors will be installed onto HP Proliant DL360 servers. Both servers will have 12GB of RAM and 4 x 146GB SAS disks in a RAID 5 configuration. Virtual machine hosts will be configured and hosted under both platforms. The testing will consist of determining which hosting platform is better from a performance viewpoint and also from a server management perspective. Virtual machine performance will be evaluated under both environments. Stage 2 of the implementation would then involve the installation of a basic iSCSI SAN environment in conjunction with the chosen virtualisation software.

• Implement a virtualisation infrastructure consisting of redundant servers and a SAN. The expected outcome of the project would be a cluster of virtual server hosts configured in a failover arrangement. Both servers would have an iSCSI connection to a SAN. The SAN is envisaged as a high-end storage device which would be RAID 5/6 based and have redundant power supplies and redundant array controllers. This would provide a sufficient level of redundancy to meet Company's needs. At a later stage of the project, it is envisaged that a similar arrangement would be installed in the DRS. This configuration may utilise some of the older hardware made redundant in the main office. This would allow Company the option of implementing redundancy or replication between the SANs.

• Migrate application and data servers to the virtualised environment. This would involve the selection of several of the existing physical servers which are reaching end of life and provisioning them in virtual form.
Company has a main data file server and a SQL server identified as the initial targets for replacement. A second Exchange 2007 server is also expected to be hosted in virtual form.

• Quantify the benefits of the change in terms of cost savings and flexibility. This will include quantifying cost savings in relation to electricity costs and software licence costs. It is also intended to gauge the effect of virtualisation on IT staff in terms of exposure to new leading-edge technology. Company also expects to have increased options with regard to implementing server contingency, in terms of failover for the actual server plus the addition of redundant servers on which Company can implement server failover procedures.

Statement of Proposed Work

The proposed scope of work involves the installation of virtualisation software on a new cluster of servers, linked to a standalone storage device. Company will then migrate selected applications and business functions to the new virtualised environment. The sequence of tasks needed to achieve this goal is listed below.

Sequence of tasks needed to achieve results

• Identify a virtualisation platform.
• Implement a virtualisation infrastructure consisting of redundant servers and a SAN.
• Migrate application and data servers to the virtualised environment.
• Determine any reduction in power consumption and any other tangible benefits.

Identify Virtualisation Platform

There are 2 virtualisation platforms which we will be evaluating: VMware's vSphere 5 and Microsoft's Hyper-V.

Microsoft's Hyper-V

The Hyper-V role will be enabled on a Windows 2008 R2 64-bit platform. The server specification was an Intel Pentium III Xeon 2GHz processor, 12GB RAM and 4 x 146GB RAID 5 disk drives. Hyper-V requires an x64-based processor plus hardware-assisted virtualisation. This is a BIOS-enabled feature which is available in newer servers.
Microsoft provides a Hardware Assisted Virtualisation detection tool. This will check whether your processor supports HAV and whether it is enabled. This feature is available in the HP Proliant G5 and G6 range of servers. It is not available in the older HP Proliant G4 range, for example.

To install Hyper-V, follow the instructions from the Microsoft web site:

To install Hyper-V on a full installation of Windows Server 2008
1. Click Start, and then click Server Manager.
2. In the Roles Summary area of the Server Manager main window, click Add Roles.
3. On the Select Server Roles page, click Hyper-V.
4. On the Create Virtual Networks page, click one or more network adapters if you want to make their network connection available to virtual machines.
5. On the Confirm Installation Selections page, click Install.
6. The computer must be restarted to complete the installation. Click Close to finish the wizard, and then click Yes to restart the computer.
7. After you restart the computer, log on with the same account you used to install the role. After the Resume Configuration Wizard completes the installation, click Close to finish the wizard.

Once installed, create your virtual machines. Again, the following instruction is from Microsoft:

To create and set up a virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. From the New Virtual Machine Wizard, click Next.
4. On the Specify Name and Location page, specify what you want to name the virtual machine and where you want to store it.
5. On the Memory page, specify enough memory to run the guest operating system you want to use on the virtual machine.
6. On the Networking page, connect the network adapter to an existing virtual network if you want to establish network connectivity at this point. Note: if you want to use a remote image server to install an operating system on your test virtual machine, select the external network.
7. On the Connect Virtual Hard Disk page, specify a name, location, and size to create a virtual hard disk so you can install an operating system on it.
8. On the Installation Options page, choose the method you want to use to install the operating system:
• Install an operating system from a boot CD/DVD-ROM. You can use either physical media or an image file (.iso file).
• Install an operating system from a boot floppy disk.
• Install an operating system from a network-based installation server. To use this option, you must configure the virtual machine with a legacy network adapter connected to an external virtual network. The external virtual network must have access to the same network as the image server.
9. Click Finish.

VMware's vSphere 5

An HP Proliant DL360 G6 was used to install the free version of VMware's vSphere ESXi 5.0 hypervisor. The server specification was an Intel Pentium III Xeon 2GHz processor, 12GB RAM and 4 x 146GB RAID 5 disk drives.

The installation process for vSphere is to register for and download an ISO image of vSphere 5.0. You then create a bootable CD and follow the setup prompts from the installation procedure. Once installed, the server is accessible by its IP address. When you connect, you are prompted to install the vSphere client on your local machine. This is the main administration console for vSphere.

• Implement a virtualisation infrastructure consisting of redundant servers and a SAN. The planned implementation would be for a cluster of virtual server hosts configured in a failover arrangement. Both servers would have an iSCSI connection to a SAN.
The SAN is envisaged as an EMC device which would be RAID 5/6 based and have redundant power supplies and redundant array controllers. This would provide a sufficient level of redundancy to meet our needs, but a second stage of the project would envisage a similar arrangement being installed in our DRS. This would allow us the option of implementing redundancy or replication between the SANs.

• Migrate application and data servers to the virtualised environment. This would involve the selection of several of the existing physical servers which are reaching end of life and provisioning them in virtual form. We have the main data file server and a SQL server identified as the initial targets for replacement. A second Exchange 2007 server is also expected to be hosted in virtual form.

• Determine any reduction in power consumption and any other tangible benefits.

Techniques and data to be used

It is intended to document each installation process at each stage of the project. Data measurements relating to power usage before and after the implementation will be collected to gauge some of the project's benefits.

Measure of success

Two measures of success would be improved server rollout times and reduced server power consumption.

• Server Rollout. Currently, if a new server is required, the following procedure must be followed. Capital expenditure approval must be sought in order to purchase a new server. Once done, the server must be ordered via a hardware supplier. Part of the ordering process involves the raising of purchase orders. Once the hardware reseller has obtained the server, additional memory, disks and power supplies, it will be shipped to our stores department. When the shipment arrives, it is assembled, a server image is applied and the server is joined to our network. SQL and any additional software are installed and the server is put into production. This is quite a lot of work, but the main drawback here is in waiting on parts of the server to be ready; invariably there are delays with some component of the order. In most environments, servers are requested from the IT department by development staff who expect the server to arrive the next day. The benefit of a virtualised system here is that it would allow the IT department to provision servers at very short notice.

• Power Consumption. It is quite difficult to ascertain the amount of power being used by a server. Using utilities provided by HP and a cost formula provided by the ESB, it was possible to determine the approximate power consumption of the following types of server: HP DL320 G3, HP DL380 G4 and HP DL380 G5.

The ESB provided this formula for estimating energy consumption:

(Wattage x Hours Used Per Day) / 1,000 = Daily kilowatt-hour (kWh) consumption, where 1 kilowatt (kW) = 1,000 Watts.

HP provided a power calculation utility which allowed the calculation of the power usage of the 3 servers as follows:

• DL320 consumes 170W of power
• DL380 G4 consumes 400W of power
• DL380 G5 consumes 300W of power

Putting these figures into the formula for the DL380 G4:

400W x 365 (days) x 24 (hours) / 1,000 x 0.20 (price per kilowatt-hour) = approximately €700

That is roughly €700 per DL380 per year in electricity costs.
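As a quick sanity check of these figures, the ESB formula can be applied to all three server models at once. The short Python sketch below simply reproduces the arithmetic above; the 24x7 duty cycle and the €0.20 per kWh unit rate are the same assumptions used in the worked example, not measured values.

    # Annual electricity cost per server, using the ESB formula quoted above.
    # Assumptions (taken from the text): servers run 24x7 and electricity
    # costs EUR 0.20 per kWh. Wattages are the HP utility figures quoted above.

    HOURS_PER_YEAR = 365 * 24
    PRICE_PER_KWH = 0.20  # EUR, assumed flat rate

    server_watts = {
        "HP DL320 G3": 170,
        "HP DL380 G4": 400,
        "HP DL380 G5": 300,
    }

    def annual_cost(watts: float) -> float:
        """(Wattage x hours used) / 1,000 gives kWh; multiply by the unit price."""
        kwh_per_year = watts * HOURS_PER_YEAR / 1000
        return kwh_per_year * PRICE_PER_KWH

    for model, watts in server_watts.items():
        print(f"{model}: {annual_cost(watts):.0f} EUR per year")
    # Prints roughly 298, 701 and 526 EUR per year respectively.

The same calculation can be repeated with a site-specific tariff to refine the business case once actual metered figures are available.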
Performance

It is anticipated that once both Hyper-V and VMware are running, application servers will be run under the hosts. Performance testing of the client machines will be undertaken and comparisons made. Client virtual machines running MS SQL 2005/8 will be compared under both hosts. Performance measurements will also be taken of the host machines while they run their client machines. Performance comparisons will also be carried out between the virtualisation platforms and the existing physical servers. These tests will include network file transfers and MS SQL performance tests. It is hoped that this data will provide useful information with a view to building a business case for implementing a private cloud infrastructure.

Ease of Use and Migration

Several factors will be considered in this section: server installations under both virtualisation platforms; ease of management and accessibility; server backup and failover options; ease of server modification; and migration of virtual clients between the two applications.

Adoption by Staff

Company runs physical servers running predominantly Windows operating systems. No virtualisation systems have been operational within Company previously. No formal virtualisation training is planned initially. IT administrators will be provided with the new virtualisation infrastructure details and will be asked for feedback in relation to their choice of environment.

Conclusion

The initial project implementation is to configure introductory versions of the virtualisation platforms from both Microsoft and VMware onto identical HP Proliant hardware platforms. Comparisons made between the two implementations will be used as a basis for deciding which platform to progress further. The exercise will also be used for gaining exposure to the virtual server environment. This data will be used to convince senior management of the viability of a private cloud environment in Company.

Chapter 2 Literary Review

Introduction

For the purpose of this assignment, virtualisation will be explained in detail. The basic components of a virtualisation system will be examined with a view to explaining its operation. Different products from several vendors will be examined to see how they compare. The main features of these products will be explained and the reasons for product selection will be outlined. The anticipated benefits that virtualisation could bring to a small or medium-sized enterprise's server infrastructure will be developed. A description of the testing environment will also be outlined.

Technology Description

Virtualisation has developed as a mainstream technology due to limitations of the architecture of current microprocessors. The x86 computer architecture derives from the original 8086 CPU. Operating systems running under the x86 architecture use a series of rings to gain access to hardware resources. This design was meant to run a single operating system and its associated applications on a single hardware platform, as shown in figure 2.1:
Figure 2.1 x86 Architecture Layout

In practice, this would mean each individual application or function being installed on a separate server in order to avoid application conflicts: SQL Server would be run on one physical server, a web server installed on a second server, and file and print services on another server. The net result of this mode of operation is a proliferation of servers for organisations, and these servers are also underutilised. Studies by IDC Research have shown that most servers are idle for 80% of their time (VMware 2012). A Microsoft technical case study from June 2008 determined that the average processor utilisation figure for their data centre servers was 9.75% in September 2005.

Virtualisation allows operating systems to derive the maximum benefit from x86-based hardware using a combination of technologies built into the latest processors, such as Intel's Hardware Assisted Virtualisation (Intel 2012), and software virtualisation products. This allows the creation of several virtual machines which run on and share access to the underlying hardware. This offers the advantage of consolidating individual server roles into a single physical server, thus maximising the use of that server. The reduction in physical servers also provides cost savings with respect to energy consumption in server rooms. Virtualisation is also becoming a more mainstream technology and is being widely adopted, with growth rates expected to continue over the coming years (Computer Weekly, 2012).
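To make the underutilisation figures above a little more concrete, the sketch below estimates how many lightly loaded physical servers could, in principle, share one virtualised host. The 9.75% average utilisation comes from the Microsoft case study cited above; the 60% target host utilisation and the 10% hypervisor overhead allowance are illustrative assumptions only, not figures from this study.

    # Rough consolidation estimate based on average CPU utilisation.
    # avg_util is the Microsoft case study figure quoted above (9.75%);
    # target_util and hypervisor_overhead are illustrative assumptions.

    def consolidation_ratio(avg_util: float, target_util: float = 0.60,
                            hypervisor_overhead: float = 0.10) -> float:
        """Estimate how many equally sized physical servers one host could absorb."""
        usable_capacity = target_util * (1 - hypervisor_overhead)
        return usable_capacity / avg_util

    ratio = consolidation_ratio(avg_util=0.0975)
    print(f"Approximate consolidation ratio: {ratio:.1f} : 1")
    # Roughly 5.5 : 1 under these assumptions.

Even under conservative assumptions like these, the arithmetic suggests several single-role servers could be consolidated onto one host, which is the basis for the consolidation benefits discussed next.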
Virtualisation is achieved by the addition of a layer of software which runs directly on top of the hardware, called a hypervisor. The virtual machines run on top of the hypervisor and have no direct access to the system hardware. The hypervisor is responsible for providing access to the hardware and ensures that no conflicts occur. Each virtual machine has access to its own virtual processor, memory, disk and I/O devices. A virtual machine manager manages the concurrent operation of the virtual machines, ensuring that conflicts do not occur.

The hypervisor acts as the interface between the hardware and the virtualisation software. It achieves this by utilising the ring structure of the microprocessor. The ring structure is implemented in a microprocessor in order to facilitate access to system hardware and also to provide different levels of system access (Virtualisation, 2012). Ring 0 is the innermost ring, with direct access to hardware. This ring provides the highest level of system access and is also known as kernel mode. Rings 1 and 2 are typically used by the guest OS and device drivers, and ring 3 is used by applications. A hypervisor runs in ring 0 as it requires full access to system hardware. As the hypervisor has direct access to system hardware, it must be very efficient and provide very good performance. Device drivers use rings 1 and 2 to make kernel-level calls to the hypervisor to request access to hardware. Applications run in the outer ring 3 and make system requests through device drivers.

There are two different types of hypervisor. A type 1 hypervisor runs directly on the hardware of the system. A type 2 hypervisor runs on top of an operating system, which it requires in order to run. Type 1 hypervisors are used by the mainstream virtualisation applications.

Currently there is no defined open standard regarding virtualisation. There is an Open Virtualization Alliance which is promoting an open standard for virtualised machines using kernel-based solutions. This is a Linux-based solution, and KVM is open source software that is included as part of the Linux kernel. The main virtualisation market leaders develop their products using their own proprietary methods.

There are three different techniques for handling access to the processor kernel during virtualisation:

• Virtualisation using binary translation
• Paravirtualisation
• Hardware-assisted virtualisation

Binary Translation

There are two aspects to binary translation. Firstly, calls to the kernel from guest operating systems are translated at a binary level into virtualisation calls. Some system calls from the operating systems cannot be virtualised, and they must be intercepted and translated using binary translation. Any system call from a guest operating system that is directed at accessing an I/O device is redirected using binary translation to avoid conflicts between guest operating systems. The guest operating system is unaware that it is being virtualised. In order to minimise the effect this translation has on system performance, the virtualisation software inspects groups of instructions rather than individual instructions. Secondly, user requests from applications running in the guest operating systems are executed directly on the processor. The net result of this is a very high level of performance. Guest operating systems are unaware that they are running in a virtualised environment.

Paravirtualisation

This approach requires the addition of an extra virtualisation layer between the guest OS and the system hardware. This is a paravirtualisation management module, and it operates in conjunction with an operating system which has been modified slightly to work in a virtual environment. The guest operating system makes calls to the virtualisation layer via an API, which in turn communicates with the hardware. Because the entire hardware is not being virtualised, paravirtualisation results in better overall performance. The disadvantage of paravirtualisation is that some operating systems may not be amenable to the modification required to run in a virtualised format.
Hardware Assisted Virtualisation

As early as 1974 it was recognised that hardware-assisted virtualisation techniques offered the optimum means of implementing virtualisation on single servers. Computer scientists Gerald Popek and Robert Goldberg produced an article in that year entitled "Formal Requirements for Virtualizable Third Generation Architectures", in which they define three properties for a hypervisor: efficiency, resource control and equivalence (Popek-Goldberg, 1974). Since 2006, the latest model processors from both Intel and AMD have offered capabilities aimed specifically at virtualisation. These processor enhancements allow for improved performance from virtualisation software without modification of the guest operating system. This technology (Intel's Virtualisation Technology (VT-x) and AMD's AMD-V) allows new processors to provide a new root mode of operation which runs below ring 0. The advantage of this is that system calls from the guest OS go directly to the hypervisor, which removes the necessity of performing either binary translation or paravirtualisation.

For the purposes of this discussion we will describe the operation of VMware, Hyper-V and Citrix XenServer. For the purposes of the case study we will concentrate on the VMware and Microsoft offerings; this is due to a constraint on hardware resources. We will then look at how these products compare and where they differ.

Virtualisation Products

According to the Gartner Magic Quadrant for x86 Server Virtualisation Infrastructure report (June 2011), 40% of x86 architecture workloads have been implemented in a virtualised environment, and growth is expected to increase fivefold between 2010 and 2015. According to Gartner's report, the 3 main virtualisation products in today's market are:

• VMware vSphere
• Citrix XenServer
• Microsoft Hyper-V

The Gartner figures for the past two years are reproduced here in Figures 2.2 and 2.3.
Figure 2.2 Magic Quadrant for x86 Server Virtualisation Infrastructure 2011

Figure 2.3 Magic Quadrant for x86 Server Virtualisation Infrastructure 2012
Over this two-year period, VMware remains the market leader, with Microsoft increasing its market share. Oracle appears to be gaining the most momentum, moving into the 'challenger' position. A key development in this market is that virtualisation is now being adopted by SMEs as a viable and affordable option. The three virtualisation products all use type 1 hypervisors, which run directly on the underlying hardware.

VMware vSphere

VMware is the current market leader in the virtualisation arena, with approximately 80% market share. It is facing increasing competition, especially from Microsoft, who are gaining market share on an annual basis (Market Share 2012). VMware was acquired by EMC in 2004, which now provides seamless integration with the EMC range of SAN storage devices (SAN 2012). It is a very mature technology in 2012 and has provided many innovations in the virtualisation area. The main drawback with regard to VMware products is their licensing fees, which are the highest among virtualisation vendors.

VMware virtualisation allows you to run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer. This changes the traditional model of a single server running a single application to a single physical server running three separate virtual machines, as shown in figure 2.4:

Figure 2.4 VMware Architecture Layout

Each virtual machine can run its own separate applications.
Virtual machines can run a variety of different operating systems, e.g. Windows 2003, Windows 2008 64-bit and Linux, and each of these virtual machines can in turn run its own separate applications.

The latest version of VMware's virtualisation software is vSphere 5. vSphere creates a fully functional virtual machine complete with its own separate CPU, RAM, disk and network controller. This is achieved by an additional technology layer called a hypervisor. The hypervisor is a virtual machine manager which allocates hardware resources dynamically and transparently. The vSphere hypervisor runs directly on the computer hardware without the need for an additional underlying operating system; it was the first x86 hypervisor to do so.

VMware vSphere 5 is the latest incarnation of the vSphere product line. In vSphere 5, the hypervisor is ESXi, now in its fifth generation. Previous versions of vSphere required a Linux-based service console as an underlying element of the hypervisor. ESXi does not require this service console and as a result has a very small footprint of around 100MB. The value of this small footprint is that it reduces the attack surface for external threats and lowers the number of patches required, resulting in a more stable environment. ESXi replaces the service console with a small kernel of its own, the VMkernel, which removes the necessity for Linux skills to manage the server. On initial server start-up, the VMkernel is loaded and becomes the first virtual machine running under VMware. The VMkernel manages the virtual machines' access to physical hardware: RAM, disk, I/O and memory.

A significant difference between vSphere and its main rivals (Hyper-V and XenServer) is in how vSphere handles I/O. The VMkernel handles I/O directly in vSphere; Hyper-V and Xen handle I/O indirectly through the parent partition. The advantage of handling I/O directly within the hypervisor is greater throughput and lower overhead. The disadvantage is more limited hardware compatibility. For example, once Windows 2008 supports a particular hardware type, that hardware is then also supported by Hyper-V, whereas for additional hardware support to be added to vSphere, the hypervisor must be updated, because this is where the I/O stack and device drivers reside. The advantages of direct I/O far outweigh this. An indirect I/O model directs all virtual machine network and storage traffic through the underlying OS, which can lead to all the virtual machines being in contention for the same OS drivers. It may also mean that these drivers are not optimised for a virtualised environment. Figure 2.5 below (reproduced from VMware) helps illustrate this difference.
Figure 2.5 Virtual driver comparison

Having the drivers in the hypervisor allows for optimum performance, as any CPU or memory resources required by virtual machines can be optimised by ESXi, and virtual machines talk directly with the external devices through the drivers. vSphere also allows virtual machines to run different operating systems, and allows its memory management techniques (which allow for changing memory allocations) to be applied to all hosted machines.

vSphere provides networking support to its virtual machine clients via virtual networking, using virtual Ethernet adapters and virtual switches. Each virtual machine is configured with at least one virtual Ethernet adapter. Virtual switches operate in the same way as a physical switch and allow virtual machines to communicate with each other. In order to provide network connectivity, VMware ESXi provides virtual switches and virtual network cards, which operate in a similar fashion to their physical equivalents. The virtual switch allows the virtual machines connected to it to communicate. The structure of a VMware environment is a modular one which allows for a great deal of flexibility. Figure 2.6, reproduced from VMware, represents this:

Figure 2.6 Virtual Environment

This format allows for connection from a physical network card through to the virtual network cards of each virtual machine. The virtual network cards are emulated versions of an AMD Lance PCnet32 adapter (vLance) or a virtual device (e1000) that emulates an Intel E1000 adapter.
The virtual switch is a key networking component in ESXi. Each ESXi server can host 248 virtual switches. There are three basic components to a virtual switch:

• a layer 2 forwarding unit
• a VLAN unit
• a layer 2 security unit

The virtual switch operates in the same functional manner as a physical switch.

vSphere's storage virtualisation feature is an interface from the virtual machines to physical storage devices. One way of gaining additional flexibility from a vSphere environment is by providing shared storage on a SAN. A SAN is a dedicated network which provides access to unified storage devices located on that network. A unified storage device combines both file-based and block-based access in a single device. Block-level storage is a disk volume which an operating system connects to and uses as a disk. File-level storage is a method of sharing files with users. Using a SAN in conjunction with vSphere allows the option of storing the virtual machine files in a central location and of building server redundancy into the vSphere hosts. SAN storage is block-based storage, and vSphere makes this accessible using its VMware Virtual Machine File System (VMFS). This is a high-performance file system which is designed with multi-server and virtual server environments in mind.

vSphere provides various features in relation to data storage management. vSphere manages the amount of resources assigned to virtual disks using a mechanism called thin provisioning. Thin-provisioned disks are virtual disks which start small and grow as data is written to them; a thick disk, by comparison, has all of its space allocated to it when it is created. This allows for more efficient storage management by the host server. iSCSI support and performance have also improved in vSphere 5, which now includes CHAP authentication between client and server. Existing virtual disks can be increased on the fly as long as the guest OS supports this. Virtual disks are stored as VMDK files.

Site Recovery Manager is a very important feature of vSphere 5. Site Recovery Manager uses a set of APIs which integrate SAN-based storage so that it can handle virtual machine, host and storage failover, and it can also manage array-based replication. There is also a separate set of APIs which deal with data protection. vSphere 5 now also supports Fibre Channel over Ethernet, where Fibre Channel frames are encapsulated in Ethernet frames, thus allowing Fibre Channel to use 10Gb Ethernet networks. vSphere requires a separate vSphere 5 licence per system processor.

Citrix XenServer

Citrix Systems' original product lines were based around client-server remote desktop offerings. Recognising that virtualisation was an area of opportunity, Citrix Systems acquired XenSource Inc in 2007 and incorporated the XenSource products into the Citrix range. XenServer is the virtualisation offering from Citrix and is based on the Xen open source hypervisor. Xen originated from a Cambridge University research project which resulted in the formation of XenSource Inc.
XenServer 6.0 is the current release. XenServer runs on a version of Linux which has been modified by Citrix. In a Xen-based system, the hypervisor has the most privileged access to the system hardware. On boot-up, the hypervisor launches an initial guest operating system called Domain 0, which has full access to all system hardware. Any virtual machines which run under Xen are referred to as user domains. User domains can run unmodified copies of an operating system if the underlying processor provides support for Intel VT-x or AMD-V; if the processor does not have this functionality, Xen runs a paravirtualised version of the operating system. XenServer allows the creation of virtual machines and the running of these on a single host server. The XenMotion feature allows live migration of virtual machines between XenServer hosts to facilitate maintenance or address performance issues. Citrix offers very competitive pricing for XenServer while it provides much of the functionality of VMware. There is also a free version that provides a virtualisation platform with a reasonably large amount of functionality.

Hyper-V

Microsoft has been gaining market share over the past two years according to the Gartner Magic Quadrant report, and is currently placing a strong emphasis on Hyper-V and cloud computing. Hyper-V is a component of Windows Server 2008 and is Microsoft's hypervisor. It is only available in 64-bit versions of Windows 2008, but it does allow the creation of both 32-bit and 64-bit virtual machines. Each Hyper-V virtual machine is independent and isolated from the others. Virtual machines may also have multiple processors assigned to them and up to 64GB of RAM. The Hyper-V role is a type 1 hypervisor and runs directly on top of the underlying hardware. A parent partition runs the Windows 2008 Server component, which contains the management software for managing virtual machines. These machines run in separate child partitions, as displayed in figure 2.7:
Figure 2.7 Hyper-V Operation

Virtual machines are called partitions in the Hyper-V environment. The initial partition, the parent partition, must run Windows Server 2008. Subsequent partitions can run other supported operating systems (Windows 2003, Windows 2008, NT and Linux). The hypervisor does not contain any drivers; these are hosted in the parent partition. The aim of this is to provide a more secure environment. An MMC snap-in running in the parent partition allows administration of the partitions. Microsoft defined the virtual hard disk (VHD) format to provide storage for virtual machines: VHDs are files, but virtual machines see them as physical disks. On systems running Hyper-V it is recommended to run Windows 2008 in its Server Core mode, which offers a smaller set of system functions and therefore a reduced attack surface. Using the Virtual Desktop Infrastructure, Hyper-V can be used to provide a Windows client desktop to any client that supports RDP. This can be a very flexible means of providing secure desktops to clients which perform different functions.

The network architecture of Hyper-V is similar in format to that of ESXi and is displayed in Figure 2.8.

Figure 2.8 Hyper-V network architecture (from Virtuatopia)

The above diagram displays the path from the external physical network through to the virtual network adapters in the client virtual machines. The parent partition hosts Windows 2008 Server, which in turn runs a virtual network switch. Each virtual machine running in a separate child partition connects to the virtual network switch via its virtual network adapter.

Hyper-V has several key features which are worth a mention. Dynamic Memory allows memory to be dynamically reallocated between different virtual machines depending on their workload. Live Migration allows virtual machines to be moved to a different host which can afford them better performance; virtual machines can also be moved for maintenance purposes. Virtual machines running under Hyper-V are also designed to consume less power than their physical counterparts. This is achieved by cooperation between Hyper-V and features
built into later-model processors. Hyper-V also utilises Second Level Address Translation, a feature which reduces memory and CPU utilisation for virtual machines. Networking support has also been improved through the support of 'jumbo frames', which means that network throughput is increased for virtual machines as frames of up to 9,014 bytes can be processed.

Potential Benefits

There are several potential benefits which virtualisation can bring to an organisation. Server rollout times can be measured in hours rather than weeks. Without virtualisation, server provisioning was a complicated process involving raising POs, ordering hardware, assembling hardware, racking and cabling the server and then installing an OS along with patches; such a timeframe could extend to weeks. With virtualisation, server templates are maintained which can include the OS and a database application, and these can be rolled out in a timeframe of hours.

Once servers are in production, vSphere and Hyper-V can provide a level of security in relation to application updates. Prior to any application updates being applied to a production server, a pre-installation snapshot of the production server can be taken. This allows you to roll back to a known working environment in the event of any application problems.

Other benefits can be in the area of cost savings. Reducing a company's server count leads to lower electricity consumption, lower cooling requirements and less money spent on new server provisioning. According to the VMware website (VMware 2012), cost savings can be made as follows:

• increased hardware utilisation by 50-70%
• decreased hardware and software capital costs by 40%
• decreased operating costs by 50-70%

VMware also estimates that the server administration burden is reduced, which frees up time for administrators to perform more productive work.

Another area of potential benefit is in the provisioning of older OS implementations. If a company has migrated to Windows 7 as its standard desktop platform, there may be situations of backward hardware compatibility which require an older OS (XP or NT).
In this case, an XP image can be implemented on the VMware server and accessed as a virtual host by the relevant users. Similarly, new operating systems like Windows 8 can be provisioned and made available across the network. This can be very beneficial for developers, who can test software without having to wait for new tablet platforms to arrive.

Benefits are also expected to accrue in the area of energy saving and efficiencies in the temperature management of comms rooms. Temperature readings will be taken from each comms room and, as servers are retired and replaced with virtual equivalents, a measure of the saving may be gauged.

The initial testing environment will consist of two HP Proliant DL360 G6 servers with an Intel Xeon 2GHz E5504 processor, 12GB of RAM and 4 x 146GB SAS disk drives in a RAID 5 configuration. One server will host VMware's ESXi software while the second will host Windows 2008 R2 with the Hyper-V role enabled. It is hoped to introduce a basic-level SAN into the testing environment at a later stage. Windows 2008 R2 clients running MS SQL 2008 will be deployed under both environments. Comparisons will be made regarding SQL and server performance. Feedback from network administrators regarding their virtualisation preferences will also be sought. It is also hoped to introduce an iSCSI SAN storage appliance which will be used for storing virtual machines and providing data access. The simplicity of access to this device will be recorded.

Case Studies

There have been several case studies carried out in the area of server virtualisation implementations (Northern Ireland Water, 2012 and Dell, 2012). These claim great savings and efficiencies for organisations. The aim of this study is to document the implementation of the virtualisation system, to provide reasons for the decision on choosing a virtualisation platform, and to explore ease of use and integration with a basic SAN.

Conclusions

The apparent benefits of virtualisation to SMEs seem to outweigh the risks associated with upgrading a company's server infrastructure to a virtualised environment. According to a survey from the Corporate IT Forum, virtualisation is now a mainstream technology that people are comfortable with. It is hoped that the
case study will provide some evidence that confirms this. There is currently a strong business emphasis on cost savings, and this is pushing virtualisation to the fore as a means of reducing IT costs. This cost reduction also comes with the added benefit of infrastructure modernisation. The cost savings will become more apparent throughout the years of implementation. There is a variety of virtualisation options available from many different vendors, with different pricing schedules available for different needs. It is recommended that IT departments investigate the entry-level virtualisation options for their needs as a way of increasing their exposure to a virtualised environment. Running a virtualised environment offers many challenges with regard to embracing new technology and also with regard to managing virtual servers. The ease of server rollout can lead to server sprawl, so efficient use of virtual server management tools is vital to a successful implementation of virtualisation.

Chapter 3 Company Case Study

Introduction

The purpose of this paper is to compare and contrast the performance of Microsoft's Hyper-V virtualisation platform with that of VMware's ESXi equivalent. Server virtualisation does appear to offer many advantages for businesses, which include the following:

• Consolidation of servers
• Improved response time to business requirements
• Sharing of computing resources to improve utilisation
• Improved flexibility, high availability and performance
• Reduced power consumption

This paper will discuss the implementation and design of both virtualisation platforms. It intends to compare and contrast both platforms with a view to identifying the virtualisation application most suited to providing a platform on which to base a virtualisation strategy.

Background

In order to evaluate which virtualisation application is best suited to Company, two identical servers have had the virtualisation software from Microsoft and VMware installed. The hardware specification of the servers is as follows:

• Server: HP Proliant DL360 G6
• Processor: Intel Xeon 2GHz Quad Core E5504
• RAM: 12GB
• Network cards: built-in HP NC382i DP Multifunction Gigabit Server Adapter; PCIe: HP NC382T PCIe DP Multifunction Gigabit Server Adapter
• Storage: internal: 4 x 146GB SAS disks (RAID 5 configuration);
  external: 1 x 16TB Synology iSCSI Storage Area Network

The basic network configuration used for this implementation is as shown in Figure 1:

Figure 1: Network Configuration

This provides a layered network environment, with the storage device located on an isolated SAN network and the virtualisation servers located on the production LAN. Utilising this configuration, we will endeavour to configure both the ESXi server and the Hyper-V server with as similar a configuration as possible. We will then run some comparative tests to compare the performance of both systems (a sketch of a simple timing harness for such runs follows the list below). Some of the testing will include:

• Network file transfers
• Server scans
• Server start-up times
• SQL operations
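As an illustration of how the file transfer and SQL timings could be captured consistently on each platform, the following Python sketch times a network file copy and a repeated query batch. The share path, database name, table and connection string are hypothetical placeholders rather than the actual workloads used in this case study, and pyodbc (or an equivalent SQL Server driver) is assumed to be installed on the test client; only the server names VSA and VSB come from the inventory table that follows.

    # Minimal timing harness for the comparative tests: a network file copy
    # and a SQL query batch, each timed with a high-resolution clock.
    # The share path, connection string and query below are placeholders.
    import shutil
    import time
    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    def time_file_copy(source: str, destination: str) -> float:
        """Copy a test file to the target server and return elapsed seconds."""
        start = time.perf_counter()
        shutil.copy(source, destination)
        return time.perf_counter() - start

    def time_sql_batch(conn_str: str, query: str, runs: int = 10) -> float:
        """Run the same query several times and return average seconds per run."""
        conn = pyodbc.connect(conn_str)
        cursor = conn.cursor()
        start = time.perf_counter()
        for _ in range(runs):
            cursor.execute(query)
            cursor.fetchall()
        conn.close()
        return (time.perf_counter() - start) / runs

    if __name__ == "__main__":
        copy_secs = time_file_copy(r"C:\test\1GB.bin", r"\\VSA\share\1GB.bin")
        sql_secs = time_sql_batch(
            "DRIVER={SQL Server};SERVER=VSB;DATABASE=TestDB;Trusted_Connection=yes;",
            "SELECT COUNT(*) FROM dbo.TestTable;",
        )
        print(f"File copy: {copy_secs:.1f} s, SQL batch: {sql_secs:.3f} s/run")

Running the same script against the physical servers and against each virtualised equivalent keeps the measurement method identical across platforms, which is what makes the later comparisons meaningful.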
The server testing inventory will consist of the servers listed in Table 1:

Name              | uProcessor | RAM | Role        | OS                         | Hosted On
Physical Server A | PSA        | 4GB | File Server | Windows 2003 Server        | n/a
Physical Server B | PSB        | 4GB | SQL Server  | Windows 2003 Server        | n/a
Virtual Server A  | VSA        |     | File Server | Windows 2008 Server R2 64-bit | Hyper-V
Virtual Server B  | VSB        |     | SQL Server  | Windows 2008 Server R2 64-bit | Hyper-V
Virtual Server C  | VSC        |     | File Server | Windows 2008 Server R2 64-bit | ESXi
Virtual Server D  | VSD        |     | SQL Server  | Windows 2008 Server R2 64-bit | ESXi

Table 1: Server Descriptions

This should allow for a comprehensive range of data from both SQL and file servers for comparison.

Server Setups

To install Hyper-V on a server, you need to obtain and install the Windows 2008 R2 64-bit operating system. The system requirements for Hyper-V from Microsoft are as follows:

• Processor: x64-compatible processor with Intel VT or AMD-V technology enabled.
• Hardware Data Execution Prevention (DEP), specifically the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit), must be available and enabled.
• Minimum CPU speed: 1.4GHz; recommended: 2GHz or faster.
• RAM: minimum: 1GB; recommended: 2GB or greater (additional RAM is required for each running guest operating system); maximum: 1TB.
• Available disk space: minimum: 8GB; recommended: 20GB or greater (additional disk space is needed for each guest operating system).
• DVD-ROM drive.
• Display: Super VGA (800 x 600) or higher resolution monitor.
• Other: keyboard and Microsoft Mouse or compatible pointing device.

It is important to note that a 64-bit processor with the virtualisation technology enabled is required; Hyper-V only runs under a 64-bit operating system.
The server is configured using the HP SmartStart software, which allows configuration of the disks into a RAID 5 disk array. Once Windows has been installed and updated with the latest security fixes, you must enable the Hyper-V role. Installing this role requires a server reboot. Upon the reboot, the Hyper-V role is enabled, and this loads a type 1 hypervisor before the Windows 2008 operating system in true type 1 hypervisor fashion. This allows Hyper-V to manage access to the underlying hardware for the client virtual machines.

Hyper-V functionality is based on the concept of a parent partition and multiple child partitions, one for each of its individual virtual machines. The existing Windows 2008 host operating system is placed into the parent partition, and each of the host server's virtual machines is placed into a child partition. In order to provide this functionality, the host OS has its physical NIC unbound from the TCP/IP protocol. This provides an interface layer between the physical NIC and the Hyper-V software, known as a virtual switch. This connection between the NIC and the virtual switch is the only connection between the physical hardware and the virtualisation software. The original NIC has only one network protocol bound to it, which is the Microsoft Virtualisation Network Switch Protocol. Each child partition has its own individual virtual NIC associated with it, and these virtual NICs communicate with the parent partition using the MS VNS protocol. The virtual switches operate in a similar fashion to a physical Ethernet switch: they provide connectivity between the virtual hosts on the Hyper-V server, as shown in figure 2.
Figure 2: Hyper-V Network Architecture

In order to examine all the NICs in the Hyper-V server you need to go to Start – Control Panel – Network and Sharing Centre – Change Adapter Settings. This will display all of the NICs in the server. If you examine the network properties of the original NIC, you will see it only has the MSVNS protocol bound to it, as shown in Figure 3:
Figure 3: Network Bindings of Original NIC

Installing the Hyper-V role creates an additional NIC which inherits the original NIC's protocol bindings. To make the network card available to virtual machine clients, you need to configure it on the Hyper-V server. This is done as follows: Start – Administrative Tools – Hyper-V Manager, then select Virtual Network Manager. From there you create a new virtual network which is associated with a physical NIC. This also creates a Microsoft virtual switch for each physical NIC in the server. This step is necessary in order to provide access to a virtual NIC for any virtual machines created. It allows for great flexibility when setting up virtual machines, as they can be assigned to different virtual networks.

On the virtual machines, each virtual NIC connects into a virtual switch. The virtual switch you connect to determines the other machines in your virtual network with which you can communicate. These NICs have the functionality of a standard physical NIC and are identified as a Microsoft Virtual Machine Bus Network Adapter. Virtual machines will operate in the same manner as their physical counterparts.

The Virtual Network Manager allows for the creation of three different categories of virtual networks:
• External virtual network
• Internal virtual network
• Private virtual network

An External virtual network is used to allow hosted virtual machines to connect with other network resources located on the Company network. Virtual server clients can also communicate with other virtual clients on the host server. An Internal virtual network is an isolated network. It is used to allow virtual hosts to communicate with each other on their own private network and is typically used as a testing environment. This network is not bound to the host server's physical NIC, thus providing isolation.
A Private virtual network is a type of network that only allows communication between virtual machines on the same virtualisation server. It provides isolation from all other network traffic on the host server, including any management traffic to the host server.

Each virtual NIC on a server connects into a virtual switch on the host server. The virtual switch is enabled once the Hyper-V role is installed. This is a fully functional layer 2 switch. The switch supports the NDIS specification and Windows Filtering Platform callout drivers, which allows software vendors to create plug-ins providing enhanced security and networking capabilities.

SAN Configuration

Virtual server images vary in size depending on the virtual server's configuration. Most servers would require a virtual hard disk on the order of gigabytes. In order to provide sufficient storage capacity for these virtual machine images, it is necessary to provide additional storage space. This can be implemented using a basic iSCSI Storage Area Network. An iSCSI SAN offers many advantages over the traditional Fibre Channel SAN implementation, including an easier installation process, simplified maintenance and a more cost-effective solution for SMEs.

The Small Computer System Interface (SCSI) is a means of attaching additional storage devices to server hardware. It defines a set of commands which govern data transfer and the protocols associated with that transfer. iSCSI is a mechanism for transferring these commands over the TCP/IP protocol, thereby extending the range and flexibility of providing access to storage media. A SAN is a means of providing extensible storage for servers while offloading some of the additional processing associated with providing this storage from the server to the SAN. Performance is also improved by the provision of a separate storage LAN for the storage devices.

A SAN provides many benefits for the provisioning of shared storage. It can allow for the consolidation of storage resources into a common pool. It is very scalable, and storage can be resized to meet specific requirements. The SAN can also be managed remotely and provides a single point of control. Storage can be shared among different servers, providing consolidation and also a reduction in backup time. Backup traffic can also be confined to the SAN network. The SAN storage provided to remote servers appears as locally attached storage: the server treats the disk in the same fashion it would a local drive. This allows for a lot more flexibility with regard to disk management for the servers. SAN networks are traditionally implemented using Fibre Channel, but this is considerably more expensive than the IP-based solution implemented in our environment. It could be considered for implementation at a later stage.
There are different types of storage media available for servers to access, ranging from internal disks to external disk arrays. SCSI stands for Small Computer System Interface and is a means of attaching additional storage to a computer. SCSI is the main protocol used for communication between servers and storage, and this storage is normally internal to a server. The SCSI commands allow access to this storage and operate at block level, which is low-level disk access. As far as a server is concerned, the disk appears as if it is located within the server rather than at a remote location. iSCSI is a mechanism whereby the SCSI commands are encapsulated within TCP/IP packets. This means that SCSI Command Descriptor Blocks (read/write commands) are sent over the IP network in IP packets to their destination. This allows servers to connect to SAN storage anywhere on the IP network, thus providing great flexibility in SAN provisioning. The remote storage devices appear to the server as internally attached devices.

The storage solution implemented by Company was a Synology RackStation running DSM 4.0. This is a rack-mountable unit capable of holding 10 × 2 TB disks. These are configured on the device in a RAID 6 array with a 14 TB volume size. The RAID 6 array allows for the failure of two physical disks. This is an important consideration given the large size of the volume: in the event of a disk failure, the time involved in rebuilding from a single disk failure could allow for the occurrence of a second disk failure. As the SAN will be storing virtual machines, maximum uptime is a requirement. Apart from the RAID 6 configuration, any SAN implementation should also contain a dual PSU configuration for added resilience. Consideration should also be given to SAN failover. A cheaper backup SAN device could be considered at a future date; this could allow for the replication of server images between the two SAN devices.

The Synology Storage Manager is displayed in Figure 4 and shows a number of important SAN concepts. This is the tool used to create and manage storage areas on the SAN device.

Figure 4: Storage Configuration
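For context on the capacity figures quoted above, a rough calculation (ignoring filesystem overhead and the difference between decimal and binary units) shows how the raw and usable figures relate:

Raw capacity: 10 × 2 TB = 20 TB
RAID 6 usable capacity: (10 − 2) × 2 TB = 16 TB (decimal)
16 TB ≈ 16 × 10^12 bytes ÷ 2^40 bytes/TiB ≈ 14.5 TiB

This is consistent with the 16 TB figure quoted for the unit earlier and with the approximately 14 TB volume size reported by DSM, which likely reports in binary units.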
The iSCSI Target tab allows us to create individually named and accessible storage blocks. The size of these units is specified when they are created. In this instance we have created 3 × 500 GB logical storage units which reside on the physical SAN. Each of these logical devices is identified by a Logical Unit Number (LUN). The LUN is a field within SCSI that identifies the logically addressable unit within a target SCSI device. Each LUN represents an individually addressable storage area and can be up to 64 bits in length. The iSCSI name is an independent identifier which persists for the lifetime of the storage unit. The associated iSCSI address consists of the iSCSI name, IP address and TCP port number. Depending on the iSCSI target software, security settings can allow the specification of which remote server IP addresses are allowed to connect to each LUN. CHAP authentication may also be enabled, which requires the remote host to supply a password in order to connect.

The remote server connects to the LUN using an iSCSI initiator. This is a software client which runs on servers and is built into Windows 2008 R2. The iSCSI initiator uses the NIC and its network stack to emulate SCSI devices. Once the iSCSI initiator has connected to the remote iSCSI LUN, it treats the storage area the same way it would a locally attached disk and is able to format and manage the iSCSI LUNs. Once the iSCSI target has been connected to by the iSCSI initiator software, the storage area appears in the local disk management console. It will be identified under disk properties as an 'iSCSI Storage SCSI Disk Device'. There are a variety of iSCSI implementations available for a host of different operating systems. If we run the iSCSI initiator software on the Hyper-V host server, we see the associated iSCSI targets as displayed in Figure 5:
Figure 5: iSCSI Targets

These targets correlate directly to the storage volumes created on the SAN device. The status is also displayed, and we can see that 'San3' is connected. This provides what appears to be a locally attached 1 TB storage disk for the Hyper-V server. iSCSI initiators and targets are identified by an IQN, a unique string that identifies each iSCSI node. The Discovery tab allows you to specify remote SAN devices upon which iSCSI targets have been enabled for discovery. The iSCSI target software on the Synology unit allows the entering of a CHAP password; this password will be required for the remote iSCSI initiator software to make a connection. Other iSCSI target applications allow specific remote IP addresses to be specified as clients. Both of these options should be considered in order to provide a secure operating environment.

The ESXi server also contains an iSCSI initiator. This is found under the 'Storage Adapters' tab in ESXi. The SAN storage device IP address can be added to the list of iSCSI devices. This provides us with a listing of the same storage areas on our SAN, as shown in Figure 6:
Figure 6: ESXi iSCSI Targets

To add any of these devices, we need to select the 'Storage' tab, choose Add Storage – Disk/LUN and select the LUN of the storage area we require. This will then be displayed as a locally attached datastore under Storage. Virtual machines and operating system source files may be stored here as a result.

Network Configuration

Once the SAN disk array has been created, it is necessary to make the SAN available to network servers. For maximum efficiency it is best to utilise gigabit switches and network cards. On the host servers, it is also advisable to enable NIC teaming. This is where two individual NICs are combined into one virtual NIC using software from the NIC vendor, resulting in better performance and NIC resilience. On our Hyper-V server this was achieved using the HP Network Configuration Utility. On the ESXi server it was enabled by using the vSphere Client to connect to the ESXi server and then going into Configuration – Networking. This allows you to view and configure networking options for the server, as shown in Figure 7:
Figure 7: ESXi Network Configuration

To create a NIC team, select the properties of the vSwitch, then 'Edit', and then select the 'NIC Teaming' tab. This creates a redundant network pair for the ESXi vSwitch. By selecting each switch, additional NICs can be added or removed as required. This interface also displays very clearly the virtual switches which are configured on the host server. In our configuration we have two virtual switches, one for the production LAN and the second for the SAN network. The SAN network switch is currently only operating at 100 Mbps, which is not ideal; it is envisaged to upgrade this switch to a redundant gigabit switch at a later date. In order to provide access between the SAN network and the LAN, each server requires an additional NIC which connects into the separate SAN switch. Ideally this will be a dual-port NIC. Both of our servers had an HP NC382T PCIe DP Multifunction Gigabit Server Adapter added. This was teamed using the HP software and connected to the SAN switch.

Another feature that should be enabled on the network is jumbo frames. The standard Ethernet frame size is 1500 bytes; enabling jumbo frames increases the frame size to 9000 bytes. This should result in increased performance for Windows servers, as typical NTFS block sizes (4,096 to 8,192 bytes) fit comfortably within a single frame. Jumbo frames offer less overhead, better network performance and higher throughput. On an HP ProLiant server, the size of the jumbo frames can be set using the HP Network Configuration Utility. Be aware that your SAN device must also have jumbo frames enabled or it will not communicate correctly with your host server.

It is normally considered best practice to keep iSCSI traffic on a separate network from normal production data. There are several reasons for doing this. Having the SAN devices on a separate network offers additional security options, both in terms of restricting access to the SAN and in protecting the SAN devices from any malware which may infect production machines. To maximise performance for the SAN network, a gigabit Ethernet switch should be configured in a redundant failover manner to maximise uptime, and all ports should be configured for full duplex operation. Another reason for SAN network isolation is to segregate the iSCSI traffic itself: iSCSI traffic is unicast in nature and has the potential to utilise the entire link. Once the SAN is configured, a software tool called an iSCSI target is run on the SAN. This allows the SAN storage devices to be identified by means of a Logical Unit Number. The remote servers run the iSCSI initiator software to locate and connect to the storage areas on the SAN; a command-line sketch of this connection process is shown below. iSCSI traffic is generally sent in clear text (CHAP provides authentication rather than encryption), so having it on a separate network segment also enhances security.
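As an illustration of the initiator side of this process, the connection from a Windows 2008 R2 host to an iSCSI target can also be scripted using the built-in iscsicli.exe utility rather than the iSCSI Initiator control panel. The portal address and IQN below are placeholder values, not those used in Company's environment:

# Register the SAN's iSCSI portal with the local initiator
iscsicli QAddTargetPortal 192.168.10.5

# List the target IQNs advertised by the portal
iscsicli ListTargets

# Log in to the chosen target; the LUN then appears in Disk Management
iscsicli QLoginTarget iqn.2000-01.com.synology:rackstation.san3

CHAP credentials and persistent (boot-time) logins can also be configured through iscsicli or the Initiator GUI if required.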
Server Testing

Server Scans

As part of the testing, comparisons were made against a production physical server, PSA. This server runs the Windows 2003 32-bit operating system and is a DL380 G5 with 4 GB of RAM. A malware scan was run on the physical server and it completed in 1 hour 32 minutes, as shown in Figure 8:

Figure 8: Malware scan result on physical server

The data stores from this server were replicated to a virtual server (VSA) running on the Hyper-V host. VSA was assigned 6 GB of RAM and the Windows 2008 R2 64-bit OS. The same scan was run on this virtual server (VSA) and its scan time was as shown in Figure 9:
Figure 9: Malware scan result on virtual server

This is considerably faster than on its physical equivalent. We do, however, have to take into consideration that the virtual server has 6 GB of RAM compared to the physical server's 4 GB.

Server Performance

The following categories will be used for evaluating server performance:
• Hypervisor Overhead
• File Transfers
• Benchmarking Software Comparisons
• iSCSI Performance
• SAN Connection Performance
• SQL Performance

Hypervisor Overhead

One operational issue, flagged by a disk alert on the host Hyper-V server, was that free disk space on its D: drive dropped to 159 MB. This drive is where the virtual machines it hosts are stored. Each virtual machine has one or more virtual hard disks associated with it, and the location of these files is specified within Hyper-V Manager. The Virtual Machines folder also contains a 'Snapshots' folder holding the online backups associated with each virtual machine. As a result of the lack of free disk space, the Hyper-V server was no longer able to run the hosted virtual machines, placing them into a 'paused' state. These virtual hosts were no longer accessible from the network. Once disk space was cleared on the host server, they could be restarted and resumed operation.
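An incident like this could be caught earlier with a simple scheduled free-space check on the host. The sketch below uses the WMI cmdlets available in Windows 2008 R2 PowerShell; the 10 GB threshold is an arbitrary example value rather than a figure from Company's monitoring policy:

# Warn about any local fixed disk with less than 10 GB free
$thresholdGB = 10
Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" |
    Where-Object { ($_.FreeSpace / 1GB) -lt $thresholdGB } |
    ForEach-Object {
        Write-Warning ("Drive {0} has only {1:N1} GB free" -f $_.DeviceID, ($_.FreeSpace / 1GB))
    }

Run as a scheduled task, a check of this kind could give advance warning before virtual machines are paused.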
Upon monitoring the folder which stores the snapshot data, it was found that it contained several large .AVHD files which grew each evening, as shown in Figure 10 below.

Figure 10: AVHD File Growth

You will note that the .VHD files remain static but the .AVHD files grow in size. This is because the snapshot information relating to server changes is saved in .AVHD format, which records the difference between it and its associated .VHD file. Each .VHD file has an associated Automatic Virtual Hard Disk (AVHD) file. Microsoft advises not to delete the AVHD files directly but to use Hyper-V Manager to delete the entire snapshot from the snapshot tree. There do not appear to be many configurable options relating to managing snapshots in the Hyper-V console, and the Microsoft website does not have a lot of readily available information on managing snapshots. In order to remove the AVHD files I found I had to use the following procedure:
• Delete the snapshot via the Hyper-V console
• Turn off the virtual machine (not Shutdown) via the Hyper-V console
• Leave the virtual machine off long enough for the merge to complete. You need to scroll across the Hyper-V console in order for this status information to be displayed:

Figure 11: Snapshot Merge Process Status

Once this merge process completes, the AVHD file is removed from the disk, thus freeing up disk space. The time involved for this operation relates to the size of the virtual machine and will have to be factored in for production machines. It does again
highlight the need to provide sufficiently large storage areas for the virtual machines at the setup phase.

The snapshot process for ESXi works in a similar fashion to Hyper-V: right-click the server and select Take Snapshot. The snapshot image data is stored as three files: a virtual disk, a virtual machine log file and a snapshot file. These files are stored in the same folder as the original virtual machine.

While not directly related to virtualisation, when using the Windows 2008 R2 64-bit OS on both the client and host it was noted that a Windows folder called 'winsxs' was created. This folder was found to be several gigabytes in size. In order to reduce its size, the following command was run:

DISM.exe /online /Cleanup-Image /spsuperseded

This resulted in an increase in free disk space on the C: partition of a host machine of 2.3 GB. The winsxs folder contains various components of the OS; the DISM command removes backup files created during service pack installation. As a function of server maintenance, running this command should help ensure that the Windows 2008 R2 servers have sufficient free disk space to function correctly.

File Transfers

To test network performance, a 2.93 GB file was copied over the network to both virtual clients. The results (time to copy) are listed below:
Hyper-V Client: 03:41:09
ESXi Client: 00:56:09
Physical Server: 01:22:35

Inspection of the NICs on both servers showed that the ESXi server was running at 1 Gbps while the Hyper-V client reported that it was running at 10 Gbps. Examination of the host Hyper-V server also showed that it reported a 10 Gbps running speed. This is explained by the Hyper-V virtual network switch running at 10 Gbps. Further examination of the physical NICs showed that they were running at the expected 1 Gbps speed. Upon enabling the jumbo frames option on the Hyper-V client virtual server, the transfer time for this file dropped to 03:24:78. The jumbo frame option was then enabled on the Hyper-V host server and the file transfer was tried again after both servers were rebooted. The transfer this time took 03:45:60. It initially appeared to copy faster but practically stopped after approximately two-thirds completion. The same file was copied to another Hyper-V host in 01:20:08. After turning off jumbo frames on the Hyper-V host, the file was copied again. This time it again completed two-thirds of the file in about 50 seconds but took 03:26:23 to copy the entire file. This behaviour was observed consistently over a number of occasions. Using the RoboCopy utility produced a much more even performance and copied the file in 02:31:11. Redoing the file copy testing with RoboCopy produced the following results.
RoboCopy results (time to copy, speed in bytes/sec):
Hyper-V Client: 03:19:00, 15803923
ESXi Client: 02:45:00, 19124740
Physical Server: 11:44:00, 4481021

A further comparison was carried out by copying a compressed file between the three clients. The results were as follows.

File size 263.7 MB (time to copy, speed in bytes/sec):
Hyper-V Client: 00:00:21, 12714995
ESXi Client: 00:00:04, 65217943
Physical Server: 00:00:09, 29553295

File size 2.9 GB (time to copy, speed in bytes/sec):
Hyper-V Client: 00:07:58, 6520640
ESXi Client: 00:09:49, 5287839
Physical Server: 00:09:21, 5553035

This test gives a clear advantage to the ESXi client in performing network operations, even over its older physical rival.

Benchmarking Software Comparisons

We will utilise several benchmarking tools which run tests on different aspects of the server hardware. The following tools will be used to evaluate the various hardware components of Company's servers:
• CPU/Processor: SiSoft's SANDRA 2013
• GPU/Graphics Board: NovaBench
• Memory: AIDA64 Extreme Edition

Please note that it is important to read the documentation provided with these test tools, as running them may affect data residing on the servers. These tools were run on two virtual servers, VSD (the ESXi client) and VSB (the Hyper-V client). Both of these servers have similar capabilities, as per Table 1.

SiSoft's SANDRA 2013

SiSoft Sandra 2013 was used to compare processor performance. Upon launching the application, select Benchmarks – Processor – Processor Arithmetic. We also
tested Benchmarks – Processor – Processor Multi-core Efficiency. A sample of the results from this test is displayed in Figure X below. The results are very close but do appear to give a slight performance edge to the Hyper-V client.

AIDA64 Extreme Edition

AIDA64 Extreme Edition provides a disk and memory benchmark facility. Disk tests were run against both virtual servers. The results are displayed in Figure 12:

Figure 12: AIDA64 Results

These results indicate better disk performance from the VMware client. AIDA64 was also used to determine the cache and memory benchmark for both servers. These are displayed below.

Figure 13: ESXi Client Memory Benchmark
Figure 14: Hyper-V Client Memory Benchmark

Again with these tests, the data suggests that there is very little difference in performance between the two platforms.

NovaBench

This is a simple tool to use and it provided some interesting timings. It was run on the two Windows 2008 R2 virtual servers and tested processor and memory/disk writing speeds. The results are displayed in Figure 15.

Figure 15: NovaBench Results

These results appear to indicate that the VMware client provides faster processor performance while the Hyper-V client performs better at disk and memory writing. Overall, the results from the benchmarking tools provide evidence that both virtual servers perform at similar operational levels. There is no clear case to be made for one platform over the other on the basis of these tests.
iSCSI Performance

To test iSCSI performance, a utility called IOMeter was installed and run on each virtual machine. Please note that this utility creates a local test file (iobw.tst) as part of the test process. This file can grow dynamically to over 4 GB in size, so attention needs to be paid to free disk space on the C: drive as a result. IOMeter can be configured to perform a series of different I/O operations on the server disks. These categories are selected under the 'Access Specifications' tab. For our testing we selected the same series of block sizes on both servers. The results from IOMeter are displayed below in Figures 16 and 17.

Figure 16: Hyper-V Client IOMeter Result

Figure 17: ESXi Client IOMeter Result

The comparison shows that I/O performance is again better on the ESXi client.
SAN Connection Performance

SQL Performance

One of the tests performed on the virtual MSSQL servers was stress testing using the Microsoft SQLIOSIM utility. To quote from Microsoft, this tool is used to "perform reliability and integrity tests on disk subsystems. These tests simulate read, write, checkpoint, backup, sort, and read-ahead activities for Microsoft SQL Server. However, if you have to perform benchmark tests and to determine I/O capacity of the storage system, you should use the SQLIO tool." The SQLIO tool is "a tool provided by Microsoft which can also be used to determine the I/O capacity of a given configuration." According to the SQLIO documentation it will provide performance statistics on throughput, disk latency and the number of I/Os per second, i.e. it should provide enough information to gain a basic understanding of the system's current I/O performance. To configure SQLIO to test all disk partitions, ensure that param.txt includes the C: and D: drives.

Table 2: Commonly used SQLIO.exe options
-o   Specify the number of outstanding I/O requests. Increasing the queue depth may result in higher total throughput, but setting it too high may cause problems. Common values are 8, 32 and 64.
-LS  Capture disk latency information. Capturing latency data is recommended when testing a system.
-k   Specify either R or W (read or write).
-s   Duration of the test in seconds. For initial tests, running for 5-10 minutes per I/O size is recommended to get a good idea of I/O performance.
-b   Size of the I/O request in KB.
-f   Type of I/O to issue: either 'random' or 'sequential'.
-F   Name of the file which contains the test files to run SQLIO against.

Using the SQLIO tool, the following command was executed:

sqlio -kR -s360 -fsequential -o8 -b256 -LS -Fparam.txt

The results are displayed in Table 3 below.

Table 3 (Hyper-V Client, ESXi Client):
IOs/sec: 419.34, 806.84
MBs/sec: 104.83, 201.71
Min_Latency(ms): 14, 0
Avg_Latency(ms): 75, 39
Max_Latency(ms): 609, 1304

The following command was also issued:
sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt

Its results were as per Table 4:

Table 4 (Hyper-V Client, ESXi Client):
IOs/sec: 74.85, 198.99
MBs/sec: 0.58, 1.55
Min_Latency(ms): 10, 5
Avg_Latency(ms): 419, 158
Max_Latency(ms): 914, 2096

These results correlate with the network transfer results. According to this data, the I/O of the ESXi client is much more efficient than that of the Hyper-V client. The average latency figures also indicate that the ESXi client has a shorter delay time than its Hyper-V rival.

A comparison was also conducted on a replica of Company's SQL database. The database was backed up to disk and the resultant .BAK file was compressed using the 7z compression utility. This file was then copied over the network to both test hosts. The file size was 293 MB compressed and 3.22 GB when extracted. The data transfer rates of the file copy were as follows:
Hyper-V Client: 2780.137 MegaBytes/min
ESXi Client: 1195.283 MegaBytes/min

The time taken to extract the file on each host was as follows:
Hyper-V Client: 1.22.28s
ESXi Client: 2.47.32s

The time taken to restore the database from the local D: drive was as follows:
Hyper-V Client: 08.43.88s
ESXi Client: 05.57.66s

Once the database was functional on both servers, tasks were run and performance times noted. Database backup to local D: drive:
Hyper-V Client: 5.50.26s
ESXi Client: 4.21.85s

Once the databases were operational on both servers, SQL queries were run against them to test response times. The queries are outlined in the appendix.
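For reference, one straightforward way to capture comparable wall-clock timings for such queries is to wrap a sqlcmd invocation in PowerShell's Measure-Command. This is an illustrative sketch only; the instance, database and script names are placeholders, and it is not necessarily how the figures below were recorded:

# Time one of the test queries against a local SQL Server instance
Measure-Command {
    sqlcmd -S localhost -d CompanyDB -i .\query1.sql -o .\query1_output.txt
} | Select-Object -Property TotalSeconds

Running the same script twice in succession also gives a simple view of the effect of caching on the second (warm) run.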
The results were as follows:

Query 1 (153709 rows processed): Hyper-V Client 00:00:19 initial / 00:00:07 cached; ESXi Client 00:00:16 initial / 00:00:06 cached
Query 2 (443 rows processed): Hyper-V Client 00:00:00 initial / 00:00:00 cached; ESXi Client 00:00:00 initial / 00:00:00 cached
Query 3 (32 rows processed): Hyper-V Client 00:00:15 initial / 00:00:03 cached; ESXi Client 00:00:32 initial / 00:00:03 cached
Queries 1-3 combined (154184 rows processed): Hyper-V Client 00:00:10 initial / 00:00:10 cached; ESXi Client 00:00:11 initial / 00:00:10 cached

Once a SQL query has been run, its plan and data are cached in memory in order to reduce future run times. The ESXi client shows a slight performance advantage over the Hyper-V client.

Pricing

Microsoft has simplified the pricing structure for its Windows 2012 product line. There are now just two editions, differentiated by the number of virtualised clients they may run. The Standard Edition allows two virtual clients to be run, while the Datacentre Edition allows an unlimited number of virtual clients. The current costing for both versions is listed below:

WinSvrStd 2012 SNGL OLP NL 2Proc: €910
WinSvrDataCtr 2012 SNGL OLP NL 2Proc Qlfd: €5,328

Each licence covers two processors per server; adding further processors to a host server requires that you obtain an additional Windows Standard licence. This is very attractive pricing, as any Windows server clients running under a host Datacentre licence are covered by that host licence. This could prove to be a very cost-effective way of covering the cost of Windows server licensing for many organisations, while at the same time realising the benefits of virtualisation.

VMware has a different pricing structure, whereby end users pay an annual licence subscription as follows:

HP VMw vSphere Std 1P 1yr E-LTU: €1,073

Each additional server processor also requires its own ESXi licence. To assist with managing multiple ESXi servers, VMware provides a management application, vCenter, which is also available as an annual subscription:

HP VMw vCntr Srv Std 1yr E-LTU: €5,105
This is possibly beyond the budget of most SMEs. The additional cost of the ESXi licence needs to be incorporated into the annual budget. Given that it is a recurring expense, it may be treated as an operating expense as opposed to a fixed asset charge.

Conclusions

From the testing and implementation work carried out for this project, the following basic conclusions can be drawn. There is very little difference in actual server processing performance between the two candidates, but a large difference in their input/output performance. The other main difference relates to host virtual server maintenance. During the course of the testing there was very little, if any, maintenance needed on the ESXi server. The Hyper-V server, on the other hand, encountered an issue relating to disk space which caused a client server outage and required a solution to be implemented. Another area of maintenance was the monthly application of the latest Microsoft security fixes, which must be applied to the host Hyper-V server. As a consequence, its client servers will need additional downtime, which will need to be factored into the maintenance schedule for the Hyper-V clients.

I think that the evaluation process showed that there is great potential and flexibility, along with business value, to be gained from a virtualisation implementation. During the evaluation, four additional Windows 2008 R2 servers were added to the company network. The speed and ease with which these servers were added was a vast improvement on the process involved in obtaining and installing a physical server. The decision regarding which platform to choose is difficult to make. The pricing for a Datacentre edition licence is a compelling argument in favour of Hyper-V over ESXi, as it can allow an SME to consolidate its licensing. The familiarity of the Hyper-V interface is also a great advantage. Against that are the additional maintenance of security patches, the disk usage issues and the weaker network throughput. Is the cost factor the most compelling argument in the decision for choosing Company's virtualisation platform?

Chapter 4 Final Project Review

Introduction

At the beginning of this project, Company had 45 physical servers in production. Server room space was becoming an issue, along with concerns relating to server power consumption and the related costs of server room cooling. The other major driver behind this implementation was the lack of flexibility in providing new server builds to the Business Development department. We will examine the implementation to assist in determining the progress made regarding virtualisation.
Implementation and Testing

Prior to implementation, Company's server infrastructure consisted of a flat network environment and physical servers. This server infrastructure has been enhanced with the addition of a fully functional VMware ESXi server hosting two clients and a fully functional Windows 2008 Hyper-V server also hosting two clients. The network infrastructure has also gained an additional network segment which hosts Company's SAN in an isolated network environment. The addition of a functional iSCSI-based Storage Area Network is also a major enhancement to Company's overall network infrastructure. This is a positive development for Company, even before we begin to evaluate any effects which virtualisation may have on the organisation.

One of the main issues which arose during this implementation was that of server identification and location. Virtual servers are quite easy to deploy and connect to the host network, but knowing where these servers are hosted is essential for the successful operation of Company's server infrastructure. One addition to the basic virtual server configuration was an additional information item in the BGInfo background display identifying the virtualisation host server, as shown in Figure 1 below.

Figure 1: BGInfo Display Information

This provides the logged-on user with the name of the client's host server. The client server also has a different naming nomenclature from its physical counterparts, through the addition of the letter 'v' in its hostname. These steps can help identify that a server is virtual and may be of benefit during a troubleshooting process. Further to this, a spreadsheet is maintained by Company which identifies all hardware assets. An additional Virtual Server section was added here, which also includes details of any iSCSI targets to which each virtual machine attaches.

Management of the SAN is also a very important element of this project. Figure 2 below displays the current iSCSI targets which have been configured on the Synology RackStation. These devices are configured with a standard naming convention in keeping with Company's policy.
Figure 2: iSCSI Target Configuration

Monitoring and management of this device is very important as it hosts the virtual clients' VHD files. It also allows administrators to keep track of which servers are connected to individual iSCSI targets. Running the iSCSI initiator software on a host server allows confirmation that the connection has been made to the iSCSI target.

Figure 3: iSCSI Initiator Report

It is extremely important that all iSCSI targets, and the servers which connect to them, are documented upon creation. This is a process which will evolve over time. SAN management is as important as virtual server management, and management of both of these components of the virtual environment is essential for the long-term success of this virtualisation implementation. In the event of a server outage, having the correct documentation will allow for an easier and quicker server restoration process.

Another issue which came to light during the course of this exercise was the lack of acceptance of the ESXi application by server administrators. Company's administrators are primarily from a Windows-based skill set. As a result, administrators felt more comfortable working in the familiar Windows tool set provided for Hyper-V management. This resulted in more time spent on the Hyper-V server than on the ESXi server. The ESXi tools are installed locally after an HTTP connection is made to the ESXi server; this prompts the user to install a local vSphere Client for managing both the ESXi server and its clients. The interface subsequently installed provides this access in a very clear and efficient manner. Some of the configuration diagrams are extremely clear and informative, but it is a very different interface from the standard Windows tools. If choosing the VMware path, training and familiarisation with the ESXi tools is a necessity.
Power Comparisons

Prior to this virtualisation work beginning, temperature graphs were taken from a comms room using the ServersCheck software and its associated temperature sensor. Figure 4 displays the resultant temperature graph for a 24-hour period, with the temperature displaying a value of approximately 17°C.

Figure 4: Temperature Graph, Full Server Complement

As a consequence of the virtualisation work being undertaken, two older ProLiant DL380 G4 servers were decommissioned and powered off. They were replaced in the comms room by the Synology RackStation. After a settling-in period of two months, a second temperature reading was taken and this is displayed in Figure 5.

Figure 5: Temperature Graph after Server Retirement
The temperature in this comms room appears to have dropped closer to 16°C. While not a scientific experiment, this exercise does give an indication of the benefits of using virtualisation to reduce power consumption and also highlights the difference in power consumption between different generations of server.

Issues Encountered

During the testing phase, the main issue which came to light was the large amount of server disk space consumed, both on the host servers and on their clients. This can be mitigated by the correct sizing of client servers. One of the advantages of virtualisation is that, if additional resources are needed, the client server can be powered off and additional resources allocated. Once the client is back online, the standard disk management tools within Windows can be used to allocate the additional disk space to the relevant partition. Attention also needs to be paid to monitoring the snapshot folder size, as this was an issue which came to light during the case study. The addition of disk space alerts to network monitoring applications is essential, and it is also very important to perform a weekly physical check of each host virtualisation server to ensure that sufficient disk space is available. The server snapshot facility is an extremely useful tool for administrators, but it does utilise large amounts of disk space for each server. For this reason, sufficient administrative resources and processes must be allocated to managing the host servers.

Demonstrations

As part of the implementation process, a demonstration was arranged to display the ESXi and Hyper-V platforms and to give an introduction to virtualisation. The audience included Business Analysts, whose roles mainly cover database administration, reporting and Business Intelligence functions. Senior management was also present. The general response was very favourable. One of the topics covered was virtual appliances and the fact that many organisations make their applications available as virtual appliances which can be downloaded and installed in a reasonably quick timeframe. The network monitoring application Nagios was downloaded and its integration as a virtual machine client on the ESXi server was demonstrated. The ease with which this application was up and running was clearly evident to those present and prompted a lot of positive feedback. One of the other noticeable observations from the demonstration was that the Business Analysts were able to see that the flexibility of a virtual environment could give them the option of additional servers to perform specific tasks. Discussions were started relating to how additional servers could assist with reporting and scheduling tasks. Senior management was also impressed and, subsequent to the demonstration, expressed a desire to proceed to the next stage of virtualisation implementation.

Implementation

During the course of the evaluation, Company's main data file server developed an issue which caused it to become unreliable as a file server. As a result, a virtual server was configured on the Hyper-V host and Company's data copied to this server. Login scripts were changed for users and a server migration carried out from the unreliable physical host to the new virtual server. The nightly backup schedule was altered to include the new virtual host. Thus far there has been no noticeable