Platform Virtualization in the Enterprise: A Study

Shivanshu Singh
shivanshukumar@gmail.com

Abstract

Virtualization, to begin with, is a technology which takes a resource and divides it up into multiple virtual resources so that they can be used in parallel. Processing time is reduced, maintenance and administration are simplified, and concerns such as network performance, system reliability, power consumption and cooling requirements are addressed, all while catering to a larger number of clients simultaneously. We present here a survey and study of virtualization technology in different areas of enterprise computing, examining how it enables better performance in some cases and provides features like availability, integration, better administration and reliability in others, or both.

1. Introduction – Virtualization in the Enterprise

The goal of any IT enterprise is to make the most that it can out of its hardware and software resources - processors, storage, network, software licenses and so on - in order to bring down costs, adapt to the dynamically evolving requirements of the company itself and of its clients, and develop the capability to recover from IT related failures as quickly as possible in a simple and cost effective manner.

As far as catering to an ever increasing number of clients is concerned, the traditional approach has been to bring in more computational power so that the execution of the process (application) in question is sped up. There are two ways to do that: one is to discard existing hardware and deploy new, faster and better performing hardware. This approach has obvious disadvantages, such as the high cost of replacing the existing hardware. The second is to add new hardware, not necessarily better in performance than the existing hardware, and connect it all up in a network. This approach is plagued by issues such as network costs, high power consumption, complex administration and maintenance, and an increased number of points of failure. But if we are to maximize the return on a hardware investment while improving reliability and simplifying the management complexities of a large enterprise, we would rather move towards the approach which is getting more attention today than any other technology: virtualization. Virtualization, by simplifying the management and maintenance of an organization's IT infrastructure, would "enable the true predictive enterprise — the ability for an IT manager to respond dynamically to the change in demands in their business." [1]

We cannot have any doubt about the fact that virtualization is the hottest buzzword in enterprise IT today, and it has been credited with enabling enterprises of all sizes and types to cut deployment costs, reduce maintenance blues, cut the overall carbon footprint and affect almost everything about enterprise IT, at least to some extent, in some good way or the other. The term 'virtualization' essentially refers to a logical abstraction of a physical resource, but this is not a very exact definition with regard to the shape it has taken and the areas it has penetrated into.

[Figure 1 - Platform Virtualization (General): guest OSes (e.g. Linux, OS X, Solaris) running as applications on virtualization software, atop a host OS (Windows) and the underlying hardware]
We, however, cannot come up with a more exact definition of it, owing to its various forms and flavors.

The field has seen a big surge in recent times, and we hope that this broad area will be subdivided into smaller, specific and well defined sub-sections, with standards and metrics to evaluate them, as research gives newer forms to the field and the market attains a greater degree of maturity. "Virtualization of PCs is a 'hot' topic that has driven more than 600 inquiries from Gartner clients during the past three years. The only consistent driver behind these inquiries was a desire to deliver better, more-efficient and more-secure client capabilities to users. In some cases, this involves trying to overcome compatibility issues among applications running on a specific operating system (OS). In other cases, the requesting organization wants to support secure remote access from an unmanaged device or to centralize client-computing functions without reengineering the applications." [2]

Virtualization technology is employed in various areas: platform virtualization, operating systems (e.g. Microsoft Hyper-V in Windows Server 2008), data storage (e.g. storage area networks and storage virtualization as such), as well as in many applications. We shall be diving into these in the subsequent sections of this paper, introducing and exploring the various virtualization techniques that are being used in different areas of enterprise IT.

2. Platform Virtualization

Platform virtualization essentially refers to the technique of virtualizing a computer system's hardware or operating system so that it can support multiple operating system environments simultaneously. "It hides the physical characteristics of computing platform from the users ... Virtualization enables the sharing and/or aggregation of physical resources, such as operating systems, software, and IT services, in a way that hides the technical detail from the end users and reduces the per unit service cost." [3] It presents an emulated platform to the user (or to another piece of software). This is done by equipping the hardware or, in almost all cases, the OS with the capability of supporting multiple programs on top of it, thereby creating multiple 'virtual' machines on a single set of hardware; 'programs' here generally refers to multiple operating systems. It is this creation and management of virtual machines which is known as platform or server virtualization, one of the hottest technologies in the industry today.

The concept of virtualization of hardware consists of a given hardware configuration on a single machine, an operating system running directly on this hardware (termed the 'host' operating system), and a 'guest' operating system which runs on top of the host operating system, with the host simulating a computer environment for the guest OS. The guest OS may or may not be the same as the host; this depends on the capabilities of the host and the support that the host provides for different operating systems. The guest OS is never aware of the fact that it is actually running atop another operating system; it behaves as though it has a set of hardware underneath, which is in fact simulated by the host operating system.

With this technique a number of virtual machines are hosted on a single set of hardware, which is a convenient way to provide multiple runtime environments to multiple users at the same time from just one set of hardware. The number of virtual machines that can be hosted on a particular machine is governed by the configuration and capacity of the underlying hardware and also by the amount of resources allocated to each guest OS, which in turn depends on factors such as, first and foremost, the host OS' policy for hosting each guest OS and the minimum requirements for each guest OS to function in the desired fashion.
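The capacity reasoning above can be made concrete with a small, hypothetical sketch: given a host's capacity and the minimum allocation each guest OS needs, the number of virtual machines that can be co-hosted follows directly. The `Guest` structure, resource figures and host reserve below are illustrative assumptions, not taken from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Guest:
    """Minimum resources a guest OS needs to function in the desired fashion."""
    name: str
    vcpus: int
    ram_gb: float
    disk_gb: float

def max_guests(host_cpus, host_ram_gb, host_disk_gb, guest, host_reserve_ram_gb=2.0):
    """How many identical guests fit on one host, after reserving
    some memory for the host OS / VMM itself (assumed figure)."""
    usable_ram = host_ram_gb - host_reserve_ram_gb
    by_cpu = host_cpus // guest.vcpus
    by_ram = int(usable_ram // guest.ram_gb)
    by_disk = int(host_disk_gb // guest.disk_gb)
    return max(0, min(by_cpu, by_ram, by_disk))

if __name__ == "__main__":
    linux_guest = Guest("linux-appliance", vcpus=2, ram_gb=1.0, disk_gb=10.0)
    print(max_guests(host_cpus=16, host_ram_gb=32, host_disk_gb=500, guest=linux_guest))
    # -> 8 in this example: the CPU allocation is the binding constraint
```

Whichever resource runs out first (CPU, memory or disk) caps the consolidation ratio, which is why the host OS' allocation policy matters as much as the raw hardware.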
There are many forms of platform virtualization: Full Virtualization; Hardware-assisted Virtualization, which is actually a more efficient form of full virtualization since the hardware itself provides support for virtualization, unlike traditional hardware environments; Partial Virtualization; Paravirtualization; and OS-level virtualization.

Virtualization is done at various levels; traditionally there have been two broad categories that mark the boundaries of the type of virtualization: Full Virtualization and Partial Virtualization. Recently, as the field has matured, the broad term of Partial Virtualization has been specialized into two smaller sub-categories.
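Hardware-assisted virtualization, mentioned above, relies on CPU extensions (Intel VT-x or AMD-V). As a Linux-specific illustration only (other platforms expose this information differently), a host could check for those extensions before choosing a virtualization mode:

```python
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return which hardware virtualization extensions the CPU advertises.
    'vmx' indicates Intel VT-x, 'svm' indicates AMD-V. Linux-only sketch."""
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return set()
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {f for f in ("vmx", "svm") if f in flags}

if __name__ == "__main__":
    found = hardware_virtualization_flags()
    if found:
        print("Hardware-assisted virtualization available:", ", ".join(sorted(found)))
    else:
        print("No VT-x/AMD-V flags found; virtualization would rely on software techniques.")
```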
I have identified three broad categories, based on the types of products that are available in the industry today:

1. Full Virtualization: a whole system, a whole operating system, is virtualized; this is often known as emulation. This technique provides maximum functionality to the guest but is the most resource consuming, so it is the most taxing for the host and does not deliver much performance at the guest's end. It is the most common technique, and it is the approach followed in the majority of end user virtualization solutions that exist today.

2. Partial General Virtualization: this category refers to virtualization of a certain section of the computer system, but not the whole of it, yet not a very specific part either. It captures both specific needs and, to some extent, general needs of the user. The desktop virtualization market heavily uses this technique for most of its solutions.

3. Partial Specific Virtualization: this refers to the virtualization of a specific application or a very specific set of applications. For example, we can do this for either Microsoft Word or for the Microsoft Office Suite. This form of partial virtualization can be seen in some desktop virtualization products and in almost all application virtualization solutions.

2.1. Virtual Machine Monitor (VMM)

Any virtual machine setup typically consists of host hardware, a host OS, a guest OS, host applications and data, guest applications and data, and, most importantly, something called a Virtual Machine Monitor (VMM).

[Figure 2 - Typical VM Architecture (Role of a VMM) [4]]

In a hosted environment, one thing that has to be taken care of is that the host applications and data must be kept separate from the guest applications and data; the VMM does this job. The VMM may or may not run on top of a host OS; it may even run directly on the hardware, autonomous from the host OS.

2.1.1. Paravirtualizer

Paravirtualization is a technique in which the host operating system provides an interface to the guest operating system, which the guest OS may use in order to run in a virtual machine atop the host OS. However, to achieve virtualization by the paravirtualization technique, each guest operating system needs to be ported specifically to the interface provided to it. A paravirtualization implementation essentially consists of a software interface, provided by the host OS or by an application on the host OS to the guest OS, which is similar but not necessarily identical to the hardware environment underlying the host operating system. This is done by a special type of VMM known as a paravirtualizer. It is a VMM "that intercept[s] and trap[s] low-level CPU instructions, failing any instructions that would violate isolation or cause system instability." [5] This approach is not ideal, since it fails many requests which might have caused system instability but which may nevertheless be valid and required from the guest's perspective.
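The distinction the paper draws here between a paravirtualizer (which simply refuses unsafe requests) and a hypervisor (which, as described in the next section, emulates them) can be caricatured in a few lines. This is purely a conceptual toy under the paper's own framing; the instruction names and the fail-versus-emulate split are illustrative, not a real instruction set or the behaviour of any real VMM.

```python
# Privileged operations a guest might attempt; anything not listed is "safe".
PRIVILEGED = {"write_page_table", "disable_interrupts", "access_io_port"}

def paravirtualizer(instruction):
    """Trap low-level requests and *fail* those that could break isolation."""
    if instruction in PRIVILEGED:
        return f"FAULT: {instruction} refused (guest must use the paravirtual interface)"
    return f"executed {instruction} natively"

def hypervisor(instruction):
    """Trap the same requests but *emulate* them against the guest's virtual hardware."""
    if instruction in PRIVILEGED:
        return f"emulated {instruction} on this guest's virtual hardware"
    return f"executed {instruction} natively"

if __name__ == "__main__":
    for op in ("add_registers", "write_page_table"):
        print("paravirtualizer:", paravirtualizer(op))
        print("hypervisor:     ", hypervisor(op))
```

The sketch makes the paper's point visible: the paravirtualizer pushes the burden onto the (ported) guest, while the hypervisor absorbs the privileged request itself.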
2.1.2. Hypervisor

Hypervisor technology has recently been used in the latest Windows Server 2008 edition by Microsoft; this operating system provides explicit support for platform virtualization. "A hypervisor provides the virtualization abstraction of the underlying computer system. In full virtualization, a guest operating system runs unmodified on a hypervisor. However, improved performance and efficiency is achieved by having the guest operating system communicate with the hypervisor. By allowing the guest operating system to indicate its intent to the hypervisor, each can cooperate to obtain better performance when running in a virtual machine." [6] VMware proposed a communication method for a guest OS to communicate with the hypervisor underneath and released it as the Virtual Machine Interface (VMI) Specification in 2006 [7].

A hypervisor does not require that the guest OS be modified in any way; unlike a paravirtualizer, it is a VMM which traps low-level CPU requests and emulates those which could have resulted in instability on the host system. Hypervisors are of two types [8]:

Hosted: runs as an application on top of a host OS, simulates the underlying hardware, and provides a service that a guest OS can use to run inside the host operating system.

Native or bare-metal: a software component that runs directly on the hardware and also acts as a VMM for the guest OS (Figure 3 - VMware's Hypervisor Model). This has been the traditional approach towards virtualization and has existed since the days of CP/CMS, a time-sharing operating system developed at IBM in the 1960s.

[Figure 3 - VMware's Hypervisor Model: guest OSes in VMs running on a hypervisor that sits directly on the hardware (CPU, memory, storage)]

2.2. Operating System Virtualization

2.2.1. Operating System Streaming

Let us consider a scenario in which a user in an organization is working on an important document, works all day, but saves the document on his/her local machine only after the daily data backup has run. The next day, before this person comes back to the office, the hard drive on the local machine crashes. This is a huge loss for the employee and the organization: because the data backup was taken just before the file was saved to the local disk, the disk failure after the backup resulted in the loss of the document that s/he had created.

[Figure 4 - OS Streaming (Concept) [9]]

There is another (and better) way to provide a user's operating environment: Operating System Streaming (Figure 5 - OS Streaming On Demand shows how this is accomplished). Here we can say that the operating system is treated as a service to the user. The local image of a local machine is taken and stored on a central server as what is known as a virtual disk, or 'vDisk' [10]. This image is then rebroadcast to every user of the system, on demand, and the users consume it as a service.
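The on-demand nature of vDisk streaming - only the blocks the client actually touches cross the network, and they can be cached locally - can be sketched as follows. The block size, the in-memory stand-in for the central server and the cache are hypothetical simplifications, not the design of any particular streaming product.

```python
class VDiskClient:
    """Toy model of a client booting from a centrally stored image ('vDisk'):
    blocks are fetched from the server only when first read, then cached."""

    def __init__(self, server_image, block_size=4096):
        self.server_image = server_image      # stands in for the central server
        self.block_size = block_size
        self.cache = {}                       # locally cached blocks
        self.blocks_fetched = 0

    def read(self, offset, length):
        data = bytearray()
        first = offset // self.block_size
        last = (offset + length - 1) // self.block_size
        for blk in range(first, last + 1):
            if blk not in self.cache:         # miss: stream this block on demand
                start = blk * self.block_size
                self.cache[blk] = self.server_image[start:start + self.block_size]
                self.blocks_fetched += 1
            data += self.cache[blk]
        start_in_block = offset - first * self.block_size
        return bytes(data[start_in_block:start_in_block + length])

if __name__ == "__main__":
    image = bytes(range(256)) * 1024          # a 256 KiB stand-in for an OS image
    client = VDiskClient(image)
    client.read(0, 8192)                      # e.g. boot loader and early kernel pages
    client.read(4096, 100)                    # already cached: no extra fetch
    print("blocks fetched over the network:", client.blocks_fetched)  # -> 2
```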
[Figure 5 - OS Streaming On Demand]

Although streaming an operating system over the network may seem quite inefficient, this approach actually has many advantages: easier and less complex (centralized) management, more reliability, faster boot, responsive applications, and good enough network performance. In an experiment done by Intel it was shown that such an approach could "effectively provide streaming, with good performance, to live training rooms in our production IT environment. Clients booted quickly, even in worst-case boot storms. We were able to deliver streaming with a standard IT server, which supported up to 39 clients with low to moderate server utilization. This indicates that streaming, in contrast to the thin-client approach, would not require substantial investment in new server hardware or our IT support model." [11]

2.2.2. JeOS – 'Juice'

JeOS, pronounced 'Juice', stands for Just Enough Operating System. Unlike conventional full-fledged operating systems, meant to support any and every application that may run on them, "JeOS is not a generic, one-size-fits-all operating system. Rather, it refers to a customized operating system that precisely fits the needs of a particular application. The application's OS requirements can be determined manually, or with an analytical tool. Therefore, JeOS includes only the pieces of an operating system (often Linux) required to support a particular application and any other third-party components contained in the appliance. This makes the appliance smaller, and possibly more secure than an application running under a full general purpose OS." [12]
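The "analytical tool" route mentioned in the quote can be imagined as a dependency walk: start from the application and pull in only the packages it actually needs. The package graph below is invented purely for illustration; a real tool would inspect binaries, shared libraries and package metadata.

```python
# Hypothetical dependency metadata: package -> packages it requires.
DEPENDS = {
    "webapp":      ["python3", "nginx"],
    "python3":     ["libssl", "zlib"],
    "nginx":       ["libssl", "zlib", "pcre"],
    "libssl":      [],
    "zlib":        [],
    "pcre":        [],
    # present in a general-purpose OS but irrelevant to this appliance:
    "desktop-gui": ["xorg", "printing-stack", "bluetooth-stack"],
}

def just_enough(app, depends=DEPENDS):
    """Return the minimal package set needed to run `app` (the JeOS idea)."""
    needed, stack = set(), [app]
    while stack:
        pkg = stack.pop()
        if pkg in needed:
            continue
        needed.add(pkg)
        stack.extend(depends.get(pkg, []))
    return needed

if __name__ == "__main__":
    print(sorted(just_enough("webapp")))
    # -> ['libssl', 'nginx', 'pcre', 'python3', 'webapp', 'zlib']
    # desktop-gui, xorg, etc. never make it into the appliance image.
```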
What was and what is: traditionally the operating system has been of the one-size-fits-all type, where one operating system is required to fit each and every program it might ever support, regardless of whether certain services will ever be needed by a particular user or by the applications that user runs. These programs include not only software applications but also a huge list of supported devices and peripherals, and in the world of plug-and-play functionality this results in a big bundle of device drivers and related software that will never be used but is still there; "as a result, operating systems have ballooned, becoming bloated, complex, and far less secure. Most operating systems now require at least 1 GB of RAM just to run because of the various necessary and unnecessary services that are loaded into memory, as well as a few GB of disk space. Because the foot print is huge, keeping the OS and your data center secure requires that it be patched more often than ever before" [13]. The OS has traditionally been responsible for providing the platform - the required libraries and interfaces that the applications running on top of it can use to deliver the functionality the program (application) is meant to deliver. It has also been looked at as the authority responsible for managing and allocating hardware resources to the applications as and when needed, abstracting the underlying hardware specifics from the applications running atop the operating system.

This approach, and this perceived responsibility of the OS, has resulted in a humongous blob of software which may or may not suit the specific needs of any particular user. Most of the time it does fulfill those needs, but it also wastes resources in hosting services that are never used by that user or by the applications that interest that user, in a pursuit of being ready for whatever may come its way - a rare chance that almost never occurs. "An OS finely tuned to the application it supports is smaller, more secure, easier to manage, and higher performing than a general purpose OS. A smaller footprint means IT organizations can run more instances per server. Tailoring the OS specifically to the app enables the removal of vulnerable components such as the browser from Windows and therefore significantly reduces the number of vulnerabilities and patches required to address those vulnerabilities." [14]
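The claim that "a smaller footprint means IT organizations can run more instances per server" is simple arithmetic. A hedged back-of-envelope sketch follows; the 1 GB figure echoes the quote above, while the server size, hypervisor overhead and JeOS footprint are invented but plausible assumptions.

```python
def instances_per_server(server_ram_gb, per_instance_ram_gb, hypervisor_overhead_gb=2.0):
    """Rough count of appliance instances that fit in a server's memory."""
    return int((server_ram_gb - hypervisor_overhead_gb) // per_instance_ram_gb)

if __name__ == "__main__":
    server_ram = 64.0
    full_os_appliance = 1.0 + 0.5    # >= 1 GB general-purpose OS + 0.5 GB application
    jeos_appliance    = 0.25 + 0.5   # slimmed-down JeOS + the same application
    print("full OS :", instances_per_server(server_ram, full_os_appliance))   # -> 41
    print("JeOS    :", instances_per_server(server_ram, jeos_appliance))      # -> 82
```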
The main benefit of using a JeOS comes into the picture when the JeOS is run inside a virtual appliance. A virtual appliance is a pre-packaged software application, along with an operating system, that is run inside a virtual machine. If this operating system is slimmed down according to the specific needs of the particular application it has to cater to, many of the management and configuration issues are automatically done away with. Consider a scenario where a customer or an enterprise deploys this to run their own specific application, or to host a third-party application: the client does not have to bother about any of the issues unrelated to the application of interest. In more specific business terms, the investment made in configuring and deploying the application goes only towards maximizing the performance of that application, and the client can rest assured that their investment is not going into something that their business has no use for. It brings a high business value with it, including hassle-free, or rather application-specific, maintenance and management, reducing and almost eliminating the overhead incurred in maintaining an operating environment to host the application of interest.

2.3. Desktop Virtualization

Desktop virtualization is THE hot new topic in the industry today and is changing the scene of the information technology and information systems world, especially enterprise IT, as we write this report. "Desktop virtualization represents a large emerging opportunity in software (potentially greater than $2 billion by 2011) as new applications are required to enable and manage this important new technology." [15] "The number of virtualized PCs is expected to grow from less than 5 million in 2007 to 660 million by 2011" [16], which refers to virtualized desktops.

"Installing and maintaining separate PC workstations is complex, and traditionally users have almost unlimited ability to install or remove software. Corporate information technology departments and users have therefore often used Terminal Services or Citrix's Presentation Server to provide a stable, 'locked down' desktop environment out to the user, who could be either using a regular desktop PC, or a small, quiet and robust thin client. Desktop virtualization provides many of the advantages of a terminal server, but (if so desired and configured by system administrators) can provide users much more flexibility. Each, for instance, might be allowed to install and configure their own applications. Users also gain the ability to access their server-based virtual desktop from other locations." [17]

Virtual Desktop Infrastructure

Virtual Desktop Infrastructure, or VDI as it is popularly known, is a technique to achieve full desktop virtualization. It falls under the Partial General Virtualization category as defined earlier in this paper. VDI has been, and still continues to be, one of the most popular ways of realizing a desktop virtualization solution. Figure 6 - Traditional VDI in action shows how traditional VDI works. It involves a central server that runs the individual virtual machines for different users, storing the complete information for each user, along with their desktop, on a storage area network. The clients connect to the virtual machines through the Remote Desktop Protocol, and the images are streamed over the network to the thin clients. This technology has the advantage of easier manageability, but it also has some disadvantages associated with it:

- Owing to the method used to access the VMs over the network, i.e. via the Remote Desktop Protocol, the user experience is not very good, since every bit and pixel has to be sent over the network and a typical network cannot do justice to this (a rough estimate appears after this list).

- No offline usage is possible, and the user has to be connected to the server all the time.

- The solution is quite expensive: the SAN storage is expensive, and the data center required to run it doesn't come cheap.

- Updating software in an enterprise-wide way is tedious and requires a lot of effort, and as the number of users increases this problem grows manifold, since each user has a separate VM on the server and each VM needs to be updated separately to accomplish an enterprise-wide upgrade.
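The first drawback above - every pixel crossing the network - is easy to put a rough number on. The figures below (resolution, colour depth, update rate and compression ratio) are assumptions for illustration only; real remote display protocols are considerably smarter than raw pixel pushing.

```python
def remote_display_bandwidth_mbps(width, height, bits_per_pixel, fps, compression=50.0):
    """Back-of-envelope bandwidth for naively streaming a desktop over the
    network, after an assumed average compression ratio."""
    raw_bits_per_second = width * height * bits_per_pixel * fps
    return raw_bits_per_second / compression / 1_000_000

if __name__ == "__main__":
    # 1280x1024, 24-bit colour, 15 updates/s, optimistic 50:1 compression (all assumed)
    per_user = remote_display_bandwidth_mbps(1280, 1024, 24, 15)
    print(f"~{per_user:.1f} Mbit/s per user")             # roughly 9-10 Mbit/s
    print(f"~{100 * per_user:.0f} Mbit/s for 100 users")  # quickly saturates a typical LAN
```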
These factors have been the key barriers to adoption of desktop virtualization technology by many enterprises in the past.

[Figure 6 - Traditional VDI in action: thin clients connecting over the Remote Desktop Protocol to per-user VMs on a central server in the data center, backed by a SAN and a management console]

The NEW Virtual Desktop Infrastructure

Recent developments over the past few years have resulted in a new VDI architecture; companies like MokaFive have adopted this new technique to build their desktop virtualization products. Figure 7 - New VDI Architecture shows the new architecture of VDI that is being seen in the products now coming into the market.

[Figure 7 - New VDI Architecture: a single golden image plus user-specific data, managed from a management console and run locally on the client's own (Windows) machine]

The new VDI architecture improves upon the traditional approach by having just one golden image of the VM, instead of running separate VMs for each user and storing them on a SAN, and by storing user-specific data directly on the server instead. Now, whenever a user connects to the server, the golden image is transferred to the client's local machine along with the related user-specific data and is then run locally, thereby overcoming the poor user experience issues associated with the old approach. Costs are reduced, since no data center is necessary and no SAN is being used to address storage needs. Management is simplified even further, as updating the system requires changes only to the golden image instead of to each user's VM as in the old approach.

An added value that the new VDI architecture brings with it is offline capability. Since the golden image is transferred over to the client's local machine, it can be carried along while not connected to the server, and the client can continue to use it. When a network connection is available, only the user data needs to be synced with the server, and the system maintains its consistency. This is particularly applicable in today's scenario, where we need to have our working environment in places like airplanes, and it provides a good value addition.
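A minimal sketch of the idea behind the new architecture - one shared golden image plus a per-user overlay of data that is synced whenever a connection is available. The class, the in-memory "server" store and the file paths are illustrative only, not the design of any specific product.

```python
class NewVdiClient:
    """Toy model: run the shared golden image locally, keep user changes in an
    overlay, and sync only the overlay back when the network is available."""

    def __init__(self, golden_image_version, user_id):
        self.golden_image_version = golden_image_version  # pulled once from the server
        self.user_id = user_id
        self.overlay = {}            # user-specific files / settings
        self.pending_sync = {}       # changes made while offline

    def write(self, path, data, online):
        self.overlay[path] = data
        if online:
            self.sync(server_store)
        else:
            self.pending_sync[path] = data   # keep working offline (e.g. on a plane)

    def sync(self, server):
        server.setdefault(self.user_id, {}).update(self.overlay)
        self.pending_sync.clear()

server_store = {}   # stands in for the hosted user-data store

if __name__ == "__main__":
    client = NewVdiClient(golden_image_version="1.0", user_id="alice")
    client.write("Documents/report.doc", b"draft v1", online=False)  # offline edit kept locally
    client.write("Documents/report.doc", b"draft v2", online=True)   # back online: overlay synced
    print(server_store["alice"]["Documents/report.doc"])             # b'draft v2'
```

Because only the golden image is updated centrally and only the small user overlay travels over the network, both the enterprise-wide-upgrade and the offline-use objections to traditional VDI fall away.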
The new approach has brought down the barriers that previously prevented many enterprises from adopting desktop virtualization, and there are now many products in the market which follow the new approach to VDI; all the big players in the field of virtualization have a product that taps this market. More and more enterprises are now moving to virtualization to bring down maintenance costs and enable seamless administration.

Workspace Virtualization

There are certain other variants available which are somewhat similar but differ in functionality, for example 'Workspace Virtualization', pioneered by RingCube, where a workspace is virtualized rather than the whole desktop environment. This also enables organizations to provide the workspace as a mobile pack, on a disk or a solid state mobile storage device, while at the same time following the approach of syncing the user data to a hosted VDI solution and offering it as a service. This goes a step further towards Partial Specific Virtualization, as we described it earlier in this paper: there is no need for a virtualized desktop solution, only the workspace is virtualized. The workspace can consist of one or more applications, and the related user-specific data can still be kept in sync over the hosted VDI.

DaaS – Desktop as a Service

All these technologies and approaches to desktop virtualization have resulted in the desktop being offered as a service. We have seen the various approaches to desktop virtualization, and so, along the lines of SaaS, companies are now providing desktop environments as a service to end users, to other businesses, or even at an intra-organization level. The trend is growing, since this service lowers the total cost of ownership: the user pays only for the specific amount of time that the desktop is actually used, saving the cost of the numerous licenses which are otherwise bought to have a computing environment that most of the time remains unused, incurring unnecessary costs to the business or the end user.
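The cost argument in the paragraph above is essentially pay-per-use versus an always-on licence. A hedged back-of-envelope comparison follows; all prices and usage hours are invented for illustration and will vary widely in practice.

```python
def owned_desktop_cost(license_per_year, support_per_year):
    """Yearly cost of a conventionally licensed, locally maintained desktop."""
    return license_per_year + support_per_year

def daas_cost(hours_used_per_year, price_per_hour):
    """Yearly cost when the desktop is consumed as a service, billed per hour."""
    return hours_used_per_year * price_per_hour

if __name__ == "__main__":
    owned = owned_desktop_cost(license_per_year=300.0, support_per_year=200.0)
    # e.g. a part-time user: 4 hours/day over 200 working days (assumed pattern)
    daas = daas_cost(hours_used_per_year=4 * 200, price_per_hour=0.25)
    print(f"owned: ${owned:.0f}/yr, DaaS: ${daas:.0f}/yr")
    # -> owned: $500/yr, DaaS: $200/yr for this hypothetical usage pattern
```

The advantage obviously shrinks, and can reverse, for heavy full-time users, which is why the paper frames DaaS as a fit for environments that would otherwise sit mostly unused.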
3. Implications for the Future [18]

The traditional approach to enterprise computing has been a big network of computers, data servers and the rest, connected to form one giant enterprise system, each node being used by a user, probably specific to that user, and each having its own local storage. Even today such infrastructure exists, in which the users of an organization come in, plug their computers into the network and just use the network (basically for transport and communication) while relying on the local system for computation and storage needs.

This trend has changed in the recent past, and increasingly organizations are moving towards more consolidated solutions in which, for example, one central server serves most of the computational needs of the users; the user nodes have become less and less equipped to handle highly performance-intensive applications and now rely on the centralized solution for those needs. "Specifically, in contrast to individual silos of computational equipment to handle departmental IT needs, organizations are evolving towards large consolidated data centers. Similarly, advancements such as virtualization and blade servers have further enabled consolidation at the application level." [19]

"A consequence of unifying the IT infrastructure in the same physical location has been an increased emphasis on facilities-level issues. In particular, power delivery, power consumption, and heat extraction are key challenges in the operation of data centers. Though some of these challenges can be addressed partly by novel methods at the conventional facilities levels, they increasingly need intervention at the level of the hardware and software design. Similarly, such infrastructure consolidation has also led to the need for solutions to automate operations management tasks that can otherwise contribute to a large fraction of the total data center costs." [20][21][22]

All this calls for certain new considerations for future research, namely:

Architectural research: to design software keeping in mind the new normal as far as software behavior is concerned. Instead of the old, predictable batch behavior, we should now focus on the spiky and unpredictable behavior arising from usage patterns and network traffic patterns.

SLA-driven software: since everything is based on the goal of cutting costs and focusing on specific business needs - instead of investing in solutions that do cater to the business need but also bring with them unwanted services, with more resources wasted in maintaining and securing these extra add-ons - we now need to focus not only on one aspect like performance but on cost effectiveness, keeping in mind the ultimate goal of meeting the SLA at the service provider's end.

Better data centers: issues like cooling also need to be addressed, since everything is being consolidated into a single space, and this consolidation of previously separate data centers calls for solutions which address these concerns as well.
Another thought that comes to mind, when we look at the trend of wide industry acceptance of the technology - especially the move from traditional packaged software towards the SaaS paradigm of computing, which is also branching into the world of desktop virtualization in the form of DaaS - is the heavy reliance of an organization's business on a third party's business, namely that of providing the service which forms the basis of carrying out the company's own business. While this frees the organization that consumes these services from worrying about factors like scalability, resource usage, uneven usage patterns and the resource allocation needed to deal with them - by tying them up in an SLA with the service provider and shifting the focus from IT management to doing business - it also creates a dependence of the company's business on the success of, and the effects of external factors on, the service provider's business. There is no direct dependency, but there still exists an indirect one. Some argue that third-party services like this benefit from economies of scale, as the providers have better insight and a larger end-user base to analyze, get to know how the software works and what to do when it fails, and hence are better equipped to address such IT related issues than the consumer company is; but this does hand over control of the roots of the business, or at least of its essential tools, to the third party. Also, this may be seen as a threat in the sense that the provider has the necessary equipment - or rather a launch pad - to jump rather quickly, and without much effort, into the very business that the consumer organization was doing, since the service provider already has the foundation to offer those services, the consumer organization having used the provider's services to offer its own services.

A large chunk of legal copy, as we may call it, would be required to safeguard interests, and even this may not solve the issue completely. In a nutshell, this may turn out to mimic what the world economy went through in the recent past, when one system's failure caused an entire economy to come to a grinding halt: too many indirect dependencies caused a ripple effect which propagated to a large part of the world, simply because companies did not have direct control over what formed the very nuts and bolts of their business, even though the machine built on them was indeed owned by them.

4. Conclusion

Virtualization technology continues to change the scene of enterprise computing by the day, bringing new value to business, cutting maintenance costs and simplifying IT infrastructure management. The field is still undergoing research; it has matured over the last few years and is continuing to do so as we see more and more organizations adopting it by the day.

References

[1] 'How Virtualization Will Enable the True Predictive Enterprise' - Diane Bryant, Intel & Brian Byun, VMware (August 28, 2008). ipip.intel.com/go/930/how-virtualization-will-enable-the-true-predictive-enterprise/ (accessed March 2009)
[2] 'Defining Four Desktop Virtualization Markets' - Brian Gammage and Mark A. Margevicius, Gartner (August 29, 2008)
[3] 'Electronic Commerce: A Managerial Perspective', p. 27 - Turban, E. (2008)
[4] 'Introduction to Virtual Desktop Architectures', p. 5 - © RingCube
[5] 'Introduction to Virtual Desktop Architectures', p. 3 - © RingCube
[6] 'Transparent Paravirtualization' - © VMware. vmware.com/interfaces/paravirtualization.html (accessed March 2009)
[7] 'VMI Specification' - Zachary Amsden, Daniel Arai, Daniel Hecht, Pratap Subrahmanyam, VMware (2006). vmware.com/pdf/vmi_specs.pdf (accessed March 2009)
[8] 'IBM Systems Virtualization', Version 2 Release 1 - IBM Corporation (2005). publib.boulder.ibm.com/infocenter/eserver/v1r2/topic/eicay/eicay.pdf (accessed March 2009)
[9] 'Improving Manageability with OS Streaming in Training Rooms', p. 11 - Catherine Spence, Randy Nystrom, Craig Pierce, and William Wrenn, Intel Corporation (December 2008)
[10] 'Improving Manageability with OS Streaming in Training Rooms' - Catherine Spence, Randy Nystrom, Craig Pierce, and William Wrenn, Intel Corporation (December 2008)
[11] 'Improving Manageability with OS Streaming in Training Rooms', p. 11 - Catherine Spence, Randy Nystrom, Craig Pierce, and William Wrenn, Intel Corporation (December 2008)
[12] en.wikipedia.org/wiki/Just_enough_operating_system_(JeOS) (accessed April 2009)
[13] blogs.vmware.com/console/2007/07/get-juiced.html (accessed April 2009)
[14] blogs.vmware.com/console/2007/07/get-juiced.html (accessed May 2009)
[15] 'Emerging Technology Research - Technology Area: Virtualization', p. 1 - Seogju Lee, James Covello, Sarah Friar, Krishna Kakarala, Goldman, Sachs & Co. (May 22, 2008)
[16] 'Gartner Says Virtualization Will Be the Highest-Impact Trend in Infrastructure and Operations Market Through 2012' - Press release, Christy Pettey, Gartner, Inc. (April 2, 2008). gartner.com/it/page.jsp?id=638207 (accessed March 2009)
[17] en.wikipedia.org/wiki/Desktop_Virtualization (accessed May 2009)
[18] Parthasarathy Ranganathan and Norman Jouppi, 'Enterprise IT Trends and Implications for Architecture Research', Proceedings of the 11th Int'l Symposium on High-Performance Computer Architecture (HPCA-11), 2005
[19] Parthasarathy Ranganathan and Norman Jouppi, 'Enterprise IT Trends and Implications for Architecture Research', Proceedings of the 11th Int'l Symposium on High-Performance Computer Architecture (HPCA-11), 2005
[20] Parthasarathy Ranganathan and Norman Jouppi, 'Enterprise IT Trends and Implications for Architecture Research', Proceedings of the 11th Int'l Symposium on High-Performance Computer Architecture (HPCA-11), 2005
[21] W. Tschudi et al., 'Data Center Energy Research Roadmap', Lawrence Berkeley National Laboratory Report, 2003
[22] C. D. Patel, 'A Vision of Energy-aware Computing from Chips to Data Centers', keynote address at the Japan Society of Mechanical Engineers International Symposium of Micro-Mechanical Engineering (ISMME), 2003