CA Technology Exchange: Virtualization

CA Technologies Thought Leadership Publication on Virtualization
Volume 1, Issue 2, November 2010

CA Technology Exchange
Insights from CA Technologies

Virtualization

Inside this issue:
• Virtualization: What is it and what can it do for you?
• Virtualization: Enabling the Self-Service Enterprise
• Data Virtualization
plus Columns by CA Technologies thought leaders
CA Technology Exchange
Table of Contents

1  Welcome from the Editor in Chief
   Marv Waschke, Principal Software Architect, Office of the CTO, CA Technologies, and Editor in Chief, CA Technology Exchange
3  Virtualization: What is it and what can it do for you?
   Anders Magnusson, Senior Engineering Services Architect, CA Technologies
25 Leading Edge Knowledge Creation
   Dr. Gabriel Silberman, Senior Vice President and Director, CA Labs, CA Technologies
27 Virtualization: Enabling the Self-Service Enterprise
   Efraim Moscovich, Principal Software Architect, CA Technologies
40 Service Assurance & ITIL: Medicine for Business Service Management
   Brian Johnson, Principal Architect, CA Technologies
42 Data Virtualization
   Sudhakar Anivella, Senior Architect, Service Desk, CA Technologies
51 Lessons Learned from the Mainframe Virtualization Experience
   John Kane, Technical Fellow Emeritus
53 Glossary of Virtualization Terms
CATX: Virtualization
by Marv Waschke, Principal Software Architect, CA Technologies, and Editor in Chief, CA Technology Exchange

Our first issue of CATX was published in April 2010. The theme was cloud computing. This issue addresses virtualization, a subject closely related to cloud computing. The two concepts are often juxtaposed in discussion and I occasionally hear people speaking as if the two concepts were identical.

Virtualization and Cloud
Virtualization and cloud are related, yes, but identical, clearly not. In computing, we often advance by focusing on the outcome of an activity and delegating performance of the activity itself to some other entity. For instance, when we use SQL to query a relational database, we delegate moving read heads and scanning disk sectors to the RDBMS and concentrate on the tables of relations that are the result. This allows database application builders to concentrate on the data, not the mechanics of moving bits and bytes.

Both cloud computing and virtualization are examples of delegation, but what is delegated is different and the delegation occurs for different reasons.

Cloud computing is delegation on a grand scale. A cloud consumer engages with a network interface that permits the consumer to delegate the maintenance and management of equipment and software to a cloud provider. The consumer concentrates on the results and the provider keeps the lights on and the equipment running in the datacenter.

Virtualization separates the execution of software from physical hardware by delegating computing to emulators that emulate physical hardware with software. The user can focus on the software rather than configuring the underlying hardware. Emulators can often be configured more quickly and with greater flexibility than physical systems, and configured systems can be stored as files and reproduced easily. Without the convenience and flexibility of virtualized systems, cloud implementations can be slow and difficult, which is why almost all mention of cloud includes virtualization.

Articles in This Issue
Practice is never as simple as theory. Two of our three articles in this issue discuss virtual systems in practice.

Although virtualization can deliver great rewards, deploying an IT service or group of services to run virtually is a complicated project that requires planning and systematic execution. Anders Magnusson from CA Services is an experienced implementer of virtualization projects. His article provides an insider's view of the challenges in justifying, planning, and executing virtualization projects.

Efraim Moscovich is an architect of virtualization management tools. He has taken time to consider the potential of virtual systems for self-service in IT.

Finally, Sudhakar Anivella, a senior architect in service management development, discusses another dimension to virtualization. We tend to think of virtualization as synonymous with virtual servers, but in fact, we use the concept of virtualization in many ways in computing: virtual memory and virtual routing are both common. Data virtualization, as Sudhakar points out, has become very important in IT systems.

The glossary of virtualization terms was a joint project of the editors and the authors. Terms come and go and change meaning all the time as virtualization evolves. We attempted to provide terms as they are understood today in this glossary.

Columns
In addition to full-length articles, we have columns from CA Labs senior executive, Gabby Silberman, and ITIL expert, Brian Johnson. Virtualization has long been a staple of mainframe computing. Ideas that are new to distributed computing have been used for a long time on the mainframe. Recently retired CA Technical Fellow Emeritus, John Kane, has written a column that touches on some of the ways that virtual distributed computing is recapping the experience of the mainframe.

All articles in CATX are reviewed by panels of experts from CA Technologies. Articles that pass the internal review go on to external review panels made up of individuals from universities, industry experts, and experts among CA Technologies customers. These reviewers remain anonymous to preserve the integrity of the review process, but the editorial committee would like to thank them for their efforts. They are valued contributors to the success of CATX and we are grateful to them. If any readers would like to participate in a review panel, please let us know of your interest and expertise in an email.

The editorial committee hopes you find value and are challenged in this issue on virtualization. Please consider contributing to our next issue, which will center on REST (Representational State Transfer), the "architecture of the World Wide Web." Although REST will be the main theme of our next issue, we will also include additional articles on virtualization, the cloud, and other topics of interest to the IT technical community.

Our April 2011 issue promises to offer a varied range of thought-provoking articles. CATX is open to everyone to contribute, not only CA Technologies employees but all IT technologists. Please address questions and queries to the editorial committee.

CA Technology Exchange Editorial Committee:
Marv Waschke, Editor in Chief, Principal Software Architect, Office of the CTO, CA Technologies
Janine Alexander, Technical Writer, CA Support, CA Technologies
Marie Daniels, Program Director, CA Support, CA Technologies
Michael Diaz, Quality Assurance Architect, Workload Automation, CA Technologies
Robert P. Kennedy, Senior Director, Technical Information, CA Technologies
Laurie McKenna, Director, Technical Information, CA Technologies
David W. Martin, Senior Principal Software Engineer, Virtualization and Service Automation, CA Technologies
Cheryl Morris, Principal, Innovation and University Programs, CA Technologies
Richard Philyaw, Principal Software Architect, Office of the CTO, CA Technologies
David Tootill, Principal Software Architect, Service Management, CA Technologies
Virtualization: What is it and what can it do for you?
by Anders Magnusson, Senior Engineering Services Architect, CA Technologies

The Promise of Virtualization
Although the concept of virtualization began in the mainframe environment in the late 1960s and early 1970s, its use in the distributed environment did not become commonplace until very recently. Even though the underlying technology and related best practices continue to evolve rapidly, for most application types virtualization has proven mature enough to support business-critical systems in production environments.

When done right, virtualization provides significant business value by helping organizations manage cost, improve service, and simplify the process of aligning business with IT. We can see a rapid acceleration in the number of traditional datacenters that are pursuing this value by shifting to a virtualization-based model, and some are even taking it one step further by implementing private clouds. How fast this transformation will happen and how much of the "old" datacenter will instead move out to public clouds is uncertain. To help us with these estimates we can look at what Gartner, Inc. and Forrester Research are predicting:

• "Virtualization continues as the highest-impact issue challenging infrastructure and operations through 2015. It changes how you manage, how and what you buy, how you deploy, how you plan and how you charge. It also shakes up licensing, pricing and component management. Infrastructure is on an inevitable shift from components that are physically integrated by vendors (for example, monolithic servers) or manually integrated by users to logically composed "fabrics" of computing, input/output (I/O) and storage components, and is key to cloud architectures. This research explores many facets of virtualization." (Gartner, Inc., "Virtualization Reality", by Philip Dawson, July 30, 2010.)

• "By 2012, more than 40% of x86 architecture server workloads in enterprises will be running in virtual machines." (Gartner, Inc., "IT Virtual Machines and Market Share Through 2012", by Thomas J. Bittman, October 7, 2009.)

• "Despite the hesitancy about cloud computing, virtualization remains a top priority for hardware technology decision-makers, driven by their objectives of improving IT infrastructure manageability, total cost of ownership, business continuity, and, to a lesser extent, their increased focus on energy efficiency." (Forrester Research Inc. – Press Release: Cambridge, Mass., December 2, 2009, "Security Concerns Hinder Cloud Computing Adoption". The press release quoted Tim Harmon, Principal Analyst for Forrester.)

Despite the awareness of the huge potential provided by virtualization – or even because of it – many virtualization projects fail in the sense that they aren't as successful as expected. This article is written in two parts. Part one defines virtualization and why organizations choose to use it, while part two focuses on planning a successful virtualization project.

About the author:
Anders Magnusson is a Senior Engineering Services Architect at CA Technologies and a member of the CA Technologies Council for Technical Excellence. Since joining CA Technologies in 1997 he has held a number of roles and responsibilities across the organization but, during the most recent several years, he has focused on developing standard procedures and best practices for utilizing virtualization and deploying multi-product solutions. Anders is responsible for providing sizing best practices and tools for several CA Technologies solutions, as well as for virtualization-related material on the Implementation Best Practices site, which can be found at …mon/impcd/r11/StartHere.htm
What is Virtualization?
The first step in understanding what the virtualization effort will achieve is to agree on what we mean by "virtualization". At a very high level, virtualization can be defined as a method of presenting "system users" (such as guest systems and applications) with the big picture (that is, an abstract emulated computing platform) without the need to get into all the little details – namely, the physical characteristics of the actual computing platform that is being used.

Virtualization has long been a topic of academic discussion, and in 1966 it was first successfully implemented in a commercial environment when the IBM mainframe System/360 supported virtual storage. Another breakthrough came in 1972, when the first hypervisors were introduced with the VM/370 operating system. The introduction of the hypervisor is important because it enabled hardware virtualization by allowing multiple guest systems to run in parallel on a single host system. Since that time virtualization has been developed on many fronts and can include:

Platform or Server Virtualization: In this form of virtualization a single server hosts one or more "virtual guest machines". Subcategories include: Hardware Virtualization, Paravirtualization, and Operating System Virtualization.

Resource Virtualization: Virtualization also can be extended to encompass specific system resources, such as storage and network resources. Resource virtualization can occur within a single host server or across multiple servers (using a SAN, for example). Modern blade enclosures/servers often combine platform and resource virtualization, sharing storage, network, and other infrastructure across physical servers.

Desktop Virtualization: Virtual Desktop Infrastructure (VDI) provides end users with a computer desktop that is identical or similar to their traditional desktop computer while keeping the actual computing power in the datacenter. When this approach is used, the end user requires only a thin client on his desktop. All updates or configuration changes to the application or hardware are performed in the centrally located datacenter. This approach provides greater flexibility when it comes to securing the systems and supplying computing power on demand to the end user.

Application Virtualization: Application virtualization is a technology designed to improve portability, manageability, and compatibility of individual applications by encapsulating the application so that it no longer communicates directly with the underlying operating system. Application virtualization utilizes a "virtualization layer" to intercept calls from the virtualized application and translate them into calls to the underlying operating system for the resources needed.

Computer Clusters / Grid Computing: This type of virtualization connects multiple physical computers together as a single logical entity in order to provide better performance and availability. In these environments the user connects to the "virtual cluster" rather than to one of the actual physical machines. The use of grid computing or clustering of computers is typically driven by the
need to support high availability, load balancing, or a need for extreme computing power.

Each one of these general categories can be divided into additional subcategories. All of these potential options make it important that you are clear about what you are referring to when you talk about virtualization.

The requirements and best practices for each of these different techniques are very similar – often what is valid for one form is valid for many of the others. In addition, several of these depend on each other, and by implementing more of them, you enhance the value. For example, if you are implementing Server Virtualization or a Grid Structure you should also consider various types of resource virtualization to support the infrastructure.

For the purposes of this article, we are focusing on server virtualization unless otherwise specified.

Why Use Virtualization?
Now that you know what virtualization is, why do organizations choose to use it? The short answer is to manage cost, improve service, and simplify the process of aligning business with IT. For example, by using virtualized environments, organizations can provide improved service by anticipating and quickly responding to growth in demand. In extreme examples ROI has been achieved in as little as 3-6 months; however, a more realistic expectation is that it will take 12-18 months.

Following are some of the common drivers that influence organizations in deciding to virtualize their IT environment.

Hardware Cost Savings through Consolidation of logical servers into fewer physical servers is one of the main promises of virtualization. There are multiple ways in which savings can be realized. First, fewer physical servers may be required.
In a well-managed virtual environment, multiple logical servers can be hosted on the same physical server. Second, by reducing the number of physical servers required, virtualization can help manage "datacenter sprawl", a savings of both physical space and the utilities required to manage the larger space.

To consolidate successfully, you need to understand the entire picture. An organization can consolidate workload that was previously distributed across multiple smaller – and often underutilized – servers onto fewer physical servers, especially if those servers previously had a limited workload, but these new servers still must have sufficient resources at all times. See the section "New Hardware Requirements" below for more details on this.

Automation and Enhanced Resource Management is, in many ways, related to hardware cost savings, but the drivers are sometimes different:

• Optimized usage of hardware resources. In a non-virtualized environment it is common to have some servers that are barely utilized. Many datacenters are filled with servers that use only a small percentage of the available resources. These centers are perfect targets for consolidation and can provide an excellent return on investment.

• Rapid deployment of new servers and applications. In a well-managed environment with established templates for typical server installations, new logical servers can be deployed rapidly on host servers with available capacity.

• Flexibility: the ability to provide resources on demand. Many applications require significant resources – but only briefly. For example, end-of-month or end-of-year reporting or other specific events may trigger a higher than usual load. In a virtualized environment, more resources can be assigned dynamically to a logical server or, if the application is designed to support scaling out horizontally, rapid deployment can supply additional logical servers as worker nodes.

• Flexible chargeback systems. In a flexible virtualized environment an organization can provide a meaningful chargeback/showback system that will efficiently encourage system owners to use only the resources they need without risking the business by using servers that are inadequate for their needs. This is especially true in a highly mature and flexible virtual environment that includes management tools that collect all required metrics, and resource virtualization techniques such as storage virtualization with thin provisioning.

• Support for test and development by providing access to a large number of potential servers that are active and using resources only when needed. This need is typically the starting point and an obvious choice to virtualize for any environment that requires temporary, short-lived servers. It is especially true when test and development groups require a large number of different operating systems, configurations, or the ability to redeploy a test environment quickly from a pre-defined standard.

Fault Tolerance, High Availability, and Disaster Recovery on different levels can be simplified or made more efficient in a virtual environment. In highly available environments, brief interruptions of service and potential loss of transactions serviced at the time of failure are tolerated, while fault-tolerant environments target the most mission-critical applications that cannot tolerate any interruption of service or data loss. Virtualization can provide a viable solution for both – including everything from simplified backup/restore of systems to complete disaster recovery or fault tolerance systems supported by the various hardware and virtualization vendors.

A few examples of this scenario are:

• Backup of complete images. A virtual server, by its very nature, is comprised of a set of files that can be moved easily between physical servers. A quick snapshot of those files can be used to start the server in this exact condition on another physical server.

• Simplified disaster recovery solutions. When coupled with the appropriate hardware infrastructure, virtualization strategies can be used to simplify the process of disaster recovery. For example, a typical disaster recovery solution
may include distributing resources into primary and secondary datacenters. Solution providers often take advantage of features built into a virtualization infrastructure and sell out-of-the-box solutions to support high availability and disaster recovery.

• Minimized downtime for hardware and software maintenance tasks. All downtime due to planned hardware maintenance can be avoided or kept to a minimum because an organization can move the active virtual images to another physical server while the upgrade is performed. With correct planning, change control for software maintenance can also be significantly enhanced through judicious use of virtualization. Because the complete logical machine can be copied and handled as a set of files, organizations can easily set up separate areas such as Development, Quality Assurance, a Library of available images, an Archive of previously used images, a Staging area for Configuration, and so on. A structure like this one encourages organizations to upgrade and test a new version in the "Development" and "QA" areas while still running the old version in "Production." When the new version is approved, a small maintenance window can be scheduled to transfer the new, updated, and verified library image over to the production system. Depending on the application, the maintenance window can even be completely eliminated by having the old and the newly updated images running in parallel and switching the DNS entry to point to the updated instance. This approach requires some advance planning, but it has been successfully used by service providers with tight service level agreements.

• Efficient usage of component-level fault tolerance. Because all virtualized servers share a smaller number of physical servers, any hardware-related problems with these physical servers will affect multiple logical servers. Therefore, it is important that servers take advantage of component-level fault tolerance. The benefit of taking this approach is that all logical servers can take advantage of the fault-tolerant hardware provided by the host system.

Energy Saving and Green IT. Another justification for using virtualization is to support sustainability efforts and lower energy costs for your datacenter. By consolidating hardware, fewer and more efficiently used servers demand less energy to perform the same tasks.

In addition, a mature and intelligent virtualized environment can power virtual machines on and off so that they are active only when they are in use. In some cases, virtual machines running on underutilized host servers can be moved onto fewer servers, and unused host servers powered down until they are needed.

Simplify Management. One of the primary challenges in managing datacenters is data center sprawl, the relentless increase in diverse servers that are patched and configured in different ways. As the sprawl grows, the effort to maintain these servers and keep them running becomes more complex and requires a significant investment in time. It is worth noting that, unless effective lifecycle management procedures and appropriate controls are in place, data center sprawl is a problem that will be magnified in a virtual environment.

Using well-controlled and well-managed virtualization guest images, however,
reduces the number of configuration variations, making it easier to manage servers and keep them up to date. Note that this approach requires that a virtualization project also include a change control process that manages virtual images in a secure way.

When a server farm is based on a small set of base images, these images can be efficiently tested and re-used as templates for all servers. Additional modifications to these templates can be automatically applied in a final configuration stage. When done correctly this approach minimizes the risk of serious failures in the environment. All changes, including the final automated configuration, should be tested before they are put in production. This secure environment minimizes the need for expensive troubleshooting of production servers and fosters a stable and predictable environment.

Managing Security. Security is one of the major concerns surrounding virtualization. Too often, the main security risk in any environment is the human factor: administrators who, without malicious intent, misconfigure the system. The traditional security models are effective if sufficiently rigorous procedures are followed. In a virtual environment, much of the management can be automated and raised one level so that fewer manual steps are needed to keep the environment secure.

A few examples of this are:

• Patch management. Virtualization allows testing changes in a controlled environment, using an identical image. After the updated image is verified, the new image or the specific changes can be promoted to the production system with a minimum of downtime. This approach reduces the risks of patching the system and, in most cases, if something goes wrong, reversion to a pre-patch snapshot is easy.

• Configuration management. The more dynamic environment and the potential sprawl of both physical and logical servers make it important to keep all networks and switches correctly configured. This is especially important in more established and dynamic virtual environments where virtual machines are moved between host servers based on the location of available resources. In a virtual environment, configuration management can be handled by policy-driven virtual switches (a software implementation of a network switch running on the host server) where the configuration follows your logical server. Depending on your solution you can define a distributed switch where all the resources and policies are defined at the datacenter level. This approach provides a solution that is easy to manage for the complete datacenter.

• Support for O/S hardening as an integral part of change control. If all servers have been configured using a few well-defined and tested base images, it becomes easier to lock down the operating systems on all servers in a well-controlled manner, which minimizes the risk of attacks.

Enabling Private Cloud Infrastructure. A highly automated virtualized environment can significantly help your organization create a private cloud infrastructure. Stakeholders can request the resources they need and return them when
they no longer are needed. In a highly mature environment where the stakeholder requests resources or services, these requests can be hosted in a private cloud or, if resources aren't available, in a public cloud. This level of flexibility will be difficult to accomplish in an acceptable way without basing the private cloud on a virtual environment. From the requestor's point of view, it doesn't matter if the services in the cloud are hosted on a physical machine, a virtual machine, or some type of a grid, as long as the stakeholder is getting the required resources and performance.

Next Steps
The goals driving your particular virtualization project may include any number of those identified in this article – or you may have a completely different set of drivers. What is critical is that you clearly identify those goals and drivers prior to undertaking the project. Project teams need a clear understanding of what they are expected to accomplish and what business value is expected to be derived in order to identify the appropriate metrics that will demonstrate the value of virtualization to the stakeholders. Part two of this article, "Planning Your Virtualization Project", examines how these drivers can be used to direct the project and outlines a number of important areas to be aware of when planning a virtualization project.

Planning Your Virtualization Project

The Importance of Planning
When you are planning a virtualization project, one of the most critical first steps is to ensure that both the project team and all stakeholders understand what the project needs to accomplish, what the supporting technology is capable of, and what the true business drivers behind the project really are. This is true for any project – but it is particularly true for virtualization endeavors because there are many common misperceptions about what virtualization can and cannot offer. For further insights on the benefits of virtualization – and common business drivers – see part one of this article, "What is Virtualization?"

Even though your team may know that a virtualization project can provide significant value, unless that value is explicitly spelled out, it runs the risk of becoming "just another big project", which is an invitation to failure. The virtualization project may save the company money, it may make it easier to provision new machines, and, perhaps, it might even reduce the company's carbon footprint, but a project with goals this vague is likely to fail and be superseded by a new project because there is no way of effectively measuring its progress or success. To endure and succeed, a project must have explicit intent, understandable milestones, and clear measures of success defined up front. Without them, expectations will be unclear and there will be no way to accurately communicate the benefits.

Before undertaking any virtualization project, the following questions must be addressed:

• Maturity levels: What is the current and expected maturity level of the virtualized environment? (See "Virtualization Maturity Levels" later in this article for examples.)

• Purpose: What are the business drivers for the project?
• What: What processes, functions, and applications will be virtualized?
• Support: Do stakeholders (for example, system owners and executive leaders) support the project goals?
• Cost: How much is the project expected to cost, and save?
• Risks: What functional and financial risks will be associated with the project? Are they acceptable?
• Scope: What is the timeframe and what resources will be needed to complete the virtualization project? (Will it be a single, focused project, or one of multiple phases and milestones?)
• Changes: Will changes need to occur in the current processes, functions, and applications to support virtualization? Will changes need to occur in the deployment environment?
• Accountability: What measurements will be incorporated to indicate that the project has reached its targets and is successful? Which stakeholders need to be informed of project progress, and how often?

This list is by no means exhaustive; however, without at least a good understanding of the answers to these questions, it is likely that the project will be less successful than it could be. In a larger project where the goal is to virtualize a significant part of the environment or span multiple maturity levels, it is also important to have an open mind and, to some degree, an open project plan that permits incorporation of lessons learned during earlier phases of the project into later phases. Changes to the original stakeholder agreement must have buy-in; a minor change or delay that is communicated is rarely a problem, but ignored changes might turn an otherwise successful project into a failure.

Virtualization Maturity Levels

Analyzing the current state of virtualization, the maturity level, and comparing it to the future desired level simplifies virtualization decisions. There are typically four levels of virtualization maturity: Level 1 – Islands of virtualization; Level 2 – Consolidation and managing expenses; Level 3 – Agility and flexibility; and Level 4 – Continuous adaptivity.

Level 0 – No Server Virtualization

As the starting point of the virtualization maturity “ladder,” this level describes an organization which has not yet implemented virtualization.

Level 1 – Islands of Virtualization for Test and Development

This maturity level describes the state of most IT departments before they start a formal virtualization project. Virtualization is often used by individuals or limited groups within the organization without centralized management or resources. At this stage virtualization is used reactively and ad hoc to create virtual machines for testing and development in order to address specific issues for non-business-critical systems when they arise.

Level 2 – Consolidation and Managing Expenses

At this stage the primary driver is to consolidate servers and increase the utilization of available resources. When done correctly, consolidating small or underutilized servers into larger servers can be very efficient and can save significant costs. However, the key to saving costs is identifying the right servers for virtualization. While there can be valid reasons to virtualize larger servers as well, it is difficult to realize savings on hardware in doing so.

Level 3 – Agility / Flexibility

The driver for the next step on the virtualization maturity ladder is the need for enhanced flexibility, enabling you to add and remove resources on demand and even move workload between physical hosts. This ability can be used to balance workload or to support a high availability solution that allows virtual machines to be restarted on a different physical server after a server failure.

Level 4 – Continuous Adaptivity

The driver behind this step is the desire to fully automate all of these functions in order to enable software solutions, often with hardware support, to predictably and dynamically balance the load between servers, rebalance resources between virtual machines, start up and shut down virtual servers based on need, control power saving features in both the virtual machines and the host system itself, etc.
This automation should be service-aware and should consider such factors as measured and expected workload, tariffs for energy, importance and urgency of requested resources, and demand from other services, and should use all available information to identify the best use of the available resources.

The potential gains from virtualization grow significantly with each step up the maturity ladder; however, climbing too fast up the ladder can risk project failure. This is especially true if you also lack complete support from the stakeholders and the executive leadership, access to the correct infrastructure and tools, or the required skillset. Travelling up the maturity levels is often a journey and it is likely that a project will lead to a mix of the different maturity levels, which is expected, but it is important that your goals be clearly defined and communicated.

Virtualization Challenges

Part one of this article, “What is Virtualization?”, discussed the importance of identifying the business drivers for a project. After that is done it is equally important to be aware of problems and challenges that may arise. Awareness can guide infrastructure design to minimize problems caused by these obstacles.

One common and challenging problem with server consolidation is that some areas of the organization may want to retain control over their existing hardware and applications. This resistance could be caused by a fear of losing control of their environment, fear of inadequate response times or systems availability, concerns about security and handling of confidential data, or general anxiety about changes to their business environment. Some of these concerns may be valid while others may only express a lack of understanding of what this new technology has to offer. The project team must identify these concerns and address them to the satisfaction of the stakeholders.

Even though it is critical to have full support for the project, it is equally important to have a good understanding of the types of problems – both technical and business impact related – that can potentially occur.

A few common challenges are:

Overutilization: One common problem with a virtualized environment is overutilization of physical servers. Although virtualization permits running multiple logical servers on one physical server, applications require more, not fewer, resources when they are virtualized. Virtualization always adds overhead. A virtualized application uses more resources than a non-virtualized installation of the same application, and it will not run faster unless it is hosted on and has access to faster hardware than the non-virtualized installation. The actual overhead depends on a number of factors, but independent tests have shown that the CPU overhead generally ranges from 6%-20% (see “VMware: The Virtualization Drag”). Overutilization of resources can present a serious problem in virtualized environments that do not have correctly sized host servers. See the section “New Hardware Requirements” below for more details.

Underutilization: Alternatively, the underutilization of servers minimizes the value of virtualization. To provide a good balance it is important to understand the environment and to have the necessary tools to monitor and balance the load dynamically. Typically hypervisor vendors provide tools for this, but 3rd party vendors can provide added flexibility and value.
For example, one organization I have worked with utilizes virtualization to provide a dynamic test environment that can scale to meet the needs of many different groups. Resource requirements can vary dramatically depending on the type of testing being done. The environment is rapidly growing and initially experienced serious issues with overutilization. They resolved these issues by implementing a management solution that continuously measured the load and provided early warning of potential overutilization. This allowed the team to proactively balance their workloads and add resources when needed.

Single Point of Failure: In a virtualized environment where every host is running multiple logical servers, the impairment of a single physical server could have devastating consequences. Therefore, it is important to implement redundant failsafe systems and high availability solutions to avoid situations where one failing component affects multiple applications. This solution should include configuring redundancy for all critical server components, employing highly available storage solutions (RAID 5 or RAID 1+0), ensuring network connections are connected to separate switches, etc. In addition, in the event everything else fails, we recommend configuring the environment to be fault tolerant so that if one host fails, the guest systems will start on a secondary host.
Implemented correctly, virtualized systems are likely to have significantly better uptime than individual systems in physical environments.

One organization that initially experienced a few hectic evenings as the result of a single failing server bringing down multiple important applications learned early on the value of clustered host servers with dynamic load balancing. After virtualization was fully implemented, when one host went down, the workloads automatically restarted on another node in the cluster. In addition, this organization has also set up separate distributed datacenters so that if one datacenter becomes unavailable the entire organization isn’t affected.

Virtualization of Everything: Attempting to virtualize every server and application in an environment can be challenging. It is true that it is possible to virtualize most workloads; however, success requires careful planning that identifies what should be virtualized, why it should be virtualized, and what supporting infrastructure is required. Just because something is possible does not mean that it is a good idea.

Some of the more challenging examples are:

Heavily utilized servers. Significant planning is required before virtualizing servers that often or always register high resource utilization. This is especially true for servers with multiple CPUs. While most hypervisors support guest systems with 4 or more vCPUs, this requires complicated scheduling and the overhead can be steep. Therefore, unless there are compelling reasons and ample available resources, virtualization should be avoided for heavily utilized systems that require multiple CPUs, especially when predictable performance is critical.

Real time requirements. Applications that require real time or near real time response from their servers typically are not suitable for virtualization. The system clock on a virtualized system may lag as much as 5-10 seconds under a heavy load. For typical loads this is not a problem, but systems that require real time or near real time response need special treatment. A satisfactory virtual implementation will require careful analysis of the hypervisor solution’s support for real time requirements on guest systems.

Application support.
As virtualization becomes more common, many application vendors will begin to support their applications in virtualized environments. Nevertheless, a significant number of applications still are not supported, and even if virtualization is supported, some application vendors may require proof that any reported issue can be reproduced in a non-virtualized environment.

Licensing. There are still many applications and licensing agreements that aren’t designed with dynamic virtualized environments in mind. Ensure that licensing provisions address whether the license cost is connected to the number of physical CPUs on the host servers and whether the application is licensed to run only on a dedicated physical server. In these situations, the license may require payment for the host server’s 16 CPUs even though the application is assigned to only one vCPU. Dedicated physical server licenses may prevent dynamic migration of the logical server to other host servers. Another consideration is that a well-planned lifecycle management solution requires each image to have multiple active instances for Development, Test/QA, Production, and so on. The organization needs to determine and track whether each one of these instances requires additional licenses.

Direct access to specific hardware. Applications that require direct access to certain hardware, such as USB or serial port keys, or other specialized hardware such as video capturing equipment, tape drives and fax modems, might be complicated or impossible to virtualize in a meaningful way.

New Hardware Requirements. Hardware must be sized appropriately to take advantage of virtualization. For efficient scheduling of resources between multiple logical servers, each host server must have ample resources, including CPU, memory, network I/O and storage I/O. Because many concurrent workloads share these resources, the environment must not only support high volumes, it also must support a large number of transactions. For example, one extremely fast network card can be helpful, but a single fast card is seldom adequate. Efficient virtualization requires equipment with multiple fast I/O channels between all components. Sufficient hardware can also provide added value, acting as component level fault tolerance for all logical servers.

Special focus needs to be put on the storage infrastructure. Connecting all of your servers to a SAN (fibre channel or iSCSI based) is highly recommended for a virtual environment. A fast SAN and dedicated LUNs for the virtual machines avoid many I/O bottlenecks. The more advanced features that are common drivers for a virtualized environment, such as hot migration, high availability, and fault tolerance, are impossible or significantly harder to implement without a SAN.

Cooling requirements can be a concern. An older datacenter may develop so-called ‘hot spots’ when a large number of smaller servers are replaced with fewer but larger servers.
Although new servers may require less energy and create less heat overall, the generated heat can be concentrated in a smaller area. There are many ways to address this situation, including adding new racks with integrated cooling or developing more complex redesigns of the cooling system.

A lack of sufficient resources is a common obstacle for virtualization efforts. For example, a large organization quickly became aware that they hadn’t allocated sufficient storage when the constraints became so severe that they weren’t able to take snapshots of their images. Consequently, they could not implement their planned high availability strategy.

Another organization tried to implement a large number of I/O intensive network applications on a host with a limited number of network cards. As a result, the number of I/O interrupts to each card quickly became a bottleneck for this physical server.

These examples demonstrate how crucial it is to actively monitor and manage all types of resources; a resource bottleneck can easily cause an otherwise successful project to lose critical planned functionality.

Security. Another common concern is properly securing virtual environments. Reorganization and consolidation of servers and applications can be disruptive and risky; however, these risks can be managed. For security, there are advantages and disadvantages to virtualized environments, and the two are often closely related. Items that are typically seen as problem areas or risks can often turn into advantages when the environment is well managed. For example, new abstraction layers and storage infrastructures create opportunities for attacks, but these additions have generally proven to be robust. Nearly all attacks are due to misconfigurations, which are vulnerabilities that exist in both physical and virtual environments.
A few common concerns are:

Management of inactive virtual machines. Inactive virtual machines cannot rely on traditional patch management systems. In many virtualized environments, the templates or guest systems that are used as definitive images for deployment may not be accessible to traditional patch management solutions. In a worst case scenario, a poorly managed definitive image may revert an existing image to an unsafe earlier patch level. In an environment with strict change control and automation of all configuration changes, this is not a major issue, but in some environments, these situations can present major problems.

Maintenance of virtual appliances. Virtual appliances are pre-packaged solutions (applications, OS and required drivers) that are executed on a virtual host and that require minimal setup and configuration. Appliances can be secured further through OS lockdown and removal of any services or daemons that aren’t necessary for the appliance. This practice makes the appliance more efficient and more secure because it minimizes the attack areas of which a malicious user can take advantage. These non-standard installations can be harder to maintain and patch because some standard patches might not work out-of-the-box unless provided by the virtual appliance vendor. This problem can be mitigated by testing all configuration changes in a separate development and test environment before deploying in production.

Lack of version control.
Virtualized environments that allow guest systems to be reverted to an earlier state require that special attention is paid to locally stored audit events, applied patches, configuration, and security policies that could be lost in a reversion. Strict change control procedures help avoid this issue. Storing policies and audit logs in a central location also helps avoid problems.

Server theft. In a non-virtualized environment, stealing a server is difficult. The thief needs physical access to the server to disconnect it and then coolly walk out of the datacenter with a heavy piece of hardware. In a virtual environment, a would-be thief only needs access to the file system where the image is stored and a large enough USB key. Surreptitious network access may be even more convenient for a thief. A successful illegal copy of the virtual image may be undetectable. This issue underscores the need for efficient access control and an audit system that tracks the actual user and not just ‘root’, ‘administrator’ or other pre-defined privileged users.

New abstraction layer. Hypervisors introduce a new abstraction layer that can introduce new failures as well as security exposures. Hypervisors are designed to be as small and efficient as possible, which can be a double-edged sword from a security perspective. On the upside, hypervisors have a small footprint with few well controlled APIs, so they are relatively easy to secure. On the downside, lightweight and efficient can mean limited error recovery and security implementation. The downside can be mitigated by configuring high security around the hypervisor, including specific security related virtual appliances or plug-ins to the hypervisors.
Hyperjacking is an attack on the hypervisor that enables a malicious user to access or disturb the function of a large number of systems. So far hyperjacking hasn’t been a significant problem, but it is critical to ensure that your virtual environment follows the lockdown procedures recommended by the hypervisor vendor and that you apply all recommended security patches.

Securing dynamic environments. To fully take advantage of the potential provided by a virtual environment, that environment needs to support automated migration of guest systems between host servers. A dynamic environment, however, presents new challenges. When secured resources move from host to host, a secure environment must be maintained regardless of the current host of the guest system. These challenges may not be as problematic as they appear. With policy-based security and distributed vLANs that are managed for a complete group of host servers or the complete datacenter, policies will follow the guest system and remain correctly configured regardless of which server it is currently running on.

Immature or incomplete tools. Over the last several years the tools to manage virtual environments have been maturing rapidly, and much of the functionality for securing the system, automating patching, and managing the virtual system is enhanced frequently. Many of these functions are provided by the hypervisor vendor, while other tools with additional features are provided by 3rd party vendors. This rapid development of tools and features can be expected to continue, and it will be more important to have management solutions that can extend across heterogeneous environments – including virtual and non-virtual systems – and all the way out to cloud infrastructures. Precise predictions are impossible, but the industry is aware that virtual environments and cloud solutions will rapidly take over more and more workloads.
Management systems of the future will work with and manage a mix of hypervisors, operating systems and workloads in these environments.

The ability to secure and isolate confidential data is a common concern that must be carefully considered when designing storage solutions. Server virtualization itself doesn’t add a significant risk in this area, but it’s important to be aware of since SAN and more complex virtual storage solutions are often employed to maximize the value of a virtualized environment. Further discussion of this topic is beyond the scope of this article; when these solutions are employed, the steps required to secure the storage may require special or vendor-specific knowledge. This is particularly important if data is governed by regulations such as HIPAA, GLBA, PCI, SOX, or any other federal or local regulations. When regulations such as these apply, data security often must be designed in consultation with auditors and regulators.

With a properly managed virtualization project, these risks can be minimized; however, it is important that organizations be aware of the risks and address them appropriately. A well-managed virtual environment can provide greater security by ensuring that all servers are based on identical, well-tested base images.
In addition, security solutions are easier to implement and administer when based on centralized policies. For example, consider an organization that needs to distribute management of many of the guest systems to individual owners. These individual owners might choose to revert to an earlier snapshot at will. Central IT manages the security on these systems by regularly scanning them to ensure they are correctly patched and do not contain any viruses or other malware. When non-critical issues are detected, the owner is notified; for critical issues the system is disconnected from the network.

Appropriate Skills and Training. With the right tools and planning, management of a virtualized environment can be simple and streamlined, but the IT staff may need additional training to acquire new skills.

Administrators who don’t fully understand the specific tools and requirements for virtualized environments can easily misconfigure the environment – resulting in environments with unpredictable performance or, worse, security breaches. Sufficient time and resources for training are required both before and throughout any virtualization project.

Consider the same organization noted in the previous example. They needed to distribute many management tasks to individual owners. They ran into a problem when a group using a guest system with 2TB of data took a snapshot of the system. The local manager didn’t realize that the system would now need 4TB of storage and that it would take 5 hours to commit the snapshot. The issue was resolved by having a certified professional educate the system’s owner about the impact various actions have on storage requirements and performance. They were able to remove the snapshot safely, without losing any data, but could have avoided the issue if they had taken the proper training first.

General Project Risks. Virtualization projects are subject to the same generic risks as any major project.
Both scope creep and unrelated parallel changes with entangling side effects can derail virtualization projects as quickly and completely as any other project.

Design Approach

Given an understanding of the reasons for virtualization, the business drivers, the possible effects on business and potential obstacles, and the sources of failure and their mitigation, project planning can begin in earnest. The next step for a virtualization project is to carefully understand and analyze the environment. A successful virtualization project is the result of more planning than anyone expects. Some specific planning steps are laid out here.

Identify Workloads Appropriate to Virtualize

The first step is to identify the scope of the project, that is, the applications and servers to be included. The item Virtualization of Everything in the previous “Virtualization Challenges” section identified several types of servers that are difficult to virtualize. Avoid virtualizing these types of servers unless there is a valid reason and a plan for addressing the concerns. Fortunately, few servers fall into these categories. Most hypervisor vendors provide tools that can assist with this process, but to get the best possible result and avoid being at the mercy of the vendor, you should have a good understanding of the environment in question. The following server categories are listed in order of suitability for virtualization:
Rarely used servers that must be accessed quickly. Virtualizing these servers allows the organization to keep a large library of servers with different operating systems and configurations with a minimum hardware investment. They are typically used for:

This starting point is common for many companies because the value is significant and the risks few. Value is realized through faster provisioning of new servers, reduction of provisioning errors, and minimized hardware investment.

Additional worker nodes to handle peak loads. This approach is especially useful when applications can be dynamically scaled out with additional nodes sharing a common virtual environment. If the environment is maintained and sufficient resources are available when needed, this scenario adds great business value. A just-in-time automated worker node provisioning system maximizes this value.

Consolidation of lightly used servers. Some examples of lightly used servers include:

• Service Providers (xSP) with many small clients.
• Multiple mid-tier managers or file and print servers originally implemented on separate servers for political, organizational, or legal reasons.

In many cases the isolation provided by virtualization is sufficient, especially if the data is separated onto private disk systems; however, you should verify that virtualization satisfies the organization’s isolation and separation requirements.

Servers with predictable resource consumption profiles allow planning the distribution of work to virtualized servers. In these cases, keep in mind that:

• You should beware of applications with heavy I/O.
• Applications that require different sets of resources at the same time can coexist on the same physical server.
• Applications that require the same resources at different times can also coexist on the same physical server.

In each of these cases, value comes from reducing the number of servers, resulting in both hardware maintenance and management cost savings.
Unless a project falls into one of these categories, virtualization alone seldom saves money. There are other good reasons to consider virtualization, but you should be aware that cost savings may not appear.
Understand and Analyze the Environment

Server consolidation is an opportunity to raise the virtualization maturity level of the environment, or to prepare to raise it by identifying aligned procedures that can be automated and enhanced.

The analysis should include performance profiles of individual servers and applications, covering all critical resources (CPU, memory, storage I/O and network I/O) and their variation over time. Both the size and the number of transactions are important. An understanding of when different applications need resources and under which circumstances helps determine which applications are suitable for virtualization and which can share resources and be co-located in the same resource groups.

Many hypervisor vendors have tools that can assist with this process. However, regardless of which tool you are using, it is important to monitor performance over a period of time that also includes any expected performance peaks. Capturing a baseline of this information is recommended so that it can be compared against corresponding data collected from the virtualized environment. In situations where all expected peaks can’t be measured it is important to carefully analyze and estimate the needs.

This analysis also requires consideration of social engineering and the types of events that trigger resource consumption.
You especially need to gauge the risk that the same events will trigger multiple applications to use more resources. Awareness of these scenarios is critical to ensure acceptable response times during peak load times for different solutions.

Consider the following examples:

A typical environment in which, at the end of every workday, a majority of the users:

• Send an email, synchronize email folders and then log out from the mail server.
• Run reports to prepare for the following day’s activities.
• Print these reports and, perhaps, additional documents to bring home with them for review.
• Make backup copies of a number of important files to a folder on a file server.

An environment in which an unplanned event or fault occurs that triggers activity on multiple systems, such as:

• A fault triggers the fault management systems to alarm, do root cause analysis, handle event storms, and perform certain automation tasks.
• The end users notice the problem and use knowledge tools and service desk functions to determine if the problem is known and, otherwise, report it.
• The operations and help desk teams receive the alert from the fault management and service desk systems and connect to the service desk, CMDB, asset management, or network and system management tools to troubleshoot and correct the issue.

If the applications share a common host system whose requirements are based on ordinary usage, these virtual machines will slow down from the higher peak load as a result of the activity of hundreds or thousands of users. Just when the systems are most needed, they become overloaded.

Tracking the consumption of critical resources over time will reveal patterns of resource usage by servers. Based on that knowledge, you can determine which servers to safely virtualize, what resources they need and which applications can suitably share resources. This will enable you to more effectively pair virtual machines that stress different types of resources or that stress the system at different points in time.

Hypervisor and Supporting Software Selection

A hypervisor must work well in the environment and efficiently support requirements. A few years ago there was only a limited selection of hypervisors, but today the number of solutions has increased. Independent lab tests show that each of the major hypervisor solutions has advantages and disadvantages.

A few important areas to scrutinize when selecting a hypervisor vendor are:

Organizational and Social Requirements: Requirements that arise from the knowledge and experience of the people in the environment are often as important, if not more important, than the technical requirements. These requirements can affect the success and cost of the project.

For example:

• Does the organization have experience or knowledge about one specific solution?
• Do preferred partners have knowledge or experience with any potential solutions?
• Have solutions been tested or can they be tested easily with the planned hardware platform and most critical applications?

Required Functions and Protocols: With the gradual standardization of basic hypervisor functions, many of the solutions from the major vendors are becoming similar.
Added value has become the primary differentiator, in the form of:

• Efficient and dynamic automated migration that moves virtual servers between physical hosts. Load balancing and high availability solutions controlled by the vendor’s tools and integrated with standard enterprise management solutions are important here.
• Support for specific hardware combinations. For example, more advanced functions like hot migration commonly require the servers (especially CPUs) to be identical or similar. Some hypervisors also allow a compatibility mode with mixed CPU versions, but this forces the systems to take advantage only of functionality that all of the CPUs in use have in common.
• Support for existing or planned SAN solutions.
• Support for multiple storage repositories, and dynamic move and rebalance of virtual images between repositories.
• Support for all, or at least a majority of, existing or planned software applications.
• Support for all operating systems planned for virtualization (32/64 bit versions of Windows, UNIX, and/or Linux).
• Ability to access, utilize, and efficiently distribute all required resources.
• Management tools, or support for management tools, to monitor performance and availability and use this information to automate your environment. Preferably the solution will have an open API to integrate it with existing enterprise management systems.
• Built-in functions and APIs to manage advanced functions for security, high availability, fault tolerance, and energy saving.

These are just a few examples; a project should carefully list the requirements important in the business environment. Describing the importance of each requirement and the consequences of lack of support will simplify prioritization of options.

Virtualization Management Tools

Management of an environment becomes even more critical when virtualization is employed. Some of the common management issues related to virtualization include the need to:

• Simplify creation of new virtual servers and migration of existing systems into a virtualized environment.
• Predict and track virtual environments that compete for server and storage resources.
• Predict and track performance utilization in real time as well as historical trends in individual environments, the host system, and the SAN system, and preferably in a way that allows correlation between these components.
• Provide tools that trace resource utilization and up and down time, and connect metrics from these tools with chargeback and showback systems. Efficient usage of chargeback systems, together with mature systems that spin up and down servers as required, allows the organization to encourage system owners to manage their environment efficiently and, therefore, maximize the impact of Green IT and minimize energy bills.
• Provide management tools supporting life cycle management processes, with clear stages for development, quality assurance, library of available images, archive, configuration, and production.
• Provide efficient tools for workflow orchestration and automation that simplify and modularize automation by securely reusing previously created tasks. While implementing automation, focus on "low hanging fruit": simple automations that clearly save money or add security. Complex one-off automation tasks can be expensive to maintain and are often not worth the effort.
• Intelligently and actively manage the environment based on policies, measured performance, and events. This added flexibility can be one of the great advantages of virtualization. A few examples are:
  • Dynamically changing resources available to virtual machines
  • Moving virtual machines between different host servers as needed
  • Dynamically provisioning and configuring servers on demand or when triggered by policies
  • Dynamically shutting down virtual machines and host servers when they aren't being used
  If these capabilities aren't managed appropriately, these otherwise great features can present some very significant risks.
• Manage "VM sprawl" by implementing good change control and life cycle management processes that track where, why, and how virtual applications are running and which resources they use.
• Provide tools for backup and disaster recovery of the virtual environment.
• Provide tools and procedures to manage security, including patch management tools, a firewall integrated with the hypervisor, and various security related virtual appliances for the virtual environment.

If the management tools can handle most of these issues and include basic hot migration (VMotion, Live Migration, XenMotion, or similar), the environment will support efficient load balancing between the servers. Although some management tasks can be automated, it is important to be able to predict, whenever possible, the resources that are required before they are required. This approach demands a strong understanding of the business systems and occasional human intervention.

The importance of a holistic view of datacenter management solutions cannot be overemphasized.
Datacenter management solutions must support the complete environment: virtual and non-virtual systems, both on-premise and off-premise in cloud infrastructures. The solution should focus on business services and the role of IT in the business, and, when needed, seamlessly drill into other aspects of management and the business ecosystem. To accomplish this holistic approach, virtualization tools must cooperate and integrate with the existing enterprise management software.

Executive Buy-in

Having examined what virtualization can and cannot do and the considerations for deploying virtualization in an environment, we return to a crucial step in the project plan, and one that can pose the most common obstacle to success: stakeholder support.

Without executive support and backing from all important stakeholders, any project is likely to fail or achieve only partial success and profitability. The following steps will help garner support:

Identify the Importance: Articulate the importance of the virtualization project to both the company as a whole and to the stakeholder's organization and favored projects. The business drivers listed earlier in this article are a starting point. Spell out the savings the project will generate and how it will support new business models that will create new revenue streams. Will it make the organization more efficient and minimize the lead time to provision new services, and so on? Always communicate the importance of the project in a way that makes sense and is relevant to the target audience.

Communicate Risks and the Risks of Inaction: Honestly and sincerely point out the risks. The stakeholders must buy into the true picture. Hidden facts seldom stay hidden forever. A strong supporter who feels misinformed by the project group can easily turn into an even bigger obstacle, resulting in severe damage to the project.

Explain Migration without Risk of Interrupting the Business: A main concern for the business owners is interruption to the business. A detailed migration plan that addresses interruption of business is essential. Point out that a mature and flexible virtualized environment will minimize downtime for planned outages.

Listen: It is important to listen to concerns, investigate whether those concerns are valid and, if so, identify how they can be addressed.
Again, the key to a successful project is to have strong support from the stakeholders.

Proof Points

Proof points are measurements that indicate the degree of the project's success. Without identifying these points, the value of the virtualization will be obscure. These metrics will also help obtain executive buy-in and support for the project by associating it with measurable gains in productivity, or reductions in costs or response time. This is especially important if the stakeholders have previously raised concerns.

Proof points can be derived from business drivers and their baseline metrics. For example, if the intent is to reduce the time it takes to deploy new hardware or software, first identify current deployment times. If the intent is to save money through hardware consolidation, identify the costs of maintaining the current hardware, including cooling costs for the data center. Then, follow up with those same measurements after the project, or a significant phase in the project, has completed.
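In practice, a proof point reduces to simple before-and-after arithmetic over a baseline metric. The following sketch illustrates the idea; the metric names and numbers are invented for the example, not measurements from a real project:

```python
# Illustrative proof-point calculation. The baseline and follow-up values
# are hypothetical examples of the metrics discussed above.

baseline = {
    "server_provisioning_days": 45,   # physical procurement and setup
    "annual_power_cost_usd": 180_000, # includes data center cooling
    "cost_per_application_usd": 12_000,
}

after_virtualization = {
    "server_provisioning_days": 2,    # self-service VM provisioning
    "annual_power_cost_usd": 95_000,
    "cost_per_application_usd": 7_500,
}

def improvement(before, after):
    """Percentage reduction relative to the baseline value."""
    return round(100.0 * (before - after) / before, 1)

for metric, before in baseline.items():
    after = after_virtualization[metric]
    print(f"{metric}: {before} -> {after} ({improvement(before, after)}% reduction)")
```

Publishing the same calculation at each project milestone keeps the business drivers and the measured results tied together, which is exactly the role of a proof point.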
Summary – Five Key Points

The five points to remember for a successful virtualization project:

Understand Why and What: Clearly understand the reason for the project, the business drivers, and the applications and functions to be virtualized. The scope of the project must be clearly defined, including phases for a staged approach, milestones, and the appropriate metrics to measure progress and expected outcome.

Identify the Expected Risks: Risks, both functional and financial, are expected and acceptable. Virtualization can provide great value, but like any project, there are risks. Risks can usually be managed, but the key is awareness and planning by the project team and the stakeholders.

Virtualize Appropriate Workloads and Avoid Overutilization (and Underutilization): A common reason for virtualization failure is unreliable performance after applications have been virtualized. Avoid this situation by ensuring that too many applications do not share limited resources, and avoid host systems with inadequate bandwidth or inadequate support for large numbers of I/O transactions. Conversely, overestimating the amount of resources required can result in too many idle resources and can reduce the overall ROI. When virtualizing an environment, the key is to choose the appropriate workloads to virtualize, provide modern high-end server class host servers, and carefully manage and rebalance the workload so that all applications have sufficient resources during their peak times.

Get Support of Stakeholders: Get support from executive management as well as from the business owners before starting the project. Listen to concerns and address them. Have buy-in before the project starts.

Establish Success Criteria: Each project or subproject must have defined success criteria. These should include a comparison with the baselines from before virtualization. These criteria should be tied directly to the project's business drivers, such as cost per application, energy consumption in the datacenter, speed to provision a server, or the avoided alternative cost of building a new datacenter.

Virtualization offers efficiency and agility, but there are many pitfalls and obstacles to success. By following these five key points and the principles explained in this article, risks are reduced and chances for success are maximized.

Additional insights on implementing virtualization can be found in the Virtualization Best Practices section of the Implementation Best Practices pages.

Thanks to Terry Pisauro, Engineering Services Architect at CA Technologies, for providing valuable editing contributions.
Leading Edge Knowledge Creation
by Dr. Gabriel Silberman, Senior Vice President and Director, CA Labs, CA Technologies

Ever since businesses began looking for efficiencies by outsourcing or leveraging specialized services or favorable cost structures, one of the challenges has been to use this approach for acquiring leading edge knowledge. It may be argued that mergers and acquisition activities fulfill this role, as does recruiting of new personnel, either new university graduates or those who have accumulated professional experience. But these methods tend to be sporadic and do not represent a continuous process for bringing knowledge into a large and diverse organization.

At CA Technologies we have taken a different approach to tap into external resources. We aim to carry out a broad agenda geared towards continuous, in-context knowledge creation, to complement other more sporadic efforts. In contrast to the "pull" model used by some companies to attract ideas and proposals, CA Labs, the research arm of CA Technologies, relies on a "push" paradigm. This enables us to reach out to the research community to seek insights into technical challenges, the evolution of existing products, point solutions, or research to assist in new product development.

Using a popular context, the ... as a Service (aaS) framework, think of CA Labs as an internal service provider. Its offerings include access to an extensive network of leading academics, and the mechanisms (legal, financial, etc.) to establish a framework for collaboration. This would be the equivalent of an Infrastructure as a Service (IaaS) offering. On top of this, foundational research projects may be structured to undertake long-term technical initiatives. These are based on needs identified by the Office of the CTO and others responsible for charting and executing the strategic direction for CA Technologies' products and services. These initiatives will explore technological advancements prior to potential implementation as CA offerings, and constitute a Platform as a Service (PaaS) type of offering.

To complete the analogy with a Software as a Service (SaaS) offering, CA Labs provides the capability to create "research sprints." These are short term efforts, based on the relationships established through our long-term trusted relationships with academic partners and their deep knowledge of the interests, products, and services relevant to CA Technologies.

Consider the example of Reacto, a tool for testing the scalability of reactive systems, developed as a foundational research project (think PaaS) in collaboration with researchers from the Swinburne University of Technology in Australia and CA's development lab in Melbourne.

In a sophisticated enterprise application, a single user action may trigger a number of coordinated activities across a variety of systems. Before deploying such an application, it needs to be thoroughly tested against realistic operation scenarios for quality assurance purposes. However, replicating such a large-scale testing environment is challenging and even cost prohibitive, due to resource and complexity constraints. The Reacto project developed a general emulation framework, using lightweight models to emulate the endpoints with which the system under test interacts. This enables large-scale realistic emulation of a variety of enterprise production environments using only a small number of physical machines.

Reacto has been used to demonstrate the scalability of several CA components and products, including the Java Connector Server (a component of CA Identity Manager).

Now let us look at an example of a foundational research (PaaS) effort which became the basis for a research sprint (SaaS). The case in point is the Data Mining Roles and Identities project, done in collaboration with researchers from the University of Melbourne in Australia.

Role mining tools automate the implementation of role based access control (RBAC) by data mining existing access rights, as found in logs, to reveal existing roles in an enterprise. Along with individual roles, a role hierarchy can be built and roles may be assigned to individual users. Additionally, data mining may be used to identify associations among users, accounts, and groups, and whether these associations are necessary.

As a result of CA's acquisition of Eurekify and its Enterprise Role Manager, the researchers were asked to move their focus to leverage the role visualization tool developed as part of the project. This request gave birth to a research sprint to develop a tool to visualize access control data. Using the tool it is possible to visualize the "health" of a customer's RBAC implementation, before and after the deployment of CA's Role and Compliance Manager. Furthermore, the tool may be used periodically to detect and investigate outliers within an enterprise's role hierarchy, as part of governance best practices.

The success of the research model practiced by CA Labs has been sustained by these and other examples of innovative and practical implementation of knowledge transfer.

About the author: Gabriel (Gabby) Silberman is Senior Vice President and Director of CA Labs, responsible for building CA Technologies research and innovation capacity across the business. In collaboration with Development, Technical Services, and Support, and working with leading universities around the world, CA Labs supports relevant academic research to further establish innovation in the company's key growth areas.

Gabby joined CA in 2005, bringing with him more than 25 years of academic and industrial research experience. He joined CA from IBM, where he was program director for the company's Centers for Advanced Studies (CAS) worldwide. Previously, Gabby was a manager and researcher at IBM's T.J. Watson Research Center, where he led exploratory and development efforts, including work in the Deep Blue chess project.

Gabby earned bachelor of science and master of science degrees in computer science from the Technion – Israel Institute of Technology, and a Ph.D. in computer science from the State University of New York at Buffalo.
Virtualization: Enabling the Self-Service Enterprise
by Efraim Moscovich, Principal Software Architect, CA Technologies

About the author: Efraim Moscovich is a Principal Software Architect in the CA Architecture Team, specializing in Virtualization and Automation. He has over 25 years of experience in IT and Software Development in various capacities, including IT Production Control, programmer, development manager, and architect. Efraim has been involved in the development of many products including Unicenter NSM and Spectrum Automation Manager. He has expertise in various domains including Event Management, Notification Services, automated testing, web services, virtualization, cloud computing, internationalization & localization, Windows internals, clustering and high-availability, scripting languages, and diagnostics techniques. He is an active participant in the DMTF Cloud Management Work Group. Prior to joining CA Technologies, Efraim worked on large scale performance management and capacity planning projects at various IT departments. Efraim has a M.Sc. in Computer Science from New Jersey Institute of Technology.

"To provision a complete multi-system SAP CRM application, press or say '1'."

Virtualization is not a new concept; it has been around since the early 1960s. Self-service systems such as travel reservations and online shopping are an integral part of today's dynamic economy. The marriage of virtualization technologies and self-service concepts has the potential to transform the traditional datacenter into a Self-service App Store.

This article examines virtualization technologies and the role they play in enabling the self-service enterprise. It also discusses key concepts such as service, service catalog, security, policy, management, and management standards, such as Open Virtualization Format, in the context of self-service systems.

1.0 IT Services and the Services Gap

In today's enterprise, the IT department has the primary responsibility of delivering, running, and maintaining the business critical services (line of business), also known as production. This includes the infrastructure (such as servers, network, cabling, and cooling), software, and management functions to ensure high availability, good performance, and tight security. Downtime or degraded functionality may cause significant negative financial impact to the bottom line. The critical business services include, among others, email, customer relationship management or practice management, supply chain management, manufacturing, and enterprise resource planning.

In addition to the production services, the IT department has to provide infrastructure and support for a wide variety of other services, which range from assisting the sales force with setting up demo systems for clients to helping the engineering department with their testing labs. The typical IT department has a long backlog of projects, requests, and commitments that it cannot fulfill in a timely manner. Many of the backlog items are requests for evaluation and purchase of new hardware or software, to set up and configure systems for end users, create custom applications for the enterprise, and provide short-term loaners for product demos and ad-hoc projects. For example, to convert a large collection of images from one format to another, a project team required hundreds of computers to run the conversion, but only for a few days or weeks.

The gap between the 'must do' services and the 'should do' services is typically called the IT service gap.

The struggle to close this gap and provide high quality services to all IT users on time at a low cost has been raging for years. Some of the solutions used to improve the speed and quality include:

• Automating procedures (including scripting and job scheduling systems)
• Adopting standardized policies and procedures (such as ITIL1)
• Distributing functions to local facilities
• Sub-contracting and using consulting services
• Outsourcing the whole data center or some services to third parties
• Enabling end users to fulfill their own needs using automated tools (self-service)

2.0 Self-Service

The self-service concept dates back to 1917, when Clarence Saunders2, who owned a grocery store, was awarded the patent for a self-serving store. Rather than having the customers ask the store employees for the groceries they wanted, Saunders invited them to go through the store, look at the selection and prices of goods, collect the goods they wanted to buy, and pay a cashier on their way out of the store.

Some well-known self-service examples include:

• Gas stations, where the customers pump their own gas rather than have an attendant do it
• Automatic Teller Machines (ATMs) that enable consumers to have better control of their money
• The human-free, and sometimes annoying, phone support systems in many companies ("for directions, press 1")
• The ubiquitous shopping web sites (such as Amazon) that almost transformed the self-service concept into an art form

The main reasons for the proliferation of the self-service paradigm are the potential cost savings for the service providers and the assumed better service experience for the consumers.

In order for a service to be a candidate for automated self-service, some or all of the following conditions must be met:

• There are considerable cost savings or revenue opportunities for the provider in operating the service.
• There is a service gap between what the provider can offer and what the consumer demands.
• The service can be automated (that is, the service has a discrete and repeatable list of steps to be carried out, and no human intervention is required from the provider).
• The implemented self-service is convenient and easy to use by the consumers, and is faster than the non-automated version.
• The service offering fits nicely within the consumers' mode of operations and does not require expert knowledge outside their domain.
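One hypothetical way to make these conditions operational is to treat them as a checklist and automate only the services that satisfy all, or nearly all, of them. The criteria names in this sketch simply restate the list above; the services and threshold are invented for illustration:

```python
# Toy suitability check for self-service candidates. The criteria mirror
# the conditions listed above; the example services are invented.

CRITERIA = (
    "cost_savings_or_revenue",   # provider gains from operating the service
    "service_gap_exists",        # demand exceeds what the provider can offer
    "fully_automatable",         # discrete, repeatable steps, no provider staff
    "convenient_for_consumer",   # easier and faster than the manual version
    "fits_consumer_domain",      # requires no expert knowledge outside it
)

def self_service_candidate(met_criteria, threshold=len(CRITERIA)):
    """Return True if a service meets at least `threshold` of the criteria."""
    met = sum(1 for c in CRITERIA if c in met_criteria)
    return met >= threshold

# A VM-reservation service that meets every condition qualifies:
print(self_service_candidate(set(CRITERIA)))  # True

# A service that still needs provider staff for each request does not:
print(self_service_candidate({"service_gap_exists",
                              "cost_savings_or_revenue"}))  # False
```

The threshold can be relaxed where a business is willing to automate services that meet only most of the conditions.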
The IT department adopted the self-service paradigm for many of its functions even before virtualization was prevalent. Examples include the Help Desk and other issue tracking systems, and reservation systems for enterprise resources. However, the implementation of more complex and resource intensive self-service systems was not possible, at an acceptable cost, until the arrival of virtualization technologies.

3.0 Virtualization

According to the Merriam-Webster dictionary, the word "virtual" comes from Medieval Latin "virtualis", from Latin "virtus" (strength, virtue), and it means "efficacious" or "potential"3.

In our context, virtualization is a form of abstraction: abstracting one layer of computing resources (real or physical) and presenting them in a different form (virtual, with more virtues) that is more efficacious and has more potential. Usually the resources appear larger in size, more flexible, more readily usable, and faster than they really are in their raw form.

There are many forms of virtualization, from hardware or server virtualization (which can create what is commonly known as Virtual Machines or VMs), to Storage (implemented via SAN or NAS), to Network, and Application virtualization. Emerging forms of virtualization that are entering the mainstream are network and memory virtualization (a shared resource pool of high-speed memory banks, as opposed to virtual memory), and I/O virtualization.

Server Virtualization is achieved by inserting a layer between the real resources and the services or applications that use them. This layer is called a Virtual Machine Monitor, a Hypervisor, or a Control Program.

Figure 1: Virtualization (VMware)

These virtualization technologies can be abstracted further to provide database and data virtualization, and more application-level constructs such as a message queuing appliance, a relational database appliance, and a web server appliance. For additional virtualization terms and definitions, please refer to the Glossary.
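As a thought experiment, the hypervisor's core role of carving a physical resource pool into virtual machines can be modeled in a few lines of code. This is purely an illustrative model with invented class and method names, not an actual hypervisor API; real hypervisors expose far richer interfaces:

```python
# Toy model of a hypervisor layer abstracting a physical host into VMs.
# All names here are illustrative, not a real virtualization API.

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus            # the physical resource pool
        self.free_memory_gb = memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Present a slice of the physical pool as a virtual machine."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = {"cpus": cpus, "memory_gb": memory_gb}

    def destroy_vm(self, name):
        """Return the virtual machine's resources to the pool."""
        vm = self.vms.pop(name)
        self.free_cpus += vm["cpus"]
        self.free_memory_gb += vm["memory_gb"]

host = Hypervisor(cpus=16, memory_gb=64)
host.create_vm("web01", cpus=4, memory_gb=8)
host.create_vm("db01", cpus=8, memory_gb=32)
print(host.free_cpus, host.free_memory_gb)  # remaining pool: 4 CPUs, 24 GB
```

The point of the model is the indirection itself: the guests see only the slices they were given, while the layer in the middle tracks and reallocates the real resources.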
The virtualization wave was enabled by its inherent virtues: efficacy and potential. Some of the perceived and inter-related attributes of virtualization are:

• Plasticity – the ability to mold the resources into different forms; one day a Windows Server 2008 with 8 GB memory and 500 GB storage, the next day two Red Hat Enterprise Servers with 4 GB memory and 200 GB storage each.
• Elasticity – the ability to incorporate changes and adapt easily; add more memory or another network card when needed, and remove them when done.
• Velocity – the speed at which changes can be applied to the systems; from creating a new system to carving a new volume.
• Facility – ease of use; adding a new network card or hard drive does not require any screwdrivers or other tools.

In essence, all the various virtualization technologies turn an extensible collection of computing resources into a malleable computing fabric that can be molded easily, efficiently, and economically into many types of systems and services needed by the enterprise.

Virtualization Drivers

There are many drivers behind this virtualization trend. Ultimately, they are all related to reducing cost and improving service, but others emerged as well after the virtualization trend began.

The first major driver was cost reduction: minimizing the "run single and buy peak" syndrome. To ensure good performance, minimal interference, and adequate security, many IT departments adopted the following common practices:

• Run only one application per server
• Customize the hardware and software for the application
• Buy enough capacity to sustain the peak load periods

As a result, there is a proliferation of underutilized single-use servers that occupy data center space, require cooling, consume energy, and require maintenance and management. In some cases, these practices may cause the data center to run out of physical space and cooling capacity, and push energy consumption to nearly maximum levels. Managing a large number of physical servers is labor intensive and requires a relatively large number of highly skilled technicians and engineers (a low ratio of servers per administrator).

Virtualization provides an attractive option to combat many of these problems: server consolidation. Combining many applications into fewer servers consumes less space and energy, and requires less maintenance and labor.

The second major driver was the push for improved quality of service.

• The need for faster deployment of servers and applications: Nowadays, businesses cannot wait weeks for the hardware and software to be acquired, installed, configured, and deployed. Shortly after a service has been identified as important, the IT department is expected to deliver it.
• The increased importance of high-availability and resiliency to the business: Downtime can be detrimental to the survival of most businesses. If the services are not up and running, their employees cannot do their jobs and serve their customers.

Additional drivers that are related to cost and service but are nevertheless distinct include:

• The need to maintain and operate legacy systems: Despite the constant push for hardware and software upgrades, it may not be possible to upgrade some legacy applications. To continue to run these applications, a virtual environment (hardware and software) for the application can be maintained.
• The need for secure testing environments (sandboxing): To allow for testing of un-trusted or new applications without the possible security exposures, companies were using dedicated labs or networks and servers that are costly and hard to maintain. Virtualized environments make it much easier and more economical to do so.
• The need for disposable systems used for testing and demos: Many businesses, especially in the software business, maintain a pool of servers for testing and ad-hoc demos. Using VMs instead of physical servers makes the reservation, use, and recovery of the systems much simpler and more economical.
• The potential for increased automation: The conversion of physical systems into virtual entities (files, configurations) opened the gate for additional automation options. For example, automation tools can now patch virtual servers while offline, and additional memory can be added to a running system much more easily than on physical systems.

With all these virtues, it is no wonder that, according to a recent Gartner EXP Worldwide Survey, virtualization technologies are of the highest importance on the list of "Top 10 Technology Priorities in 2010"4.

4.0 Virtualization and Self-Service

Historically, provisioning a new service or system was costly, lengthy, complicated, and required many hours of skilled technicians and engineers to set up, configure, and test the systems.

A typical example: The Education Department needs a new internal web site to host web-based training courses. The web site will require several servers (blades), a load balancer, SAN storage, software or licenses for web servers, database servers, monitoring agents, and a management system.

The following are some of the important steps in this process:

 1. Define requirements for hardware and software
 2. Prepare RFP and select vendors, or go through the company procurement process
 3. Pre-allocate space and resources in a datacenter or other computing facility (rack space, cabling, cooling, energy allotment, etc.)
 4. Purchase servers, storage, routers, and software
 5. Wait for servers and software to arrive
 6. Configure hardware and upgrade firmware
 7. Install OS and drivers
 8. Configure OS
 9. Patch OS
10. Install anti-virus and other required or system software
11. Install and configure software (web servers, database servers, agents)
12. Patch software
13. Test system
14. Authorize users to use systems
15. Deliver servers to datacenter
16. Notify users

Completing the provisioning process can take weeks, if resources are already available in the warehouses, or even months, if new resources need to be purchased. After the system or service is running, the IT department may help with operations and maintenance, and recover the resources after the user is done using them (see Figure 2). If users can do this on their own, the savings in time and money will be substantial!

Figure 2: Resource Use Cycle

These two major trends of virtualization and self-service are not necessarily related. Virtualization was initially used mainly by system administrators to test new versions of operating systems in a sandbox environment without interruption to the production systems. The self-service concept was invented in the retail world to entice new clients to buy at these self-service stores, and eventually spread to the online world. However, the successful implementation of self-service in the enterprise is enabled and enhanced by virtualization.

The marriage between the concept of self-service in the enterprise and the manifested attributes of the virtualization technologies empowers the IT department to provide highly sophisticated services faster than ever, at a fraction of the cost, and in a repeatable fashion. Many forces are driving this merger.

In addition to the virtualization technologies, the following factors should be considered:
• The iPhone generation state of mind – everything has to be easy, intuitive, always available or connectable from everywhere, with a huge selection of apps for every imaginable or fabricated need.
• Self-service everywhere – for better or for worse, we are inundated with self-service from the gas pump to the check-in kiosk in the airport.
• The abundance of cheap, commodity hardware.
• The shortage of skilled hardware and software specialists.
• The reduced and declining IT budgets.
• Improved management software.
• The push for more autonomy and self-sufficiency for the end-users in non-IT departments.

The self-service process that is backed by virtualization for creating the new internal web site is expected to be simpler and cheaper:
1. Define requirements for hardware and software
2. Select components from the catalog
3. Configure them for the particular application
4. The system (self-service portal, App Store) provisions and prepares the requested application
5. Run

This is, of course, a simplified version of the real process, but the ideas of self-provisioning of possibly complex services still apply.

Gradually, the prevalence of the self-services, the evident cost savings, the trend for more self-services, and the availability of tools to create, assemble, provision, and manage services by non-IT personnel can transform the IT department into an Enterprise App Store.

5.0 The Enterprise App Store
The Enterprise App Store started as a simple self-service portal to reserve run-of-the-mill pre-configured servers for testing.
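A portal of this kind can be sketched in a few lines of Python. The class, template names, and sizes below are invented for illustration; a real portal would call a hypervisor SDK rather than build dictionaries:

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    """A pre-configured template published in the self-service catalog."""
    name: str
    cpus: int
    memory_gb: int
    software: list

class SelfServicePortal:
    def __init__(self):
        self.catalog = {}      # published templates, keyed by name
        self.provisioned = []  # everything users have reserved so far

    def publish(self, item):
        self.catalog[item.name] = item

    def provision(self, name, **overrides):
        item = self.catalog[name]                            # step 2: select from the catalog
        spec = {"cpus": item.cpus, "memory_gb": item.memory_gb,
                "software": list(item.software)}
        spec.update(overrides)                               # step 3: configure for the application
        vm = {"template": name, "state": "running", **spec}  # steps 4-5: provision and run
        self.provisioned.append(vm)
        return vm

portal = SelfServicePortal()
portal.publish(CatalogItem("web-server", cpus=2, memory_gb=4,
                           software=["web server", "monitoring agent"]))
vm = portal.provision("web-server", memory_gb=8)  # user overrides one setting
print(vm["memory_gb"], vm["state"])  # prints: 8 running
```

The point of the sketch is the division of labor: IT publishes vetted templates once, and users reserve and tweak them on demand without a procurement cycle.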
Based on current trends in demand for self-services, and the rapid pace of improvement and adoption of virtualization technologies, the Enterprise App Store is quickly becoming a reality.

The appropriation of the App Store concept (originally created by Apple’s iTunes®) in the context of the enterprise is to emphasize consumer expectations: a large selection of apps, ease of use of the store in finding and navigating it, and the simplicity of downloading and installing the apps on the consumer device.

The current IT self-service tools enable users to perform some of the provisioning and configuration functions traditionally performed by the IT Department. The following are examples of some of the operations that users can perform with the self-service tools:
1. Reserve one server for testing (configure CPUs, memory, storage, and so on)
2. Reserve a server from a pre-defined template (see item 5 in this list)
3. Reserve a CRM service (two or more interconnected servers)
4. Reserve a complete QA lab (20 pre-configured servers with central control)
5. Select a system, configure it, and save it as a template
6. Select a service, configure it, and save it as a template

The Service Catalog
At the core of the App Store is a service catalog. All services that an enterprise wants to offer to its users are defined in the service catalog. From the users’ perspective, this is their main interface to the App Store. The service catalog is also a well-defined ITIL® artifact5.

The service catalog exposes any relevant information about its services. For example:
• Service name
• Service description
• Some form of service level agreement
• Entitlements (who can access the service and how)
• Price, if any
• Procedure on how to fulfill the service

The service catalog provides an easy interface for finding services the users are entitled to use, viewing their attributes, and provisioning them via wizards or simple steps. Other facilities such as saving search results, and annotating and tagging services, are also desirable. The service catalog can help standardize the IT service offerings by presenting the preferred offering first or providing some incentives for using them.

Overall, the App Store should provide an attractive and easy to use interface with familiar metaphors such as shopping carts, and features users have come to expect, such as user reviews and wish lists. The expectations are high.

Finally, in the context of Cloud Computing, the App Store enables the so-called virtual private cloud by providing the same facilities as a public cloud in the confines and security of the enterprise.

6.0 The Enterprise App Factory
The App Store described in section 5.0 provides users the means to select a service or system from a predefined list and do some configuration before provisioning the service.

The extension of the App Store enables users to be more creative. They can design, assemble, and create new applications or application parts and provision them or save them to be used later. The following table describes some of the advanced operations:

Design a Service: Define the attributes of the service, the required and optional components, and the possible configurations; save the definition as a service blueprint (definitions).
Assemble a Service: Search a selection of existing systems’ templates, services, or other parts; view properties of items; put them together; configure them; configure the final assembly; optionally save for later.
Create a Service: Refers to creating a system or service part that does not exist in the catalog. For example, add a new DB server that can be used in building other services.
Import a Service: The ability to read an external definition of a service, such as an OVF file, and import the definition and associated components into the system.
Export a Service: Package the service definition and associated components into an internal or external format that can be used by other systems.

7.0 Managing Self-Service Systems
The systems behind the App Store, as well as the VMs that constitute the services provisioned by the App Store, are themselves IT systems, and as such, they need to be secured, monitored, configured, and otherwise managed.

In addition to the common management tasks that are needed for all servers, the virtualized, self-service systems have their own unique management challenges. The following are common management aspects that need to be addressed in all virtual environments, but have a particular importance in self-service systems.

Security
Virtualization and self-service add additional security challenges that must be addressed in order for the provided services to comply with the enterprise requirements.

For example:
• Since the Hypervisor (VMM) is another layer of software, it has to be protected as well via proper patching, anti-virus software, firewalls, and so on.
• The internal virtual networks (VM to VM), which are implemented by the hypervisors, need to be inspected and protected by the Network Security systems (software and hardware) as other physical networks are.
• Offline VMs, which are files, should be protected from improper access and patched when needed.
• Since VMs from multiple services can run on the same physical server, they must be separated and protected from one another from intercepting network packets, accessing file systems, and affecting performance.
VMs provisioned for services with high security requirements will need to run on separate physical machines.

In addition, because multiple users and departments will use the App Store, each with their unique requirements and privileges, a role-based access control system is required to define the various roles that users can be assigned in the system.

Each user or department is granted access to resource pools, systems, templates, and the operations they can perform. In some cases, a user will have administrative privileges on all resources assigned to the department or the services created by its users. For example, a provisioned QA lab with potentially hundreds of servers and users will need an assigned admin that can perform some operations on those systems, such as resetting them to their initial state.

Monitoring
The service, once provisioned and running, needs to be monitored for performance, exceptions, and changes. These are required if the service must comply
with a pre-defined service level agreement (SLA). The monitoring data can also be used for capacity planning, trouble-shooting, and business continuity management.

Using the native virtualization facilities, remote facilities, or specially deployed agents, the management system captures certain measurements of the running systems, such as CPU use, memory use, bandwidth use, and response time, and stores them in its databases for ongoing analysis of performance. The challenge is to present a service-centric analysis and correlate the data from the VMs, the hosting servers, and the related infrastructure into a cohesive picture for the administrators and consumers.

This performance data can be used for reporting to the consumers or administrators, for capacity planning, or to make dynamic adjustments in the resources assigned to the system (such as adding more memory). Runaway systems need to be curbed (lower their priority or shut them down) to avoid impact on other services (see Multi-Tenancy below). The health state of all the systems that comprise the service, and of the service as a whole, has to be monitored as well. Errors and failures have to be analyzed for possible impact on the service, and failed systems may need to be restarted (if defined so in the service definition).

In addition to performance and health, the provisioned systems need to be monitored for changes in their configuration to make sure the systems comply with all applicable company regulations and external regulations such as SOX, HIPAA, and GLBA. The management system can use this data to verify that the current configuration does not deviate from authorized ones and that authorized changes were performed accurately. For example, antivirus software cannot be removed, certain ports cannot be opened, and the systems have to be at certain patch levels on all components.

The complexity here is to calculate the compliance of a service from all its underlying components at all levels. For example, if a hosting server does not comply with the patch level requirement, any service that runs wholly or partially on that host will not be compliant as well. To complicate things even further, the components of a service may migrate from one host to another based on policy or because of failure, so the compliance of a service can change daily even if nothing has changed in the service itself.

Multi-Tenancy
A single App Store will typically serve multiple organizations or departments that need to be shielded from one another. The VMs that comprise the services use shared physical resources; as a result, special attention is needed to ensure that one service cannot adversely impact the performance of other services, and a system in a service cannot snoop on systems in other services.

Since users can deploy multiple instances of the same services, the App Store must support resource name duplication (such as IP address or server name), separation (network fencing), and ensure non-interference.

In addition, a management portal is required for each department so they can manage their own resources, systems, and services (see Security).

Finally, a management portal for the self-service system as a whole is used to view and administer resources and services for the whole enterprise.
Chargeback and Billing
Many enterprises want to ensure that all services are properly accounted for and their use is charged back to the requesting user or department.

The measurement data collected using the monitoring tools are used to create periodic (monthly) usage reports. The typical usage is based on the type of systems, the amount of time in use (elapsed) or amount of CPU used, storage used, and other metrics. These reports are used as information by the various departments or can be used to affect the budgets of the various departments. Departments that exhaust their budget may not be able to reserve additional services. This will ensure that the IT resources are not used when not needed.

Self-service systems used by service providers use this billing information to charge their clients.

Automation
To make the services more efficient and achieve a high level of service, the App Store must make heavy use of automation.

The monitoring information (such as performance data or health status changes) can be used to increase or decrease the amount of resources used by a service. For example, a policy can be configured to add more servers to a web farm service during peak usage periods (holidays), and remove them when not needed. A more sophisticated policy can do so based on historical usage patterns.
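Such a threshold-based policy amounts to a rule that is evaluated periodically against the monitoring data. The thresholds and pool limits below are invented for illustration:

```python
def evaluate_scaling_policy(avg_cpu_percent, servers, min_servers=2, max_servers=10):
    """One evaluation cycle: decide the web farm's new size from average CPU load."""
    if avg_cpu_percent > 80 and servers < max_servers:
        return servers + 1   # trigger action: provision another server from the template
    if avg_cpu_percent < 20 and servers > min_servers:
        return servers - 1   # trigger action: reclaim an idle server
    return servers           # within bounds: no action

print(evaluate_scaling_policy(90, 4))  # 5: peak load, grow the farm
print(evaluate_scaling_policy(10, 4))  # 3: idle, shrink the farm
print(evaluate_scaling_policy(50, 4))  # 4: steady state
```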
For example, using health status information (such as ‘System is down’), the App Store can automatically start a stand-by server to replace a failed one or temporarily increase the amount of resources to the remaining servers.

Policies are usually implemented via a set of rules that are evaluated periodically and can trigger actions to be performed on the service or its components.

Virtualization can enhance automation in the following ways:
• Virtual servers can be uniformly manipulated via hypervisor SDKs and scripting interfaces much more easily than physical servers.
• Files that comprise a VM can be directly accessed even when the VM is offline (for example, automated patching).

Resource Recovery
One of the challenges in provisioning physical systems is to know when to reclaim them, prep them, and put them back to work as soon as possible. Since the App Store knows exactly the expiration date of the service, and the resources are virtualized, it is much easier to reclaim the resources and quickly re-provision them as part of new services. This ensures a more efficient use of resources and minimizes VM sprawl.

8.0 Management Standards
In an effort to enhance the portability, supportability, interoperability, and manageability of virtualized services, various standards bodies have published or are working on standards that can be used by the App Store.
DMTF’s Virtualization Management Initiative (VMAN6) includes a set of specifications that address the management lifecycle of a virtual environment. In particular, the OVF (Open Virtualization Format)7 specification provides a standard format for packaging and describing virtual machines and applications for deployment across heterogeneous virtualization platforms.

The TM Forum SDF (Service Delivery Framework) provides a set of principles, standards, and policies used to guide the design, development, deployment, operation, and retirement of services delivered by a service provider8.

SNIA (Storage Networking Industry Association) is working on various initiatives to better support virtualized storage (virtualized disks, block devices) across heterogeneous devices from multiple vendors9.

In addition, a few best practices frameworks for managing and measuring IT Services have emerged and are being adopted by many businesses, notably ITIL10, COBIT11, and SMI12. These frameworks contribute to the standardization of many aspects of service delivery and operation.

Overall, these established and emerging standards can be used by the App Store to import and export services, and to provide standardized management functions across all layers of the service.

9.0 What’s Next
Once the self-service system is incorporated into the enterprise and the organization reaches a mature level in using virtualization technologies, further optimizations are possible in cost reduction and service level improvement.

• Since many or most enterprise services now run on virtualized environments, the importance of which hardware or software is being used will diminish as the user will focus on the logical service properties rather than on its physical attributes.
This frees up the IT department to standardize on the hardware and software that is the most economical and provides the service level needed.
• As more and more services are defined and delivered by the App Store, the service catalog becomes the focal point of the self-service system and the enterprise. The visibility of these services in the catalog can encourage further standardization in components, templates, operating systems, and other software.
• The data collected by the self-service and management systems can be used further to implement chargeback, so the actual cost of IT can be charged to the actual departments.
• The App Factory will enable even non-technical users to assemble new services with little help from more skilled (and more expensive) workers.
• This self-service App Store enables the so-called private cloud in the enterprise and facilitates faster adoption of the cloud in the enterprise.
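The chargeback idea can be made concrete with a simple metering calculation. The rates and metrics below are invented for illustration; a real system would price the usage records collected by the monitoring tools:

```python
# Hypothetical per-unit rates for the metrics the monitoring tools collect.
RATES = {"cpu_hours": 0.05, "storage_gb": 0.10, "elapsed_hours": 0.02}

def monthly_charge(usage):
    """usage maps metric name -> amount consumed by one department this month."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

qa_lab = {"cpu_hours": 1200, "storage_gb": 500, "elapsed_hours": 720}
print(round(monthly_charge(qa_lab), 2))  # 124.4
```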
10.0 Conclusion
The combination of the inherent attributes of virtualization technologies, such as plasticity, elasticity, velocity, and facility, coupled with the notion of self-service and an easy to use App Store, enables the enterprise to provide a higher level of service to its consumers while reducing the overall cost of providing these services.

Endnotes
1 IT Infrastructure Library (ITIL): http://www.itil-officialsite.com
2 Clarence Saunders: The Piggly Wiggly Man by Mike Freeman
3 Virtual (Webster Dictionary)
4 “Gartner EXP Worldwide Survey”, January 19, 2010
5 Service Catalog (ITIL v3 Glossary of Terms and Acronyms)
6 VMAN (DMTF)
7 OVF (DMTF)
8 SDF (TM Forum)
9 SNIA: http://www.snia.org
10 ITIL
11 COBIT
12 Service Measurement Index (SMI)

ITIL® is a Registered Trade Mark of the Office of Government Commerce in the United Kingdom and other countries.
Service Assurance & ITIL: Medicine for Business Service Management
by Brian Johnson, Senior Advisor, Services, CA Technologies

More than ever, we hear about IT organizations being rated according to how well they “align with business.” Certainly, this has been the phrase that for a decade or more has been most associated with “business service management.” This is the idea that IT environments should be managed in relationship to how they impact a business, positive or negative: the idea that we manage IT assets with direct knowledge of how they impact the services that users experience. Overall, this is the idea that keeping the IT environment up and running at peak performance will ensure the internal processes and online customer interactions that make up business services.

About the authors:
Brian Johnson is a Senior Advisor in the global CA Services practice specializing in IT Infrastructure Library (ITIL). He was part of the team that created the ITIL approach, and has authored or co-authored more than 20 ITIL books. Brian designed both the ITIL business perspective series and version two of ITIL, and founded the ITIL user group, itSMF. Putting theory into practice, he also led both early government implementations of ITIL best practices, as well as early private-sector ones.

Lately, ‘alignment’ has been replaced by ‘integration’ — as if the introduction of another term will actually make a difference to the reality of the situation. Alignment/integration is easier said than done: for the most part, IT culture isn’t collaborative enough across disciplines; many IT management tools aren’t smart enough; and tools from most individual vendors, let alone different vendors, aren’t integrated enough. All this contributes to a siloed approach to IT management: an approach that prevents a useful management view (or model) of what comprises each service that’s delivered to an end-user.

And whether ‘integration’ or ‘alignment’ is bandied about by the pundits, the bottom line is that whatever IT believes, if the Business is not happy, they will find another supplier of IT services — how is that for those who believe they are ‘fully integrated’ and ‘critical’ to the Business?

Where BSM Fell Short
Business Service Management solutions initially focused mainly on systems’ impact on applications and relied on Configuration Management Databases (CMDBs) to define the parts of the IT environment that comprise (i.e., support) a service. But BSM focused on availability, and fell short on performance trending and capacity planning. In truth, it fell short too on Availability, since the Business wants true end-to-end measurement and IT failed to address the Business view of a transaction (so what if ‘the service’ was 99.999% available if the Business could not access key applications because Network availability was not considered — by IT — to be a component measure of Availability?)

System impact on the application is important, but is only a fraction of the story. Parallel to BSM, solutions emerged to manage business services in terms of application performance and user experience, as well as the behavior of a business process’ individual transactions that comprise a service (i.e., is log-in or check-out of the shopping cart slow, and why). Along came solutions for measuring network performance and its impact on applications. Workload automation, another key IT solution, viewed service in terms of jobs — another important piece of the puzzle.

With all these views of IT (application view, systems view, network view, workload view), there’s opportunity to go beyond traditional BSM and put them together to provide a true 360-degree view of business services. Moreover, we must put them together in a way that is predictive, so negative service impact can be avoided and so service quality can be improved. In other words, have a true picture of what the Business considers to be Availability.

This is Service Assurance: a 360-degree situational awareness of business services (i.e., a consolidated, real-time, predictive view of all silos, workloads and their relationship to a service). Where previous BSM solutions have failed, this next-generation approach is designed to build bridges of communication and promote collaboration across the cultural and political barriers within IT organizations in an ITIL-like fashion. Not that the Business cares about the ITIL (if they did it would be called the ‘BL’), but they do care that we have a repeatable means of working so that we can be consistent — and ITIL is a source of good practices that can be used or adapted. ITIL can and should be used to ensure that domains handle, for example, incidents or changes, according to a Policy that is adhered to across the organization. Promoting best practices, Service Assurance will give teams across IT organizations a shared view and common understanding of services and service impact to enable closed loop service lifecycle management.

Services Are Defined and Managed in Terms of the Customer
For example, in insurance companies, this means consumer, employer and healthcare provider self-service portals must always be up and running fast — and that all the internal systems, such as claims processing, do the same. IT staff at universities worry about supporting thousands of students registering for courses within the same week, and about the performance of their web-based lectures.
Hospital IT staff know that patient health is at stake if any of the key systems, like physician order entry, patient records, pharmacy, radiology and operating room scheduling, slow down or go down.

ITIL is not specific to these environments; it provides generic guidance that can be used — or perhaps a better term would be interpreted for use — within specific environments. Sadly, some people become disillusioned if they ask a question such as ‘What does ITIL say about Healthcare?’ and get the answer ‘Nothing’; it is a misunderstanding that a set of generic good practices was written to accommodate specific Business or IT vertical needs. However, access to good practice that can be adapted will save time and money.

Brian thanks David Hayward, Senior Product Marketing Manager, Assurance, CA Technologies, for providing valuable contributions to this article.
Data Virtualization
by Sudhakar Anivella, Senior Architect, CA Technologies

Enterprises need rapid access to reliable data to ensure good business decisions. With the trend to globalization, data increasingly comes from multiple heterogeneous sources. Data virtualization is a solution that provides a single, logical view of information, without regard to the various physical formats of its sources. This paper examines several approaches to data virtualization, and shows that it can substantially reduce the time and cost of amalgamating multiple data sources.

About the author:
Sudhakar Anivella is a Senior Architect at the CA office in Hyderabad, India. He is responsible for the design and development of Service Desk Manager. He is currently focused on designing common RESTful services components for CA Technologies products in collaboration with other senior colleagues. Prior to joining CA Technologies, he worked at Cordys as a Solutions Architect, where he designed and implemented various Cordys components and SOA based enterprise solutions in the areas of Manufacturing, Insurance and ITSM for customers in Europe, the Gulf and India. At BHEL he developed a functional programming tool for programming PLCs and software for distributed control systems (DCS). Currently he is a member of the Technologies Innovation Committee (TIC) at CA Technologies, and is a member of the Indian Science Congress Association. He has co-authored 5 patents in the areas of Business Process Optimization and Change Management at CA Technologies. Sudhakar obtained his Masters Degree in Computers with specialization in AI from Osmania University, Hyderabad, India.

1.0 Data Virtualization: A Remedy for Enterprise Data Integration Challenges
In any enterprise business, users need quick access to reliable data to make necessary business decisions. Often this data is spread across heterogeneous data sources managed by different applications with dissimilar formats within various organizational silos. There may be multiple types of database management systems, with data scattered across a spectrum of repositories. Information is often duplicated across these repositories through replication and Extract, Transform and Load (ETL) operations that compound the existing complexity.

This leads to the major data integration challenge of providing uniform access to business critical data in real time, while maintaining integrity, reliability, security, performance, and availability.

Enterprises are challenged continuously by growing globalization, increase in industry dynamics, new data introduced with a merger or acquisition, and so on. These changes add more complexity to the existing challenges. It is observed that an increasing amount of people’s time is spent searching for relevant information.

Traditional integration approaches for consolidation and replication of data such as data warehouse (DW), Master Data Management (MDM), or ETL are unable to address growing business demands with agility, low costs, and low risk. Most of these approaches require time-consuming and costly customizations that require in-depth technical knowledge of the products for any changes.

As with server virtualization and cloud computing, data virtualization has emerged as a solution to provide a logical, single virtual view of information by aggregating data from dissimilar data sources in a way that is cost effective, flexible, efficient, and easy to operate and maintain.

1.1 Quality of Information Usage
Many organizations are at the first stage of the maturity model, focusing only on the collection of data and reporting. Organizations in the next levels of maturity are able to take competitive advantage of information in order to transform their business processes in a timely manner1.

The following diagram shows various levels of information maturity in organizations:
Figure 1: Information maturity in any organization (y-axis: value to the business)

2.0 What is Data Virtualization?
Data virtualization is the abstraction of data contained within a variety of data sources in an enterprise so that data can be accessed without regard to the physical storage formats or heterogeneous nature of the individual applications. This concept is commonly used within grid computing, cloud computing, and business intelligence systems.2

Data virtualization forms a part of the virtualization stack that can be broadly divided into Storage, Hardware, Application, and Data.

Figure 2: Virtualization stack

The following diagram shows an overview of existing data integrations within an enterprise:
Figure 3: Data integrations using ETL, Data warehousing, MDM, etc. (Tran, Data Virtualization, 2008)

Several aggregation solutions have evolved to solve the problem of data integration. Some of them are Extract Transform and Load technologies (ETL), Data Warehouse solutions, and Master Data Management (MDM).

Extract Transform and Load (ETL) is one of the early data integration approaches used by organizations. This process involves the following:
a. Extracting data from systems that are primary sources of data in an enterprise
b. Transforming data, which consists of cleaning, filtering, validating, and applying business rules
c. Loading data into a database or an application that houses data

Organizations use ETL processes to migrate and transform data from applications and databases to feed an enterprise data warehouse, for example, a central database for decision support throughout the enterprise.

Data warehousing solutions originated from an organization's need for reliable, consolidated, unique, and integrated reporting and analysis of data, at different levels of aggregation. This concept dates back to the late 1980s, when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse" (Murphy, 2008).

Master Data Management (MDM) is a centralized facility designed to hold master copies of shared entities, such as customer and product. MDM consists of a set of processes and tools to define and manage non-transactional data entities of an organization. Currently, MDM is applied to the master data of specific business entities, such as customer and product data, but it needs to be extended beyond these entities to be useful to a broad category of enterprise users.

Many enterprises have already invested a significant amount of money and time
on these approaches, often with more than one solution existing within the same enterprise.

The following diagram shows data integrations in an enterprise using a mix of these approaches and how data virtualization helps them move to virtual environments:

Figure 4: Virtual data layer on top of data integrations (Tran, Data Virtualization, 2008)

In the data virtualization approach, all composite applications interact with data presented in a single virtual data layer.

Data virtualization uses cloud platforms and Service Oriented Architecture (SOA) to simplify and unify the existing solutions by providing data and complexity abstraction in a flexible, effective, and efficient way. SOA focuses on business processes, while data virtualization focuses on the information required for these business processes.

Some benefits of data virtualization are as follows:
• Reduces the data integration costs with a unified virtual data layer
• Brings flexibility and the ability to change the underlying architecture without impacting business applications that are built on top of this layer
• Reduces dependency on individual data sources
• Minimizes impact of changes
• Improves data quality

Services created on top of the virtual data layer provide information as a service. These services can be consumed by business applications and processes. New composite applications can be built easily by reusing these services.
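The virtual data layer can be sketched as a single logical view over differently-shaped sources. The two "sources" below (a CRM table and an ERP feed) and their field names are invented for illustration:

```python
# Physical sources with incompatible schemas.
crm_rows = [{"cust_id": 1, "cust_name": "Acme"}]          # hypothetical CRM table
erp_rows = [{"id": 2, "name": "Globex", "region": "EU"}]  # hypothetical ERP feed

def customers():
    """The logical 'customer' entity; consumers never see the physical formats."""
    for r in crm_rows:
        yield {"id": r["cust_id"], "name": r["cust_name"]}
    for r in erp_rows:
        yield {"id": r["id"], "name": r["name"]}

print(sorted(c["name"] for c in customers()))  # ['Acme', 'Globex']
```

If the CRM system is later replaced, only the mapping inside customers() changes; applications built on the logical view are untouched, which is the flexibility benefit listed above.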
3.0 Data Virtualization using SOA
The following diagram depicts how data virtualization can capitalize on existing solutions such as MDM, Data Warehouse, and ETL using SOA:

Figure 5: Data virtualization using SOA

In this approach, the virtual data layer consists of standards-based reusable web services. These services provide data on demand from various sources in the enterprise, such as data views, data warehouse, ETL tools, and so on. Business applications can be built on top of these services without regard to the underlying tools and technologies.

The virtual data layer consists of the following:
• Automated processes to perform data integration tasks such as data extraction, transformations, cleansing, standardization of parsed data, validations, and checks. These processes are flexible and easy to customize, and facilitate needed changes in a cost-effective way.
• Access to external web services for additional information to support data enrichment.
• A facility for manual intervention of experts to handle data exceptions and knowledge-driven classification, manage data hierarchies, and enforce data governance policies such as business rules.

3.1 Reference Implementations of Data Virtualization
Implementation 1:
One example of implementing data virtualization is CA Technologies’ Unified Service Model (USM) and Catalyst integration platform3.

USM is a service-centric virtual information model that unites information from diverse domain managers to create a 360-degree view of a service. It does this by leveraging the service definition that is maintained within the Configuration
  • 49. Management Database (CMDB) and the rich data sources housed in CapabilitySolutions4. Each solution maps its internal data representation to USM, provid-ing a shared abstraction of the data.Catalyst is an application and integration platform. Its capabilities reduce thecost and complexity of IT environments having multiple products and integra-tions5. Catalyst connectors form the integration backbone for enterprise applica-tions. USM provides unified definitions for all business objects shared bymultiple applications.Creating a standard set of services on top of the USM model facilitates easy de-velopment of composite applications for dynamic enterprise needs. These serv-ices encompass data via USM and business processes that orchestratefunctionality around them.The following diagram shows how Catalyst and USM implement data virtualiza-tion: Figure 6: Data virtualization using USM and CatalystIn this approach all business applications can expose functional capabilities asshared USM objects, including business rules and processes around them. Thesecapabilities can be accessed via a set of standards-based services. Common ca-pabilities such as multi-tenancy, security, role-based access control, and so onprovided by the underlying platform can be shared across all functional applica-tions.Implementation 2Cordys6 is an SOA based Business Operations Platform (BOP) consisting of:a. Business Process Management (BPMS) suiteb. Integrated set of Tools & Technologies including Composite Application Framework (CAF)c. COBOC (Common Business objects definitions cache)d. SOA Grid with application connectors for providing standards based application connectivity 47
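The "shared abstraction" idea, in which each connector maps its product-specific data representation onto one common model, can be illustrated with a small sketch. The field names, record shapes, and connector functions below are invented for illustration; they are not the actual USM schema or Catalyst connector API.

```python
# Hypothetical sketch of mapping two product-specific records onto one shared shape.

COMMON_FIELDS = ("ci_id", "name", "status")

def from_monitoring_tool(record):
    # e.g. a monitoring product that calls a configuration item a "node"
    # and records availability as an "up" flag
    return {"ci_id": record["node_id"],
            "name": record["hostname"],
            "status": "running" if record["up"] else "down"}

def from_service_desk(record):
    # a service desk product that stores the same CI under different field names
    return {"ci_id": record["asset"],
            "name": record["label"],
            "status": record["state"]}

# Both connectors emit the same shared shape, so one composite application
# can consume either without knowing which product produced the data.
a = from_monitoring_tool({"node_id": "X1", "hostname": "web01", "up": True})
b = from_service_desk({"asset": "X1", "label": "web01", "state": "running"})
assert set(a) == set(b) == set(COMMON_FIELDS)
```

The design point is that mapping happens once, at the connector, rather than in every consuming application.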
The following diagram shows how Cordys BOP implements data virtualization:

Figure 7: Data virtualization using Cordys BOP

In this approach, all business applications can be connected to the SOA grid via application connectors. These business applications can then expose functional capabilities as shared object definitions in COBOC, together with business rules and processes orchestrated around them, to facilitate data integration tasks. Data and functional capabilities of applications can be accessed via shared composite services by business solutions developed using CAF. The underlying SOA grid in the platform provides common capabilities such as multi-tenancy, security, and role-based access control to the composite applications.

Some of the benefits of these reference implementations include:
• Reduces costs due to a single environment for physical and virtual data integration
• Avoids unnecessary data movement
• Ensures data quality by promoting reuse
• Reduces risk and provides flexibility to minimize the impact of changes through the use of the abstraction layer
• Provides the ability to change the architecture below the virtual data layer without modifying applications running on top of it, which is a long-term business benefit

4.0 Calculating ROI for Data Virtualization

Cost savings from adopting data virtualization7 result from:
• Increased flexibility and reduced data integration costs
• Increased analyst productivity
• Reduced maintenance costs

It is possible to estimate the savings using the following empirical data.

Savings Due to Increased Flexibility:
Data virtualization allows organizations to respond quickly to changes without
incurring major IT investment.

For example, consider the cost of reporting for a business unit that modifies four reports and one application a week:

Reporting Cost = (Report modification cost to add a new data source * Annual report changes) + (Application modification cost to add a new data source * Annual application changes)

Conventional approach:
Reporting Cost = ($1000 * 200) + ($5000 * 50) = $450,000

Data virtualization approach:
Reporting Cost = ($100 * 200) + ($0 * 50) = $20,000

The cost of modifying a report to add a new data source is significantly reduced due to the virtual data layer. The cost of application modifications to add a new data source is eliminated, as it already forms part of the report modification.

On average, enterprises spend 40% of their IT budget on data integrations (such as ETL, DW, or EAI (Enterprise Application Integration)). Implementing data virtualization significantly reduces integration costs to approximately 10% of the total IT budget, resulting in 30% savings7.

Savings Due to Increased Analyst Productivity:
In any enterprise, typically about 60% of an analyst's time is spent searching for relevant data from multiple sources (extract, transform, cleanse, validate, and so on), leaving only 40% of the time for actual analysis and resulting in reduced productivity7.

Using data virtualization eliminates the need to interact with multiple data sources for data retrieval (extract, transform, cleanse, validate, and so on) and provides a single source of truth via the virtual data layer, leading to a substantial decrease (from 60% to 10%) in time spent on data collection7.
The remaining 90% of the time can be used for analysis purposes.

Savings Due to Reduced Maintenance Costs:
In the conventional approach to data integration, dissimilar data sources are connected point to point using multiple custom tools and technologies with limited adherence to standards, resulting in huge development and maintenance costs due to the inherent complexity involved.

Data virtualization eliminates point-to-point integrations and replaces multiple tools and technologies (ETL, DW, and so on) with a single data virtualization solution, reducing the maintenance costs of the individual integrations.

Data virtualization also eliminates the cost of developing point-to-point data integrations with the help of out-of-the-box connectors, yielding direct cost savings. Annual maintenance costs are reduced significantly because there is only one virtual data layer to be maintained.
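The reporting-cost comparison above can be checked with a few lines of Python; the dollar figures and annual change counts are the ones used in the text (four report changes and one application change a week, roughly 200 and 50 a year):

```python
# Reporting Cost = (report modification cost * annual report changes)
#                + (application modification cost * annual application changes)
def reporting_cost(report_cost, report_changes, app_cost, app_changes):
    return report_cost * report_changes + app_cost * app_changes

conventional = reporting_cost(1000, 200, 5000, 50)  # conventional approach
virtualized  = reporting_cost(100, 200, 0, 50)      # data virtualization approach

print(conventional)                # 450000
print(virtualized)                 # 20000
print(conventional - virtualized)  # 430000 annual savings

# Analyst productivity: time lost to data collection falls from 60% to 10%,
# so the share of time available for analysis rises from 40% to 90%.
analysis_share_before = 1 - 0.60
analysis_share_after = 1 - 0.10
```

The 30% budget savings and the 60%-to-10% productivity shift cited above are the article's empirical estimates, not outputs of this arithmetic.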
5.0 Conclusion

With the growth in adoption of virtualization and cloud in enterprise environments, data virtualization is crucial to address challenges such as the complexity of data federation and Enterprise Information Integration (EII), and to provide the critical information needed for business decisions. This layer becomes the "single source of truth" for data in the enterprise.

By having an SOA layer together with a virtual data layer, we can create a set of services that leads to an Information as a Service model for the enterprise. This model provides a standards-based integration platform for business applications and helps in rapidly developing composite applications to address changing business needs. Data virtualization can significantly reduce the time and cost of accessing multiple data sources.

Citations

Cordys. (2010). Cordys Platform.
Inmon, W. (1995). Tech Topic: What is a Data Warehouse? Prism Solutions. Volume 1.
Lawson, L. (2008). The Importance of Integration During Mergers and Acquisitions.
Waschke, M., Scheil, H., Schiavello, D., LeClair, D., Demacopoulos, K. (2007). Unified Service Model. Retrieved from USM Working Group.
B. D. (2008). An architecture for a business and information systems. IBM Systems Journal.
Tran, P. (2008). Data Virtualization. Retrieved from Composite Software.
USM Working Group. (n.d.).
Brunelli, M. Data virtualization: The answer to the integration problem?
Ferguson, M. Data Federation - Information Services Patterns - The On-Demand Information Services Pattern.
Eric. Periscope: Access to Enterprise Data. Retrieved March 24, 2005, from http://www.tusc.com
Gonchar, I. G., IBM Software Group WW Sales. (2008). Rethinking Data Virtualization IOD/MDM Strategy & Approach: Bringing it all together!

Footnotes
2. Data virtualization definition from Wiki.
3. CA Catalyst and USM are products from CA Technologies.
4. Unified Service Model by M. Waschke, H. Scheil, D. Schiavello, D. LeClair, K. Demacopoulos (2007).
5. USM Working Group.
6. Cordys BOP.
7. Determining Return on Investment for Enterprise Information Virtualization - IPEDO.
Lessons Learned from the Mainframe Virtualization Experience
by John Kane, Technical Fellow Emeritus

A few years ago, VMware was a novelty company whose primary product met the needs of a technical audience looking for a cheap, convenient, and "green" way to access a virtually (pun intended) unlimited number of machines. The VMware offerings have successfully matured and been deployed in countless ways. And offerings from other sources have appeared that compete with, or are complementary to, the computer emulation capabilities for which VMware is well known. You don't have to search long to find articles and white papers that assert that virtualization has come of age and represents a fundamental paradigm shift for IT as substantial as the adoption of the World Wide Web.

It's hard to argue with that point. We now have virtual mass storage, virtual peripherals, and virtual machines. We can create entire virtual configurations of near limitless capacity and place them into a suspended state when no longer needed, and then miraculously unsuspend those configurations and bring them back on line in moments. These capabilities are exciting, but they are not new: many of these concepts were implemented over 30 years ago on the IBM mainframe.

VM/370, as it was known by many, supported virtual machines, and virtual devices, and even "n-level" virtual host environments (imagine an ESX server running virtual ESX servers). One virtual environment could communicate with other virtual or real environments, or could be tightly secured. Virtual disks of almost any size could be carved out from a pool and reclaimed and returned to the pool in an instant (this was called TDISK). The list of clever innovations goes on and on. To be clear, these capabilities were not experimental; they were highly scalable, secure, production proven, and used by thousands of shops all over the world.

There are those among us who can't help but wonder why so many analysts and others in our industry pontificate on the incredible potential and complexity of a virtualized IT landscape, but do not take advantage of the lessons learned in the not too distant past. How to provision these systems, ensure compartmentalization, and ensure accountability in a hybrid virtual/real world are challenges that were faced and answered on the mainframe, and many of those lessons can be learned and applied (at least conceptually) to the virtual environments of today.

For example, 30 years ago our industry knew that access control security on the virtual machines was necessary so that the operations within the virtual machine were secure, but we also knew that wasn't sufficient. We also had to implement access control security on the virtual host so that one virtual machine could not access or interfere with resources of another virtual machine. And we developed concepts and techniques to provide that level of integrity to the virtual computing platform.

Consider for example what happens when one or more virtual machines are running on an ESX server. Each of those virtual machines shares a virtual network provided by the virtual host (the ESX server). What is securing that virtual network? If those virtual machines were real machines, a traditional network security appliance could provide protection. But those network appliances aren't doing ANYTHING within that ESX virtual server's network environment; that virtual intranet is wide open. In that virtual network world, what is the control program that can transparently intervene on behalf of a virtual machine to protect it? If none exists, that represents a gap that the auditors will one day realize needs to be addressed, probably within minutes of an exploit making the headlines.

Gaps such as this one are targets of opportunity to be exploited. Please understand that I'm not suggesting that we as a community should create malware exploits; rather I am suggesting that we design and create solutions that address those gaps, so that when those malware exploits are created (and we all know it's just a matter of time) we have the solution.

To put it plainly, as with so many other areas, there are lessons to be learned from the mainframe related to virtualization, and a review of those lessons can help identify product opportunities.

One suggestion on how to identify those potential opportunities is to review the following list of questions:
• What were the obstacles to mainframe virtualization adoption?
• Does that same obstacle (or an analog) exist in today's virtual worlds?
• How was the obstacle addressed before?
• Can that same idea be applied today, and is anyone applying it?

Virtualization is mainstream, and it has grown and continues to grow rapidly. And adopters of these technologies are experiencing the growing pains. If you can be in the right place, at the right time, with the right solution to address those growing pains, you can do very well. Determining what those growing pains might be could be as simple as a review of the lessons learned from the mainframe.

About the author:
John Kane's career with CA Technologies spanned over twenty-seven years and involved many roles, from mainframe systems programmer through to development and product line management.

During his tenure, John worked with many Research and Development groups including integration platform services, security systems development, storage systems development, Unicenter, and with various mainframe-related areas, such as automation tools, storage management and security. He was a principal contributor to the Unicenter product line from the inception of the idea to a line of mature, market-leading products.

John retired from CA Technologies in June of 2010 as a Technical Fellow, and was named Technical Fellow Emeritus by the CEO of CA Technologies, William McCracken, in recognition of his many contributions.

Prior to his retirement, John served as Senior Vice President and Distinguished Engineer, directing R&D for the team responsible for the design, development and delivery of the next generation of Cloud Connected Enterprise products for CA Technologies.
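The host-level enforcement lesson above can be made concrete with a toy sketch: a virtual host that mediates traffic on its internal virtual network instead of trusting each guest to protect itself. Everything here (the class, the method names, the VM names) is hypothetical and illustrative only, not any vendor's API.

```python
# Illustrative sketch: enforcement living in the virtual host, not in each guest,
# mirroring how VM/370 enforced isolation between virtual machines.

class VirtualHost:
    def __init__(self):
        self.policy = set()  # allowed (source_vm, dest_vm) pairs

    def allow(self, src, dst):
        self.policy.add((src, dst))

    def deliver(self, src, dst, packet):
        # The host sees every packet on the internal virtual network,
        # so it can transparently intervene on behalf of a guest.
        if (src, dst) not in self.policy:
            raise PermissionError(f"{src} may not reach {dst}")
        return (dst, packet)


host = VirtualHost()
host.allow("web-vm", "db-vm")
host.deliver("web-vm", "db-vm", b"query")      # permitted by host policy
# host.deliver("rogue-vm", "db-vm", b"probe")  # would raise PermissionError
```

A guest-resident firewall cannot provide this guarantee, because a compromised guest can disable it; only the layer beneath all guests can.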
Glossary of Virtualization Terms

Application Virtualization: Running software on a central server rather than distributed to each of the users' computers. Except for a very thin generic client and communication protocols (which often can be a standard web browser), no changes are made to the local computer's file system or registry. Sometimes called Application Service Virtualization.

Client Virtualization: See Desktop Virtualization.

Cloning: Creating copies of an existing virtual machine or system to speed the distribution of identical computing environments or for backup.

Data as a Service (DaaS): A service offering that provides customers with uniform access to a set of data.

Data Federation: Software that provides an organization with the ability to aggregate data from disparate sources in a virtual database so it can be used for business intelligence (BI) or other analysis.

Data Virtualization: The abstraction of data contained within a variety of heterogeneous data stores so that they can be accessed uniformly without regard to their location, physical storage, or structure.

Desktop Virtualization: Separating the user desktop environment from the physical machine. There are two variations:
• Hosted (Remote) Virtual Desktop: The entire user desktop environment is stored on a remote, central server. The user accesses the desktop remotely via the Virtual Desktop Infrastructure (VDI). This model allows desktop PCs to be replaced with thin clients (or dumb terminals) and allows central control of all user applications and data.
• Local Virtual Desktop: The client operating system/desktop is installed as a guest on a hypervisor that runs on the client PC.
The user interacts with the local virtual desktop, while management functions can be performed remotely on the virtual desktop via the hypervisor software.

Emulated Device: A virtualized device that is designed to behave like a physical device and, therefore, can be used by the native device drivers that typically are shipped with the guest operating system.

Enterprise Application Integration (EAI): The use of software and computer systems architectural principles to integrate a set of enterprise computer applications. EAI is an integration framework composed of a collection of technologies and services which form a middleware to enable integration of systems and applications across the enterprise.

Enterprise Information Integration (EII): A process of information integration, using data abstraction to provide a single interface (known as uniform data access) for viewing all the data within an organization, and a single set of structures and naming conventions (known as uniform information representation) to represent this data. The goal of EII is to make a large set of heterogeneous data sources appear to a user or system as a single, homogeneous data source.
Extract, Transform, and Load (ETL): A process in database usage, and especially in data warehousing, that involves:
• Extracting data from outside sources
• Transforming it to fit operational needs (which can include quality levels)
• Loading it into the end target (database or data warehouse)

Grid Computing: A term referring to the combination of computer resources from multiple administrative domains to reach a common goal. What distinguishes grid computing from conventional high-performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. More recently, with the advent of virtualization, grids can be composed of homogeneous, virtualized resources that can be allocated on demand to provide elastic capacity and uniform operating environments as the basis for flexible application virtualization.

Guest Software: The software, stored on the virtual disks, that runs when a virtual machine is powered on. The guest is typically an operating system and some user-level applications and services.

Hardware Virtualization: Hiding the physical characteristics of a computing platform from its users by offering them an abstract platform that emulates the underlying hardware and/or software or some other environment.

Hardware-Assisted Virtualization: The use of hardware features and instructions that enable the hypervisor to handle certain privileged instructions using a trap-and-emulate mode in hardware, as opposed to software. This reduces the virtualization overhead and improves the overall performance of the hypervisor and the guest VMs.

Host: In virtualization discussions, a host is the machine that runs the hypervisor, which creates the virtualized hardware for the guests (VMs).
The host is usually a physical machine, but it can be a VM as well.

Hot Migration: Generic name for technologies (such as vMotion, Live Migration, and XenMotion) that allow virtualized environments to move between host systems while they are still running. This functionality greatly enhances the advantages of virtualization by minimizing (or even eliminating) application downtime when environments are moved to manage load balancing or during hardware maintenance.

Hypervisor: A layer of software running directly on computer hardware, replacing the operating system and thereby allowing the computer to run multiple guest operating systems concurrently. The hypervisor is the basic virtualization component that provides an abstraction layer between the hardware and the "guest" operating systems. A hypervisor has its own kernel and is installed directly on the hardware. It can be considered a minimalist operating system that controls the communication between the guest OS and the hardware. Hypervisors are sometimes also called "Virtual Machine Monitors" (VMM). This type is also called a Type 1, native, or bare-metal hypervisor. A virtualized environment without a true hypervisor (sometimes referred to as a Type 2 hypervisor) needs a primary OS between the hardware and the virtualization engine, which can add significant overhead.
I/O Virtualization: Refers to technology that replaces physical NICs (Network Interface Controllers) and HBAs (Host Bus Adapters) with a device that provides virtual resources instead. The resources (NICs, HBAs) appear to the operating systems and applications exactly as their physical counterparts, so they require no application or OS modification. To the network and the SAN (Storage Area Network) resources, they also appear as traditional server devices, so they can be discovered and managed. Unlike traditional physical I/O devices, they can be created and deployed quickly with no interruption.

Master Data Management (MDM): A set of processes and tools that consistently defines and manages the non-transactional data entities of an organization (which may include reference data). MDM has the objective of providing processes for collecting, aggregating, matching, consolidating, quality-assuring, persisting, and distributing such data throughout an organization to ensure consistency and control in the ongoing maintenance and application use of this information.

Memory Virtualization: Decoupling the physical memory (RAM) from individual servers in a cluster or group and aggregating it into a virtual memory pool that is available to all group members. Note that this is different from virtual memory.

Network Virtualization: The process of combining hardware and software network resources and network functionality into a single, software-based administrative entity: a virtual network. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system.

Open Virtualization Format (OVF): A specification defining a platform-independent, efficient, extensible, and open packaging and distribution format for virtual machines. OVF was originally written by VMware.
VMware donated the OVF specification to the Distributed Management Task Force (DMTF), where it was examined and extended by Dell, HP, IBM, Microsoft, VMware, and XenSource. OVF is not tied to any specific hypervisor or processor architecture.

Operating System (OS) Virtualization: Allowing isolated partitions, or virtual environments (VEs), to be located on the same physical server and operating system. Multiple VEs share a common operating system with which they communicate through an "OS virtualization layer". This layer is responsible for ensuring security and complete isolation of dedicated resources and data owned by a specific VE. VEs are sometimes called "virtual private servers (VPS)", "jails", "guests", "zones", "vservers", or "containers", to name a few.

P2V (Physical to Virtual): A short name commonly used for tools that convert physical servers into corresponding virtual servers.

Paravirtualization: A technique where the "guest" operating systems use software interfaces through the hypervisor that aren't identical to the underlying hardware (though typically similar). This can make certain calls more efficient, and it can certainly simplify the role of the virtualization engine; however, it requires the operating system to be explicitly modified to use the specific virtualization engine interfaces.

Partial Virtualization: Simulating multiple instances of only some aspects of the underlying physical hardware, typically address spaces running multiple applications.

Pass-Through Disk Access: A physical disk that is directly connected to the virtual machine. The data and instructions to the disk are sent directly to the physical disk without any intervening processing by the hypervisor.
SAN (Storage Area Network): An architecture that allows remote storage devices to be attached to servers in such a way that the operating system perceives them to be locally attached devices. This significantly enhances virtual environments, since the storage of the virtual images can be easily separated from the rest of the hardware, thereby simplifying any type of live migration.

Server Virtualization: The practice of partitioning a single server so that it appears as multiple servers. The physical server typically runs a hypervisor which is tasked with creating, destroying, and managing the resources of "guest" operating systems, or virtual machines.

Snapshot: The state of a VM at a particular point in time, usually saved to disk as one or more files. The snapshot contains an image of the VM disks, memory, and devices at the time the snapshot was taken. With the snapshot, the VM can be returned to that point in time whenever needed.

Storage Virtualization: Refers to various technologies that present logical pools of data as a single physical disk even though the data may be distributed across many physical servers. The physical disk space is only guaranteed to be present when needed. (See Thin Provisioning.) The virtual storage device can be partitioned and shared between multiple systems. Storage virtualization is commonly used in Storage Area Networks (SANs).

Thin Provisioning (of storage resources): A technique that allows disk space to be allocated to servers or users on a "just enough" as well as a "just in time" basis.

Utility Computing: Computing or storage sold as a metered service resembling a public utility, unlike leasing, where the cost of the service depends on the equipment leased, not the extent to which the equipment is used. Utility computing is frequently associated with cloud services and is almost always implemented using virtualization.

Virtual CPU: An allocation of computing power in the form of CPU to a virtual machine.
The hypervisor typically schedules private access to one CPU core for each allocated virtual CPU (vCPU) for a short time slice when a virtual machine requests this resource. Virtual machines with multiple vCPUs allocated often need to have them co-scheduled to enable access to all of their CPUs at the same moment in time.

Virtual Desktop Infrastructure (VDI): The computing model enabling desktop virtualization, encompassing the hardware and software systems required to support the virtualized environment. VDI utilizes virtualization techniques to provide end users with their desktop environments. VDI is similar to server virtualization in that multiple virtual desktops can be supported on a single physical server.

Virtual Appliance: A virtual machine, including an OS and applications, that is ready to run.

Virtual Hardware: The hardware (including the CPU, controllers, network devices, and disks) that is seen by the guest software.
Virtual Machine: A full encapsulation of the virtual hardware, virtual disks, and the metadata associated with it. Virtual machines allow multiplexing of the underlying physical machine. A virtual machine runs on top of a host operating system or hypervisor through a set of interfaces. These interfaces can either be specific to the virtualization (see Paravirtualization) or be an abstraction layer. An abstraction layer provides flexibility that permits one physical machine to run operating systems that may not be targeted to the exact physical architecture.

Virtual Machine Collection: A service comprised of a set of virtual machines. The service can be a simple set of one or more virtual machines, or it can be a complex service built out of a combination of virtual machines and other virtual machine collections. Because virtual machine collections can be composed, complex nested components are possible.

Virtual Machine Monitor (VMM): See Hypervisor.

Virtual LAN: A logical LAN (also called a vLAN) that is based on one or more physical LANs. A vLAN can be created by partitioning a physical LAN into multiple logical LANs (subnets), or several physical LANs can be combined to function as a single logical LAN.

Xen: An open source hypervisor that supports a wide range of guest operating systems, including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems.
NOTICES

Copyright © 2010 CA. All rights reserved. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies.

The information in this publication could include typographical errors or technical inaccuracies, and CA, Inc. ("CA") and the authors assume no responsibility for its accuracy or completeness. The statements and opinions expressed in this publication are those of the authors and are not necessarily those of CA.

Certain information in this publication may outline CA's general product direction. However, CA may make modifications to any CA product, software program, service, method or procedure described in this publication at any time without notice, and the development, release and timing of any features or functionality described in this publication remain at CA's sole discretion. CA will support only the referenced products in accordance with (i) the documentation and specifications provided with the referenced product, and (ii) CA's then-current maintenance and support policy for the referenced product. Notwithstanding anything in this publication to the contrary, this publication shall not: (i) constitute product documentation or specifications under any existing or future written license agreement or services agreement relating to any CA software product, or be subject to any warranty set forth in any such written agreement; (ii) serve to affect the rights and/or obligations of CA or its licensees under any existing or future written license agreement or services agreement relating to any CA software product; or (iii) serve to amend any product documentation or specifications for any CA software product.

Any reference in this publication to third-party products and websites is provided for convenience only and shall not serve as the authors' or CA's endorsement of such products or websites.
Your use of such products, websites, any information regarding such products or any materials provided with such products or on such websites shall be at your own risk.

To the extent permitted by applicable law, the content of this publication is provided "AS IS" without warranty of any kind, including, without limitation, any implied warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event will the authors or CA be liable for any loss or damage, direct or indirect, arising from or related to the use of this publication, including, without limitation, lost profits, lost investment, business interruption, goodwill or lost data, even if expressly advised in advance of the possibility of such damages. Neither the content of this publication nor any software product or service referenced herein serves as a substitute for your compliance with any laws (including but not limited to any act, statute, regulation, rule, directive, standard, policy, administrative order, executive order, and so on (collectively, "Laws")) referenced herein or otherwise or any contract obligations with any third parties. You should consult with competent legal counsel regarding any such Laws or contract obligations.
CA Technology Exchange © 2010 CA Technologies. Printed in the U.S.A.