This white paper discusses building the next-generation data center through virtualization maturity. It outlines a four-stage model of virtualization maturity: server consolidation, infrastructure optimization, automation and orchestration, and dynamic data center. For each stage, it discusses the associated challenges and provides a sample implementation plan. The target audience is IT directors and infrastructure leaders looking to advance their organizations along the virtualization maturity lifecycle, with guidance on overcoming roadblocks at each stage through the right combination of people, processes, and technologies.
WHITE PAPER
Building the Next-Generation Data Center | February 2011

Building the Next-Generation Data Center – A Detailed Guide

Birendra Gosai
CA Technologies, Virtualization Management
Table of Contents

Executive summary
Virtualization maturity lifecycle
Purpose, target audience, and assumptions
Server consolidation
    Migrate with confidence
Infrastructure optimization
    Gain visibility and control
Automation and orchestration
    Control VM sprawl
Dynamic data center
    Agility made possible
Implementation methodology
How CA Technologies can help
References
About the Author
Executive summary
Challenge
Server virtualization promotes flexible utilization of IT resources, reduced capital and operating
costs, high energy efficiency, highly available applications, and improved business continuity.
However, virtualization brings along with it a unique set of challenges around management of the
virtual infrastructure. These challenges are not just limited to technology, but include the lack of
virtualization expertise—they result in roadblocks (or ‘tipping points’) during the virtualization
journey, and prevent customers from advancing their virtualization implementations.
Opportunity
Traditional management software vendors with long-standing expertise in mainframe and
distributed environments are now delivering products and specialized services to manage the
virtual environment. The virtualization management offerings by some of these vendors are
mature, offer robust features, and have well defined roadmaps. These vendors bring decades of
enterprise IT expertise into virtualization management—expertise which is essential to overcome
‘tipping points’ in the virtualization journey and accelerate adoption of this emerging technology
within the enterprise.
Benefits
IT organizations can now enjoy a consistent framework to manage and secure physical, virtual and
cloud environments—thus gaining the same level of visibility, control, and security in the virtual
environments that they are accustomed to in the physical ones. With these robust virtualization
management capabilities, IT organizations can advance along the virtualization maturity lifecycle
with confidence, better realize the capital expenditure (CapEx) and operating expense (OpEx)
savings promised by this emerging technology, and operate an agile data center—thus serving as a
true business partner instead of a cost center.
Virtualization maturity lifecycle
Virtualization has the power to transform the way business runs IT, and it is the most important
transition happening in IT today. It promotes flexible utilization of IT resources, reduced capital
and operating costs, high energy efficiency, highly available applications, and better business
continuity. However, the virtualization journey can be long and difficult as virtualization brings with
it a unique set of challenges around the management and security of the virtual infrastructure.
Most organizations struggle, sooner or later, with workload migrations, visibility and control, virtual
machine (VM) sprawl, and the lack of data center agility.
Working with customers, industry analysts and other experts, CA Technologies has devised a
simple four-stage model, shown in Figure 1, to describe the progression from an entry-level
virtualization project to a mature dynamic data center and private cloud strategy. These stages
include server consolidation, infrastructure optimization, automation and orchestration, and
dynamic data center.
Figure 1. Customer virtualization maturity cycle.¹
Purpose, target audience, and assumptions
Most organizations face one (or more) clear ‘tipping points’ during the virtualization journey where
virtualization deployment stalls as IT stops to deal with new challenges. This ‘VM stall’ tends to
coincide with different stages in the virtualization maturity lifecycle—such as the transition from
tier 2/tier 3 server consolidation to the consolidation of mission-critical tier 1 applications; or from
basic provisioning automation to a dynamic data center approach. This paper provides guidance on
the combination of people, process and technology needed to overcome virtualization roadblocks
and promote success at each of the four distinct stages of the virtualization maturity lifecycle. For
each of the four phases, we will discuss:
• A definition of the phase and challenges associated with it
• A high-level project plan for a sample implementation
The target audience of this paper is the director/VP of Operations/Infrastructure at IT organizations in mid-sized companies and the departmental IT owner at large companies. This paper assumes that the IT organization has:

• Deployed x86 server virtualization hypervisor(s), along with the requisite compute, network and storage virtualization resources, and related management software
• Incorporated Intrusion Detection System (IDS)/Intrusion Prevention System (IPS)/firewall/vulnerability management software to secure the virtual and related physical systems components
• Ensured virtual network segmentation using Virtual LANs (VLANs) or other technologies to meet internal best practices or compliance requirements
The tasks and timelines for sample project plans described in this paper will vary depending upon
the size and scope of the project, available resources, number and complexity of candidate
applications, and other parameters.
Server consolidation
Server consolidation (using virtualization) is an approach that makes efficient use of available
compute resources and reduces the total number of physical servers in the data center; it is one of
the main drivers of virtualization adoption today. There are significant savings in hardware,
facilities, operations and energy costs associated with server consolidation—hence it is being widely adopted within enterprises.
Migrate with confidence
The bottom line for IT professionals undertaking initial server consolidation is that any failure at
this stage could potentially stall or end the virtualization journey. Organizations undergoing server
consolidation face key challenges with:
• Gaining insight into application configuration, dependency, and capacity requirements
• Performing quick and accurate workload migrations in a multi-hypervisor environment
• Ensuring application performance after the workload migration
• Overcoming the lack of virtualization expertise
The following section discusses the tasks and capabilities required to overcome these and other
challenges, and migrate with confidence.
Project plan
A high-level project plan for a department-level server consolidation project within a mid-sized
organization is presented here. It details some of the key tasks necessary for a successful server
consolidation project. The timelines and tasks mentioned in Table 1 present a broad outline for a
tier 2/tier 3 departmental server consolidation project that targets converting approximately 200
production and non-production physical Windows and Linux servers onto about 40 virtual server
hosts—the 2-3 person implementation team suggested for the project is expected to be proficient
in project management, virtualization design and deployment, and systems management.
A successful server consolidation project necessitates a structured approach which should consist
of the following high-level tasks. For each of these tasks we will discuss the key objectives and
possible challenges, articulate a successful outcome, and more.
Table 1. Server consolidation project plan (weeks 1–8)

Tasks
a. Server consolidation workshop
b. Application and system discovery/profiling
c. Capacity analysis and planning
d. Workload migration
e. VM configuration testing
f. Production testing and final deliverables

Resources
a. Project Manager and Architect
b. Virtualization Analyst(s)
c. Application and Systems Consultant(s)
d. All of the above
Server consolidation workshop
A server consolidation workshop should identify issues within the environment that may limit the
organization’s ability to successfully achieve expected results, and provide a project plan that
details tasks, resources, and costs associated with the project. A well-defined checklist should be
used to identify potential challenges with resources like servers, network, and storage. For example,
the checklist should ensure that:
The destination servers are installed and a physical rack placement diagram is available that
depicts the targeted location of equipment in the rack space provided
Highly available and high-bandwidth network paths are available between the virtual servers
and the core network to support both application and management traffic
An optimum data store and connectivity to it is available for the virtual machine disks
The workshop should draw a complete picture of the potential challenges with the proposed server
consolidation, and include concrete strategies and recommendations on moving forward with the
project. It should result in the creation of a comprehensive project plan that clearly divides tasks
among the implementation teams/individuals, defines timelines, contingency plans, etc., and is
approved by all the key management and IT stakeholders.
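Checklists like this lend themselves to simple automation. The sketch below illustrates how an implementation team might encode the workshop checklist as an automated readiness gate; it is an illustrative sketch rather than a CA Technologies tool, the item names mirror the checklist above, and the check functions are placeholders a team would wire to its own inventory and network probes.

# Hypothetical pre-consolidation readiness gate; check logic is a placeholder.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckItem:
    name: str
    check: Callable[[], bool]   # returns True when the item is satisfied
    remediation: str            # what to do if it is not

def always_ok() -> bool:
    # Placeholder: a real check would query an inventory/CMDB or probe the network.
    return True

CHECKLIST = [
    CheckItem("Destination servers installed per rack placement diagram",
              always_ok, "Complete racking and update the placement diagram"),
    CheckItem("Highly available, high-bandwidth paths to the core network",
              always_ok, "Provision redundant uplinks for app and mgmt traffic"),
    CheckItem("Optimum data store reachable for virtual machine disks",
              always_ok, "Zone/mask storage and validate multipath connectivity"),
]

def ready_to_proceed(items: list[CheckItem]) -> bool:
    failures = [i for i in items if not i.check()]
    for item in failures:
        print(f"BLOCKED: {item.name} -> {item.remediation}")
    return not failures

if __name__ == "__main__":
    print("Proceed with consolidation:", ready_to_proceed(CHECKLIST))

A gate like this makes the workshop output actionable: migration scheduling can be blocked until every item reports ready, which keeps the project plan honest.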
Application and system discovery/profiling
Migrating physical workloads necessitates in-depth knowledge of the applications supported by
them. Discovery and dependency mapping is one of the most important tasks during server
consolidation as not knowing details of the environment can jeopardize the entire project. Although
many vendors provide free tools for light discovery, these don't dig deep enough to collect the information necessary for successful server consolidation. Without detailed discovery and application/system dependency mapping, problems during migration are almost inevitable.
During the application discovery and profiling process, application and systems consultants should
use a configuration management tool to store configuration and dependency details of the
applications supported by the target workloads. These tools discover and snapshot
application/system components to provide a comprehensive, cross-platform inventory of
applications at a granular level, including directories, files, registries, database tables, and
configuration parameters—thus allowing for greater success during workload migration.
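As a concrete illustration of what such a snapshot might contain, the following sketch records a per-server fingerprint of a few configuration files and listening ports. The file paths and the use of the ss utility are assumptions for a Linux host; a real discovery tool would also capture registries, database tables, and cross-server dependencies, as described above.

# Illustrative per-server configuration snapshot (assumes a Linux host).
import datetime
import hashlib
import json
import socket
import subprocess
from pathlib import Path

CONFIG_FILES = ["/etc/fstab", "/etc/hosts"]  # example files to fingerprint

def file_digest(path: str) -> str | None:
    p = Path(path)
    return hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None

def tcp_listeners() -> list[str]:
    # 'ss -tln' lists TCP listening sockets on most Linux distributions.
    out = subprocess.run(["ss", "-tln"], capture_output=True, text=True)
    return out.stdout.splitlines()[1:]

def snapshot() -> dict:
    return {
        "host": socket.gethostname(),
        "taken_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "config_digests": {f: file_digest(f) for f in CONFIG_FILES},
        "tcp_listeners": tcp_listeners(),
    }

if __name__ == "__main__":
    Path("snapshot.json").write_text(json.dumps(snapshot(), indent=2))

Storing a snapshot like this before migration gives the team a validated baseline to compare against afterward, which is exactly how the VM configuration testing task uses it.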
Capacity analysis and planning
Once organizations have profiled the candidate applications/systems for consolidation, they will
need to determine what host hardware configuration and VM configuration will support optimal
performance and scalability over time. Comprehensive capacity analysis and planning is essential
to determine the optimal resource requirements in the target virtual infrastructure, and allows IT
organizations to plan additional capacity purchases (server/storage hardware, bandwidth, etc.) prior
to starting the migration process.
Here too, there are free tools available, but they are generally very ‘services heavy’. In addition,
they do not include important factors necessary for comprehensive capacity planning such as
power consumption, service level requirements, organizational resource pools, security restrictions,
and other non-technical factors. These tools also lack critical features such as what-if analysis.
Capacity planning becomes even more important with critical applications. For instance,
combining applications that have similar peak transaction times could have serious consequences,
resulting in unnecessary downtime, missed SLAs and consequent credibility issues with internal
customers. To avoid such issues, historical performance data from the applications should be
utilized during the capacity planning process.
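A back-of-the-envelope version of this sizing exercise is sketched below. The VM profiles, host specification, and 80% headroom target are invented numbers for illustration; a production plan would feed in historical peak data, size against coincident rather than summed peaks, and run what-if scenarios, as noted above.

# Illustrative capacity sizing from peak demand; all numbers are invented.
import math
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    peak_vcpus: float    # vCPUs consumed at observed peak
    peak_mem_gb: float   # memory consumed at observed peak

HOST_CORES = 16          # assumed host configuration
HOST_MEM_GB = 192
HEADROOM = 0.80          # target no more than 80% utilization per host

def hosts_needed(vms: list[VMProfile]) -> int:
    cpu_hosts = sum(v.peak_vcpus for v in vms) / (HOST_CORES * HEADROOM)
    mem_hosts = sum(v.peak_mem_gb for v in vms) / (HOST_MEM_GB * HEADROOM)
    return math.ceil(max(cpu_hosts, mem_hosts))  # size to the tighter resource

vms = [VMProfile("web", 2, 8), VMProfile("db", 8, 64), VMProfile("app", 4, 16)]
print(hosts_needed(vms))  # -> 2 with these illustrative numbers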
Workload migration
The workload migration process is easily the most complex component of an organization's virtualization endeavor. The migration process refers to the "copying" of an entire server/application stack. IT organizations face many challenges during workload migration—most end up migrating only 80-85% of target workloads successfully, and even then with considerable problems. These challenges include:

• Migration in a multi-hypervisor environment, and possible V2P and V2V scenarios
• The flexibility of working with either full snapshots or performing granular data migration
• In-depth migration support for critical applications like AD, Exchange, SharePoint, etc.
• Application/system downtime during the migration process
There are free tools for migration available from some hypervisor vendors, but these don’t work
well and require system shutdown for several hours for the conversion. They might also limit the
amount of data supported or require running tests on storage to uncover and address bad storage
blocks in advance. Backup, High Availability (HA) and IP-based replication tools serve as a very
good option for successful workload migrations as they not only help overcome/mitigate the
abovementioned challenges, but can also be used for comprehensive BCDR (Business Continuity
and Disaster Recovery) capabilities (discussed in section 2 of this paper).
From a process standpoint, ensure that the migrations are performed per a pre-defined schedule and include acceptance testing and sign-off steps to complete the process. Ensure contingency plans are in place, and factor in a modest amount of troubleshooting time so that minor issues can be worked out in real time and the workload's migration completed within the same window, rather than rescheduling downtime later.
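This schedule-plus-sign-off discipline can be captured in a runbook skeleton like the sketch below. The step names, troubleshooting buffer, and rollback hook are assumptions for illustration, not a prescribed CA workflow.

# Hypothetical runbook skeleton for one workload's migration window.
from dataclasses import dataclass, field
from typing import Callable

DEFAULT_STEPS = [
    "verify contingency plan and rollback image",
    "quiesce source and copy workload (P2V)",
    "boot VM and run smoke tests",
    "acceptance testing by the application owner",
    "sign-off and cutover (DNS/load balancer)",
]

@dataclass
class MigrationWindow:
    workload: str
    steps: list[str] = field(default_factory=lambda: list(DEFAULT_STEPS))
    troubleshooting_budget_min: int = 45  # modest buffer for real-time fixes

def execute(window: MigrationWindow, run_step: Callable[[str], bool]) -> bool:
    for step in window.steps:
        if not run_step(step):
            print(f"{window.workload}: '{step}' failed -> invoke rollback")
            return False
    print(f"{window.workload}: migrated and signed off")
    return True

# Dry run in which every step "passes".
execute(MigrationWindow("erp-db01"), run_step=lambda s: True)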
VM configuration testing
Configuration error is the root cause of a large percentage of application performance problems—post-migration VM configuration testing prevents the performance and availability problems that result. Another key post-migration challenge is preventing configuration drift; maintaining various disparate VM/OS/application base templates can be very challenging, and IT organizations can significantly ease configuration challenges by using a few well-defined gold-standard templates. Post-migration, application/systems consultants should use change and configuration management tools to:

• Compare the migrated systems against the validated configurations stored during the discovery/profiling task
• Detect and remediate deviations from the post-migration configuration or gold-standard templates
These and related actions are essential to enable a successful migration, debug post-migration
issues if any, and prevent configuration drift.
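A minimal version of this compare-and-detect step might look like the sketch below, which diffs a post-migration snapshot against a gold-standard template. The file names echo the hypothetical snapshot sketch from the discovery/profiling section; real change and configuration management tools operate at far finer granularity than this.

# Minimal drift check between a gold-standard template and a live snapshot.
import json
from pathlib import Path

def drift(gold: dict, current: dict) -> dict:
    """Return {key: (expected, actual)} for every deviating setting."""
    keys = set(gold) | set(current)
    return {k: (gold.get(k), current.get(k))
            for k in keys if gold.get(k) != current.get(k)}

gold = json.loads(Path("gold_template.json").read_text())
post = json.loads(Path("snapshot.json").read_text())
for key, (expected, actual) in drift(gold["config_digests"],
                                     post["config_digests"]).items():
    print(f"DRIFT {key}: expected {expected}, found {actual}")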
Production testing and final deliverables
The breadth and depth of post-migration testing will vary according to the importance of the
migrated workload—less critical workloads might require only basic acceptance tests, while critical
ones might necessitate comprehensive QA tests. In addition, this task should include follow-up on any changes that the migration teams should have applied to a VM but were unable to perform due to timing or the need for additional change management approval. All such post-migration recommendations should be noted, as appropriate, within the post-test delivery documents.
This final stage of the implementation process should include delivery of documentation on the
conversion and migration workflow and procedures for all workloads. Doing so will remove
dependency on acquired tribal knowledge and allow staffing resources to be relatively
interchangeable. These artifacts and related best practices documents will also allow continuation
of the migration process for additional workloads in an autonomous fashion in the future if desired.
Infrastructure optimization
IT organizations today are experiencing pressure to not only adopt new and emerging technologies
like virtualization, but also reduce costs and do more with fewer resources (thus reducing CapEx)—
all while delivering assurance of capacity and performance to the business. However, organizations that have successfully consolidated their server environment and are progressing on their virtualization journey often find it difficult to virtualize tier 1 workloads. They also face significant challenges in utilizing hosts at higher capacity. This happens because they lack the confidence to move critical applications onto the virtual environment or to utilize servers to capacity.
A mature and optimized infrastructure is essential for IT organizations to virtualize tier 1 workloads
and achieve increased capacity utilization on the virtual hosts—thus helping reap the true CapEx
savings promised by virtualization.
Gain visibility and control
Organizations face significant challenges in trying to achieve the visibility and control necessary to
optimize their virtual infrastructure. These include:
Providing performance and Service Level Agreement (SLA) assurance to the business
Deploying and maintaining capacity on an automated basis
Securing access to the virtual environment and facilitating compliance
Providing business continuity in the event of a failure
The following section discusses the tasks and capabilities required to optimize the infrastructure
and gain visibility and control into the availability and performance of the virtual environment.
Project plan
A high-level plan for an infrastructure optimization project is presented here. The timelines and
tasks mentioned in Table 2 present a broad outline for a tier 1 infrastructure optimization project
that targets setting up an optimized infrastructure and adding approximately 10 critical production
workloads to about 40 virtual server hosts (with existing workloads), thus resulting in an 80-90% capacity utilization on those servers. The 3-4 person implementation team suggested for the project is expected to be proficient in project management, virtualization design and deployment, and
systems management.
A successful infrastructure optimization project necessitates a structured approach which should
consist of the following high-level tasks. For each of these tasks we will discuss the key objectives
and possible challenges, articulate a successful outcome, and more. Since workload migration and
production testing were discussed in the previous section, they are not repeated here.
Table 2. Infrastructure optimization project plan (8-week timeline)

Tasks:
a. Performance and fault monitoring
b. Continuous capacity management
c. Change and configuration management
d. Workload migration
e. Privileged user management and system hardening
f. Business continuity and disaster recovery
g. Production testing, and final deliverables

Resources:
a. Virtualization Analyst(s)
b. Application and Systems Consultant(s)
c. All of the above
Performance and fault monitoring
Prior to moving critical workloads onto the virtual environment, IT operations teams need to
ensure that they have clear visibility and control into the availability and performance of the virtual
environment. To foster this visibility and control, application/systems consultants should use
performance management tools to:
Discover the virtual environment and create an aggregate view of the virtual infrastructure:
This discovery should be dynamic and not static—i.e. the aggregate view should automatically
reflect changes in the virtual environment that result from actions such as vMotion. In
addition, this discovery should not only reflect the virtual environment, but also components
surrounding the virtual network.
Set up event correlation: In a production environment where hundreds of events may be generated every second by the various components, event correlation is essential to cut through the noise and narrow down the root cause of active or potential problems.
Enable real-time performance monitoring and historical trending: The performance monitoring should go beyond basic metrics like CPU/memory consumption and provide insight into traffic responsiveness across hosts. Trending capabilities are also essential for tracking and understanding historical performance.
Capabilities like the ones mentioned above provide IT administrators and business/application
owners the confidence to move critical production applications into the virtual environment.
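As a rough illustration of the event correlation idea, the sketch below groups events that share a suspected root-cause component within a short time window, so an operator sees one incident instead of hundreds of symptoms. The event format and the 60-second window are assumptions for the example, not the behavior of any specific monitoring product.

```python
# Minimal sketch of time-window event correlation: events attributed to the
# same component within WINDOW_SECONDS are grouped into one incident.
from collections import defaultdict

WINDOW_SECONDS = 60

def correlate(events):
    """events: list of (timestamp, component, message) tuples, time-sorted."""
    incidents = defaultdict(list)
    anchor = {}  # component -> timestamp anchoring its current incident
    for ts, component, message in events:
        if component not in anchor or ts - anchor[component] > WINDOW_SECONDS:
            anchor[component] = ts  # start a new incident for this component
        incidents[(component, anchor[component])].append(message)
    return incidents

events = [
    (0, "host-07", "datastore latency high"),
    (5, "host-07", "vm-123 slow response"),
    (12, "host-07", "vm-456 slow response"),
    (300, "host-07", "datastore latency high"),
]
for (component, ts), messages in correlate(events).items():
    print(f"{component} @t={ts}: {len(messages)} related events")
```

Production correlation engines use topology and dependency models rather than a fixed window, but the goal is the same: collapse symptom storms into a small number of actionable incidents.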
Continuous capacity management
Critical applications depend on multiple components in the virtual environment. Given the
dynamic nature of the virtual environment and high volume of workloads processed by virtual
servers, it is almost impossible for administrators to create and manage capacity plans on a project-by-project basis. Therefore, managing critical workloads requires automating the manual steps of
capacity management, thus enabling continuous capacity management. A continuous capacity
management environment should:
Collect and correlate data from multiple data sources, update dashboards with the current
state of utilization across virtual and physical infrastructure, and publish reports on the
efficiency of resource utilization for each application/business service
Highlight opportunities for optimization, solve resource constraints, update baselines in predictive models, and utilize those models to produce interactive illustrations of future conditions
Integrate with provisioning solutions for intelligent automation, and eco-governance solutions
to help maintain compliance with environmental mandates
The level of continuous capacity management described above, along with comprehensive analytic
and simulation modeling capabilities, will allow the IT administrator to effectively manage the
capacity of critical applications/services on an ongoing basis.
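For a sense of what updating baselines in a predictive model can mean in its simplest form, here is an illustrative sketch that fits a linear trend to weekly utilization samples and projects when a host pool would cross a threshold. Real capacity management tools use far richer models; the sample data and the 80% threshold are assumptions for the example.

```python
# Sketch of continuous capacity trending: fit a least-squares line to
# historical utilization and project time-to-threshold.
def weeks_until_threshold(samples, threshold=0.80):
    """samples: weekly utilization fractions, oldest first.
    Returns projected weeks until the threshold is crossed, or None."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # utilization flat or falling; no exhaustion projected
    return max(0.0, (threshold - samples[-1]) / slope)

history = [0.52, 0.55, 0.57, 0.61, 0.63, 0.66]  # six weeks of samples
print(f"~{weeks_until_threshold(history):.1f} weeks to 80% utilization")
```

With the sample data above, utilization grows roughly 2.8 points per week, so the pool is projected to hit 80% in about five weeks—exactly the kind of early signal that lets an administrator act before a constraint becomes an outage.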
Change and Configuration Management (CCM)
Pre/post migration configuration discovery and testing is essential to enable successful server
consolidation. However, IT organizations that support tier 1 workloads cannot afford to perform
these activities on a one-time project basis. Optimized infrastructures need continuous CCM not
only for the workloads, but also for the infrastructure itself. In a highly dynamic environment,
erroneous virtual infrastructure configuration can have drastic effects on VM performance.
Comprehensive CCM involves:
Providing ongoing configuration compliance with system hardening guidelines from the Center
for Internet Security (CIS), hypervisor vendors, etc.
Tracking virtual machines, infrastructure components, applications, and the dependencies
between them on a continuous basis
Monitoring virtual infrastructure configuration and its association with workload performance
Implementing comprehensive CCM for the virtual environment will not only help avoid configuration drift and its impact on workload performance, but also facilitate compliance with
vendor license agreements and regulatory mandates like Payment Card Industry Data Security
Standards (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley
Act (SOX), etc.
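The sketch below shows the shape of rule-based configuration compliance checking, loosely inspired by hardening guidance such as the CIS benchmarks. The rules and configuration keys are illustrative placeholders, not actual benchmark items.

```python
# Illustrative sketch of rule-based compliance checking: each rule pairs a
# configuration key with a predicate and a remediation message.
RULES = [
    ("ssh_root_login", lambda v: v is False, "root SSH login must be disabled"),
    ("ntp_configured", lambda v: v is True, "NTP must be configured"),
    ("snapshot_age_days", lambda v: v <= 7, "snapshots older than 7 days"),
]

def check_compliance(host_config: dict):
    """Yield (key, message) for every failed rule."""
    for key, predicate, message in RULES:
        value = host_config.get(key)
        # A missing setting is treated as non-compliant.
        if value is None or not predicate(value):
            yield key, message

host = {"ssh_root_login": True, "ntp_configured": True, "snapshot_age_days": 12}
for key, message in check_compliance(host):
    print(f"NON-COMPLIANT [{key}]: {message}")
```

Run on a schedule against every host and VM, this kind of check turns hardening guidelines from a one-time project into the continuous CCM the section describes.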
Privileged user management and system hardening
Privileged users enjoy much more leverage in the virtual environment as they have access to most
virtual machines running on a host—hence tight control of privileged user entitlements is essential.
This task should ensure that:
Access to critical system passwords is only available to authorized users and programs
Passwords are stored in a secure password vault, and not shared among users or hardcoded in
program scripts
Privileged user actions are audited and the audit-logs are stored in a tamper-proof location
In addition to privileged user management, which protects against internal threats, IT organizations need to ensure that their servers are secure from malicious external threats. This includes installing
antivirus/antimalware software to protect against these external threats, and making sure that the
systems conform to the comprehensive system hardening guidelines provided by the hypervisor
vendors.
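One way to satisfy the vault and tamper-proof audit requirements together is to chain each audit record to a hash of the previous one, so retroactive edits are detectable. The sketch below is a minimal illustration under that assumption; names and storage are hypothetical, and a production vault would add encryption, ACLs, and durable secure storage.

```python
# Minimal sketch of privileged-credential check-out with a hash-chained
# (tamper-evident) audit log. Not a production vault.
import hashlib, json, time

AUDIT_LOG = []

def _append_audit(entry: dict):
    entry["prev"] = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    AUDIT_LOG.append(entry)

def checkout_password(vault: dict, system: str, user: str, authorized: set):
    """Release a credential only to authorized users, auditing every attempt."""
    granted = user in authorized
    _append_audit({"ts": time.time(), "user": user,
                   "system": system, "granted": granted})
    if not granted:
        raise PermissionError(f"{user} is not authorized for {system}")
    return vault[system]

vault = {"esx-host-01": "s3cr3t"}  # placeholder secret for the sketch
checkout_password(vault, "esx-host-01", "alice", authorized={"alice"})
print(f"audit records: {len(AUDIT_LOG)}")
```

Because each record embeds the hash of its predecessor, altering or deleting any past entry breaks the chain—one common approach to the tamper-proof storage requirement above.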
Business continuity and disaster recovery (BCDR)
BCDR has long been an essential requirement for critical applications and services—this includes
backup, high availability and disaster recovery capabilities. However, server virtualization has
changed the way modern IT organizations view BCDR. Instead of the traditional methods of
installing and maintaining backup agents on each virtual machine, IT organizations should utilize
tools that integrate with snapshot and off-host backup capabilities provided by most hypervisor
vendors—thus enabling backups without disrupting operations on the VM and offloading workload
from production servers to proxy ones. Activities within this task should ensure that:
Machines are backed up according to a pre-defined schedule, and granular restores with push-button failback are possible
Critical applications and systems are highly available, and use automated V2V or V2P failover
for individual systems/clusters
Non-disruptive recovery testing capabilities are available to administrators, etc.
The one week timeline scheduled for this task assumes the existence of comprehensive BCDR
plans for the physical workloads, which then only need to be translated into the virtual
environment.
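As a simple illustration of schedule-driven backups, the sketch below decides which VMs are due for a snapshot-based, off-host backup using a per-tier RPO policy. The tiers, RPO values, and data layout are assumptions for the example.

```python
# Illustrative sketch: flag VMs whose last backup is older than their
# tier's Recovery Point Objective (RPO).
import time

RPO_HOURS = {"tier1": 1, "tier2": 8, "tier3": 24}  # assumed policy

def backups_due(vms, now=None):
    """vms: list of dicts with 'name', 'tier', 'last_backup' (epoch secs)."""
    now = now or time.time()
    for vm in vms:
        age_hours = (now - vm["last_backup"]) / 3600
        if age_hours >= RPO_HOURS[vm["tier"]]:
            yield vm["name"], round(age_hours, 1)

fleet = [
    {"name": "erp-db-01", "tier": "tier1", "last_backup": time.time() - 7200},
    {"name": "wiki-01", "tier": "tier3", "last_backup": time.time() - 3600},
]
for name, age in backups_due(fleet):
    print(f"{name}: last backup {age}h ago; schedule off-host snapshot backup")
```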
Automation and orchestration
IT organizations that have successfully consolidated and optimized their virtual infrastructures face
a unique set of virtualization management challenges. Server provisioning that used to take weeks can now be achieved in minutes, which drives increased virtualization adoption within the business. This increased adoption results in 'VM sprawl' (the problem of uncontrolled workloads), increased provisioning and configuration errors, and the lack of a detailed audit trail, all of which
significantly increase the risk of service downtime.
Organizations that try to tackle this problem by adding manpower will fail to get ahead of it. In addition, IT managers and CIOs don't want expensive IT staff performing mundane, repetitive tasks; they want them to focus their time on important strategic initiatives. Automation and orchestration capabilities are essential to tackle VM sprawl, reduce provisioning errors, improve audit capabilities, and achieve the significant savings in OpEx promised by server virtualization.
Control VM sprawl
Organizations face multiple challenges when trying to automate and orchestrate their virtual
environments and obtain the reduction in OpEx achievable by server virtualization. These include:
Faster provisioning of standardized servers/applications into heterogeneous virtual and cloud
environments
Process integration across heterogeneous platforms, applications and IT groups
Standardized configuration and compliance
The following section discusses the tasks and capabilities required to incorporate automation and
orchestration capabilities within the virtual environment, thus helping control VM sprawl and
reduce OpEx.
Project plan
A high-level plan for a sample project to automate application provisioning and build associated
orchestrations is presented here. The timelines and tasks mentioned in Table 3 present a broad
outline for a lab management project with approval orchestration, showback, and the ability to test
composite applications spanning multiple VMs. Other sample projects could be an automated self-service reservation management system, production/staging environment management, a demos-on-demand capability, educational labs, etc. The IT capabilities required for these projects are broadly similar, but the required design and workflows will differ. The 3-4 person implementation team suggested for the project is expected to be proficient in project management, virtualization design and deployment, and systems management.
A successful automation and orchestration project necessitates a structured approach which
should consist of the following high-level tasks. For each of these tasks we will discuss the key
objectives and possible challenges, articulate a successful outcome, and more.
Table 3. Automation and orchestration project plan (6-week timeline)

Tasks:
a. System design
b. Resource pool deployment
c. VM template and lifecycle management
d. Workflow orchestration
e. Showback configuration
f. Monitoring, production testing, and final deliverables

Resources:
a. Project Manager and Architect
b. Virtualization Analyst(s)
c. Application and Systems Consultant(s)
d. All of the above
System design
A lab management system enables IT organizations to provide a web-based self-service
reservation system so that users can reserve and deploy customized server and virtual machine
instances without administrator intervention. The system design begins with application/systems
consultants interviewing IT administrators and users to better understand the business
requirements and workflows. The requirements should be captured, analyzed, and refined over
multiple interviews and/or white-boarding sessions, and result in the development of
comprehensive workflows. A well-defined checklist should be used to identify important details
such as:
Usage characteristics, roles and access entitlements of the various users
Operating system/other software needs and system maintenance requirements
Approval workflows, reporting needs, HA/DR requirements, etc.
The system design phase should result in the creation of a comprehensive project plan that clearly
details the deliverables, defines timelines, contingency plans, etc., and is approved by all the key
business and IT stakeholders.
Resource pool deployment
The lab management system will be required to serve several departments within the
organization, each of which might have different availability requirements. Resource pools are a convenient way for IT administrators to manage these departmental requirements. Setting
up resource pools involves:
Defining resource pools to better manage and administer the system for different
departments/organizations. Resource pool definitions should consider service level/Quality of
Service (QoS) requirements of the applications supported by the resource pools, HA and BCDR
requirements, etc.
Attaching appropriate compute, network and storage resources to the resource pool
Integrating with performance monitoring products to consume data on usage thresholds and
perform dynamic balancing of the resource pools
Careful planning during resource pool design and deployment will significantly reduce manual
administration requirements and helpdesk calls during regular operations.
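To illustrate the kind of definitions involved, here is a minimal sketch of per-department resource pools with relative CPU shares and memory limits. The proportional-share model is a common approach in hypervisor resource pools, but the fields and numbers here are illustrative assumptions.

```python
# Sketch of per-department resource-pool definitions with QoS shares.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    department: str
    cpu_shares: int      # relative weight for CPU under contention
    mem_limit_gb: int    # hard ceiling on memory consumption
    ha_enabled: bool     # restart VMs on surviving hosts after a failure

pools = [
    ResourcePool("finance", cpu_shares=4000, mem_limit_gb=256, ha_enabled=True),
    ResourcePool("engineering", cpu_shares=2000, mem_limit_gb=512, ha_enabled=False),
]

# Shares only matter under contention: they carve up capacity proportionally.
total = sum(p.cpu_shares for p in pools)
for p in pools:
    print(f"{p.department}: {100 * p.cpu_shares / total:.0f}% of CPU under contention")
```

Encoding QoS and HA decisions in the pool definition, rather than per VM, is what keeps departmental administration manageable as the lab grows.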
VM template and lifecycle management
Template-based provisioning capabilities are present in most automation products. This task involves
creating VM templates, defining a default lifecycle for provisioning and de-provisioning VMs, etc.
Careful consideration should be given to the following during this phase:
Software license requirements and integration with Asset Management products if necessary
Integration with identity management products to import user and role information and provide a personalized experience to users
Setting up template availability rules for different user roles, and orchestrating workflow if
necessary
A properly configured VM lifecycle significantly improves the user experience, reduces helpdesk calls, and helps control VM sprawl.
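One simple lifecycle mechanism that helps arrest sprawl is a default lease on every provisioned VM: expired VMs are flagged for de-provisioning unless renewed. The 30-day lease and the data model in the sketch below are assumptions for illustration.

```python
# Minimal sketch of a lease-based VM lifecycle for sprawl control.
from datetime import datetime, timedelta

DEFAULT_LEASE = timedelta(days=30)  # assumed default lease

class VMLease:
    def __init__(self, name, owner, provisioned=None):
        self.name, self.owner = name, owner
        self.expires = (provisioned or datetime.utcnow()) + DEFAULT_LEASE

    def renew(self):
        """Owner action: extend the lease for another full period."""
        self.expires = datetime.utcnow() + DEFAULT_LEASE

def expired(vms, now=None):
    now = now or datetime.utcnow()
    return [vm for vm in vms if vm.expires <= now]

lab = [VMLease("demo-sql-01", "bob",
               provisioned=datetime.utcnow() - timedelta(days=45))]
for vm in expired(lab):
    print(f"{vm.name} (owner {vm.owner}) lease expired; queue for de-provisioning")
```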
Workflow orchestration
Workflow orchestration goes hand in hand with the automation system—allowing organizations to
design, deploy and administer the automation of manual, resource-intensive and often
inconsistent IT operational procedures. For the lab management system discussed here, workflow
orchestration will incorporate:
Design of essential workflows such as reservation/access approvals, system availability,
change notifications, etc.
Integration with relevant enterprise systems—such as email, identity management/LDAP,
asset management, etc.—to enable the workflow
Execution of the workflow while maintaining insight into the process, audit records, and other
details for administration and compliance
A good orchestration engine will speed the delivery of IT services while helping to remove manual
errors—by defining, automating and orchestrating processes across organizational silos that use
disparate systems, it helps improve productivity while also enforcing standards.
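A reservation-approval workflow can be modeled as a small state machine, as in the sketch below. The states, transitions, and notification hook are illustrative assumptions about how such an orchestration might be structured, not the design of any particular engine.

```python
# Sketch of a reservation workflow as an explicit state machine; every
# transition is validated and audited, which is the compliance point above.
TRANSITIONS = {
    ("requested", "approve"): "approved",
    ("requested", "reject"): "rejected",
    ("approved", "provision"): "active",
    ("active", "expire"): "deprovisioned",
}

def advance(state: str, action: str, notify=print) -> str:
    """Apply one workflow action, notifying stakeholders and auditing it."""
    try:
        new_state = TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action '{action}' not valid in state '{state}'")
    notify(f"AUDIT: {state} --{action}--> {new_state}")
    return new_state

state = "requested"
for action in ("approve", "provision"):
    state = advance(state, action)
```

Because invalid transitions raise rather than silently proceed, the same model that drives the workflow also enforces the process standard across silos.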
Showback configuration
Showback is essential to inform users about the cost of their system reservation and report on their
usage. It is different from chargeback, which integrates with financial systems to provide a
comprehensive bill to the business units requesting resources. Showback provides users with a
comprehensive view of their costs depending upon the reservation options, duration/time of
reservation, etc. It also allows administrators to generate detailed usage reports by Lines of
Business (LOB), geographical location, asset type, etc.
During the showback configuration task, reservation options should be evaluated and costs
assigned to the different options and services offered in the lab management system. The system
should either be orchestrated to get these details from other financial/asset management software
or approximate values should be derived from previous metrics available within the IT organization.
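The arithmetic behind showback can be as simple as rate times duration. The sketch below derives reservation costs from per-option rates and rolls them up by line of business; all rates are hypothetical placeholders, and as noted above, a real system would pull them from financial or asset management software.

```python
# Illustrative showback calculation: cost = hourly rate x duration,
# optionally weighted for peak-time reservations, reported per LOB.
RATES_PER_HOUR = {"small": 0.05, "medium": 0.12, "large": 0.30}  # assumed

def reservation_cost(size: str, hours: int, peak_multiplier: float = 1.0):
    return RATES_PER_HOUR[size] * hours * peak_multiplier

reservations = [
    {"lob": "sales", "size": "medium", "hours": 72},
    {"lob": "qa", "size": "large", "hours": 24},
]
by_lob = {}
for r in reservations:
    by_lob[r["lob"]] = by_lob.get(r["lob"], 0) + reservation_cost(r["size"], r["hours"])
for lob, cost in by_lob.items():
    print(f"{lob}: ${cost:.2f} this period")
```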
Monitoring, production testing, and final deliverables
The monitoring and production testing details for this project will be similar to the ones discussed
in the previous sections. In addition, this final deliverable should document the system design,
deployment, testing and orchestration details for knowledge transfer. It should include:
An architecture and design guide that will document client business requirements combined
with best practices guidelines
An assembly, configuration and testing guide that will enable building the system in
accordance to the abovementioned architecture and design guide
Formal, user-focused training customized to the above architecture will facilitate knowledge transfer of the final design and usage policies/procedures, and will level-set the knowledge base across the entire user group.
Dynamic data center
A dynamic data center is an IT environment that not only supports the business, but at times, is
part of the product delivered by the business. It is an agile IT environment, built on top of an
optimized and automated virtual infrastructure (discussed in the previous two sections), that is:
Service oriented – delivering on-demand, standardized services to the business (internal
customers, partners, etc.)
Scalable – with the ability to span heterogeneous physical, virtual and cloud environments
Secure – providing security as a service to internal/external customers
Agility made possible
The dynamic data center is neither a one-size-fits-all solution nor a bottomless pit into which CIOs should pour money and resources to obtain capabilities not needed for their business. However,
IT organizations trying to build a dynamic data center face some fundamental challenges such as:
Delivering a standard set of tiered-services (with well-defined SLAs) that are consumable by
business users
Service oriented automation and orchestration that spans heterogeneous physical, virtual and
cloud environments
Ensuring security, compliance and QoS for the entire service
Providing a comprehensive service interface that serves as a visual communication tool
between IT and the business
The following section discusses the basic tasks and capabilities required to build and maintain a
dynamic data center that allows IT departments to serve as an agile service provider and drive
competitive differentiation for the business.
Project plan
A high-level project plan for a sample scenario that would be part of achieving a dynamic data center is presented here. The timelines and tasks mentioned in Table 4 present a broad outline for a mid-tier IT project focused on supporting expanding business initiatives with agility. In this scenario, Forward Inc has identified a lucrative opportunity in offering one of its internal services (e.g. billing, shipping, order management, Electronic Medical Records, etc.) to a host of new local and international partners. This would not only allow Forward Inc to profit from its IT investments, but also provide valuable services to its partners, thus helping improve partner retention and expansion.
The project assumes the availability of an optimized infrastructure with comprehensive
automation and orchestration capabilities (as discussed in the previous two sections of this paper).
The 4-6 person implementation team suggested for the project is expected to be proficient in project management, virtualization design and deployment, security management, and systems management.
A successful dynamic data center project necessitates a structured approach which should consist
of the following high-level tasks. For each of these tasks we will discuss the key objectives and
possible challenges, articulate a successful outcome, and more.
Table 4. Dynamic data center project plan (6-month timeline)

Tasks:
a. Service design
b. Enable automated service provisioning
c. Provide security and BCDR for the service
d. Ensure service assurance
e. Implement service contract management and chargeback
f. Integrate with / implement a service catalog
g. Monitoring, production testing, and final deliverables

Resources:
a. Project Manager and Architect
b. Virtualization Analyst(s)
c. Application and Systems Consultant(s)
d. Security Specialists
Service design
Service design is the first and most important step in building an agile IT environment, and should
be conducted in close collaboration with the business. Some key service design considerations
include:
Modularity – with the ability to source the service internally or from external vendors
Heterogeneity – allowing flexibility and avoiding vendor lock-in
Compliance – taking into account internal and partner compliance, information protection, and
audit requirements
This task should involve the creation of service tiers (gold, silver, etc.); for example, a tier 2/tier 3
service offering would not have the same level of storage and BCDR capabilities associated with it
as a tier 1 service. These and other related decisions should be made in close collaboration with
product management, security, compliance, network, storage, and other component owners of
the service.
Enable automated service provisioning
Automated service provisioning is the ability to provision, on demand, an instance of the service in
a private, public or hybrid cloud. Some of the tasks involved in this process include:
Performing workload migrations, if necessary, of service components (servers, applications,
databases, etc.)
Automating provisioning of the entire service infrastructure using template-based provisioning
capabilities offered by next-generation automation tools
Orchestrating integrations between the service components, including approval workflows,
integration with change and configuration management systems, helpdesk software, etc.
With the ability to provision across multiple platforms, IT organizations retain the flexibility to in-source or outsource the entire service, or components of it, to public or private data centers.
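The sketch below illustrates chaining provisioning steps for an entire service from a template definition. Each step is a placeholder for an integration (approval workflow, provisioning tool, CMDB, helpdesk) rather than a real API; the template and step names are assumptions for the example.

```python
# Sketch of an automated service-provisioning pipeline that stops and
# reports on the first failing step.
SERVICE_TEMPLATE = {
    "name": "order-mgmt",
    "components": ["web-tier", "app-tier", "db-tier"],
}

def clone_components(template, target_cloud):
    """Stand-in for template-based provisioning of each service component."""
    for component in template["components"]:
        print(f"cloning {component} into the {target_cloud} cloud")
    return True

def provision_service(template, target_cloud="private"):
    steps = [
        ("request approval", lambda: True),   # approval workflow hook
        ("clone components", lambda: clone_components(template, target_cloud)),
        ("register in CMDB", lambda: True),   # change/config mgmt hook
        ("open tracking ticket", lambda: True),  # helpdesk integration hook
    ]
    for name, step in steps:
        if not step():
            raise RuntimeError(f"provisioning failed at step: {name}")
        print(f"done: {name}")

provision_service(SERVICE_TEMPLATE)
```

Keeping the target cloud a parameter of the pipeline, rather than baked into each step, is what preserves the in-source/outsource flexibility described above.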
Provide security and BCDR for the service
The security in context here builds on the already optimized and automated infrastructure
discussed in the previous sections (which includes IDS, IPS, Firewall, VLAN and PUPM capabilities).
The capabilities discussed below are necessary given the dynamic nature of the service in context, and include:
Installing security policies on the VMs associated with the service, and providing for the
appropriate policy to be in place irrespective of the VM location (i.e. within an internal
production/staging cloud, or external cloud)
Implementing Web Access Management software to permit only authenticated and authorized
users to have access to critical resources within the service
Using identity federation technologies to maintain a clear distinction between the identity
provider (partner) and the service provider (business)
Providing for backup and high-availability of the service
Security is one of the top concerns in the minds of the business as any major breach can not only
cause financial damage but also affect customer loyalty and brand image. In addition to securing
the service, the abovementioned capabilities also allow IT to leverage security services for business
enablement.
Ensure service assurance
The modular and scalable nature of the service, coupled with the dynamic nature of the virtual
environment, necessitates service-centric assurance—the ability to monitor the availability and
performance of the service (application plus the underlying infrastructure) as a whole. This task
involves:
Building and maintaining an end-to-end model of the infrastructure supporting the service
Real-time monitoring of events and notifications from the service components
Providing a dashboard for customers to view service availability/quality status
Serving data to SLA management tools regarding service availability, performance, etc.
Monitoring of service components in silos is not only cumbersome, but can fail to detect critical
inter-dependent errors. Service-centric assurance significantly reduces management costs by
providing a single portal for administration, improving service quality and reducing risk.
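A minimal way to express service-centric assurance is a worst-case health rollup over a dependency model of the service, as sketched below. The component graph and status values are assumptions for the example.

```python
# Sketch of service-centric health rollup: a service is only as healthy
# as the components it depends on, directly or transitively.
SERVICE_MODEL = {  # component -> components it depends on
    "billing-service": ["app-vm-1", "app-vm-2", "db-cluster"],
    "db-cluster": ["san-lun-7"],
}

def service_health(statuses: dict, root="billing-service"):
    """Worst-case rollup over the dependency graph rooted at `root`."""
    order = {"ok": 0, "degraded": 1, "down": 2}
    worst = statuses.get(root, "ok")
    for dep in SERVICE_MODEL.get(root, []):
        dep_health = service_health(statuses, dep)
        if order[dep_health] > order[worst]:
            worst = dep_health
    return worst

statuses = {"app-vm-1": "ok", "app-vm-2": "ok",
            "db-cluster": "ok", "san-lun-7": "degraded"}
print(service_health(statuses))  # -> degraded: the SAN issue surfaces at service level
```

This is precisely the inter-dependency that siloed monitoring misses: each component in isolation looks fine or near-fine, while the rollup correctly reports a degraded service.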
Implement service contract management and chargeback
In today’s competitive business environment, accountability and transparency are essential to
maintain customer satisfaction. To do so, IT organizations need to define, manage, monitor and
report on their SLAs in a timely manner. To enable this, IT analysts should:
Define easy-to-understand SLAs; this definition should include metrics such as system
availability, helpdesk response times, Mean Time To Repair (MTTR) for reported problems, etc.
Aggregate service-level information from disparate operational systems (network monitoring
solutions, application monitoring solutions, helpdesk systems, etc.), and compare it to
performance objectives for each customer
Report on these SLAs in a scheduled manner, and tie them back to the chargeback system
Performing these tasks manually or on a project basis will not be sustainable over the long run; automated service contract management and chargeback capabilities are essential to allow the end customer to track, on demand, the responsiveness of IT services. In addition, chargeback
capabilities should be linked to contract management, thus ensuring customer satisfaction with
service delivery, and easing the burden of financial accounting.
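To ground the metrics mentioned above, the sketch below computes availability and MTTR from outage records aggregated out of operational systems. The data format and the 99.9% target are illustrative assumptions.

```python
# Illustrative SLA arithmetic: availability and Mean Time To Repair (MTTR)
# for one reporting month, from a list of outage durations.
PERIOD_HOURS = 24 * 30  # one reporting month

outages = [14, 32, 9]  # outage durations in minutes for the period

downtime_min = sum(outages)
availability = 100 * (1 - downtime_min / (PERIOD_HOURS * 60))
mttr_min = downtime_min / len(outages) if outages else 0.0

print(f"Availability: {availability:.3f}% (target: 99.900%)")
print(f"MTTR: {mttr_min:.0f} minutes over {len(outages)} incidents")
```

With these sample numbers, 55 minutes of downtime yields 99.873% availability, just below a 99.9% target: the kind of result a scheduled SLA report would flag and tie back to the chargeback system.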
Integrate with/implement a service catalog
A service catalog serves as a front end for IT to interface with business users. It is a perfect portal
for publishing role-based services to internal/external users—allowing them to subscribe to
services delivered by IT. Organizations that have already implemented a service catalog should
look to publish this service within the existing catalog implementation. Since the end consumer for
the service is most probably a business user, it is essential to ensure that the service is easily
described in business terms instead of technical jargon.
Service desk integration is also essential as there is generally a learning curve involved with new
services—a good service desk and related knowledge base implementation prevents IT from being
inundated with individual requests.
Monitoring, production testing, and final deliverables
The monitoring and production testing requirements for this project will be similar to the ones
discussed in the previous sections. In addition, the large scope of the project might necessitate a
structured beta program with a controlled group before the service is rolled out to a large audience
of partners.
Implementation methodology
Virtualization is a relatively new technology, and not all IT organizations have strong in-house
expertise or experience with virtualization implementations. The Virtual Infrastructure Lifecycle
Methodology (described in Figure 2) from CA Technologies is an excellent example of leveraging
enterprise experiences and industry best practices to carefully navigate each stage of virtualization
adoption. It helps ensure that the key aspects of virtualization are accounted for and addressed,
thus enabling a smooth deployment without the remediation delays that are common to many
virtualization initiatives. The Virtual Infrastructure Lifecycle Methodology is based upon actual
practical experience gained from the delivery of virtual infrastructure to many Fortune 500
enterprise environments, and is widely adopted by CA Services teams and partner organizations.
Figure 2. Virtual Infrastructure Lifecycle Methodology from CA Technologies²

Analyze – Define the business objectives for adopting virtualization along with TCO/ROI analysis. Execute a broad assessment of the environment including existing people, process, and technologies to identify potential gaps that will impact adoption. Create a go-forward strategy supported by actionable steps to ensure success.

Design – Consider implementation and support requirements by developing staffing and training plans. Identify and document functional and non-functional requirements that will shape the design. Create a detailed architectural design and plan for the implementation.

Implement – Institute a program portal or other medium for communicating key content like policy, project status, etc. Start adapting existing operational processes to support the virtualization and cloud infrastructure. Install and configure the solution as specified by the plan and blueprints.

Optimize – Identify and develop areas to drive more efficiency in the virtual infrastructure based on experiences to this point. Implement means to monitor the usage of resources and harvest capacity through reclamation. Perform financial tracking of usage to rationalize growth while adding continuity and leveraging external clouds.
How CA Technologies can help
People, process and technology are the three key ingredients for achieving the reduction in CapEx/OpEx and improved data center agility with virtualization. Figure 3 summarizes the key capabilities required by IT organizations at each phase of the virtualization maturity lifecycle to overcome 'tipping points' in the virtualization journey and gain the same level of visibility, control, and security in virtual environments that they've been accustomed to in physical ones.
CA Technologies recently introduced CA Virtual, a comprehensive portfolio of products to provision,
control, manage, secure and assure heterogeneous virtual environments. Products in the CA Virtual
portfolio are not only quick to deploy and easy to use, but also provide an on-ramp to manage
heterogeneous physical, virtual, and cloud environments. For further information, please visit
ca.com/virtual or ca.com/virtualization.
Figure 3. Overcoming virtualization 'tipping points'
References
1. CA Technologies virtualization maturity lifecycle: ca.com/virtual or ca.com/virtualization
2. CA Technologies (ex 4Base) Virtual Infrastructure Lifecycle Methodology: ca.com/4base