Configurability for Cloud-Native Applications: Observability and Control (Cognizant)
The burgeoning multi-cloud, with loosely coupled services, requires better observability of live configuration changes and better management tools. Here’s how to address these challenges.
Learn about the IBM SmartCloud Desktop Infrastructure. The SmartCloud Desktop Infrastructure solution with VMware View running on IBM Flex System simplifies IT manageability and control. It delivers high-fidelity user experiences across devices and networks. The features of VMware View that are included in the SmartCloud Desktop Infrastructure solution provide enhanced security, high availability, centralized management and control, and scalability. For more information on Pure Systems, visit http://ibm.co/18vDnp6.
Learn about the IBM SmartCloud Desktop Infrastructure. The SmartCloud Desktop Infrastructure solution with Citrix XenDesktop running on IBM Flex System offers tailored solutions for every business, from the affordable all-in-one Citrix VDI-in-a-Box for simple IT organizations to the enterprise-wide Citrix XenDesktop. XenDesktop is a comprehensive desktop virtualization solution with multiple delivery models that is optimized for flexibility and cost efficiency. For more information on Pure Systems, visit http://ibm.co/18vDnp6.
Cloud computing is a progressive innovation that has reached new heights in the field of Information Technology. It provides a source of data and application-software storage in huge data centers called 'clouds', which can be accessed over a network connection. These clouds boost the capabilities of enterprises with no additional setup, personnel, or licensing costs. Clouds are mostly deployed using Public, Private, or Hybrid models, depending on the needs of the customer. In this paper, we explore the cloud computing architecture, concentrating on the features of the Public, Private, and Hybrid cloud models. There is an urgent need to examine the performance of a cloud environment on several metrics and to enhance its usability and capability. This paper highlights important contributions of various researchers in domains such as computational power, performance provisioning, load balancing, and SLAs.
Cloud computing is a technique that offers great capabilities and benefits to users. Cloud characteristics encourage many organizations to move to this technology, but the migration process faces many considerations. This paper outlines some of these considerations and the considerable efforts that have addressed cloud scalability issues.
18 October 2011: VMware presented its desktop-virtualization products at Virtualization Day, an event sponsored by the Province of Rome and organized by S&Q at Palazzo Valentini. Michele Apa's talk was very interesting and was appreciated by the whole audience.
For more information on the event: www.sqingegneria.com
More and more organizations want greater control of their website content. They want a centralized process where many individuals can create, edit and publish new content and do it efficiently. The WAVES2 WCMS empowers organizations to streamline the management of their online presence, reducing technical reliance, and enabling them to execute marketing strategies with greater speed.
Knorr-Bremse Group Strong Authentication Case Study (SafeNet)
Knorr-Bremse was seeking a secure remote access solution that would enable one-device-per-user strong authentication to their existing Check Point IPSec VPN solution and Citrix applications. They also wished to add support for a new SSL-VPN portal that utilized X.509 certificates, integrated with their Microsoft Certificate Authority (MS CA) PKI solution. In addition, the company wanted a solution that enabled installation of the backend in a virtualized environment (VMware ESX).
Understanding the Cloud Computing Stack (Satish Chavan)
Understanding the cloud computing stack:
Introduction
Key characteristics
At a Glance
Standardization, Migration & Adaptation
Service models
Deployment models
Network as a Service (NaaS)
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Communications as a Service (CaaS)
Data as a Service (DaaS)
Benefits & Challenges
Security Risks & Challenges
Cloud Vendors
Cloud Computing Challenges with Emphasis on Amazon EC2 and Windows Azure (IJCNC Journal)
Cloud computing has received much attention from the IT business world. Compared to common computing platforms, cloud computing is more flexible in supporting real-time computation and is considered a more powerful model for hosting and delivering services over the Internet. However, since cloud computing is still in its infancy, it faces many challenges that stand against its growth and spread. This article discusses some challenges facing cloud computing's growth and conducts a comparison study between Amazon EC2 and Windows Azure in dealing with such challenges. It concludes that Amazon EC2 generally offers better solutions than Windows Azure; nevertheless, the selection between them depends on the needs of customers.
Modern internet services rely on web and cloud technology, and as such they are no longer independent packages with in-built security, but are constructed through the combination and reuse of other services distributed across the web. While the ability to build applications in this way results in highly innovative services, it creates new issues in terms of security. Trusted computing aims to provide a way to meet the evolving security requirements of users, businesses, regulators and infrastructure owners.
Cloud Computing: Virtualization and Resiliency for Data Center Computing
Valentina Salapura
IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
[email protected]
Index Terms — Cloud computing, data center management, data center optimization, virtualization, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), high availability, disaster recovery, virtual appliance.
INTRODUCTION
Cloud computing is being rapidly adopted across the IT
industry, driven by the need to reduce the total cost of
ownership of increasingly demanding workloads. Within
companies, private clouds are offering a more efficient way to
manage and use private data centers. In the broader
marketplace, public clouds offer the promise of buying
computing capabilities based on a utility model. This utility
model enables IT consumers to purchase compute resources on
demand to fit current business needs and scale expenses
associated with computing resources. Thus, cloud computing allows
IT to be treated as an ongoing variable operating expense
billed by usage rather than requiring capital expenditures that
must be planned years in advance. Advantageously, operating
expenses can be charged against the revenue generated by these
expenses directly. In contrast, capital expenses incurred by the
purchase of a system need to be paid at the time of purchase,
but can only be depreciated to reduce the taxable income over
the lifetime of the system.
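The operating-expense versus capital-expense contrast drawn above can be made concrete with a small back-of-the-envelope calculation. A minimal Python sketch; all prices, rates, and lifetimes below are hypothetical figures, not data from the paper:

```python
# Hypothetical comparison of capex (buy a server, depreciate it over its
# lifetime) versus opex (pay-per-use cloud capacity, billed as used).
# All figures are illustrative.

def straight_line_depreciation(purchase_price, lifetime_years):
    """Annual depreciation charge for a capital purchase."""
    return purchase_price / lifetime_years

def annual_cloud_cost(hourly_rate, hours_used_per_year):
    """Operating expense: billed only for capacity actually used."""
    return hourly_rate * hours_used_per_year

capex_server = 12_000  # upfront purchase, paid in full at time of purchase
depreciation = straight_line_depreciation(capex_server, lifetime_years=4)

# The same workload runs about 8 hours per business day in the cloud.
opex = annual_cloud_cost(hourly_rate=0.50, hours_used_per_year=8 * 250)

print(f"Annual depreciation charge: ${depreciation:,.2f}")  # $3,000.00
print(f"Annual pay-per-use cost:    ${opex:,.2f}")          # $1,000.00
```

The sketch shows the paper's point: the opex figure tracks actual usage and is charged directly against the revenue it generates, while the capex is paid up front and recovered only gradually through depreciation.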
THE MAIN ATTRIBUTES OF CLOUD COMPUTING
The main attributes of cloud computing are scalable,
shared, on-demand computing resources delivered over the
network, and pay-per-use pricing. This offers flexibility in
using as few or as many IT resources as needed at any point in
time. Thus, users do not need to predict future resources they
might need, and to commit to capital investment in hardware.
This is especially advantageous for start-ups, and small and
medium businesses which might otherwise not be able to afford
the IT infrastructure they need to support their growing
business. At the same time, redirecting capital investment from
IT infrastructure to the core business is attractive even for large
and financially strong businesses.
From a technical perspective, cloud computing brings the
benefits of virtualization and multi-tenancy to scale-out
systems. Virtualization techniques allow multiple system
images to share the same hardware resources: CPU
virtualization techniques create multiple virtual hardware
systems, while network virtualization creates multiple virtual networks
that share the same physical network infrastructure.
The benefits of employing virtualization in the corporate data center are compelling: lower operating
costs, better resource utilization, and increased availability of critical infrastructure, to name just a few. It is an
apparent “no brainer” which explains why so many organizations are jumping on the bandwagon. Industry
analysts estimate that between 60 and 80 percent of IT departments are actively working on server
consolidation projects using virtualization. But what are the challenges for operations and security staff
when it comes to management and ensuring the security of the new virtual enterprise? With new
technology, complexity and invariably new management challenges generally follow.
Over the last 18 months, Prism Microsystems, a leading security information and event management
(SIEM) vendor, working closely with a set of early-adopter customers and prospects, has extended the
capability of EventTracker to provide deep support for virtualization, enabling our customers to get the
same level of security for the virtualized enterprise as they have for their non-virtualized enterprise. This
white paper examines the technology and management challenges that result from virtualization, and how
EventTracker addresses them.
Net Optics and VMware Team Up to Deliver Full Visibility, Automation, Flexibility and Scalability for Comprehensive Monitoring
Enterprises have been using Tap solutions for network-traffic access for many years. Traffic capture, analysis, replay, and logging are now part of every well-managed network environment. In recent years, the significant shift to virtualization—with penetration exceeding 50 percent—is yielding great benefits in efficiency. However, today’s virtualization-based deployments create challenges for network security, compliance, and performance monitoring: inter-VM traffic is optimized to speed up connections and minimize network utilization, which makes it invisible to physical tools that cannot easily extend into the new environments. Costly new virtualization-specific tools plus training can erode the economic benefits and cost savings of virtualizing. Currently, many tools suffer from limited throughput, hypervisor incompatibility, and excessive resource utilization.
Discovering New Horizons in Virtualization Solutions | The Enterprise World (TEW Magazine)
Regardless of the virtualization solution you choose, adhering to best practices is essential to ensure optimal performance, reliability, and security.
Virtual machines are popular because of their efficiency, ease of use, and flexibility. There has been increasing demand for deployment of a robust distributed network to maximize the performance of such systems and minimize the infrastructure cost. In this paper, we discuss the various levels at which virtualization can be implemented for distributed computing, which can contribute to increased efficiency and performance of distributed computing. The paper gives an overview of various types of virtualization techniques and their benefits. For example, server virtualization helps create multiple server instances from one physical server. Such techniques decrease the infrastructure cost, make the system more scalable, and help fully utilize available resources.
From Virtualization to Dynamic IT

The Architecture Journal 24
input for better outcomes
Learn the discipline, pursue the art, and contribute ideas at www.architecturejournal.net

In this issue: The Different Paths to Virtualization
The Impact of Virtualization on Software Architecture
Virtualization: Without Control, Power Is Nothing
Getting the Most Out of Virtualization
How Virtualized Corporate Networks Raise the Premium on System Reliability
Models and Application Life-Cycle Management
From Virtualization to Dynamic IT
From Virtualization to Dynamic IT
by David Ziembicki

Summary
Virtualization is a critical infrastructure-architecture layer that is required for achieving higher IT-maturity levels, but several other layers—such as automation, management, and orchestration—are equally important.

Introduction
Dynamic IT is an advanced state of IT maturity that has a high degree of automation, integrated-service management, and efficient use of resources. Several IT-maturity models, including Gartner’s Infrastructure Maturity Model and Microsoft’s Infrastructure Optimization Model, define maturity levels and the attributes that are required to achieve each level.

Both the Gartner and Microsoft models agree that virtualization is a critical architecture component for higher IT-maturity levels. However, many other components are required. An infrastructure that is 100 percent virtualized might still have no process automation; it might not provide management and monitoring of applications that are running inside virtual machines (VMs) or IT services that are provided by a collection of VMs. In addition to virtualization, several other infrastructure-architecture layers are required to reach the highest levels of IT maturity.

A rich automation layer is required. The automation layer must be enabled across all hardware components—including server, storage, and networking devices—as well as all software layers, such as operating systems, services, and applications. The Windows Management Framework—which comprises Windows Management Instrumentation (WMI), Web Services-Management (WS-Management), and Windows PowerShell—is an example of a rich automation layer that was initially scoped to Microsoft products, but that is now being leveraged by a wide variety of hardware and software partners.

A management layer that leverages the automation layer and functions across physical, virtual, and application resources is another required layer for higher IT maturity. The management system must be able to deploy capacity, monitor health state, and automatically respond to issues or faults at any layer of the architecture.

Finally, an orchestration layer that manages all of the automation and management components must be implemented as the interface between the IT organization and the infrastructure. The orchestration layer provides the bridge between IT business logic, such as “deploy a new web-server VM when capacity reaches 85 percent,” and the dozens of steps in an automated workflow that are required to actually implement such a change.

The integration of virtualization, automation, management, and orchestration layers provides the foundation for achieving the highest levels of IT maturity.

Architectural Considerations
Achieving higher IT maturity requires a well-designed infrastructure architecture that comprises the previously outlined layers, each of which might be provided by one or more products/solutions. Figure 1 illustrates this conceptual architecture.

One of the most important choices to be made is which architecture layer or layers will provide resiliency and high availability for your IT services. These attributes can be provided by the infrastructure layers, at the application platform layer, or some combination of them.

It is well known that the architecture pattern that is used by most large cloud services is a scale-out model on commodity hardware. High availability and service resiliency are provided by the application platform and associated development model (stateless-server roles, automatic data replication, and so on). This model makes sense, given the nature of the workload—in particular, the small number of standard workloads that the hardware must support in very large quantities, such as a standard data server and a standard web server.

For the more complex and heterogeneous workloads that are found in most infrastructures, this model might not be possible, particularly in the short term. Virtualization and consolidation of legacy workloads that do not include load balancing and other high-availability techniques at the application layer might require the virtualization or hardware layers to provide this capability.

It is critical to evaluate the costs of each of these approaches. Providing high availability in the software and application layer can enable a significant reduction in hardware costs by enabling commodity hardware to be utilized and reducing the need for expensive, fault-tolerant hardware. One goal of an enterprise-architecture strategy should be to move your application inventory over time (via redesign, retirement, and so on) to the least costly infrastructure model. This is one of the key business drivers of cloud computing.

Figure 1: Infrastructure-architecture layers
Orchestration Layer
Management Layer
Automation Layer
Virtualization Layer
Hardware Layer (Storage, Network, Compute, Facility)
The Architecture Journal 24
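The bridge that the orchestration layer provides, from a business rule such as "deploy a new web-server VM when capacity reaches 85 percent" down to an automated workflow, can be sketched in a few lines. This is a hedged illustration, not any particular orchestration product; the threshold, step names, and capacity figures are hypothetical:

```python
# Hypothetical orchestration-layer sketch: a business rule triggers a
# multi-step automated workflow. Step names and figures are illustrative.

CAPACITY_THRESHOLD = 0.85  # "deploy a new VM when capacity reaches 85 percent"

def provisioning_workflow(tier):
    """The dozens of real steps collapse here into a few representative ones."""
    return [
        f"allocate storage for new {tier} VM",
        f"clone {tier} VM template",
        f"configure virtual network for {tier} VM",
        f"register {tier} VM with load balancer",
        f"start monitoring new {tier} VM",
    ]

def orchestrate(tier, used_capacity, total_capacity):
    """Translate the business rule into workflow steps, or do nothing."""
    utilization = used_capacity / total_capacity
    if utilization >= CAPACITY_THRESHOLD:
        return provisioning_workflow(tier)
    return []

# Web tier at 90% utilization: the rule fires and the workflow runs.
for step in orchestrate("web-server", used_capacity=90, total_capacity=100):
    print(step)
```

The point of the sketch is the separation of concerns the article describes: the business rule lives in one place, while the workflow steps are delegated to the automation and management layers beneath it.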
To illustrate how this conceptual architecture enables a high degree of IT maturity and automation, we will consider a virtualized, three-tier application that consists of a clustered pair of database servers, two application servers, and three web servers. For each layer of the conceptual infrastructure architecture, the requirements and capabilities that it must provide to support the workload will be illustrated.

Hardware Layer
The hardware-architecture choices that are available to data-center architects are constantly evolving. Choices range from commodity rack-mounted servers to tightly integrated, highly redundant blade systems to container models. The same spectrum exists for storage and networking equipment.

Hardware and software integration is an attribute of higher IT maturity. Server, storage, and networking hardware should be selected based on its ability to be managed, monitored, and controlled by the architecture layers that are above it. An example would be a server that is able to report issues—such as high system temperatures or failed power supplies—to higher-level management systems that can take proactive measures to prevent service disruptions. Ideally, there would not be separate management systems for each hardware type; each component should integrate into a higher-level management system.

Careful consideration of the virtualization and application platforms should be part of the process of hardware selection. If the application platform will provide high availability and resiliency, commodity hardware and limited redundancy can be acceptable and more cost-effective. If the application platform cannot provide these capabilities, more robust hardware and high-availability features in the virtualization layer will be required.

For our reference three-tier application, the hardware layer provides the physical infrastructure on which the application’s virtual servers run. The hardware layer must provide detailed reporting on the condition and performance of each component. If a disk or host bus adapter (HBA) starts to exhibit a number of errors that is higher than average, this must be reported to the management system, so that proactive measures can be taken prior to an actual fault. Such conditions should be surfaced through the automation layer, so that the management and orchestration layers can leverage these to automate processes such as patching the physical servers, without affecting application availability. In more advanced scenarios, the virtualization layer can span facilities to provide a site-resilient architecture.

Automation Layer
The ability to automate all expected operations over the lifetime of a hardware or software component is critical. Without this capability being embedded in a deep way across all layers of the infrastructure, dynamic processes will grind to a halt as soon as user intervention or other manual processing is required.

Windows PowerShell and several other foundational technologies, including WMI and WS-Management, provide a robust automation layer across nearly all of Microsoft’s products, as well as a variety of non-Microsoft hardware and software. This evolution provides a single automation framework and scripting language to be used across the entire infrastructure.

The automation layer is made up of the foundational automation technology plus a series of single-purpose commands and scripts that perform operations such as starting or stopping a VM, rebooting a server, or applying a software update. These atomic units of automation are combined and executed by higher-level management systems. The modularity of this layered approach dramatically simplifies development, debugging, and maintenance.

Management Layer
The management layer consists of the tools and systems that are utilized to deploy and operate the infrastructure. In most cases, this consists of a variety of different toolsets for managing hardware, software, and applications. Ideally, all components of the management system would leverage the automation layer and not introduce their own protocols, scripting languages, or other technologies (which would increase complexity and require additional staff expertise).

The management layer is utilized to perform activities such as provisioning the storage-area network (SAN), deploying an operating system, or monitoring an application. A key attribute is its ability to
manage and monitor every single component of the infrastructure
Virtualization Layer remotely and to capture the dependencies among all of the
The virtualization layer is one of the primary enablers of greater infrastructure components.
IT maturity. The decoupling of hardware, operating systems, data, One method for capturing this data is a service map that contains
applications, and user state opens a wide range of options for better all of the components that define a service such as the three-tier
management and distribution of workloads across the physical application that we have been discussing. The Microsoft Operations
infrastructure. The ability of the virtualization layer to migrate running Framework (MOF) defines a simple but powerful structure for service
VMs from one server to another with zero downtime, the ability mapping that focuses on the software, hardware, dependent services,
to manage memory usage dynamically, and many other features customers, and settings that define an IT service and the various
that are provided by hypervisor-based virtualization technologies teams that are responsible for each component. In this case, the
provide a rich set of capabilities. These capabilities can be utilized by application consists of the database, application, and web-server
the automation, management, and orchestration layers to maintain VMs, the physical servers on which they run, dependent services such
desired states (such as load distribution) or to proactively address as Active Directory and DNS, the network devices that connect the
decaying hardware or other issues that would otherwise cause faults servers, the LAN and WAN connections, and more. Failure of any of
or service disruptions. The virtualization layer can include storage, these components causes a service interruption. The management
network, OS, and application virtualization. layer must be able to understand these dependencies and react to
As with the hardware layer, the virtualization layer must be able faults in any of them. Figure 2 on page 26 represents a service map for
to be managed by the automation, management, and orchestration the three-tier application.
layers. The abstraction of software from hardware that virtualization The management layer must be able to model the entire
provides moves the majority of management and automation into application or service and all of its dependencies, and manage
the software space, instead of requiring people to perform manual and monitor them. In System Center Operations Manager, this is
operations on physical hardware. referred to as a distributed application. This mapping of elements
In our reference three-tier application, the database, application, and dependencies enables event correlation, failure root-cause
and web servers are all VMs. The virtualization layer provides a analysis, and business-impact analysis—all of which are critical
portion of the high-availability solution for the application by toward identifying issues before they cause service interruption;
enabling the ability to migrate the VMs to different physical servers helping to restore service rapidly, if there is an interruption;
for both planned and unplanned downtimes. The virtualization and assisting in resource prioritization when multiple issues are
layer must expose performance and management interfaces to encountered.
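The atomic units of automation described above (start or stop a VM, reboot a server, apply an update) and their composition by higher-level systems can be sketched as follows. The article's actual automation layer is Windows PowerShell over WMI and WS-Management; this Python sketch uses stubbed, hypothetical task names purely to illustrate the modularity argument, not any real management API.

```python
# Illustrative sketch: atomic, single-purpose automation tasks that a
# higher-level procedure composes. All names and behavior are hypothetical
# stubs standing in for real PowerShell/WMI operations.

def migrate_vm(vm: str, target_host: str) -> str:
    # Atomic task: live-migrate one VM to another host (stubbed).
    return f"migrated {vm} -> {target_host}"

def reboot_server(host: str) -> str:
    # Atomic task: reboot one physical server (stubbed).
    return f"rebooted {host}"

def apply_update(host: str, patch: str) -> str:
    # Atomic task: apply one software update (stubbed).
    return f"applied {patch} on {host}"

def patch_host(host: str, patch: str, vms: list[str], spare: str) -> list[str]:
    """Higher-level procedure, as a management system would run it:
    evacuate the host, patch and reboot it, then bring the VMs back."""
    log = [migrate_vm(vm, spare) for vm in vms]
    log.append(apply_update(host, patch))
    log.append(reboot_server(host))
    log.extend(migrate_vm(vm, host) for vm in vms)
    return log
```

Because each task does one thing, the same `migrate_vm` stub can be reused by any number of procedures, which is the modularity benefit the text describes.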
The Architecture Journal 24
From Virtualization to Dynamic IT
Figure 2: Service-map diagram

[The diagram maps the three-tier application service. Software: Windows Server 2008 R2, Hyper-V R2, Failover Clustering, IIS 7.5, .NET 3.5, SQL 2008, PowerShell 2.0, and the OpsMgr 2007 R2, ConfigMgr 2007 R2, and VMMgr 2008 R2 agents. Hardware: SAN LUNs, Hyper-V servers, and Hyper-V virtual machines. IT services: Active Directory, DNS, LAN, SAN, virtualization, automation, management, orchestration, security, and service management. Customers: Finance and Executive Management. Settings: performance thresholds, load balancing, and the web, application, and database roles. A legend assigns each component to a responsible team: hardware, virtualization, automation, management, orchestration, core-infrastructure, database, application, and service-management.]
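A service map such as the one in Figure 2 can be represented as a dependency graph over which the management layer computes failure impact. The component names below follow the figure, but the data structure and traversal are an illustrative sketch, not the MOF service-mapping format or the Operations Manager distributed-application model.

```python
# Sketch: a service map as a dependency graph. Each entry lists the
# components a service depends on; the graph shape is hypothetical.
SERVICE_MAP = {
    "Three-Tier Application": ["Web VMs", "Application VMs", "Database VMs",
                               "Active Directory", "DNS", "LAN"],
    "Web VMs": ["Hyper-V Servers"],
    "Application VMs": ["Hyper-V Servers"],
    "Database VMs": ["Hyper-V Servers", "SAN LUNs"],
    "Hyper-V Servers": ["SAN"],
    "SAN LUNs": ["SAN"],
}

def impacted_services(failed: str, service_map=SERVICE_MAP) -> set:
    """Return every mapped component that directly or transitively
    depends on the failed one -- the basis of root-cause and
    business-impact analysis."""
    impacted = set()
    changed = True
    while changed:  # propagate until no new dependents are found
        changed = False
        for svc, deps in service_map.items():
            if svc not in impacted and (failed in deps or impacted & set(deps)):
                impacted.add(svc)
                changed = True
    return impacted
```

With this map, a fault in the SAN is immediately correlated to every layer above it, up to the business-facing service, which is exactly the event-correlation behavior the text asks of the management layer.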
Orchestration Layer
The orchestration layer leverages the management and automation layers. In much the same way that an enterprise resource planning (ERP) system manages a business process such as order fulfillment and handles exceptions such as inventory shortages, the orchestration layer provides an engine for IT-process automation and workflow. The orchestration layer is the critical interface between the IT organization and its infrastructure. It is the layer at which intent is transformed into workflow and automation.

Ideally, the orchestration layer provides a graphical interface in which complex workflows that consist of events and activities across multiple management-system components can be combined to form an end-to-end IT business process, such as automated patch management or automatic power management. The orchestration layer must provide the ability to design, test, implement, and monitor these IT workflows.

Consider how many activities must take place in order to update a physical server in a cluster that is running a number of VMs. An applicable patch must be identified; a maintenance window for applying the patch must be defined; the VMs that are hosted by the server must be migrated to a different host; the patch must be applied and then tested; and, finally, the VMs must be moved back to the host. What happens if any one of these operations fails? An orchestration layer is required to enable complex IT processes and workflows such as these.

To be able to orchestrate and automate an IT process, the inputs, actions, and outputs of the process must be well understood. Also required is the ability to monitor the running workflow and to configure recovery actions, should any step in the process fail. The notifications that are required on success or failure must also be well understood. An effective method for documenting an IT process is an IT-process map. Figure 3 illustrates an example that is based on the patch-deployment scenario that was previously described.
Figure 3: IT process–map diagram

[The diagram shows the patch-deployment process as swimlanes, one per layer or team. Service management approves the service request and investigates any issues. Security receives security updates. Orchestration initiates the update workflow, continues it, and reports workflow results. Management migrates VMs off the host, initiates maintenance mode on the host, patches the physical host and master image, verifies host availability and patch installation, ends maintenance mode, and migrates VMs back. Automation migrates VMs (ensuring separation), runs a host health check, and migrates VMs back. Virtualization performs VM live migrations and verifies Hyper-V health. Servers perform patch installation and verify server health; network verifies network connectivity; storage performs patch installation and verifies storage connectivity.]
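The recover-on-failure behavior that the text requires of an orchestration engine can be sketched as follows. The step names mirror the patch-deployment map in Figure 3, but the minimal engine and the stubbed actions are illustrative assumptions, not the API of an actual orchestration product.

```python
# Sketch: a tiny workflow engine that runs steps in order and invokes a
# recovery action on the first failure. Hypothetical; real orchestration
# tooling supplies monitoring, notifications, and branching as well.

def run_workflow(steps, recover):
    """Run (name, action) steps; on the first failure, record it,
    invoke the recovery action, and stop the workflow."""
    results = []
    for name, action in steps:
        try:
            action()
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            recover(name)  # e.g., migrate VMs back and end maintenance mode
            break
    return results

def failing_patch():
    # Stub simulating a patch step that fails.
    raise RuntimeError("patch did not apply")

steps = [
    ("Migrate VMs off host", lambda: None),
    ("Initiate maint. mode on host", lambda: None),
    ("Patch physical host", failing_patch),
    ("Verify patch installation", lambda: None),  # never reached
]

recovered = []
results = run_workflow(steps, recovered.append)
# results holds each step's outcome up to and including the failure;
# recovered lists the step that triggered the recovery action.
```

The point of the sketch is the contract: every step has a defined outcome, and a failed step hands control to a recovery path instead of leaving the host stranded in maintenance mode.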
In the three-tier–application scenario, a process such as the one that was previously defined is required. During maintenance of the physical machines that host the VMs, the orchestration layer must account for the high-availability requirements of the application. The system must ensure that the clustered database-server VMs are not placed on a single physical host, as this would negate the benefit of the cluster by creating a single point of failure. The orchestration layer is where IT business-process rules and workflows such as this are defined. These should be treated as source code and placed under strict change control. This layer provides an excellent platform for continuous improvement by encoding lessons learned into IT-process automation.

Achieving Dynamic IT
One of the key drivers of the layered approach to infrastructure architecture that is presented here is to enable complex workflow and automation to be developed over time by creating a collection of simple automation tasks, assembling them into procedures that are managed by the management layer, and then creating workflows and process automation that are controlled by the orchestration layer. A top-down approach to this type of automation is not recommended, as it often results in large, monolithic, and very complicated scripts that are specific to individual environments. These become very costly to maintain over time. A better approach is to assemble a library of automation tasks, script modules, and workflows that can be combined in different ways and reused. Start by recording how often each IT task is performed in a month, and automate the top 10 percent. Over time, this automation library will grow into a significant asset.

Achieving dynamic IT requires deep understanding and documentation of your existing IT processes. Each process should be evaluated prior to attempting to automate it. Service maps and IT-process maps are essential tools for gathering and documenting the information that is required to automate, manage, and orchestrate IT processes. All redundant or nonessential steps should be removed from a process prior to automating it. This both streamlines the process itself and reduces the amount of automation effort that is required.

Conclusion
Virtualization is an important tool for increasing IT maturity. To augment the benefits of virtualization, capabilities such as automation, management, and orchestration are required. The infrastructure architecture must be combined with a trained staff and streamlined IT processes. It is the combination of efficient processes, a well-trained staff, and a robust infrastructure architecture that enables dynamic IT.

Resources
Microsoft Operations Framework Team. "Service Mapping: Microsoft Operations Framework (MOF), Version 1.0." Online document. March 2010. Microsoft Corporation, Redmond, WA. (http://go.microsoft.com/fwlink/?LinkId=186459)

Microsoft TechNet Library. "Infrastructure Optimization." Online article series. 2010. Microsoft Corporation, Redmond, WA. (http://technet.microsoft.com/en-us/library/bb944804.aspx)

About the Author
David Ziembicki (davidzi@microsoft.com) is a Solution Architect in Microsoft's Public Sector Services CTO organization, focusing on virtualization, dynamic data centers, and cloud computing. A Microsoft Certified Architect | Infrastructure, David has been with Microsoft for four years, leading infrastructure projects at multiple government agencies. David is a lead architect for Microsoft's virtualization service offerings. He has been a speaker at multiple Microsoft events and has served as an instructor for several virtualization-related training classes.

You can visit David's blog at http://blogs.technet.com/davidzi.