This document provides an overview of virtualization, including:
- Virtualization separates resources and requests for services from their physical delivery, allowing more efficient use of hardware through pooling and sharing of resources.
- There are two main approaches to virtualization: hosted, which runs on top of a standard OS, and hypervisor, which runs directly on hardware for better performance and scalability.
- Virtualization enables server consolidation, reducing costs by increasing utilization rates, and allows flexible provisioning of test systems. It also improves business continuity.
- New hardware such as blades, 64-bit systems and multi-core CPUs is well suited to virtualization. Hardware assists from Intel and AMD further boost virtualization performance.
- VMware pioneered x86 virtualization, first delivering its benefits to industry-standard platforms in 1998, and offers a broad portfolio of virtualization products.
Virtual versions of servers, applications, networks and storage can be created through virtualization. Its main types include operating system virtualization (VMs), hardware virtualization, application-server virtualization, storage virtualization, network virtualization, administrative virtualization and application virtualization.
The benefits of employing virtualization in the corporate data center are compelling – lower operating
costs, better resource utilization, increased availability of critical infrastructure to name just a few. It is an
apparent “no brainer” which explains why so many organizations are jumping on the bandwagon. Industry
analysts estimate that between 60 and 80 percent of IT departments are actively working on server
consolidation projects using virtualization. But what are the challenges for operations and security staff
when it comes to management and ensuring the security of the new virtual enterprise? New
technology invariably brings added complexity and new management challenges.
Over the last 18 months, Prism Microsystems, a leading security information and event management
(SIEM) vendor, has worked closely with a set of early-adopter customers and prospects to extend
the capability of EventTracker with deep support for virtualization, enabling our customers
to get the same level of security for the virtualized enterprise as they have for their non-virtualized
enterprise. This white paper examines the technology and management challenges that result from
virtualization, and how EventTracker addresses them.
Virtualization enables the creation of virtual forms of servers, applications, networks and storage. Four common types of virtualization are network virtualization, storage virtualization, application virtualization and desktop virtualization.
This is a summary on virtualization. It covers the benefits and the different types of virtualization, for example server virtualization, network virtualization and data virtualization.
The process of creating a virtual version of something, be it an operating system, a storage device, a server or network resources, is known as virtualization. With virtualization, enterprises and companies have succeeded in integrating administrative tasks, enhancing scalability, managing workloads, and reducing operational complexity.
These slides focus on virtualization concepts, types of virtualization, hypervisors, the evolution of virtualization towards the cloud, and the QEMU-KVM architecture.
This presentation tries to explain the basics of virtualization: what is server virtualization? Why is it important? How is it done? What are the limitations and risks associated with it?
In a general sense, virtualization is the creation of a virtual, rather than an actual, version of something.
For example, Google Earth is a virtual image of Earth that holds detailed information about the planet.
From a computing perspective, you may already have done some virtualization: if you have ever partitioned a hard disk drive into more than one "virtual" drive, that is virtualization at work.
Virtualization in a computing environment can be present in many different forms, some of which are:
Hardware virtualization
Storage and data virtualization
Software virtualization
Network virtualization
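The disk-partitioning case mentioned above is the simplest form of this idea: one physical resource carved into independent virtual ones. A minimal sketch in Python (the class, names and sizes are invented for illustration, not taken from any real tool):

```python
# Toy illustration of partitioning: one backing byte array ("the disk")
# carved into independent "virtual drives", each of which sees only its
# own slice and cannot reach outside it.

class Partition:
    def __init__(self, disk, start, size):
        self.disk, self.start, self.size = disk, start, size

    def write(self, offset, data):
        # Bounds check keeps each virtual drive isolated from the others
        assert offset + len(data) <= self.size, "out of partition bounds"
        self.disk[self.start + offset : self.start + offset + len(data)] = data

    def read(self, offset, length):
        return bytes(self.disk[self.start + offset : self.start + offset + length])

disk = bytearray(1024)            # one physical "drive"
c = Partition(disk, 0, 512)       # first virtual drive
d = Partition(disk, 512, 512)     # second virtual drive
c.write(0, b"hello")
d.write(0, b"world")
print(c.read(0, 5), d.read(0, 5)) # each partition sees only its own data
```

Each partition behaves like a complete drive of its own, even though both share the same underlying bytes; that separation of the logical view from the physical resource is the essence of virtualization.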
Customers are using NSX to drive business benefits, as shown in the figure below. The main themes for NSX deployments are security, IT automation and application continuity.
Figure 3: NSX Use Cases
• Security:
NSX can be used to create a secure infrastructure built on a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security policy can be based on constructs such as MAC addresses, IP addresses, ports, vCenter objects and tags, Active Directory groups, and so on. Intelligent dynamic security grouping can drive the security posture within the infrastructure.
NSX can be used in conjunction with third-party security vendors such as Palo Alto Networks, Check Point, Fortinet, or McAfee to provide a complete DMZ-like security solution within a cloud infrastructure.
NSX has been widely deployed to secure virtual desktops, some of the most vulnerable workloads residing in the data center, prohibiting desktop-to-desktop hacking.
• Automation:
VMware NSX provides a full RESTful API to consume networking, security and services, which can be used to drive automation within the infrastructure. Using NSX, IT admins can reduce the tasks and cycles required to provision workloads within the data center.
NSX is integrated out of the box with automation tools such as vRealize Automation, which can provide customers with a one-click deployment option for an entire application, including compute, storage, network, security and L4-L7 services.
Developers can use NSX with the OpenStack platform. NSX provides a Neutron plugin that can be used to deploy applications and topologies via OpenStack.
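The RESTful automation described above comes down to scripted HTTPS calls against the NSX manager. A minimal sketch follows; the host name, resource path and payload field are assumptions made for illustration, and the exact resource schema should be taken from the NSX API reference:

```python
# Sketch of preparing a call to an NSX manager's REST API.
# The endpoint path ("logical-switches") and payload below are
# illustrative assumptions, not taken from the NSX documentation.
import base64

def build_request(host, user, password, resource):
    """Assemble the URL and headers for a JSON call to an NSX manager,
    using HTTP Basic authentication."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": f"https://{host}/api/{resource}",
        "headers": {
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    }

req = build_request("nsx-manager.example.com", "admin", "secret",
                    "logical-switches")   # hypothetical resource path
print(req["url"])
# A real call would then be made with an HTTP client, e.g. the
# `requests` library:
#   requests.post(req["url"], headers=req["headers"],
#                 json={"display_name": "web-tier"}, verify=True)
```

Because every provisioning step is just such an HTTP call, the same pattern plugs directly into automation tools and CI pipelines, which is what enables the one-click deployments mentioned above.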
• Application Continuity:
NSX provides a way to easily extend networking and security across up to eight vCenters, either within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and the firewall rules remain consistent across the sites. This essentially maintains the same view across sites.
NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers, and even to the cloud, to enable seamless networking and security.
In recent years, we have seen an overwhelming number of TV commercials promising that the Cloud can help with many problems, including some family issues. What stands behind the terms "Cloud" and "Cloud Computing," and what can we actually expect from this phenomenon? A group of students of the Computer Systems Technology department and Dr. T. Malyuta, who has worked with Cloud technologies since their early days, will provide an overview of the business and technological aspects of the Cloud.
The battle to be your virtualization vendor is in full swing, and it
has important ramifications for the vendors involved and for your
data center. The goal of this whitepaper is to analyze the
technical aspects of the two major choices: VMware vSphere 4
and Microsoft Hyper-V R2 (as part of Windows Server 2008 R2).
The two contenders are described in technical detail, and then
those details are compared head-to-head. Typical pricing in two
scenarios is included, along with analysis of these tools, how they
will impact your data center virtualization, and what the future
likely holds.
Risk Analysis and Mitigation in Virtualized Environments, by Siddharth Coontoor
As companies move towards hybrid cloud solutions, many private cloud deployments remain in use. Traditional risk assessment techniques cannot be applied directly to such virtual servers. This paper is an attempt to identify key assets and assess the risks related to these critical assets.
A rookie-level presentation on virtualization, with a sneak peek at cloud computing.
This is a presentation created for a seminar on cloud and virtualization technologies.
Under normal conditions, this presentation may take 20-40 minutes to complete.
Created and presented in October 2014.
VMWARE WHITE PAPER
Table of Contents
Introduction
Virtualization in a Nutshell
Virtualization Approaches
Virtualization for Server Consolidation and Containment
How Virtualization Complements New-Generation Hardware
Para-virtualization
VMware's Virtualization Portfolio
Glossary
Introduction
Among the leading business challenges confronting CIOs and IT managers today are: cost-effective utilization of IT infrastructure; responsiveness in supporting new business initiatives; and flexibility in adapting to organizational changes. Driving an additional sense of urgency is the continued climate of IT budget constraints and more stringent regulatory requirements. Virtualization is a fundamental technological innovation that allows skilled IT managers to deploy creative solutions to such business challenges.
Virtualization Overview
Virtualization in a Nutshell
Simply put, virtualization is an idea whose time has come. The term virtualization broadly describes the separation of a resource or request for a service from the underlying physical delivery of that service. With virtual memory, for example, computer software gains access to more memory than is physically installed, via the background swapping of data to disk storage. Similarly, virtualization techniques can be applied to other IT infrastructure layers - including networks, storage, laptop or server hardware, operating systems and applications.
This blend of virtualization technologies - or virtual infrastructure - provides a layer of abstraction between computing, storage and networking hardware, and the applications running on it (see Figure 1). The deployment of virtual infrastructure is non-disruptive, since the user experiences are largely unchanged. However, virtual infrastructure gives administrators the advantage of managing pooled resources across the enterprise, allowing IT managers to be more responsive to dynamic organizational needs and to better leverage infrastructure investments.
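The virtual-memory example above can be reduced to a toy model: a small "physical" store plus a swap area, with a translation layer moving pages on demand so that software sees more memory than physically exists. A sketch follows (all names are invented for illustration; real systems do this in hardware at page granularity, with smarter eviction policies):

```python
# Toy model of virtual memory: a fixed-size "physical" store plus a
# "swap" area. Reads and writes go through a layer that swaps pages in
# on demand, so callers see more memory than physically fits.

class VirtualMemory:
    def __init__(self, physical_frames):
        self.physical_frames = physical_frames  # capacity of "RAM"
        self.ram = {}   # page -> value, at most physical_frames entries
        self.swap = {}  # evicted pages live here ("disk")

    def _ensure_resident(self, page):
        if page in self.ram:
            return
        if len(self.ram) >= self.physical_frames:
            # Evict the most recently inserted page (real systems use
            # LRU or similar policies)
            victim, value = self.ram.popitem()
            self.swap[victim] = value
        # Bring the page back from swap if it was evicted earlier
        self.ram[page] = self.swap.pop(page, 0)

    def write(self, page, value):
        self._ensure_resident(page)
        self.ram[page] = value

    def read(self, page):
        self._ensure_resident(page)
        return self.ram[page]

vm = VirtualMemory(physical_frames=2)
for page in range(4):          # touch 4 pages with only 2 frames of "RAM"
    vm.write(page, page * 10)
print(vm.read(0), vm.read(3))  # both still readable: 0 30
print(len(vm.ram))             # resident set never exceeds capacity: 2
```

The caller addresses four pages as if all were in memory; the swapping behind the scenes is invisible, which is exactly the "separation of a resource from its physical delivery" described above.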
Figure 1: Virtualization

Before Virtualization:
• Single OS image per machine
• Software and hardware tightly coupled
• Running multiple applications on same machine often creates conflict
• Underutilized resources
• Inflexible and costly infrastructure

After Virtualization:
• Hardware-independence of operating system and applications
• Virtual machines can be provisioned to any system
• Can manage OS and application as a single unit by encapsulating them into virtual machines
Using virtual infrastructure solutions such as those from VMware, enterprise IT managers can address challenges that include:
• Server Consolidation and Containment – Eliminating 'server sprawl' via deployment of systems as virtual machines (VMs) that can run safely and move transparently across shared hardware, and increase server utilization rates from 5-15% to 60-80%.
• Test and Development Optimization – Rapidly provisioning test and development servers by reusing pre-configured systems, enhancing developer collaboration and standardizing development environments.
• Business Continuity – Reducing the cost and complexity of business continuity (high availability and disaster recovery solutions) by encapsulating entire systems into single files that can be replicated and restored on any target server, thus minimizing downtime.
• Enterprise Desktop – Securing unmanaged PCs, workstations and laptops without compromising end user autonomy by layering a security policy in software around desktop virtual machines.
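The utilization jump quoted for server consolidation (5-15% before, 60-80% after) directly determines how many consolidated hosts a given fleet needs. A back-of-the-envelope sketch (the fleet figures below are invented for illustration):

```python
# Back-of-the-envelope server consolidation estimate: total work in the
# fleet is (servers x average utilization), and each consolidated host
# absorbs work up to the target utilization. Figures are illustrative.
import math

def hosts_needed(physical_servers, avg_utilization, target_utilization):
    total_load = physical_servers * avg_utilization
    return math.ceil(total_load / target_utilization)

# 100 servers idling at 10% average load, consolidated onto hosts
# driven to 70% utilization:
print(hosts_needed(100, 0.10, 0.70))  # -> 15 hosts
```

Roughly a 7:1 reduction in physical machines under these assumptions, which is where the power, cooling, and hardware savings cited for consolidation come from.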
Figure 2: Virtualization Architectures
Virtualization Approaches
While virtualization has been a part of the IT landscape for decades, it is only recently (in 1998) that VMware delivered the benefits of virtualization to industry-standard x86-based platforms, which now form the majority of desktop, laptop and server shipments. A key benefit of virtualization is the ability to run multiple operating systems on a single physical system and share the underlying hardware resources – known as partitioning.
Today, virtualization can apply to a range of system layers, including hardware-level virtualization, operating system-level virtualization, and high-level language virtual machines. Hardware-level virtualization was pioneered on IBM mainframes in the 1970s, and then more recently Unix/RISC system vendors began with hardware-based partitioning capabilities before moving on to software-based partitioning.
For Unix/RISC and industry-standard x86 systems, the two approaches typically used with software-based partitioning are hosted and hypervisor architectures (see Figure 2). A hosted approach provides partitioning services on top of a standard operating system and supports the broadest range of hardware configurations. In contrast, a hypervisor architecture is the first layer of software installed on a clean x86-based system (hence it is often referred to as a "bare metal" approach). Since it has direct access to the hardware resources, a hypervisor is more efficient than hosted architectures, enabling greater scalability, robustness and performance.
Hosted Architecture
• Installs and runs as an application
• Relies on host OS for device support
and physical resource management
Bare-Metal (Hypervisor) Architecture
• Lean virtualization-centric kernel
• Service Console for agents and helper
applications
VMWARE WHITE PAPER
Hypervisors can be designed to be tightly coupled with operat-
ing systems or can be agnostic to operating systems. The latter
approach provides customers with the capability to implement
an OS-neutral management paradigm, thereby providing
further rationalization of the data center.
Application-level partitioning is another approach, whereby
many applications share a single operating system, but this
offers less isolation (and higher risk) than hardware or software
partitioning, and limited support for legacy applications or
heterogeneous environments. However, various partitioning
techniques can be combined, albeit with increased complexity.
Hence, virtualization is a broad IT initiative, of which partitioning
is just one facet. Other benefits include the isolation of virtual
machines and the hardware-independence that results from
the virtualization process. Virtual machines are highly portable,
and can be moved or copied to any industry-standard (x86-
based) hardware platform, regardless of the make or model.
Thus, virtualization facilitates adaptive IT resource management,
and greater responsiveness to changing business conditions
(see Figures 3-5).
To provide advantages beyond partitioning, several system
resources must be virtualized and managed, including CPUs,
main memory, and I/O, in addition to having an inter-partition
resource management capability. While partitioning is a useful
capability for IT organizations, true virtual infrastructure delivers
business value well beyond that.
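The inter-partition resource management mentioned above is commonly implemented with proportional-share ("shares") scheduling, in which each virtual machine's slice of a contended resource is set by its configured shares. The sketch below is purely illustrative (the function name and numbers are invented, not a VMware API):

```python
def allocate_by_shares(shares, capacity_mhz):
    """Divide a contended resource (e.g., CPU cycles in MHz) among
    virtual machines in proportion to their configured shares."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# Under contention, a VM with twice the shares receives twice the CPU.
print(allocate_by_shares({"web": 2000, "db": 1000, "test": 1000}, 4000))
```

Note that shares only matter under contention; an idle system lets any VM use spare capacity.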
Figure 3: Traditional Infrastructure
Figure 5: VMware Virtual Infrastructure
Figure 4: Virtual Infrastructure
Hardware/Software Separation
Infrastructure is what connects resources to your business.
Virtual infrastructure is a dynamic mapping of your resources
to your business, transforming farms of individual x86 servers,
storage and networking into a pool of computing resources.
The result: decreased costs and increased efficiency and
responsiveness.
7. 7
VMWARE WHITE PAPER
Virtualization for Server Consolidation and
Containment
Virtual infrastructure initiatives often spring from data center
server consolidation projects, which focus on reducing existing
infrastructure “box count”, retiring older hardware or life-extend-
ing legacy applications. Server consolidation benefits result
from a reduction in the overall number of systems and related
recurring costs (power, cooling, rack space, etc.).
While server consolidation addresses the reduction of existing
infrastructure, server containment takes a more strategic view,
leading to a goal of infrastructure unification. Server contain-
ment uses an incremental approach to workload virtualization,
whereby new projects are provisioned with virtual machines
rather than physical servers, thus deferring hardware purchases.
It is important to note that neither consolidation nor contain-
ment should be viewed as a standalone exercise. In either case,
the most significant benefits result from adopting a total cost-
of-ownership (TCO) perspective, with a focus on the ongoing,
recurring support and management costs, in addition to one-
time, up-front costs. Data center environments are becoming
more complex and heterogeneous, with correspondingly
higher management costs. Virtual infrastructure enables
more effective optimization of IT resources, through the
standardization of data center elements that need to be
managed.
Partitioning alone does not deliver server consolidation or
containment, and in turn consolidation does not equate to
full virtual infrastructure management. Beyond partition-
ing and basic component-level resource management, a
core set of systems management capabilities are required
to effectively implement realistic data center solutions (see
Figure 6). These management capabilities should include
comprehensive system resource monitoring (of metrics such
as CPU activity, disk access, memory utilization and network
bandwidth), automated provisioning, high availability and
workload migration support.
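At its simplest, the resource monitoring described above reduces to comparing sampled utilization figures against alert thresholds. The following is a minimal sketch only; the metric names and threshold values are illustrative, not drawn from any particular product:

```python
# Alert thresholds as utilization percentages (illustrative values).
THRESHOLDS = {"cpu": 90.0, "memory": 85.0, "disk": 80.0, "network": 75.0}

def check_host(samples, thresholds=THRESHOLDS):
    """Return the metrics whose sampled utilization exceeds its threshold,
    i.e., the conditions that should raise an alert for this host."""
    return {metric: value for metric, value in samples.items()
            if value > thresholds.get(metric, 100.0)}

# A host running hot on CPU but otherwise healthy:
print(check_host({"cpu": 95.2, "memory": 60.0, "disk": 40.0, "network": 10.0}))
```

A real implementation would also aggregate samples over time before alerting, to avoid reacting to momentary spikes.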
Figure 6: Virtual Infrastructure Management
How Virtualization Complements New-
Generation Hardware
Extensive ‘scale-out’ and multi-tier application architectures are
becoming increasingly common, and the adoption of smaller
form-factor blade servers is growing dramatically. Since the
transition to blade architectures is generally driven by a desire
for physical consolidation of IT resources, virtualization is an
ideal complement for blade servers, delivering benefits such as
resource optimization, operational efficiency and rapid provi-
sioning.
The latest generation of x86-based systems feature processors
with 64-bit extensions supporting very large memory capaci-
ties. This enhances their ability to host large, memory-intensive
applications, as well as allowing many more virtual machines to
be hosted by a physical server deployed within a virtual infra-
structure. The continual decrease in memory costs will further
accelerate this trend.
Likewise, the forthcoming dual-core processor technology
significantly benefits IT organizations by dramatically lowering
the costs of increased performance. Compared to traditional
single-core systems, systems utilizing dual-core processors will
be less expensive, since only half the number of sockets will be
required for the same number of CPUs. By significantly lowering
the cost of multi-processor systems, dual-core technology will
accelerate data center consolidation and virtual infrastructure
projects.
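The socket arithmetic behind that claim is simple. As a hypothetical back-of-envelope calculation:

```python
import math

def sockets_needed(cpus_required, cores_per_socket):
    """Number of processor sockets needed to supply a given CPU count."""
    return math.ceil(cpus_required / cores_per_socket)

# The same 8 CPUs need half as many sockets with dual-core parts:
print(sockets_needed(8, 1))  # single-core parts
print(sockets_needed(8, 2))  # dual-core parts
```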
Beyond these enhancements, VMware is also working closely
with both Intel and AMD to ensure that new processor technol-
ogy features are exploited by virtual infrastructure to the fullest
extent. In particular, the new virtualization hardware assist
enhancements (Intel’s “VT” and AMD’s “Pacifica”) will enable
robust virtualization of the CPU functionality. Such hardware
virtualization support does not replace virtual infrastructure, but
allows it to run more efficiently.
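On Linux, the presence of these hardware-assist extensions shows up as CPU feature flags: ‘vmx’ for Intel VT, ‘svm’ for AMD’s Pacifica (AMD-V). A small sketch that looks for them in /proc/cpuinfo-style text (the parsing here is deliberately simplified):

```python
def virtualization_assist(cpuinfo_text):
    """Report which hardware virtualization extension, if any,
    appears among the CPU feature flags in /proc/cpuinfo output."""
    flags = cpuinfo_text.split()
    if "vmx" in flags:
        return "Intel VT"
    if "svm" in flags:
        return "AMD-V (Pacifica)"
    return None

# On a real Linux host:
# virtualization_assist(open("/proc/cpuinfo").read())
print(virtualization_assist("flags : fpu vme de pse tsc msr pae vmx"))
```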
Para-virtualization
Although virtualization is rapidly becoming mainstream tech-
nology, the concept has attracted a huge amount of interest,
and enhancements continue to be investigated. One of these is
para-virtualization, whereby operating system compatibility is
traded off against performance for certain CPU-bound applica-
tions running on systems without virtualization hardware assist
(see Figure 7). The para-virtualized model offers potential perfor-
mance benefits when a guest operating system or application
is ‘aware’ that it is running within a virtualized environment,
and has been modified to exploit this. One potential downside
of this approach is that such modified guests cannot ever be
migrated back to run on physical hardware.
In addition to requiring modified guest operating systems, para-
virtualization leverages a hypervisor for the underlying technol-
ogy. In the case of Linux distributions, this approach requires
extensive changes to an operating system kernel so that it can
coexist with the hypervisor. Accordingly, mainstream Linux
distributions (such as Red Hat or SUSE) cannot be run in a para-
virtualized mode without some level of modification. Likewise,
Microsoft has suggested that a future version of the Windows
operating system will be developed that can coexist with a new
hypervisor offering from Microsoft.
Yet para-virtualization is not an entirely new concept. For
example, VMware has employed it by making available as
an option enhanced device drivers (packaged as VMware
Tools) that increase the efficiency of guest operating systems.
Furthermore, if and when para-virtualization optimizations are
eventually built into commercial enterprise Linux distributions,
VMware’s hypervisor will support those, as it does all main-
stream operating systems.
Figure 7: Para-virtualization
VMware’s Virtualization Portfolio
VMware pioneered x86-based virtualization in 1998 and
continues to be the innovator in that market, providing the
fundamental virtualization technology for all leading x86-
based hardware suppliers. The company offers a variety of
software-based partitioning approaches, utilizing both hosted
(Workstation and VMware Server) and hypervisor (ESX Server)
architectures (see Figure 8).
VMware’s virtual machine (VM) approach creates a uniform
hardware image – implemented in software – on which oper-
ating systems and applications run. On top of this platform,
VMware’s VirtualCenter provides management and provisioning
of virtual machines, continuous workload consolidation across
physical servers and VMotion™ technology for virtual machine
mobility.
VirtualCenter is virtual infrastructure management software that
centrally manages an enterprise’s virtual machines as a single,
logical pool of resources. With VirtualCenter, an administra-
tor can manage thousands of Windows NT, Windows 2000,
Windows 2003, Linux and NetWare servers from a single point
of control.
Unique to VMware is the VMotion technology, whereby live,
running virtual machines can be moved from one physical
system to another while maintaining continuous service avail-
ability. VMotion thus allows fast reconfiguration and optimiza-
tion of resources across the virtual infrastructure.
VMware is the only provider of high-performance virtualization
products that give customers a real choice in operating systems.
VMware supports: Windows 95/98/NT/2K/2003/XP/3.1/MS-DOS
6; Linux (Red Hat, SUSE, Mandrake, Caldera); FreeBSD (3.x, 4.0-
4.9); Novell (NetWare 4,5,6); Sun Solaris 9 and 10 (experimental).
VMware is designed from the ground up to ensure compatibil-
ity with customers’ existing software infrastructure investments.
This includes not just operating systems, but also software for
management, high availability, clustering, replication, multi-
pathing, and so on.
VMware’s hypervisor-based products and solutions have been
running at customer sites since 2001, with more than 75% of
customers running ESX Server in production deployments. As
the clear x86 virtualization market leader, VMware is uniquely
positioned to continue providing robust, supportable, high-
performance virtual infrastructure for real-world, enterprise data
center applications.
Figure 8: Single Virtual Platform Desktop to Enterprise
All products share a consistent virtual hardware platform with
open interfaces:
• ACE – Secured Enterprise Desktop; hosted on Windows
• Workstation – Technical Desktop; hosted on Windows or Linux
• VMware Server – Departmental Computing; hosted on Windows
or Linux
• ESX Server – Enterprise Computing; bare metal, V-SMP option
• VMware Infrastructure – Management Server, Console & APIs;
VMotion
Glossary
Virtual Machine
A representation of a real machine using software that provides
an operating environment which can run or host a guest oper-
ating system.
Guest Operating System
An operating system running in a virtual machine environment
that would otherwise run directly on a separate physical system.
Virtual Machine Monitor
Software that runs in a layer between a hypervisor or host oper-
ating system and one or more virtual machines that provides
the virtual machine abstraction to the guest operating systems.
With full virtualization, the virtual machine monitor exports a
virtual machine abstraction identical to a physical machine, so
that standard operating systems (e.g., Windows 2000, Windows
Server 2003, Linux, etc.) can run just as they would on physical
hardware.
Hypervisor
A thin layer of software that runs directly on hardware and
provides virtual partitioning capabilities, sitting underneath
higher-level virtualization services. Sometimes referred to as a
“bare metal” approach.
Hosted Virtualization
A virtualization approach where partitioning and virtualization
services run on top of a standard operating system (the host).
In this approach, the virtualization software relies on the host
operating system to provide the services to talk directly to the
underlying hardware.
Para-virtualization
A virtualization approach that exports a modified hardware
abstraction which requires operating systems to be explicitly
modified and ported to run.
Virtualization Hardware Support
Industry standard servers will provide improved hardware
support for virtualization. Initial hardware support includes
processor extensions to address CPU and some memory
virtualization. Future support will include I/O virtualization, and
eventually more complex memory virtualization management.
Hardware-level virtualization
Here the virtualization layer sits right on top of the hardware
exporting the virtual machine abstraction. Because the virtual
machine looks like the hardware, all the software written for it
will run in the virtual machine.
Operating system–level virtualization
In this case the virtualization layer sits between the operating
system and the application programs that run on the operating
system. The virtual machine runs applications, or sets of applica-
tions, that are written for the particular operating system being
virtualized.
High-level language virtual machines
In high-level language virtual machines, the virtualization layer
sits as an application program on top of an operating system.
The layer exports an abstraction of the virtual machine that can
run programs written and compiled to the particular abstract
machine definition. Any program written in the high-level
language and compiled for this virtual machine will run in it.
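The Java Virtual Machine and .NET CLR are the familiar examples. The idea can be sketched as a toy stack machine (the instruction set below is invented purely for illustration): any program compiled to its abstract instructions runs wherever the interpreter runs.

```python
def run(program):
    """Interpret a toy stack-machine program: a list of
    (opcode, operand) pairs using PUSH, ADD and MUL opcodes."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# (2 + 3) * 4, expressed as abstract-machine instructions:
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]))
```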
For more information:
http://www.vmware.com
http://www.vmware.com/solutions/
http://www.vmware.com/vinfrastructure/