- vSphere 5.0 introduces new features for platforms, networking, availability, vMotion, DRS/DPM, vCenter Server, storage, and Site Recovery Manager.
- Key enhancements include support for larger VMs, 3D graphics, more devices, an ESXi firewall, an image builder tool, and Auto Deploy for faster host provisioning using host profiles.
- Auto Deploy allows rapid initial deployment and patching of ESXi hosts using an "on the fly" model coordinated with vSphere Host Profiles.
Management tools and techniques for controlling, customizing, and managing your VMware ESXi infrastructure without the use of the Linux-based Service Console.
Overview of my VMware vSphere 5.1 with ESXi and vCenter class. Get an overview of the most powerful, enterprise-class private cloud platform available.
XenServer, Hyper-V, and ESXi - Architecture, API, and Coding (Humair Ahmed)
XenServer, Hyper-V, and ESXi hypervisor comparison with regard to market share, architecture/installation, and APIs/coding. Technical details, demos, and code provided. Visit my blog at http://humairahmed.com/blog/.
VMware vSphere Version Comparison 4.0 to 6.5 (Sabir Hussain)
VMware vSphere leverages the power of virtualization to transform datacenters into simplified cloud computing infrastructures and enables IT organizations to deliver flexible and reliable IT services. VMware vSphere virtualizes and aggregates the underlying physical hardware resources across multiple systems and provides pools of virtual resources to the datacenter.
VM Virtualization (VMGate.com)
Iwan ‘e1’ Rahabok, who works as a Staff SE, Strategic Accounts in Singapore, has created an awesome vCenter Operations 5 training. It's available in PowerPoint format, and I really advise you to read the slide notes. The presentation serves two purposes: first, it provides in-depth training for those who are learning or evaluating vCenter Operations 5; second, it provides materials that a vCenter Ops champion can use to share with internal colleagues (e.g. storage team, app team, etc.).
Hyper-V vs. vSphere: Understanding the Differences (SolarWinds)
For more information on Virtualization Manager visit: http://www.solarwinds.com/virtualization-manager.aspx
Watch this webcast: http://www.solarwinds.com/resources/webcasts/hyper-v-vs-vsphere-understanding-the-differences.html
Watch this webinar with Scott Lowe, Founder and Managing Consultant at The 1610 Group, and SolarWinds virtualization expert Jonathan Reeve where they discuss “Hyper-V vs. vSphere: Understanding the differences.”
The virtualization market is abuzz with talk of different hypervisors – most prominently VMware ESX® versus Microsoft Hyper-V®, which together own over 90% of the market. Small and medium businesses are already moving quickly toward Hyper-V, and a growing number of larger organizations are beginning to put plans in place to transition some portion of their environment from ESX to Hyper-V.
In this webcast we explore the reasons for these changes and the ecosystems for these two platforms both now and in the future. We also take a look ahead to what is known about Hyper-V 3.0 and why it warrants an even deeper look when evaluating hypervisors for your future virtualization deployments.
** Edureka Certification Training: https://www.edureka.co **
This Edureka "VMware Tutorial for Beginners" video will give you a thorough and insightful overview of Virtualization and help you understand other related terms that revolve around VMware and Virtualization. Following are the offerings of this video:
1. What is VMware?
2. What is Virtualization?
3. Types Of Virtualization
4. What Is Hypervisor?
5. Hypervisor Types
6. Demo- Creating a VM using VMware Workstation Player
VMware vSphere® 6.0 enables users to virtualize their scale-up and scale-out applications with confidence, redefines availability requirements, and simplifies management of the virtual datacenter. This major release delivers an on-demand, highly available, and reliable infrastructure that is the ideal foundation for any cloud computing environment.
Horizon 6, VMware's VDI software suite, adds support for Linux virtual desktops in addition to Microsoft's Windows environment. The Palo Alto-based vendor has launched an early access program for customers wanting to preview Horizon 6 with Red Hat and Ubuntu Linux distributions on remote computers and mobile devices.
In this session we heard customer experiences facing some of the biggest DR challenges ever. We heard how Site Recovery Manager was used in Japan after the great earthquake disaster and in New Zealand after the earthquake in Christchurch. We also learned about a case in which Site Recovery Manager was used for site migration.
VMware Ready vRealize Automation Program
Author: Meena Nagarajan
IT’s quest for maximum speed, flexibility and accountability is driving a shift in thinking about cloud management platforms. VMware’s new cloud management platform provides automated management for heterogeneous and hybrid clouds.
Learn more about how VMware delivers the foundation for the Software Defined Enterprise:
- Managing a multi-vendor, multi-cloud infrastructure
- Providing centralized automation of infrastructure services
- Creating extensibility opportunities for cloud management
AWS re:Invent 2016: VMware and AWS Together - VMware Cloud on AWS (ENT317) (Amazon Web Services)
VMware Cloud™ on AWS brings VMware's enterprise-class Software-Defined Data Center software to Amazon's public cloud, delivered as an on-demand, elastically scalable, cloud-based service that is sold, operated, and supported by VMware, for any application and optimized for next-generation, elastic, bare-metal AWS infrastructure. This solution enables customers to use a common set of software and tools to manage both their AWS-based and on-premises vSphere resources consistently. Further, virtual machines in this environment have seamless access to the broad range of AWS services as well. This session will introduce this exciting new service and examine some of the use cases and benefits of the service. The session will also include a VMware Tech Preview that demonstrates standing up a complete SDDC cluster on AWS and performing various operations using standard tools like vCenter.
Business Continuity and Disaster Recovery for Oracle 11g Enabled by EMC Symmet... (EMC)
This white paper describes a data protection and disaster recovery solution for virtualized Oracle Database 11g OLTP environments, enabled by EMC Symmetrix VMAXe with Enginuity for VMAXe, EMC RecoverPoint, and VMware vCenter Site Recovery Manager. It covers both local data protection and automated failover and failback between remote sites.
Active Directory Introduction
Active Directory basics
Components of Active Directory
Active Directory hierarchical structure
Active Directory database
Flexible Single Master Operations (FSMO) roles
Active Directory services
Some useful tools
vSphere 5 - Image Builder and Auto Deploy (Eric Sloof)
Auto Deploy is a new method for provisioning ESXi hosts in vSphere 5.0. At a high level, the ESXi host boots over the network (using PXE/gPXE) and contacts the Auto Deploy server, which loads ESXi into the host's memory. After loading the ESXi image, the Auto Deploy server coordinates with vCenter Server to configure the host using Host Profiles and Answer Files (Answer Files are new in 5.0). Auto Deploy eliminates the need for a dedicated boot device, enables rapid deployment for many hosts, and also simplifies ESXi host management by eliminating the need to maintain a separate "boot image" for each host.
Image profiles and VIBs are available in software depots from VMware or from VMware partners, and managed using the Image Builder PowerCLI. You can use software depots, image profiles, and software packages (VIBs) to specify the software you want to use during installation or upgrade of an ESXi host. Understanding how depots, profiles, and VIBs are structured and where you can use them is a prerequisite for in-memory installation of a custom ESXi ISO, for provisioning ESXi hosts using VMware Auto Deploy, and for some custom upgrade operations.
VIB: A VIB is an ESXi software package. VMware and its partners package solutions, drivers, CIM providers, and applications that extend the ESXi platform as VIBs. VIBs can be used to create and customize ISO images or installed asynchronously onto ESXi hosts. VIBs are available from software depots.
Image Profile: An image profile defines an ESXi image and consists of VIBs (software packages). An image profile always includes a base VIB, and might include additional VIBs. You examine and define an image profile using the Image Builder PowerCLI.
Using Packer to Migrate XenServer Infrastructure to CloudStackTim Mackey
When adopting IaaS cloud solutions, one of the biggest challenges will be template management. Creating that first template can easily be more challenging than deploying the cloud software itself. In this presentation two options are presented for template creation: using a kickstart file, or cloning a running VM, with Packer from packer.io as the core framework.
This presentation was delivered at CloudStack Days 2015 in Austin, Texas. Two demos were given. The first demo used an existing XenServer environment to create a golden master from an ISO and kickstart file, then automatically upload it to a CloudStack management server for deployment. The second demo cloned a running VM and created a template which was then uploaded to CloudStack. In the case of the running VM, migration occurred without any user interruption. The VM in question was a CentOS 7 image, and the hypervisor for both the source infrastructure and CloudStack compute was XenServer based.
McAfee MOVE (Management for Optimized Virtual Environments) provides security management for virtual environments. Endpoint security solutions are also presented.
In 2010, Microsoft released a bold new feature set to support management of virtual test environments. "Lab Management" provided the ability to easily spin up test environments, perform automated builds and deployments, run automated tests, and collect diagnostic data. Unfortunately, many teams were discouraged by the infrastructure requirements. Now, with Visual Studio 2012 and standard environments, even small teams or groups that can't use Microsoft's Hyper-V can still benefit from lab management. This session will demonstrate how to configure your existing environments for many of the same compelling features formerly available only with Hyper-V.
Virtualization and how it leads to cloud (Huzefa Husain)
What exactly is virtualization?
Types of virtualization
Current trend in virtualization
How virtualization leads to Cloud Computing?
Cloud Computing Stack
VMworld 2013: vSphere Web Client - Technical Walkthrough (VMworld)
VMworld 2013
Ameet Jani, VMware
Justin King, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
7. New Virtual Machine Features
§ vSphere 5.0 supports the industry’s most capable VMs
VM Scalability
• 32 virtual CPUs per VM
• 1TB RAM per VM
• 4x previous capabilities!
Richer Desktop Experience
• 3D graphics
• Client-connected USB devices
• USB 3.0 devices
Broader Device Coverage
• Smart Card Readers for VM Console Access
• EFI BIOS
• VM BIOS boot order config API and PowerCLI interface
Other new features
• UI for multi-core virtual CPUs
• Support for Mac OS X servers
• Extended VMware Tools compatibility
Items which require HW version 8 in blue
9. ESXi 5.0 Firewall Features
§ Capabilities
• ESXi 5.0 has a new firewall engine which is not based on iptables.
• The firewall is service oriented, and is a stateless firewall.
• Users have the ability to restrict access to specific services based on IP address/subnet mask.
§ Management
• The GUI for configuring the firewall on ESXi 5.0 is similar to that used with the classic ESX firewall — customers familiar with the classic ESX firewall should not have any difficulty using the ESXi 5.0 version.
• There is a new esxcli interface (esxcfg-firewall is deprecated in ESXi 5.0).
• There is Host Profile support for the ESXi 5.0 firewall.
• Customers who upgrade from classic ESX to ESXi 5.0 will have their firewall settings preserved.
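The new esxcli firewall interface mentioned above can be scripted from the ESXi shell. A minimal sketch, assuming the built-in sshServer ruleset and an example management subnet (the subnet is a placeholder for your own environment):

```shell
# List all firewall rulesets and their current state
esxcli network firewall ruleset list

# Enable the SSH server ruleset
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true

# Restrict the ruleset to a specific subnet instead of allowing all IPs
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24
```

Because the firewall is stateless and service oriented, these restrictions apply per ruleset (service), not per connection.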
10. UI: Security Profile
§ The ESXi Firewall can be managed via the vSphere Client.
§ Through Configuration > Security Profile, one can observe the enabled incoming/outgoing services, the opened port list for each service, and the allowed IP list for each service.
11. UI: Security Profile > Services > Properties
§ Through the service's Properties, one can configure whether a service should be automatically started.
§ Services can also be stopped & started on the fly.
13. Composition of an ESXi Image
[Diagram: an ESXi image is composed of the core hypervisor, CIM providers, plug-in components, and drivers.]
14. ESXi Image Deployment
§ Challenges
• The standard ESXi image from the VMware download site is sometimes limited:
• Doesn't have all drivers or CIM providers for specific hardware
• Doesn't contain vendor-specific plug-in components
[Diagram: the standard ESXi ISO includes base providers and base drivers, but may be missing the CIM provider or driver needed for particular hardware.]
15. Building an Image
Start a PowerCLI session on a Windows host with PowerCLI and the Image Builder Snap-in.
16. Building an Image
Activate the Image Builder Snap-in.
17. Building an Image
Connect to depot(s) containing ESXi VIBs, driver VIBs, and OEM VIBs.
18. Building an Image
Clone and modify an existing Image Profile.
19. Building an Image
Generate the new image: either an ISO image or a PXE-bootable image.
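The image-building steps above map onto a handful of Image Builder PowerCLI cmdlets. A sketch, assuming a reachable depot URL and example names (MyProfile, example-driver, and the file paths are placeholders):

```powershell
# Connect to a software depot (the VMware online depot URL is one example)
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Inspect the image profiles the depot offers
Get-EsxImageProfile

# Clone an existing profile so it can be modified
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "MyProfile" -Vendor "Example"

# Add a driver or OEM VIB to the clone
Add-EsxSoftwarePackage -ImageProfile "MyProfile" -SoftwarePackage "example-driver"

# Generate the new image: a bootable ISO, or an offline bundle usable for PXE/Auto Deploy
Export-EsxImageProfile -ImageProfile "MyProfile" -ExportToIso -FilePath C:\images\MyESXi.iso
Export-EsxImageProfile -ImageProfile "MyProfile" -ExportToBundle -FilePath C:\images\MyESXi.zip
```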
21. vSphere 5.0 – Auto Deploy
Overview
• Deploy and patch vSphere hosts in minutes using a new "on the fly" model
• Coordination with vSphere Host Profiles
Benefits
• Rapid provisioning: initial deployment and patching of hosts
• Centralized host and image management
• Reduced manual deployment and patch processes
[Diagram: vCenter Server with Auto Deploy uses Image Profiles and Host Profiles to provision multiple vSphere hosts.]
22. Deploying a Datacenter Has Just Gotten Much Easier
Before: 30 minutes per host, repeated for 40 hosts. Total time: 20 hours!
After: Total time: 10 minutes!
23. Auto Deploy Example – Initial Boot
Provision a new host.
[Diagram, repeated on the following slides: vCenter Server holds Image Profiles, Host Profiles, and a Rules Engine; the Auto Deploy server ("Waiter") serves ESXi VIBs, driver VIBs, and OEM VIBs; TFTP and DHCP servers support network boot; hosts are organized into Cluster A and Cluster B.]
24. Auto Deploy Example – Initial Boot
1) The host PXE boots: it sends a DHCP request and downloads the gPXE image from the TFTP server.
25. Auto Deploy Example – Initial Boot
2) The host contacts the Auto Deploy server with an HTTP boot request.
26. Auto Deploy Example – Initial Boot
3) The rules engine determines the Image Profile, Host Profile, and cluster for the host (in this example: Image Profile X, Host Profile 1, Cluster B).
27. Auto Deploy Example – Initial Boot
4) The image is pushed to the host and the Host Profile is applied; the Image Profile and Host Profile are cached.
28. Auto Deploy Example – Initial Boot
5) The host is placed into its cluster.
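The rule decisions in the initial-boot sequence are defined up front with Auto Deploy PowerCLI rules. A sketch; the profile names, cluster name, and IP range are placeholders:

```powershell
# Map hosts in an IP range to an image profile, a host profile, and a cluster
New-DeployRule -Name "InitialBootRule" `
    -Item "MyImageProfile", "HostProfile1", "ClusterB" `
    -Pattern "ipv4=192.168.1.10-192.168.1.50"

# Activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule "InitialBootRule"
```

On its next boot, a host matching the pattern receives the image, gets the host profile applied, and lands in the specified cluster.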
29. vSphere 5.0 – Networking
• LLDP
• NetFlow
• Port Mirror
• NETIOC – New Traffic Types
30. What Is Discovery Protocol? (Link Layer Discovery Protocol)
§ A discovery protocol is a data link layer network protocol used to discover the capabilities of network devices.
§ A discovery protocol allows customers to automate the deployment process in a complex environment through its ability to:
• Discover capabilities of network devices
• Discover configuration of neighboring infrastructure
§ The vSphere infrastructure supports the following discovery protocols:
• CDP (Standard vSwitches, Distributed vSwitches)
• LLDP (Distributed vSwitches)
§ LLDP is a standards-based, vendor-neutral discovery protocol (802.1AB)
32. vSphere 5.0 – Networking
• LLDP
• NetFlow
• Port Mirror
• NETIOC – New Traffic Types
33. What Is NetFlow?
§ NetFlow is a networking protocol that collects IP traffic info as records and sends them to third-party collectors such as CA NetQoS, NetScout, etc.
[Diagram: VM A and VM B on a vDS host; VM traffic crosses a trunk to a physical switch, and a NetFlow session exports records to a collector.]
§ The collector/analyzer reports on various information such as:
• Current top flows consuming the most bandwidth
• Which flows are behaving irregularly
• Number of bytes a particular flow has sent and received in the past 24 hours
34. NetFlow with Third-Party Collectors
[Diagram: internal and external flows on a vDS host are exported via NetFlow sessions to the NetScout nGenius and CA NetQoS collectors; external systems generate the external flows.]
36. What Is Port Mirroring (DVMirror)?
§ Port Mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected on another switch port.
§ Port Mirroring is also referred to as SPAN (Switched Port Analyzer) on Cisco switches.
§ Port Mirroring overcomes the limitations of promiscuous mode by providing granular control over which traffic can be monitored:
• Ingress source
• Egress source
§ Helps in troubleshooting network issues by providing access to:
• Inter-VM traffic
• Intra-VM traffic
37. Port Mirror Traffic Flow When Mirror Destination Is a VM
[Diagram: for inter-VM traffic, mirror flows are sent from the ingress source and egress source ports on the vDS to the destination VM; for intra-VM traffic involving an external system, the same ingress/egress source-to-destination mirror flows apply across the vDS.]
39. What Is Network I/O Control (NETIOC)?
§ Network I/O Control is a traffic management feature of the vSphere Distributed Switch (vDS).
§ In consolidated I/O (10GbE) deployments, this feature allows customers to:
• Allocate shares and limits to different traffic types
• Provide isolation
• One traffic type should not dominate others
• Guarantee service levels when different traffic types compete
§ Enhanced Network I/O Control — vSphere 5.0 builds on previous versions of the Network I/O Control feature by providing:
• User-defined network resource pools
• A new Host Based Replication (HBR) traffic type
• QoS tagging
40. NETIOC VM Groups
[Diagram: a VMware vNetwork Distributed Switch with Network I/O Control dividing a total bandwidth of 20Gb (2 x 10GigE uplinks) among vMotion, iSCSI, HBR, NFS, FT, and VM network traffic; user-defined resource pools VMRG1, VMRG2, and VMRG3 subdivide the VM traffic.]
41. NETIOC VM Traffic
[Diagram: Coke and Pepsi VMs plus vMotion, HBR, FT, Mgmt, NFS, and iSCSI traffic on a vNetwork Distributed Switch. The server admin configures the vNetwork Distributed Portgroup teaming policy (Load Based Teaming), the shaper, and per-uplink schedulers. Limit enforcement is per team; shares enforcement is per uplink.]

Traffic  | Shares | Limit (Mbps) | 802.1p
vMotion  | 5      | 150          | 1
Mgmt     | 30     | --           | --
NFS      | 10     | 250          | --
iSCSI    | 10     | --           | 2
FT       | 60     | --           | --
HBR      | 10     | --           | --
VM       | 20     | 2000         | 4
- Pepsi  | 5      | --           | --
- Coke   | 15     | --           | --
43. vSphere HA Primary Components
§ Every host runs an agent
• Referred to as 'FDM' or Fault Domain Manager
• One of the agents within the cluster is chosen to assume the role of the Master
• There is only one Master per cluster during normal operations
• All other agents assume the role of Slaves
§ There is no more Primary/Secondary concept with vSphere HA
[Diagram: a cluster of four hosts (ESX 01–04) managed by vCenter.]
44. The Master Role
§ An FDM Master monitors:
• ESX hosts and virtual machine availability.
• All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted.
• The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it.
§ An FDM Master manages:
• The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster.
• The list of protected VMs. The Master updates this list after each user-initiated power on or power off.
45. The Slave Role
§ A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master.
§ It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring.
§ It monitors the health of the Master. If the Master should fail, it participates in the election process for a new Master.
§ It maintains the list of powered-on VMs.
46. Storage Level Communications
§ One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication.
§ The datastores used for this are referred to as 'Heartbeat Datastores'.
§ This provides for increased communication redundancy.
§ Heartbeat datastores are used as a communication channel only when the management network is lost - such as in the case of isolation or network partitioning.
47. Storage Level Communications
§ Heartbeat Datastores allow a Master to:
• Monitor the availability of Slave hosts and the VMs running on them
• Determine whether a host has become network isolated rather than network partitioned
• Coordinate with other Masters - since a VM can be owned by only one Master, Masters will coordinate VM ownership through datastore communication
§ By default, vCenter will automatically pick 2 datastores. These 2 datastores can also be selected by the user.
49. vSphere 5.0 – vMotion
§ The original vMotion keeps getting better!
§ Multi-NIC support
• Support for up to four 10Gbps or sixteen 1Gbps NICs (each NIC must have its own IP).
• A single vMotion can now scale over multiple NICs (load balanced across multiple NICs).
• Faster vMotion times allow for a higher number of concurrent vMotions.
§ Reduced application overhead
• The Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success.
• Ensures less than 1 second switchover time in almost all cases.
§ Support for higher-latency networks (up to ~10ms)
• Extends vMotion capabilities over slower networks.
50. Multi-NIC Throughput
[Chart: vMotion throughput (Gbps, 0–30) for one NIC, two NICs, and three NICs*; throughput scales up with the number of NICs.]
* Limited by throughput of the PCI-E bus in this particular setup.
51. vSphere 5.0 – DRS/DPM
§ DRS/DPM improvements focus on cross-product integration.
• Introduce support for "Agent VMs."
• An Agent VM is a special-purpose VM tied to a specific ESXi host.
• An Agent VM cannot / should not be migrated by DRS or DPM.
• Special handling of Agent VMs is now afforded by DRS/DPM.
§ A DRS/DPM cluster hosting Agent VMs:
• Accounts for Agent VM reservations (even when powered off).
• Waits for Agent VMs to be powered on and ready before placing client VMs.
• Will not try to migrate an Agent VM (Agent VMs are pinned to their host).
§ Maintenance Mode / Standby Mode support
• Agent VMs do not have to be evacuated for a host to enter maintenance or standby mode.
• When a host enters maintenance/standby mode, Agent VMs are powered off (after client VMs are evacuated).
• When a host exits maintenance/standby mode, Agent VMs are powered on (before client VMs are placed).
53. vSphere Web Client Architecture
[Diagram: the vSphere Web Client (Flex Client) runs within a browser. An Application Server provides a scalable back end. The Query Service obtains live data from the core vCenter Server process; vCenter can run in either single or Linked Mode operation.]
55. Features of the vSphere Web Client
§ Ready Access to Common Actions
• Quick access to common tasks provided out of the box
56. Introducing vCenter Server
Appliance
§ The vCenter Server Appliance is the answer!
• Simplifies Deployment and Configuration
• Streamlines patching and upgrades
• Reduces the TCO for vCenter
§ Enables companies to respond to business faster!
[Diagram: VMware vCenter Server Virtual Appliance – Automation, Visibility, Scalability.]
57. Component Overview
§ vCenter Server Appliance (VCSA) consists of:
• A pre-packaged 64-bit application running on SLES 11
• Distributed with sparse disks
• Disk footprint: Distribution 3.6GB | Min deployed ~5GB | Max deployed ~80GB
• Memory footprint
• A built-in enterprise-level database with optional support for a remote
Oracle database.
• Limits are the same for VC and VCSA
• Embedded DB: 5 hosts / 50 VMs
• External DB: 300 hosts / 3000 VMs (64 bit)
• A web-based configuration interface
58. Feature Overview
§ vCenter Server Appliance supports:
• The vSphere Web Client
• Authentication through AD and NIS
• Feature parity with vCenter Server on Windows, except:
• Linked Mode support – requires ADAM (AD LDS)
• IPv6 support
• External DB support
• Oracle is the only supported external DB for the first release
• No vCenter Heartbeat support – HA is provided through vSphere HA
59. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
60. Introduction to VMFS-5
§ Enhanced Scalability
• Increased filesystem size limits support much larger single-extent
VMFS-5 volumes.
• Support for single-extent 64TB datastores.
§ Better Performance
• Uses VAAI locking mechanism with more tasks.
§ Easier to manage and less overhead
• Space reclamation on thin provisioned LUNs.
• Smaller sub blocks.
• Unified Block size.
61. VMFS-5 vs VMFS-3 Feature Comparison
Feature                                     VMFS-3               VMFS-5
2TB+ VMFS volumes                           Yes (using extents)  Yes
Support for 2TB+ physical RDMs              No                   Yes
Unified block size (1MB)                    No                   Yes
Atomic Test & Set enhancements
(part of VAAI, locking mechanism)           No                   Yes
Sub-blocks for space efficiency             64KB (max ~3k)       8KB (max ~30k)
Small file support                          No                   1KB
62. VMFS-3 to VMFS-5 Upgrade
§ The upgrade to VMFS-5 is clearly displayed in the vSphere
Client under the Configuration > Storage view.
§ It is also displayed in the Datastores Configuration view.
§ Non-disruptive upgrades.
63. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
64. VAAI – Introduction
§ vStorage API for Array Integration = VAAI
§ VAAI’s main purpose is to leverage array capabilities.
• Offloading tasks to reduce overhead
• Benefit from enhanced mechanisms arrays mechanisms
§ The “traditional” VAAI primitives have been improved.
§ Multiple new primitives have been introduced.
§ Support for NAS!
[Diagram: application I/O from the hypervisor to the array – non-VAAI copies travel through the fabric, while VAAI offloads the LUN-to-LUN copy inside the array.]
65. VAAI Primitive Updates in
vSphere 5.0
§ vSphere 4.1 shipped a default plugin for Write Same, as that
primitive was fully T10 compliant; ATS and Full Copy were not.
• The T10 organization is responsible for SCSI standardization (SCSI-3),
a standard used by many storage vendors.
§ vSphere 5.0 integrates all three T10-compliant primitives
into the ESXi stack.
• This allows arrays which are T10 compliant to leverage these primitives
with the default VAAI plugin in vSphere 5.0.
§ It should also be noted that the ATS primitive has been
extended in vSphere 5.0 / VMFS-5 to cover even more
operations, resulting in even better performance and greater
scalability.
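The ATS primitive can be pictured as a compare-and-swap on a single on-disk lock record, instead of a SCSI reservation that holds the whole LUN. The toy model below is illustrative only (the class and lock layout are assumptions):

```python
# Toy model of the ATS (Atomic Test & Set) primitive: the array atomically
# updates one on-disk lock record iff it still holds the expected value,
# so unrelated locks on the same LUN remain available to other hosts.
import threading

class Lun:
    def __init__(self, n_locks):
        self.locks = [None] * n_locks       # on-disk lock records
        self._atomic = threading.Lock()     # stands in for array-side atomicity

    def ats(self, index, expected, new_owner):
        """Atomically set lock[index] to new_owner iff it equals expected."""
        with self._atomic:
            if self.locks[index] == expected:
                self.locks[index] = new_owner
                return True
            return False

lun = Lun(8)
assert lun.ats(3, None, "host-A")      # host A grabs lock 3
assert not lun.ats(3, None, "host-B")  # host B loses the race on lock 3
assert lun.ats(5, None, "host-B")      # ...but other locks stay usable
```

Because only one lock record is touched, more VMFS operations can safely share a datastore, which is the scalability gain the slide describes.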
66. Introducing VAAI NAS Primitives
§ With this primitive, we will enable hardware acceleration/
offload features for NAS datastores.
§ The following primitives are defined for VAAI NAS:
• Full File Clone – Similar to the VMFS block cloning. Allows offline VMDKs
to be cloned by the Filer.
• Note that hot migration via Storage vMotion on NAS is not hardware accelerated.
• Reserve Space – Allows creation of thick VMDK files on NAS.
§ NAS VAAI plugins are not shipped with ESXi 5.0. These
plugins will be developed and distributed by the storage
vendors, but signed by the VMware certification program.
67. VAAI NAS: Thick Disk Creation
§ Without the VAAI NAS primitives, only Thin format is
available.
§ With the VAAI NAS primitives, Flat (thick), Flat pre-initialized
(eager zeroed-thick) and Thin formats are available.
68. Introducing VAAI Thin
Provisioning
§ What are the driving factors behind VAAI TP?
• Provisioning new LUNs to a vSphere environment (cluster) is
complicated.
§ Strategic Goal:
• We want to make the act of physical storage provisioning in a vSphere
environment extremely rare.
• LUNs should be incredibly large address spaces, able to
handle any VM workload.
§ VAAI TP features include:
• Dead space reclamation.
• Monitoring of the space.
69. VAAI Thin Provisioning – Dead Space Reclamation
§ Dead space is previously written blocks that are no longer
used by the VM, for instance after a Storage vMotion.
§ vSphere conveys block information to the storage system
via VAAI, and the storage system reclaims the dead blocks.
• Storage vMotion, VM deletion and swap file deletion can trigger
the thin LUN to free some physical space.
• ESXi 5.0 uses a standard SCSI command for dead space reclamation.
[Diagram: Storage vMotion from VMFS volume A to VMFS volume B, after which the dead blocks on volume A are reclaimed.]
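The reclamation flow can be sketched as a small model (illustrative only; the class names are assumptions): when blocks become dead, an unmap call returns them to the thin pool.

```python
# Sketch of VAAI thin-provisioning dead-space reclamation: after a
# Storage vMotion, the source blocks are dead; VMFS tells the array
# (via a standard SCSI unmap-style command) to release them.

class ThinLun:
    def __init__(self):
        self.allocated = set()          # physical blocks backing the LUN

    def write(self, blocks):
        self.allocated |= set(blocks)

    def unmap(self, blocks):
        self.allocated -= set(blocks)   # array reclaims the dead blocks

def storage_vmotion(src: ThinLun, dst: ThinLun, vm_blocks):
    dst.write(vm_blocks)                # copy the VM to the destination
    src.unmap(vm_blocks)                # source blocks are now dead space

src, dst = ThinLun(), ThinLun()
src.write(range(100))
storage_vmotion(src, dst, list(range(100)))
assert len(src.allocated) == 0 and len(dst.allocated) == 100
```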
70. Current “Out Of Space” User Experience
• No space-related warnings
• No mitigation steps available
• Space exhaustion: VMs and LUN offline
71. “Out Of Space” User Experience with VAAI Extensions
• Space exhaustion warning in the UI
• Storage vMotion based evacuation, or add space
• On space exhaustion, affected VMs are paused;
the LUN stays online awaiting space allocation
72. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
73. Storage vMotion – Introduction
§ In vSphere 5.0, a number of new enhancements were made
to Storage vMotion.
• Storage vMotion will work with virtual machines that have snapshots,
which means coexistence with other VMware products and features such as
VCB, VDR and HBR.
• Storage vMotion will support the relocation of linked clones.
• Storage vMotion has a new use case – Storage DRS – which uses Storage
vMotion for Storage Maintenance Mode and Storage Load Balancing (space
or performance).
74. Storage vMotion Architecture
Enhancements (1 of 2)
§ In vSphere 4.1, Storage vMotion uses the Changed Block Tracking
(CBT) method to copy disk blocks between source and destination.
§ The main challenge in this approach is that the disk pre-copy phase
can take a while to converge, and can sometimes result in Storage
vMotion failures if the VM was running a very I/O intensive load.
§ Mirroring I/O between the source and the destination disks has
significant gains when compared to the iterative disk pre-copy
mechanism.
§ In vSphere 5.0, Storage vMotion uses a new mirroring architecture to
provide the following advantages over previous versions:
• Guarantees migration success even when facing a slower destination.
• More predictable (and shorter) migration time.
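The convergence argument can be made concrete with a small model (illustrative numbers, not VMware's implementation): iterative pre-copy must re-copy blocks dirtied during each pass, while mirror mode sends each guest write to both disks during a single-pass copy.

```python
# Contrast of the two Storage vMotion approaches: vSphere 4.1 iteratively
# re-copies blocks dirtied during the copy, which may not converge under
# heavy I/O; vSphere 5.0 mirrors guest writes to both disks, so a single
# copy pass always suffices.

def iterative_precopy(disk_blocks, dirty_per_pass, max_passes=100):
    """Passes needed until a pass finds nothing left to copy (or gives up)."""
    to_copy = disk_blocks
    for passes in range(1, max_passes + 1):
        if to_copy == 0:
            return passes - 1
        to_copy = dirty_per_pass  # next pass re-copies blocks dirtied meanwhile
    return max_passes             # failed to converge within the limit

def mirror_mode(disk_blocks):
    """Writes during the copy go to source AND destination: one pass, done."""
    return 1

assert mirror_mode(10_000) == 1
assert iterative_precopy(10_000, 0) <= 2      # idle VM converges quickly
assert iterative_precopy(10_000, 500) == 100  # busy VM may never converge
```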
76. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
77. Storage I/O Control Phase 2 and
Refreshing Memory
§ In many customer environments, storage is mostly accessed from
storage arrays over SAN, iSCSI or NAS.
§ One ESXi host can affect the I/O performance of others by issuing a
large number of requests on behalf of one of its virtual machines.
§ Thus the throughput/bandwidth available to each ESXi host may
vary drastically, leading to highly variable I/O performance for VMs.
§ To ensure stronger I/O guarantees, we implemented Storage I/O
Control in vSphere 4.1 for block storage which guarantees an
allocation of I/O resources on a per VM basis.
§ As of vSphere 5.0 we also support SIOC for NFS based storage!
§ This capability is essential to provide better performance for I/O
intensive and latency-sensitive applications such as database
workloads, Exchange servers, etc.
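The core SIOC idea can be sketched as proportional-share throttling (a simplified model with illustrative threshold and share values, not VMware's algorithm):

```python
# Sketch of Storage I/O Control: when datastore latency crosses the
# congestion threshold, I/O capacity is divided among VMs in proportion
# to their configured shares; below the threshold, no throttling occurs.

CONGESTION_THRESHOLD_MS = 30  # illustrative; the real threshold is configurable

def allocate_iops(total_iops, vm_shares, latency_ms):
    """Return per-VM IOPS; None means unthrottled (no congestion)."""
    if latency_ms < CONGESTION_THRESHOLD_MS:
        return {vm: None for vm in vm_shares}
    total_shares = sum(vm_shares.values())
    return {vm: total_iops * s / total_shares for vm, s in vm_shares.items()}

shares = {"exchange": 2000, "data-mining": 500}
alloc = allocate_iops(10_000, shares, latency_ms=45)
assert alloc["exchange"] == 8000 and alloc["data-mining"] == 2000
```

This is what keeps a latency-sensitive Exchange VM from being crowded out by a data-mining workload on the same datastore, as the next slide pictures.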
78. Storage I/O Control Refreshing Memory
[Diagram: an NFS/VMFS datastore shared by online-store (VIP), Microsoft Exchange (VIP) and data-mining VMs – “what you see” without SIOC, the data-mining workload crowds out the VIP workloads; “what you want to see” with SIOC, the VIP workloads get their share.]
79. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
80. What Does Storage DRS Solve?
§ Without Storage DRS:
• Identify the datastore with the most disk space and lowest latency.
• Validate which virtual machines are placed on the datastore and ensure
there are no conflicts.
• Create Virtual Machine and hope for the best.
§ With Storage DRS:
• Automatic selection of the best placement for your VM.
• Advanced balancing mechanism to avoid storage performance bottlenecks
or “out of space” problems.
• Affinity rules.
81. What Does Storage DRS Provide?
§ Storage DRS provides the following:
1. Initial Placement of VMs and VMDKS based on available space and
I/O capacity.
2. Load balancing between datastores in a datastore cluster via Storage
vMotion based on storage space utilization.
3. Load balancing via Storage vMotion based on I/O metrics, i.e. latency.
§ Storage DRS also includes affinity/anti-affinity rules for VMs
and VMDKs:
• VMDK Affinity – Keep a VM’s VMDKs together on the same datastore.
This is the default affinity rule.
• VMDK Anti-Affinity – Keep a VM’s VMDKs separate on different datastores.
• Virtual Machine Anti-Affinity – Keep VMs separate on different datastores.
§ Affinity rules cannot be violated during normal operations.
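Initial placement under these rules can be sketched as follows (an illustrative model, not VMware's algorithm; the datastore names and sizes are made up): VMDK affinity keeps all of a VM's disks on one datastore, and VM anti-affinity rejects datastores already hosting a peer.

```python
# Sketch of SDRS initial placement honoring affinity rules: all of the
# VM's disks land on one datastore (VMDK affinity, the default), and
# datastores hosting an anti-affine peer VM are excluded.

def place_vm(vm, datastores, anti_affinity_peers=()):
    """Pick the datastore with the most free space that violates no rule."""
    candidates = [
        ds for ds in datastores
        if ds["free_gb"] >= sum(vm["disks_gb"])              # room for ALL disks
        and not (set(ds["vms"]) & set(anti_affinity_peers))  # no anti-affine peer
    ]
    if not candidates:
        raise RuntimeError("no datastore satisfies the rules")
    return max(candidates, key=lambda ds: ds["free_gb"])["name"]

dss = [
    {"name": "ds1", "free_gb": 300, "vms": ["web-a"]},
    {"name": "ds2", "free_gb": 260, "vms": []},
]
vm = {"name": "web-b", "disks_gb": [40, 100]}
# web-b is anti-affine to web-a, so ds1 (more free space) is excluded:
assert place_vm(vm, dss, anti_affinity_peers=["web-a"]) == "ds2"
```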
82. Datastore Cluster
§ An integral part of SDRS is to create a group of datastores called
a datastore cluster.
• Datastore cluster without Storage DRS – simply a group of datastores.
• Datastore cluster with Storage DRS – a load-balancing domain similar to
a DRS cluster.
§ A datastore cluster without SDRS is just a datastore folder;
it is the functionality provided by SDRS that makes it more
than just a folder.
[Diagram: four 500GB datastores aggregated into a 2TB datastore cluster.]
83. Storage DRS Operations – Initial Placement (1 of 4)
§ Initial placement – VM/VMDK create/clone/relocate.
• When creating a VM you select a datastore cluster rather than an
individual datastore and let SDRS choose the appropriate datastore.
• SDRS will select a datastore based on space utilization and I/O load.
• By default, all the VMDKs of a VM will be placed on the same datastore
within a datastore cluster (VMDK affinity rule), but you can choose to
have VMDKs assigned to different datastores.
[Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.]
84. Storage DRS Operations – Load Balancing (2 of 4)
§ Load balancing – SDRS triggers on space usage and latency thresholds.
§ The algorithm makes migration recommendations when I/O response
time and/or space utilization thresholds have been exceeded.
• Space utilization statistics are constantly gathered by vCenter; default
threshold 80%.
• I/O load trend is currently evaluated every 8 hours based on the past day's
history; default threshold 15ms.
§ Load Balancing is based on I/O workload and space which ensures
that no datastore exceeds the configured thresholds.
§ Storage DRS will do a cost / benefit analysis!
§ For I/O load balancing Storage DRS leverages Storage I/O Control
functionality.
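The trigger logic above can be sketched directly from the stated defaults (80% space, 15 ms latency). This is an illustrative check only; real SDRS also weighs each move with a cost/benefit analysis:

```python
# Sketch of the SDRS trigger: a datastore needs rebalancing when its
# space utilization exceeds 80% or its average I/O latency exceeds 15 ms
# (the default thresholds named on the slide).

SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15.0

def needs_rebalance(ds):
    used_frac = ds["used_gb"] / ds["capacity_gb"]
    return used_frac > SPACE_THRESHOLD or ds["avg_latency_ms"] > LATENCY_THRESHOLD_MS

def recommendations(cluster):
    """Name the datastores whose thresholds are exceeded."""
    return [ds["name"] for ds in cluster if needs_rebalance(ds)]

cluster = [
    {"name": "ds1", "used_gb": 420, "capacity_gb": 500, "avg_latency_ms": 8.0},
    {"name": "ds2", "used_gb": 200, "capacity_gb": 500, "avg_latency_ms": 22.0},
    {"name": "ds3", "used_gb": 300, "capacity_gb": 500, "avg_latency_ms": 5.0},
]
assert recommendations(cluster) == ["ds1", "ds2"]  # 84% full; 22 ms latency
```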
86. Storage DRS Operations – Datastore Maintenance Mode
§ Datastore maintenance mode:
• Evacuates all VMs and VMDKs from the selected datastore.
• Note that this action will not move VM templates.
• Currently, SDRS only handles registered VMs.
[Diagram: placing VOL1 of a 2TB datastore cluster (VOL1–VOL4) in maintenance mode.]
87. Storage DRS Operations (4 of 4)
§ VMDK affinity
• Keep a virtual machine's VMDKs together on the same datastore
• Maximize VM availability when all disks are needed in order to run
• On by default for all VMs
§ VMDK anti-affinity
• Keep a VM's VMDKs on different datastores
• Useful for separating log and data disks of database VMs
• Can select all or a subset of a VM's disks
§ VM anti-affinity
• Keep VMs on different datastores
• Similar to DRS anti-affinity rules
• Maximize availability of a set of redundant VMs
88. SDRS Scheduling
§ SDRS allows you to create a schedule to change its settings.
§ This can be useful for scenarios where you don't want VMs to migrate between
datastores, or when I/O latency might rise and trigger false alarms, e.g. during VM
backups.
90. So What Does It Look Like? Load
Balancing.
§ It will show “utilization before” and “after.”
§ There’s always the option to override the
recommendations.
91. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
92. What Is the vStorage API for Storage Awareness (VASA)?
§ VASA is an extension of the vSphere Storage APIs and of vCenter.
It allows storage arrays to integrate with vCenter for management
functionality via server-side plug-ins or Vendor Providers.
§ This in turn allows a vCenter administrator to be aware of
the topology, capabilities, and state of the physical storage
devices available to the cluster.
§ VASA enables several features.
• For example, it delivers system-defined (array-defined) capabilities that
enable profile-driven storage.
• Another example is that it provides array internal information that helps
several Storage DRS use cases to work optimally with various arrays.
93. Storage Compliancy
§ Once the VASA Provider has been successfully added to
vCenter, the VM Storage Profiles should also display the
storage capabilities provided to it by the Vendor Provider.
§ The above example contains a ‘mock-up’ of some possible
Storage Capabilities as displayed in the VM Storage Profiles.
These are retrieved from the Vendor Provider.
94. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
95. Why Profile Driven Storage? (1 of 2)
§ Problem Statement
1. Difficult to manage datastores at scale
• Including capacity planning, differentiated data services for each datastore, maintaining capacity
headroom, etc.
2. Difficult to correctly match VM SLA requirements to available storage
• Because manually choosing between many datastores and storage tiers is error-prone
• Because VM requirements are not accurately known, or may change over the VM's lifecycle
§ Related trends
• Newly virtualized Tier-1 workloads need stricter VM storage SLA promises,
because other VMs can impact their performance SLA
• Scale-out storage mixes VMs with different SLAs on the same storage
96. Why Profile Driven Storage? (2 of 2)
Save OPEX by reducing repetitive planning and effort!
§ Minimize per-VM (or per VM request) “thinking” or planning
for storage placement.
• Admin needs to plan for optimal space and I/O balancing for each VM.
• Admin needs to identify VM storage requirements and match to physical
storage properties.
§ Increase probability of “correct” storage placement and use
(minimize need for troubleshooting, minimize time for
troubleshooting).
• Admin needs more insight into storage characteristics.
• Admin needs ability to custom-tag available storage.
• Admin needs easy means to identify incorrect VM storage placement
(e.g. on incorrect datastore).
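Matching VM requirements to storage can be sketched as simple capability-set containment (an illustrative model; the capability names are assumptions, and the Celerra_NFS datastore name is borrowed from the GOLD-profile example later in the deck):

```python
# Sketch of profile-driven placement: a VM storage profile lists required
# capabilities; datastores advertising (via VASA or user-defined tags) a
# superset of those capabilities are "compatible".

def compatible(profile_caps, datastores):
    required = set(profile_caps)
    return sorted(ds for ds, caps in datastores.items() if required <= set(caps))

datastores = {
    "Celerra_NFS": {"replicated", "ssd", "raid10"},
    "Local_VMFS":  {"raid5"},
    "SAN_Bronze":  {"raid10"},
}
gold_profile = ["replicated", "ssd"]
assert compatible(gold_profile, datastores) == ["Celerra_NFS"]
```

The same containment test, run against the datastore a VM currently lives on, is what makes the compliant/non-compliant status of the later slides possible.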
97. Save OPEX by Reducing Repetitive Planning and Effort!
§ Today: identify storage characteristics (initial setup) → identify requirements → find the optimal datastore → create VM → periodically check compliance.
§ Storage DRS: identify storage characteristics and group datastores (initial setup) → identify requirements → create VM → periodically check compliance.
§ Storage DRS + profile-driven storage: discover storage characteristics and group datastores (initial setup) → select a VM storage profile → create VM.
98. Storage Capabilities and VM Storage Profiles
[Diagram: storage capabilities surfaced by VASA or user-defined → VM storage profile referencing those capabilities → profile associated with a VM → VM shown as Compliant or Not Compliant.]
99. Selecting a Storage Profile
During Provisioning
§ By selecting a VM Storage Profile, datastores are now split into
Compatible and Incompatible.
§ The Celerra_NFS datastore is the only datastore which
meets the GOLD Profile requirements – i.e. it is the only
datastore that has our user-defined storage capability
associated with it.
100. VM Storage Profile Compliance
§ Policy Compliance is visible from the Virtual Machine
Summary tab.
101. vSphere 5.0 – vStorage
• VMFS 5.0
• vStorage API for Array Integration
• Storage vMotion
• Storage I/O Control
• Storage DRS
• VMware API for Storage Awareness
• Profile Driven Storage
• FCoE – Fiber Channel over Ethernet
102. Introduction
§ Fibre Channel over Ethernet (FCoE) is an enhancement that
extends Fibre Channel onto Ethernet networks by combining two
leading-edge technologies (FC and Ethernet).
§ The FCoE adapters that VMware supports generally fall into
two categories: hardware FCoE adapters, and software FCoE
adapters which use an FCoE-capable NIC.
• Hardware FCoE adapters were supported as of vSphere 4.0.
§ FCoE-capable NICs are referred to as Converged Network
Adapters (CNAs), which carry both network and storage traffic.
§ ESXi 5.0 uses FCoE adapters to access Fibre Channel storage.
103. Software FCoE Adapters (1 of 2)
§ A software FCoE adapter is software code that performs
some of the FCoE processing.
§ This adapter can be used with a number of NICs that
support partial FCoE offload.
§ Unlike the hardware FCoE adapter, the software adapter
needs to be activated, similar to Software iSCSI.
104. Software FCoE Adapters (2 of 2)
§ Once the software FCoE adapter is enabled, a new adapter is
created, and discovery of devices can take place.
105. Conclusion
§ vSphere 5.0 has many new compelling storage features.
§ VMFS volumes can be larger than ever before.
• They can contain many more virtual machines due to VAAI
enhancements and architectural changes.
§ Storage DRS and profile-driven storage will help solve
traditional problems with virtual machine provisioning.
§ The administrative overhead will be greatly reduced by:
• VASA surfacing storage characteristics.
• Creating profiles through profile-driven storage.
• Combining multiple datastores into a large aggregate.
107. Introduction (1 of 3)
§ In vSphere 5.0, VMware releases a new storage appliance
called VSA.
• VSA is an acronym for “vSphere Storage Appliance.”
• This appliance is aimed at our SMB (Small-Medium Business) customers
who may not be in a position to purchase a SAN or NAS array for their
virtual infrastructure, and therefore do not have shared storage.
• It is the SMB market that we wish to go after with this product – our aim
is to move these customers from Essentials to Essentials+.
• Without access to a SAN or NAS array, these SMB customers are excluded
from many of the top features which are available in a VMware virtual
infrastructure, such as vSphere HA and vMotion.
• Customers who decide to deploy a VSA can now benefit from many
additional vSphere features without having to purchase a SAN or NAS
device to provide them with shared storage.
108. Introduction (2 of 3)
[Diagram: three ESXi hosts, each running a VSA virtual machine exporting an NFS volume, managed by the VSA Manager through the vSphere Client.]
§ Each ESXi server has a VSA deployed to it as a virtual machine.
§ The appliances use the available space on the local disk(s) of the ESXi
servers and present one replicated NFS volume per ESXi server. This
replication of storage makes the VSA very resilient to failures.
109. Introduction (3 of 3)
§ The NFS datastores exported from the VSA can now be used as
shared storage on all of the ESXi servers in the same datacenter.
§ The VSA creates shared storage out of local storage for use by a
specific set of hosts.
§ This means that vSphere HA and vMotion can now be made
available on low-end (SMB) configurations, without external SAN
or NAS servers.
§ There is a CAPEX saving for SMB customers, as there is no
longer a need to purchase dedicated SAN or NAS devices to
achieve shared storage.
§ There is also an OPEX saving as the management of the VSA may be
done by the vSphere Administrator and there is no need for
dedicated SAN skills to manage the appliances.
110. Supported VSA Configurations
§ The vSphere Storage Appliance can be deployed in two
configurations:
• 2 x ESXi 5.0 server configuration
• Deploys 2 vSphere Storage Appliances, one per ESXi server, plus a VSA Cluster Service on the
vCenter Server
• 3 x ESXi 5.0 server configuration
• Deploys 3 vSphere Storage Appliances, one per ESXi server
• Each of the servers must contain a new/vanilla install of ESXi 5.0.
• During the configuration, the user selects a datacenter. The user is then
presented with a list of ESXi servers in that datacenter.
• The installer will check the compatibility of each of these physical hosts
to make sure they are suitable for VSA deployment.
• The user must then select which compatible ESXi servers should
participate in the VSA cluster, i.e. which servers will host VSA nodes.
• It then ‘creates’ the storage cluster by aggregating and virtualizing each
server’s local storage to present a logical pool of shared storage.
111. Two Member VSA
[Diagram: VSA cluster with 2 members – the vCenter Server runs the VSA Manager and the VSA Cluster Service; each VSA node presents one datastore (Volume 1 → Datastore 1, Volume 2 → Datastore 2) and holds a replica of the other node's volume.]
112. Three Member VSA
[Diagram: VSA cluster with 3 members – the vCenter Server runs the VSA Manager; each VSA node presents one datastore (Volumes 1–3 → Datastores 1–3) and holds a replica of another node's volume.]
113. VSA Manager
§ The VSA Manager helps an administrator perform the
following tasks:
• Deploy vSphere Storage Appliance instances onto ESXi hosts to create a
VSA cluster
• Automatically mount the NFS volumes that each vSphere Storage
Appliance exports as datastores to the ESXi hosts
• Monitor, maintain, and troubleshoot a VSA cluster
114. Resilience
§ Many storage arrays are a single point of failure (SPOF) in
customer environments.
§ VSA is very resilient to failures.
§ If a node fails in the VSA cluster, another node will
seamlessly take over the role of presenting its NFS datastore.
§ The NFS datastore that was being presented from the failed
node will now be presented from the node that holds its
replica (mirror copy).
§ The new node will use the same NFS server IP address that
the failed node was using for presentation, so that any VMs
that reside on that NFS datastore will not be affected by the
failover.
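The failover behavior can be sketched as a small model (illustrative only; the node and volume names are made up): the node holding the failed member's replica re-exports that datastore under the same NFS server IP, so mounts survive.

```python
# Sketch of VSA failover: when a node dies, the node holding the replica
# of its volume takes over the NFS export, reusing the failed node's
# NFS server IP so client VMs are unaffected.

def fail_node(cluster, failed):
    exports = cluster.pop(failed)               # node is gone
    for volume, ip in exports.items():
        if volume.endswith("-replica"):
            continue                            # replicas need no re-export
        replica_holder = next(                  # find who mirrors this volume
            n for n, e in cluster.items() if volume + "-replica" in e
        )
        cluster[replica_holder][volume] = ip    # take over export, same IP
    return cluster

cluster = {
    "vsa1": {"vol1": "10.0.0.11", "vol2-replica": None},
    "vsa2": {"vol2": "10.0.0.12", "vol1-replica": None},
}
after = fail_node(cluster, "vsa1")
assert after["vsa2"]["vol1"] == "10.0.0.11"  # same IP, new presenter
```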
115. What’s New in VMware vCenter Site Recovery Manager v5.0 – Technical
116. vCenter Site Recovery Manager
Ensures Simple, Reliable DR
§ Site Recovery Manager complements vSphere to provide the
simplest and most reliable disaster protection and site migration
for all applications
§ Provide cost-efficient replication of applications to failover site
• Built-in vSphere Replication
• Broad support for storage-based replication
§ Simplify management of recovery and migration plans
• Replace manual runbooks with centralized recovery plans
• From weeks to minutes to set up new plan
§ Automate failover and migration
processes for reliable recovery
• Enable frequent non-disruptive testing
• Ensure fast, automated failover
• Automate failback processes
117. SRM Provides Broad Choice of Replication Options
[Diagram: two sites, each with a vCenter Server and Site Recovery Manager; VMs are replicated between sites either by vSphere Replication at the host level or by storage-based replication between arrays.]
§ vSphere Replication: simple, cost-efficient replication for Tier 2 applications and
smaller sites
§ Storage-based replication: high-performance replication for business-critical
applications in larger sites
118. SRM of Today’s High-Level Architecture
[Diagram: a “Protected” site and a “Recovery” site, each with a vSphere Client (SRM plug-in), an SRM Server with its SRA, a vCenter Server, and ESX hosts; replication software on the arrays copies VMFS volumes from the protected SAN to the recovery SAN.]
119. Technology – vSphere Replication
§ Adding native replication to SRM
• Virtual machines can be replicated regardless of what storage they
live on
• Enables replication between heterogeneous datastores
• Replication is managed as a property of a virtual machine
• Efficient replication minimizes impact on VM workloads
• Provides a guest-level copy of the VM, rather than a storage-level copy of the LUN
120. vSphere Replication Details
§ Replication Granularity per Virtual Machine
• Can opt to replicate all or a subset of the VM’s disks
• You can create the initial copy in any way you want - even via sneaker net!
• You have the option to place the replicated disks where you want.
• Disks are replicated in group consistent manner
§ Simplified Replication Management
• User selects destination location for target disks
• User selects Recovery Point Objective (RPO)
• User can supply initial copy to save on bandwidth
§ Replication Specifics
• Changes on the source disks are tracked by ESX
• Deltas are sent to the remote site
• Does not use VMware snapshots
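The delta mechanism above can be sketched as a small model (illustrative only; the class and block layout are assumptions): ESX tracks which blocks changed since the last cycle and ships only those deltas, with no VMware snapshots involved.

```python
# Sketch of vSphere Replication's delta shipping: changed-block tracking
# on the source records dirty blocks; each replication cycle sends only
# those deltas to the replica at the recovery site.

class ReplicatedDisk:
    def __init__(self, blocks):
        self.primary = dict(blocks)   # block id -> data
        self.replica = dict(blocks)   # initial copy (may be seeded offline)
        self.dirty = set()            # changed-block tracking on the source

    def guest_write(self, block, data):
        self.primary[block] = data
        self.dirty.add(block)

    def replication_cycle(self):
        delta = {b: self.primary[b] for b in self.dirty}  # deltas only
        self.replica.update(delta)
        self.dirty.clear()
        return len(delta)

disk = ReplicatedDisk({0: "a", 1: "b", 2: "c"})
disk.guest_write(1, "B")
assert disk.replication_cycle() == 1   # only the changed block ships
assert disk.replica == {0: "a", 1: "B", 2: "c"}
```

Note how repeated writes to the same block before a cycle still cost only one delta, which is what keeps the bandwidth bound by the RPO interval rather than by write volume.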
121. Replication UI
§ Select VMs to replicate from within the vSphere Client via
right-click options.
§ You can do this on one VM, or on multiple VMs at the same time!
122. vSphere Replication 1.0 Limitations
§ Focus on virtual disks of powered-on VMs.
• ISOs and floppy images are not replicated.
• Powered-off/suspended VMs not replicated.
• Non-critical files not replicated (e.g. logs, stats, swap, dumps).
§ vSR works at the virtual device layer.
• Independent of disk format specifics.
• Independent of primary-side snapshots.
• Snapshots work with vSR; the snapshot is replicated, but the VM is
recovered with snapshots collapsed.
• Physical RDMs are not supported.
§ FT, linked clones, VM templates are not supported with
HBR.
§ Automated failback of vSR-protected VMs will arrive later,
but will be supported in the future.
§ Virtual Hardware 7, or later, in the VM is required.
123. SRM Architecture with vSphere Replication
[Diagram: the same protected/recovery site layout as slide 118, with vSphere Replication components in place of array-based replication – a vRMS at each site, a vRA on each protected ESX host, and a vRS at the recovery site receiving the replicated data onto its storage.]
124. SRM Scalability
Limit                                            Maximum   Enforced
Protected virtual machines total                 3000      No
Protected virtual machines in a single
protection group                                 500       No
Protection groups                                250       No
Simultaneously running recovery plans            30        No
vSphere Replicated virtual machines              500       No
126. Planned Migration
§ New is Planned Migration: it will shut down the protected VMs
and then synchronize them!
§ Planned migration ensures application consistency and no data loss during
migration:
• Graceful shutdown of production VMs in an application-consistent state
• Data sync to complete replication of the VMs
• Recovery of the fully replicated VMs
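The three-step sequence can be sketched as follows (an illustrative model; the callback names are made up, not SRM's API):

```python
# Sketch of SRM planned migration: graceful shutdown of the protected
# VMs, a final replication sync after the last write, then recovery of
# the fully replicated VMs -- app consistency with no data loss.

def planned_migration(vms, replicate, recover):
    for vm in vms:
        vm["powered_on"] = False        # graceful, app-consistent shutdown
    for vm in vms:
        replicate(vm)                   # final sync completes replication
    return [recover(vm) for vm in vms]  # recover fully replicated VMs

log = []
vms = [{"name": "db01", "powered_on": True}]
result = planned_migration(
    vms,
    replicate=lambda vm: log.append(("sync", vm["name"])),
    recover=lambda vm: ("recovered", vm["name"]),
)
assert vms[0]["powered_on"] is False
assert log == [("sync", "db01")] and result == [("recovered", "db01")]
```

The ordering is the point: syncing only after shutdown guarantees the recovered copy includes every write the application made.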
127. Failback
§ Description
• “Single button” to fail back all recovered VMs
• Interfaces with storage to automatically reverse replication
• Replays existing recovery plans – so new virtual machines are not
part of failback
§ Benefits
• Facilitates DR operations for enterprises that are mandated to perform
a true failover as part of DR testing
• Simplifies the recovery process after a disaster
[Diagram: reverse replication from Site B (Recovery) back to Site A (Primary).]
128. Failback
§ To fail back, you first need to do a planned migration,
followed by a reprotect. Then, to do the actual failback, you
do a recovery.
§ Below is a successful recovery of a planned migration.
135. Dependencies (continued) – VM Startup Order
[Diagram: VMs arranged into five startup groups (Group 1 through Group 5), including Master Database, Database, App Server, Apache, Exchange, Mail Sync and Desktop VMs, so that dependent tiers start after the services they rely on.]