1
Storage Policy Based
Management
Cormac Hogan - @CormacJHogan
Blog – cormachogan.com
Chief Technologist - Storage & Availability
Polska VMUG
2017
2
Data in the news! (Dane w wiadomościach)
3
4
How do you manage all of that data?
How do you keep it safe?
How can you choose data services, such as
replication and encryption, on a per-application,
per-VM, or per-virtual-disk basis?
Storage Policy Based Management
Agenda
• Introduction
– vSphere APIs for Storage Awareness (VASA)
– Storage Policy Based Management (SPBM)
• SPBM and vSAN
• SPBM and Virtual Volumes (VVols)
• SPBM and VAIO (IO Filters)
– Host-based data services, from 3rd parties as well as provided by VMware
• SPBM integration with other VMware products
– with vRealize Automation / vRealize Orchestrator
– with VMware Horizon View
• Q&A
5
6
Introduction to
vSphere APIs for Storage Awareness
(VASA)
VASA – vSphere APIs for Storage Awareness
• VASA – vSphere APIs for Storage Awareness – gives
vSphere insight into data services, either on storage
systems or on hosts.
• VASA providers publish storage capabilities to
vSphere.
• With Virtual Volumes, VASA is also used to initiate
certain operations on the array from vSphere
– e.g. Create VVol, Delete VVol, Take a Snapshot
7
8
Introduction
to
Storage Policy Based Management
The Storage Policy Based Management (SPBM) Paradigm
• SPBM is the foundation of VMware's Software Defined Storage vision
• Common framework that allows storage- and host-related capabilities to be consumed via policies
• Applies data services (e.g. replication, encryption, performance) at a per-VM, or even per-VMDK, level through policies
9
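To make the SPBM framework tangible, here is a minimal sketch (not part of the original deck) that connects to vCenter and then to its Storage Policy (PBM) endpoint to list the policies SPBM knows about. It assumes pyVmomi is installed and follows the connection pattern used in VMware's community samples; the hostname and credentials are placeholders.

```python
# Minimal sketch: connect to vCenter, then to its PBM endpoint, and list storage policies.
# Assumption: pyVmomi installed; hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

ctx = ssl._create_unverified_context()   # lab convenience only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Reuse the vCenter session cookie when talking to the PBM endpoint (/pbm/sdk)
session_cookie = si._stub.cookie.split('"')[1]
VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
pbm_stub = SoapStubAdapter(host="vcenter.example.com", path="/pbm/sdk",
                           version="pbm.version.version1", sslContext=ctx)
pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
profile_mgr = pbm_si.RetrieveContent().profileManager

# Query all VM storage (requirement) policies and print their names
profile_ids = profile_mgr.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
for profile in profile_mgr.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name)

Disconnect(si)
```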
Creating Policies via Rules and Rule Sets
• Rule
– A Rule references a combination of a metadata tag and a related value, indicating the quality or
quantity of the capability that is desired.
– These two items act as a key and a value that, when referenced together through a Rule,
become a condition that must be met for compliance.
• E.g. Place VM on datastore where Encryption = True
• Rule Sets
– A Rule Set is composed of one or more Rules.
– Multiple “Rule Sets” can be leveraged to allow a single storage policy to define alternative
selection parameters, even from several storage providers.
• E.g. Place VM on vSAN datastore where Deduplication = On OR VVol datastore where Deduplication = On.
10
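To make the Rule / Rule Set terminology concrete, the following sketch builds a policy with one rule set containing a single vSAN rule (FTT = 1). The class names follow the pyVmomi PBM bindings as used in VMware's sample scripts, and the "VSAN" / "hostFailuresToTolerate" capability id is the one vSAN publishes, but both are assumptions to verify against your SDK; it continues from the previous sketch's profile_mgr.

```python
# Sketch: one rule (key "hostFailuresToTolerate", value 1) inside one rule set,
# wrapped in a capability-based profile create spec. Names are assumptions to verify.
from pyVmomi import pbm

ftt_rule = pbm.capability.CapabilityInstance(
    id=pbm.capability.CapabilityMetadata.UniqueId(
        namespace="VSAN", id="hostFailuresToTolerate"),
    constraint=[pbm.capability.ConstraintInstance(
        propertyInstance=[pbm.capability.PropertyInstance(
            id="hostFailuresToTolerate", value=1)])])

rule_set = pbm.profile.SubProfileCapabilityConstraints.SubProfile(
    name="Rule-Set 1",             # a rule set groups one or more rules
    capability=[ftt_rule])

spec = pbm.profile.CapabilityBasedProfileCreateSpec(
    name="Gold-FTT1",              # hypothetical policy name
    description="Tolerate one host failure",
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    constraints=pbm.profile.SubProfileCapabilityConstraints(
        subProfiles=[rule_set]))

profile_id = profile_mgr.PbmCreate(createSpec=spec)   # profile_mgr from the previous sketch
print("Created policy:", profile_id.uniqueId)
```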
11
12
[Diagram: SPBM as a common framework across VAIO, vSAN, VVols and VMFS]
13
SPBM and vSAN
VMware vSAN
• Scale-out storage architecture built into the hypervisor
• Aggregates locally attached storage from each ESXi
host in a cluster
• Dynamic capacity and performance scalability
• Flash optimized storage solution
• Fully integrated with vSphere:
• vCenter, vMotion, Storage vMotion, DRS, HA, FT, …
• VM-centric data operations through SPBM (policies)
14
[Diagram: a vSAN and HA/DRS cluster of esxi-01, esxi-02 and esxi-03 connected by a vSAN 10GbE network, presenting a single vSAN shared datastore]
15
vSAN VASA Provider
Storage policy rules available in vSAN 6.6.1
• Primary level of Failures To Tolerate (Primary FTT for cross-site stretched cluster protection)
• Secondary level of Failures To Tolerate (Secondary FTT for local stretched cluster protection)
• Failure Tolerance Method (Mirroring [RAID-1, default] or Erasure Coding [RAID-5/RAID-6])
• IOPS limit for object
• Disable object checksum
• Force provisioning
• Number of disk stripes per object
• Flash read cache reservation (%)
• Object space reservation (%)
• Affinity (when PFTT=0 in stretched clusters)
16
Defining a policy for vSAN
• Policies define levels of
protection and performance
• Applied at a per VM level, or
per VMDK level
• vSAN currently provides 10
unique storage capabilities to
vCenter Server
17
What-if APIs show the projected result (and cost) of applying a policy before it is committed
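A rough programmatic equivalent of that what-if/compatibility check is the PBM placement solver. The sketch below is illustrative only and reuses pbm_si, profile_mgr and profile_ids from the earlier sketches; it asks which datastores ("placement hubs") satisfy the first policy found.

```python
# Sketch: query the placement solver for datastores compatible with a policy,
# roughly what the client's compatibility/what-if check does.
placement_solver = pbm_si.RetrieveContent().placementSolver

policy = profile_mgr.PbmRetrieveContent(profileIds=[profile_ids[0]])[0]
for hub in placement_solver.PbmQueryMatchingHub(profile=policy.profileId):
    # hubId is the datastore managed object id, e.g. "datastore-42"
    print(hub.hubType, hub.hubId)
```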
Assign the policy to a new or existing VM, or VMDK
• When the policy is selected, vSAN
uses it to place/distribute the
VM/VMDK to guarantee availability
and performance
• Policies can be changed on-the-fly
– In some cases, 2X space may be
temporarily required to change it
– May also introduce rebuild/resync traffic, so the advice is to treat an on-the-fly policy change as a maintenance task
18
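For illustration, a sketch of applying a policy to an existing VM and one of its disks through the vSphere API. It assumes vm is a vim.VirtualMachine already looked up in the inventory and reuses profile_id from the policy-creation sketch; as the slide notes, such a change may trigger resync traffic.

```python
# Sketch: apply a storage policy to a VM's home objects and to its first virtual disk.
# Assumption: `vm` is a vim.VirtualMachine retrieved elsewhere; `profile_id` from the earlier sketch.
from pyVmomi import vim

profile_spec = vim.vm.DefinedProfileSpec(profileId=profile_id.uniqueId)

# Locate the first virtual disk attached to the VM
disk = next(dev for dev in vm.config.hardware.device
            if isinstance(dev, vim.vm.device.VirtualDisk))

disk_change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk,
    profile=[profile_spec])                           # policy for this VMDK only

spec = vim.vm.ConfigSpec(vmProfile=[profile_spec],    # policy for the VM home objects
                         deviceChange=[disk_change])
task = vm.ReconfigVM_Task(spec=spec)                  # may trigger resync traffic
```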
Policy Setting - Number of Failures to Tolerate (FTT)
• “FTT” defines the number of
failures a VM/VMDK can tolerate.
• For RAID-1, “n” failures tolerated
means “n+1” copies of the object
are created and “2n+1” hosts
contributing storage are required!
[Diagram: RAID-1 with FTT=1: two copies of the vmdk on separate esxi hosts plus a witness on esxi-04; each mirror copy services ~50% of the I/O]
19
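The arithmetic behind that rule, as a small worked example:

```python
# Worked example of the sizing rule on this slide: with RAID-1, FTT=n means
# n+1 replicas of the object and 2n+1 hosts contributing storage.
def raid1_requirements(ftt: int) -> tuple:
    copies = ftt + 1            # full mirror copies of the object
    min_hosts = 2 * ftt + 1     # replicas plus witnesses need 2n+1 hosts
    return copies, min_hosts

for ftt in (1, 2, 3):
    copies, hosts = raid1_requirements(ftt)
    print(f"FTT={ftt}: {copies} copies, minimum {hosts} hosts")
# FTT=1: 2 copies, minimum 3 hosts
# FTT=2: 3 copies, minimum 5 hosts
# FTT=3: 4 copies, minimum 7 hosts
```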
Policy Setting - Number of Disk Stripes Per Object
• Defines the minimum number of
capacity devices across which
each replica of a storage object
is distributed.
• Higher values may result in better performance: a larger stripe width can improve write destaging and the fetching of reads
• Higher values may put more constraints on the flexibility of meeting storage compliance policies
• Primarily used to achieve the highest performance, even at the expense of flexibility
[Diagram: FTT=1 with stripe width=2: each RAID-1 replica is a RAID-0 stripe of two components (stripe-1a/1b and stripe-2a/2b) spread across esxi-01 to esxi-03, with a witness on esxi-04]
20
Policy Setting – Fault Tolerance Method (FTM) - RAID-5
• Available in all-flash configurations only
• Example: FTT = 1 with FTM = RAID-5
– 3+1 (4-host minimum; 1 host can fail
without data loss)
– 5 hosts would tolerate 1 host failure
or maintenance mode state, and still
maintain redundancy
– 1.33x instead of 2x overhead
– ~33% savings (a 20GB disk consumes
40GB with RAID-1 but only ~27GB
with RAID-5)
[Diagram: RAID-5 (3+1): data and parity blocks distributed across four ESXi hosts]
21
Policy Setting - Fault Tolerance Method (FTM) - RAID-6
• Available in all-flash configurations only
• Example: FTT = 2 with FTM = RAID-6
– 4+2 (6-host minimum; 2 hosts can fail
without data loss)
– 7 hosts would tolerate 1 host failure
or maintenance mode state, and still
maintain redundancy
– 1.5x instead of 3x overhead
– 50% savings (a 20GB disk consumes
60GB with RAID-1 but only ~30GB
with RAID-6)
[Diagram: RAID-6 (4+2): data and double parity distributed across six ESXi hosts]
22
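A quick worked example of the overhead figures quoted on the RAID-5 and RAID-6 slides, for a 20GB virtual disk:

```python
# Worked example of the capacity overhead figures from the RAID-5/RAID-6 slides.
def consumed_gb(disk_gb: float, multiplier: float) -> float:
    return disk_gb * multiplier

disk_gb = 20
layouts = {
    "RAID-1, FTT=1": 2.0,     # two full copies
    "RAID-5 (3+1)":  4 / 3,   # ~1.33x, tolerates one failure
    "RAID-1, FTT=2": 3.0,     # three full copies
    "RAID-6 (4+2)":  1.5,     # 1.5x, tolerates two failures
}
for name, multiplier in layouts.items():
    print(f"{name}: {consumed_gb(disk_gb, multiplier):.1f} GB consumed")
# RAID-1 FTT=1: 40.0 GB, RAID-5: ~26.7 GB, RAID-1 FTT=2: 60.0 GB, RAID-6: 30.0 GB
```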
Sky’s the limit for expansion on an agile cloud
• Europe’s leading media brand
• 22 million subscribers
• Pay TV, on-demand Internet streaming, broadband mobile
• Always looking for new markets and new revenue streams
• Challenge: Bring new services online, cost-effectively, without
impacting existing services. Avoid creating expensive silos per
service.
• vSAN enabled Sky to scale out its video service on time and on
budget, delivering a fast, cost-effective and reliable platform for
video transport.
23
24
SPBM and Virtual Volumes
Why VVols?
25
Typical SAN
• Lots of paths to manage
• Lots of devices to manage
• Risk of hitting path/device limits
• IO Blender effect
VVols are 1st-class citizens on the storage array
26
Data services on the array are consumed
on a per-VM/VMDK basis via SPBM
• Fewer paths/devices to manage
• Array appears as a volume
• More scalable than LUNs
• 1:1 relationship between VM and storage
[Diagram: ESXi reaches the VVols through a Protocol Endpoint (PE)]
• No filesystem
• ESXi manages the array through
VASA APIs
• Arrays are logically partitioned
into containers, called Storage
Containers
• NO LUNs
• VM files, called Virtual Volumes, are
stored natively on the Storage
Containers
• IO from ESXi to the array is
addressed through an access
point called a Protocol Endpoint
• Data services (snapshots, etc.)
are offloaded to the array
• Managed through SPBM
27
High Level Architecture Overview
[Diagram: vSphere and Storage Policy-Based Mgmt. consume capabilities published by the array's VASA Provider (snapshot, replication, deduplication, encryption); storage policies express capacity, availability, performance, data protection and security requirements; I/O reaches the Virtual Volumes through Protocol Endpoints (PE)]
28
VASA Provider (VP)
Characteristics
• Software component developed by
storage array vendors
• Provides “storage awareness” of the array’s
data services
• Can be implemented within the array’s
management firmware, in the array
controller, or as a virtual appliance
• Responsible for creating and deleting
Virtual Volumes (VMs, clones, snapshots)
Protocol Endpoints (PE)
What are Protocol Endpoints?
• Access points that enable
communication between ESXi hosts and
storage array systems
• SCSI T10 Secondary Addressing scheme
used to access a VVol (PE + VVol offset)
Why Protocol Endpoints?
• Separate the access points from the
storage itself
• Allow for fewer access points (compared
to the LUN approach)
29
Protocol Endpoints (PE)
Scope of Protocol Endpoints
• Compatible with all SAN and NAS
protocols:
- iSCSI
- NFS
- FC
- FCoE
• Existing multipath policies and NFS
topology requirements can be applied
to the PE
• NFS v3 and v4.1 supported
30
Storage Container (SC)
What are storage containers?
• Logical storage constructs for grouping of
virtual volumes
• Set up by the Storage Administrator
• Capacity is based on the physical storage
• Logically partition or isolate VMs with
diverse storage needs and requirements
• Minimum of one storage container per array;
the maximum depends on the array
• A single Storage Container can be
simultaneously accessed via multiple
Protocol Endpoints
• It is NOT a LUN
32
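From the vSphere side, storage containers surface as VVol datastores. A minimal sketch (reusing si from the first sketch; the "VVOL" summary type is how such datastores report themselves) to list them from the inventory:

```python
# Sketch: list the VVol datastores (storage containers surfaced to vSphere).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    container=content.rootFolder, type=[vim.Datastore], recursive=True)
for ds in view.view:
    if ds.summary.type == "VVOL":        # VVol datastores report type "VVOL"
        print(ds.name, ds.summary.capacity)
view.Destroy()
```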
33
VVol walk-thru
with
Nimble Storage
[Now part of HPE]
34
Nimble Storage [now HPE]
Populate vCenter info
on
Storage Array
Add Nimble info directly
into vSphere
35
Full visibility into VM
• Home
• Swap
• VMDK
Storage Container
• Create a folder
• Set management
type to VMware
Virtual Volumes
• Set a capacity limit
36
Nimble Storage - VASA Provider
(automatically populated from array)
37
Protocol Endpoint automatically discovered!
Nimble Storage VVol Policy Setup – granular data services per-VM
38
Nimble Storage VVol Policy Setup
39
Some VVol Adoption figures from HPE – 3PAR
40
41
SPBM and vSphere APIs for I/O Filters
(VAIO)
42
[Diagram: VAIO filter framework: guest OS I/O passes from the VMM/VMX through the Filter Framework (Filter 1 ... Filter n) before reaching the virtual disk]
43
VAIO
Data Services
Provided by 3rd parties
I/O Filters from 3rd parties – Cache Acceleration and Replication
44
45
VAIO
Data Services
provided by VMware in
vSphere 6.5
46
2 new features introduced with vSphere 6.5
- Encryption
- Storage I/O Control v2
Implementation is done via I/O Filters
Introduced in vSphere 6.5 - Storage I/O Control v2
• VM Storage Policies in vSphere 6.5 have a new option called “Common Rules”.
• These are used for configuring data services provided by hosts, such as Storage I/O Control
and Encryption. It is the same mechanism used for VAIO/IO Filters.
47
Storage I/O Control (QoS) is now managed via policy rather than set on a per-VM basis, reducing operational overhead
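As an illustration of where these host-based data services show up in SPBM, the sketch below fetches the capability metadata published to the profile manager; the I/O filter capabilities (such as Storage I/O Control and VM Encryption) appear there alongside array and vSAN capabilities. It reuses profile_mgr from the first sketch, and the call name and result shape are taken from the PBM API as exposed by pyVmomi, so treat it as a sketch to verify.

```python
# Sketch: list capability metadata SPBM has collected from its providers,
# including host-based (I/O filter) data services. Property names per the PBM API.
schemas = profile_mgr.PbmFetchCapabilityMetadata(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"))
for per_category in schemas:
    for capability in per_category.capabilityMetadata:
        print(per_category.subCategory, "-",
              capability.id.namespace, "/", capability.id.id)
```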
Introduced in vSphere 6.5 - vSphere VM Encryption
• A new VM encryption mechanism.
• Implemented in the hypervisor,
making vSphere VM encryption
agnostic to the Guest OS.
• This is not just data-at-rest encryption;
it also encrypts data in-flight.
• vSphere VM Encryption in vSphere
6.5 is policy driven.
• Requires an external Key
Management Server - KMS (not
provided by VMware)
48
3rd Party and vSphere IO Filters can co-exist
49
There are 3 I/O Filters on these hosts:
- VM Encryption
- Storage I/O Control
- Cache Accelerator from Infinio
Case Study from Infinio – VAIO Cache Acceleration
• The University of Georgia Center for Continuing Education and Hotel
– Conference center located in Athens, Georgia, USA
• Using DELL Compellent All Flash Array
• Pilot on vSphere Cluster running over 50 VMs
– file and print services
– digital signage applications
– back office applications like SQL and QuickBooks
50
“Response times were fast – as low as
170 microseconds – which is
even faster than our all-flash array!”
51
SPBM and vRealize Automation / vRealize Orchestrator
vRealize Automation 7.3 + vRealize Orchestrator 6.5 and SPBM
• vRealize Automation (vRA) 7.3 enables SPBM through vRealize Orchestrator (vRO)
– vRA itself does not know about SPBM, so it relies on vRO
– SPBM policies must be preconfigured
– SPBM policies can be changed on-the-fly (day-2 operation)
• Leverages the latest vCenter Server (6.5) plug-in shipped with vRO out-of-the-box
• All SPBM policies are accessible through the API in vRO/vRA
53
54
SPBM and VMware Horizon View
Horizon View 7.2 and SPBM (with vSAN)
Policy (as appears in vCenter)    | Description                                                                                    | Stripes | FTT | %RCR | %OSR
VM_HOME_<guid>                    | VM home directory                                                                              | 1       | 1   | 0    | 0
REPLICA_DISK_<guid>               | Linked Clone Replica Disk, Instant Clone Replica Disk                                          | 1       | 1   | 10   | 0
PERSISTENT_DISK_<guid>            | Linked Clone Persistent Disk                                                                   | 1       | 1   | 0    | 100
OS_DISK_FLOATING_<guid>           | Floating Linked Clone OS and disposable disks, floating Instant Clone OS and disposable disks | 1       | 1   | 0    | 0
OS_DISK_<guid>                    | Dedicated Linked Clone OS and disposable disks                                                 | 1       | 1   | 0    | 0
FULL_CLONE_DISK_FLOATING_<guid>   | Floating Full Clone Virtual Disk                                                               | 1       | 0   | 0    | 0
FULL_CLONE_DISK_<guid>            | Dedicated Full Clone Virtual Disk                                                              | 1       | 1   | 0    | 0
(%RCR = flash Read Cache Reservation, %OSR = Object Space Reservation)
55
• Policies are automatically created when Horizon View is deployed on vSAN datastores
56
Summary
• The amount of data in the world is exploding!
• Data is critical to your organization, and in many
cases, how you innovate with this data keeps
you ahead of your competitors.
• Managing that data, keeping it safe and providing
the appropriate data services at the granularity
of an application can be complex
• Storage Policy Based Management, a
fundamental building block of VMware’s Software
Defined Storage, achieves this.
• SPBM is integrated with all vSphere storage
technologies, from vSAN to VVols to VAIO.
• With SPBM, data services (e.g. deduplication,
encryption, replication, RAID level) can be
assigned to your data on a per VM or per VMDK
basis.
57
58
Dziękuję / Thank You
Q&A
Editor's Notes
  1. Data, and most especially what you do with it to offer new/better experiences for your customers, is going to be the key differentiator between you and your competition
  2. Self-driving cars – Other projections state that they will generate 1GB of data per second. Equifax – personal data from 143 million US citizens. Cost CxOs their jobs. Hurricane Irma in the US, – Are you prepared for Disaster Recovery? Now what if you put these 2 together? What if someone hacked a self-driving car?
  3. VVOLS KB - https://kb.vmware.com/kb/2113013 Storage providers inform vCenter Server about specific storage devices, and present characteristics of the devices and datastores (as storage capabilities).
  4. Storage Policy-Based Management (SPBM) is the foundation of the VMware SDS Control Plane and enables vSphere administrators to overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity headroom, whether using vSAN or Virtual Volumes (VVols) on external storage arrays. SPBM provides a single unified control plane across a broad range of data services and storage solutions. The framework helps to align storage with the application demands of your virtual machines. SPBM is about ease and agility. Traditional architectural models relied heavily on the capabilities of an independent storage system to meet the protection and performance requirements of workloads. Unfortunately, the traditional model was overly restrictive, in part because standalone hardware-based storage solutions were not VM-aware and were limited in their ability to apply unique settings to different workloads. SPBM lets you define requirements for a VM or a collection of VMs, and the same framework is used for storage arrays supporting VVols, so a common approach to managing and protecting data can be employed regardless of the backing storage. Overview: key to the software-defined storage (SDS) architectural model, SPBM is the common framework that abstracts traditional storage-related settings away from hardware and into the hypervisor, and applies storage-related settings for protection and performance at a per-VM, or even per-VMDK, level.
  5. https://blogs.vmware.com/vsphere/2014/10/vsphere-storage-policy-based-management-overview-part-2.html
  6. Common Rules – these come from I/O Filters on hosts (VMCrypt, SIOCv2, VAIO); Rule-Sets come from storage, either vSAN or VVols.
  7. http://cormachogan.com/2013/09/06/vsan-part-5-the-role-of-vasa/
  8. Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes. Mirroring = RAID-1 Erasure Coding = RAID-5/RAID-6
  9. Key Message/Talk track: Creating a storage policy is nothing more than defining your requirements for a VM, or a collection of VMs. These requirements are typically around protection and performance of the VM. A new policy can be created and applied to a VM, or an existing policy can be adjusted. The VM will adopt the new performance and protection settings without any downtime.
----------------------------------
Overview:
• Policies define levels of protection and performance
• Applied at a per-VM level, or per-VMDK level
• vSAN surfaces these storage capabilities to vCenter Server
----------------------------------
Details:
Storage policy rules available (in 6.6) are:
• Number of disk stripes per object
• Flash read cache reservation (%)
• Primary level of failures to tolerate (PFTT – for stretched clusters)
• Secondary level of failures to tolerate (SFTT – for local protection)
• Failure Tolerance Method
• Affinity
• IOPS limit for object
• Disable object checksum
• Force provisioning
• Object space reservation (%)
Defining a policy lets vSAN use “what if” APIs so that you can see the “result” of applying such a policy to a VM of a certain size. Very useful, as it gives you an idea of the “cost” of certain attributes.
----------------------------------
  10. Key Message/Talk track: After a policy is created, it can easily be applied to an individual VMDK of a VM, an entire VM, or a collection of VMs in the data center. Applying at a VMDK level can be useful for applications that have different needs within defined drives of the guest OS. For instance, a drive dedicated to the database may have different requirements than the drive dedicated to transaction logs.
----------------------------------
Overview:
• When the policy is selected, vSAN uses it to place/distribute the VM to guarantee availability and performance
• Policies can be changed without any interruption to the VM
----------------------------------
Details:
Defining a policy lets vSAN use “what if” APIs so that you can see the “result” of applying such a policy to a VM of a certain size. Very useful, as it gives you an idea of the “cost” of certain attributes.
Only one SPBM policy can be applied at a time; vSAN does not support appending multiple SPBM policies to the same object.
Policies can also be assigned by rules or tags. For example, all VMs with “Prod-SQL” in the VM name or resource group might be set to RAID-1 with FTT=2, while a VM named “Test-Web” would never have this policy applied and would adopt the default policy for the environment.
----------------------------------
  11. Key Message/Talk track: Failures to Tolerate (FTT) is a rule that defines how many failures can be tolerated while still letting the VM or other object continue to run. This is one of the key pillars behind vSAN’s ability to protect a VM from the failure of a fault domain (disk, disk group, host, defined fault domain, or site).
----------------------------------
Overview:
• “FTT” defines the number of host, disk or network failures a storage object can tolerate
• For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required
• Primary Failures to Tolerate (PFTT) defines the number of sites that can accept failure (0, 1)
• Secondary Failures to Tolerate (SFTT) defines the number of failures that can be accepted within a site (0, 1, 2, 3)
----------------------------------
Details:
FTT can and will be dependent on a number of factors. A few important factors include:
• The number of hosts in the vSAN cluster
• The Failure Tolerance Method (FTM) that is defined for the object
Using a RAID-1 (mirroring) Failure Tolerance Method (FTM), an FTT of 2 means a minimum of 5 hosts in the cluster; FTT=3 requires 7 hosts.

Number of failures | Mirror copies | Witnesses | Min. hosts | Hosts + maintenance
0                  | 1             | 0         | 1 host     | n/a
1                  | 2             | 1         | 3 hosts    | 4 hosts
2                  | 3             | 2         | 5 hosts    | 6 hosts
3                  | 4             | 3         | 7 hosts    | 8 hosts

----------------------------------------------------------------------------------------------
Primary and Secondary Failures to Tolerate (PFTT and SFTT) apply to vSAN stretched clusters:
• PFTT defines the number of site failures tolerated (0, 1)
• SFTT defines the number of failures tolerated within a site (0, 1, 2, 3)
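A minimal sketch of the RAID-1 arithmetic above, showing copies, witnesses and host counts as a function of FTT. The function name is only for illustration:

```python
# RAID-1 (mirroring) FTT arithmetic from the note above:
# n failures tolerated -> n+1 mirror copies, n witnesses, 2n+1 hosts contributing storage,
# plus one extra host if full redundancy must be kept while a host is in maintenance mode.

def raid1_requirements(ftt: int) -> dict:
    if not 0 <= ftt <= 3:
        raise ValueError("vSAN policies allow FTT values from 0 to 3")
    min_hosts = 2 * ftt + 1
    return {
        "mirror_copies": ftt + 1,
        "witnesses": ftt,
        "min_hosts": min_hosts,
        "hosts_plus_maintenance": min_hosts + 1 if ftt > 0 else None,
    }

for ftt in range(4):
    print(f"FTT={ftt}: {raid1_requirements(ftt)}")
```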
  12. Key Message/Talk track: This policy, sometimes known as “stripe width”, defines the minimum number of capacity devices across which each replica of a storage object is distributed. Increasing the number of stripes per object beyond 1 is intended to help performance.
----------------------------------
Overview:
• Defines the minimum number of capacity devices across which each replica of a storage object is distributed
• Higher values may result in better performance
• Stripe width can improve the performance of write destaging, and of fetching uncached reads
• Higher values may put more constraints on the flexibility of meeting storage compliance policies
• To be used only if performance is an issue
----------------------------------
Details:
Most beneficial in the following scenarios:
• A non-cached read on a hybrid configuration, where one is typically reliant on the rotational latency of a single spinning disk
• Reads on an all-flash configuration, where fetching I/O may be improved in some situations
• Destaging buffered writes to the persistent tier (all-flash or hybrid); this relieves some of the backpressure that could be induced by a large amount of write activity, whether sequential or random in nature
vSAN may create more stripes than what is defined. With DD&C (deduplication and compression), a component with a stripe width of 1 will not necessarily live on just one disk, but rather be sprinkled around the various capacity disks of a disk group. It becomes an implicit stripe width setting, but will not show up in the UI as a traditional change in stripe width. Component size can also impact stripe width, as an object over 255GB will be split into two components. These could, however, end up on the same disk, or in a different disk group.
----------------------------------
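As a rough sketch of how object size and stripe width interact, the snippet below simply combines the two facts in the note: the requested stripe width and the 255GB maximum component size. It deliberately ignores vSAN’s actual placement logic, so treat it as an approximation rather than a statement of how components are really laid out:

```python
import math

# Rough approximation only: a replica is split into at least `stripe_width` components,
# and no component can exceed 255GB, so larger objects split further.
MAX_COMPONENT_GB = 255

def min_components_per_replica(object_size_gb: float, stripe_width: int = 1) -> int:
    size_driven_splits = math.ceil(object_size_gb / MAX_COMPONENT_GB)
    return max(stripe_width, size_driven_splits)

print(min_components_per_replica(200))      # 1  (fits in a single component)
print(min_components_per_replica(300))      # 2  (over 255GB, split into two)
print(min_components_per_replica(300, 3))   # 3  (the policy asks for 3 stripes)
```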
  13. Key Message/Talk track: A Failure Tolerance Method (FTM) is the way data maintains redundancy. The simplest FTM is a RAID-1 mirror, which keeps a mirror copy of objects/components across multiple hosts. Another FTM is RAID-5/RAID-6, where data is striped across multiple hosts with parity information written to provide tolerance of a failure. Parity is striped across all hosts. When done over the network using software only, this is sometimes referred to as erasure coding. It is done inline; there is no post-processing required. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts needed to comply with the policy. RAID-5 offers a guaranteed 30% savings in capacity overhead compared to RAID-1.
----------------------------------
Overview:
• Available in all-flash configurations only
• Example: FTT = 1 with FTM = RAID-5
  – 3+1 (4 host minimum; 1 host can fail without data loss)
  – 5 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy
  – 1.33x instead of 2x overhead: 30% savings
  – A 20GB disk consumes 40GB with RAID-1, but only ~27GB with RAID-5
----------------------------------
Details:
• RAID-5/6 does have I/O amplification on writes (only):
  – RAID-5: a single write operation results in 2 reads and 2 writes
  – RAID-6: a single write operation results in 3 reads and 3 writes (due to double parity)
• RAID-5/6 only supports FTT of 1 or 2 (implied by choosing RAID-5 or RAID-6). It will not support FTT=0 or FTT=3.
• The realized dedup & compression ratios will be different when employing RAID-5/6 than when using RAID-1 mirroring. Space efficiency using erasure codes is more of a guaranteed space reduction because of the lack of implied multiple full copies. Even if the DD&C ratio may be less on objects that use RAID-5/6, the effective overall capacity used will be equal to, if not better than, RAID-1 with DD&C.
• FTM can and will be dependent on a number of factors, including the number of hosts in the vSAN cluster and the stripe width defined for the objects.
• Using RAID-5, and an implied FTT of 1, the minimum number of hosts in a cluster is 4. With 4 hosts, 1 host can fail without data loss (but redundancy is lost). To maintain full redundancy with a single host in maintenance mode, the minimum is 5 hosts.
• Cluster sizes for RAID-5 need to be 4 or more hosts, not multiples of 4 hosts.
• Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts.
----------------------------------
  14. Key Message/Talk track: VMware’s RAID-6 is a dual-parity version of the erasure coding scheme used in the RAID-5 FTM. An FTM of RAID-6 implies an ability to tolerate 2 failures (i.e. FTT=2) and maintain operation. Just as with RAID-5 erasure coding, this is all done inline, with no post-processing required. Parity is striped across all hosts. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts needed to comply with the policy. RAID-6 offers a guaranteed 50% savings in capacity overhead compared to RAID-1 with an FTT of 2.
----------------------------------
Overview:
• Available in all-flash configurations only
• Example: FTT = 2 with FTM = RAID-6
  – 4+2 (6 host minimum; 2 hosts can fail without data loss)
  – 7 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy
  – 1.5x instead of 3x overhead: 50% savings
  – A 20GB disk consumes 60GB with RAID-1, but only ~30GB with RAID-6
----------------------------------
Details:
• RAID-5/6 does have I/O amplification on writes (only):
  – RAID-5: a single write operation results in 2 reads and 2 writes
  – RAID-6: a single write operation results in 3 reads and 3 writes (due to double parity)
• RAID-5/6 only supports FTT of 1 or 2 (implied by choosing RAID-5 or RAID-6). It will not support FTT=0 or FTT=3.
• The realized dedup & compression ratios will be different when employing RAID-5/6 than when using RAID-1 mirroring. Space efficiency using erasure codes is more of a guaranteed space reduction because of the lack of implied multiple full copies. Even if the DD&C ratio may be less on objects that use RAID-5/6, the effective overall capacity used will be equal to, if not better than, RAID-1 with DD&C.
• FTM can and will be dependent on a number of factors, including the number of hosts in the vSAN cluster and the stripe width defined for the objects.
• Using RAID-6, and an implied FTT of 2, the minimum number of hosts in a cluster is 6. With 6 hosts, 2 hosts can fail without data loss (but redundancy is lost). To maintain full redundancy with a single host in maintenance mode, the minimum is 7 hosts.
• Cluster sizes for RAID-6 need to be 6 or more hosts, not multiples of 6 hosts.
• Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts.
----------------------------------
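To put numbers on the RAID-1 versus erasure coding overhead described in the two notes above, here is a small sketch. The 20GB figures match the examples in the notes; the function and dictionary names are made up for the example, and deduplication/compression and witness components are ignored:

```python
# Raw capacity consumed per VMDK for the protection schemes discussed above.

OVERHEAD = {
    "raid1_ftt1": 2.0,      # 2 full mirror copies
    "raid1_ftt2": 3.0,      # 3 full mirror copies
    "raid5":      4.0 / 3,  # 3 data + 1 parity, tolerates 1 failure (1.33x)
    "raid6":      6.0 / 4,  # 4 data + 2 parity, tolerates 2 failures (1.5x)
}

def consumed_gb(vmdk_gb: float, scheme: str) -> float:
    return vmdk_gb * OVERHEAD[scheme]

vmdk = 20  # the 20GB disk used in the notes
print(consumed_gb(vmdk, "raid1_ftt1"))  # 40.0 GB
print(consumed_gb(vmdk, "raid5"))       # ~26.7 GB (the "~27GB" / ~30% savings above)
print(consumed_gb(vmdk, "raid1_ftt2"))  # 60.0 GB
print(consumed_gb(vmdk, "raid6"))       # 30.0 GB (the 50% savings above)
```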
  15. We get a lot of questions about whether vSAN is available for prime-time production use. With 10,000 customers, vSAN is now used everywhere for all manner of applications. Here is one such example, where vSAN is used in a mission critical role.
  16. "VVol 2.0" refers to additional functionality supported in vSphere for VVol targets written specifically for it, notably replication. Many VVol solutions still offer only what you might call "VVol 1.0". Regardless, the vSphere Compatibility Guide will tell you whether a given VVol storage system is certified to work with vSphere 6.5, which could be "VVol 1.0" or "VVol 2.0". To be clear, vSphere 6.5 does NOT REQUIRE "VVol 2.0" on the storage side.
  17. The IO Blender effect – lots of different I/O types – random/sequential, read/write, different block sizes, being handled by the same LUN. All sorts of mechanisms were introduced to alleviate this situation, such as RAID, wide-striping, QoS, etc. On the vSphere side of things, we introduced SIOC, SDRS, etc. Many customers kept spreadsheets of what VMs were supposed to be on which LUNs for performance and data service purposes.
  18. VASA provides the Control Plane. PEs (Protocol Endpoints) provide the Data Plane.
  19. https://blogs.vmware.com/virtualblocks/2016/11/30/vasa-provider-considerations-controller-embedded-vs-virtual-appliance/
The VASA Provider in VVols:
• Provides storage awareness services
• Provides centralized connectivity for ESXi hosts and vCenter Servers
• Is responsible for creating Virtual Volumes (VVols)
• Provides support for the VASA APIs used by ESXi
• Is responsible for defining binding operations
• Offloads VM-related operations directly to the array
  20. Why the concept of a PE? In today’s LUN/Datastore world, the datastore has two purposes: it serves as the access point for ESXi to send I/O to, and it serves as a storage container holding many VM files (VMDKs). The dual-purpose nature of this entity poses several challenges. It should not be necessary to have so many access points to the storage. And because datastore sizes are rigid and datastores are relatively few, multiple VMs are stored together in the same datastore even if the VMs have different requirements. This leads to the so-called IO blender effect. So, how about we separate the concept of the access point from the storage? This way, a small number of access points can front a much larger number of storage entities. Hence the introduction of the PE.
  21. NFS v4.1 support statement: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-AAA99054-4D81-49F8-9927-65E9B08577AD.html
  22. During a rescan, ESXi identifies PEs and maintains them in its internal databases. Multi-pathing on the PE ensures high availability. Is there a concept of queue depth for a PE? Yes, PEs are given a queue depth of 128. Compare that with a LUN, which only had a queue depth of 32 or 64, and consider how many VMs shared each LUN.
  23. You need at least 1 SC (Storage Container) per array. You can have as many as the array can support. An SC cannot span across arrays.
  24. Login to UI. Select Administration. Select vSphere Integration. Populate VC info. Select plugins – in this case, web client and VASA Provider.
  25. Note that not all VASA implementations give you this level of detail. Also, others may take a different approach to configuring PEs and Storage Containers.
  26. Octo is the name of a “group” on the Nimble Array which I provided as part of the registration – it could be anything.
  27. Storage = Nimble Storage. Add a Rule, e.g. encryption. Add another rule, e.g. protection.
  28. Compatible = nimble. Other refs: https://www.hpe.com/h20195/v2/getpdf.aspx/4AA5-6907ENW.pdf (HPE and VVols)
  29. Figures provided by HPE – August 2017 (VMworld 2017 Las Vegas)
  30. https://code.vmware.com/programs/vsphere-apis-for-io-filtering
  31. https://code.vmware.com/programs/vsphere-apis-for-io-filtering
IO requests moving between the guest operating system (Initiator), located in the Virtual Machine Monitor (VMM), and a virtual disk (Consumer) are filtered through a series of two IO Filters (per disk), one filter per filter class, invoked in filter class order. For example, a replication filter executes before a cache filter. Once the IO request has been filtered by all the filters for the particular disk, the IO request moves on to its destination, either the VM or the virtual disk.
Partners develop IO Filter plug-ins to provide filtering for virtual machines. Each IO Filter registers a set of callbacks with the Filter Framework, pertaining to different disk operations. If a filter fails an operation, only the filters prior to it are informed of the failure. Any filter can complete, fail, pass, or defer an IO request.
A filter will defer an IO if it has to perform a blocking operation, like sending the data over the network, but wants to allow further IOs to be processed as well. If a filter performed a blocking operation during the regular IO processing path, it would affect the IOPS of the virtual disk, since no further IOs would be processed until the blocking operation completes. If the filter defers an IO request, the Filter Framework will not pass the request to subsequent filters in the class order until the filter completes the request and notifies the Filter Framework that the IO may proceed.
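The dispatch semantics in this note (class ordering, plus the complete/fail/pass/defer verdicts) can be sketched with a small toy model. This is only an illustration: real VAIO filters are native plug-ins written against VMware’s filter framework, and the class names, verdicts and filter classes below are simplified for the example.

```python
from enum import Enum, auto

class Verdict(Enum):
    PASS = auto()      # let the IO continue to the next filter
    COMPLETE = auto()  # the filter satisfied the IO itself (e.g. a cache hit)
    FAIL = auto()      # abort the IO; only earlier filters are informed
    DEFER = auto()     # pause the chain until the filter finishes a blocking operation

class IOFilter:
    filter_class = "generic"
    def apply(self, io):
        return Verdict.PASS
    def io_failed(self, io):
        pass  # callback so earlier filters hear about a later failure

class ReplicationFilter(IOFilter):
    filter_class = "replication"  # the replication class runs before the cache class
    def apply(self, io):
        print(f"replicating {io}")
        return Verdict.PASS

class CacheFilter(IOFilter):
    filter_class = "cache"
    def apply(self, io):
        print(f"caching {io}")
        return Verdict.PASS

CLASS_ORDER = ["replication", "cache"]  # simplified class ordering from the note

def run_chain(filters, io):
    chain = sorted(filters, key=lambda f: CLASS_ORDER.index(f.filter_class))
    for i, flt in enumerate(chain):
        verdict = flt.apply(io)
        if verdict is Verdict.FAIL:
            for earlier in chain[:i]:   # only filters prior to the failure are told
                earlier.io_failed(io)
            return "IO failed"
        if verdict is Verdict.COMPLETE:
            return "IO completed by a filter"
        if verdict is Verdict.DEFER:
            return "IO deferred; chain resumes when the filter completes it"
    return "IO delivered to the virtual disk"

print(run_chain([CacheFilter(), ReplicationFilter()], "WRITE block 42"))
```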
  32. Available since vSphere 6.5.
  33. https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vaio There are 6 certified partner VAIO products, of which 3 are Cache and 3 are Replication. Cache accelerators use local flash devices (or some memory) to accelerate reads, and sometimes writes.
  34. Available since vSphere 6.5.
  35. This is before I added the I/O Accelerator from Infinio. These are provided by default in vSphere.
  36. When the policy has been created, it may be assigned to newly deployed VMs during provisioning, or to already existing VMs by assigning this new policy to the whole VM (or just an individual VMDK) by editing its settings.
  37. What is the relationship between vCenter Server and the KMS server? VMware vCenter now contains a KMIP client, which works with many common KMIP key managers (KMS). VMware does not own the KMS.
• Plan for backup, DR, recovery, etc., with your KMS provider. You must be able to retrieve the encryption keys in the event of a failure, or you may render your VMs unusable.
• Administrators should not encrypt their vCenter Server: this creates a possible “chicken-and-egg” situation where vCenter needs to boot (it is the KMIP client) to get the key from the KMS to decrypt its files, but it cannot boot because its files are encrypted.
• vCenter Server does not manage encryption. It is only a client of the KMS.
• With VM Home encrypted, only administrators with ‘encryption privileges’ can access the console of the virtual machine.
• One misconception: the VM Home folder is not encrypted in its entirety. Only some files in the VM Home folder are encrypted; some (non-sensitive) VM files and log files are not.
• Core dumps are encrypted on ESXi hosts with encrypted VMs.
• Encrypted virtual machines cannot be exported to an OVF, nor can they be suspended.
  38. The VM Encryption and SIOC filters are available by default. Infinio is a third-party plugin for cache acceleration; I installed this separately.
  39. http://www.infinio.com/sites/default/files/resources/Case%20Study%20-%20UG%20Center%20and%20Hotel%20-%20FINAL.pdf
  40. Screenshots courtesy of http://www.virtualjad.com/2017/05/scoop-vrealize-automation-7-3.html https://blogs.vmware.com/virtualblocks/2017/05/23/storage-policy-based-management-vrealize-automation/
  41. I don’t know much about this, but I believe that changing the policy will also Storage vMotion the VM to another datastore that meets the policy requirements – checking with Jad.
  42. When you use Virtual SAN, Horizon defines four virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles and automatically deploys them for virtual desktops onto vCenter Server.  The policies are automatically and individually applied per disk (Virtual SAN objects) and maintained throughout the lifecycle of the virtual desktop. Storage is provisioned and automatically configured according to the assigned policies. You can modify these policies in vCenter.  Horizon creates vSAN policies for linked-clone desktop pools, instant-clone desktop pools, full-clone desktop pools, or an automated farm per Horizon cluster.