3. Software-Defined Storage
Bringing the efficient operational model of virtualization to storage
[Diagram: Virtual Data Services (data protection, mobility, performance) and a policy-driven control plane over the virtual data plane, which spans a SAN/NAS pool, a hypervisor-converged storage pool on x86 servers (Virtual SAN), and a cloud object storage pool]
4. Virtual SAN: Radically Simple Hypervisor-Converged Storage
vSphere + VSAN
…
• Runs on any standard x86 server
• Policy-based management framework
• Embedded in vSphere kernel
• High performance flash architecture
• Built-in resiliency
• Deep integration with VMware stack
The Basics
[Diagram: three vSphere hosts, each contributing SSDs and hard disks to a single VSAN Shared Datastore]
5. Unprecedented Customer Interest And Validation
12,000+ Virtual SAN beta participants
95% of beta customers would recommend VSAN
90% believe VSAN will impact storage the way vSphere did compute
6. Why Virtual SAN?
Radically Simple
• Two-click install
• Single pane of glass
• Policy-driven
• Self-tuning
• Integrated with VMware stack
High Performance
• Embedded in vSphere kernel
• Flash-accelerated
• Up to 2M IOPS from a 32-node cluster
• Granular and linear scaling
Lower TCO
• Server-side economics
• No large upfront investments
• Grow-as-you-go
• Easy to operate with powerful automation
• No specialized skillset
7. Two Ways to Build a Virtual SAN Node
Completely hardware independent
1. Virtual SAN Ready Node – preconfigured server ready to use Virtual SAN, with multiple options available at GA + 30
2. Build Your Own – choose individual components using the Virtual SAN Compatibility Guide*:
• Any server on the vSphere Hardware Compatibility List
• SSD or PCIe flash device
• SAS/NL-SAS/SATA HDDs
• HBA/RAID controller
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide page. Components for Virtual SAN must be chosen from the Virtual SAN HCL; using any other components is unsupported.
8. Broad Partner Ecosystem Support for Virtual SAN
Storage components
Server / systems solutions
Data protection solutions
9. Virtual SAN Simplifies And Automates Storage Management
Per VM Storage Service Levels From a Single Self-tuning Datastore
Storage Policy-Based Management
Virtual SAN
Shared Datastore
vSphere + Virtual SAN
SLAs – software automates control of service levels
Policies set based on application needs: capacity, performance, availability
Per-VM storage policies
No more LUNs/Volumes!
“Virtual SAN is easy to deploy, just a few check boxes. No need to configure RAID.”
— Jim Streit
IT Architect, Thomson Reuters
10. Virtual SAN Delivers Enterprise-Grade Scale
Maximum scalability per Virtual SAN cluster: 32 hosts, 2M IOPS, 3,200 VMs, 4.4 petabytes
“Virtual SAN allows us to build out scalable heterogeneous storage infrastructure like the Facebooks and Googles of the world. Virtual SAN allows us to add scale, add resources, while being able to service high performance workloads.”
— Dave Burns
VP of Tech Ops, Cincinnati Bell
11. High Performance with Elastic and Linear Scalability
[Chart: IOPS vs. number of hosts in the Virtual SAN cluster (IOmeter benchmark). Mixed workload (70% read, 4K, 80% random): 80K / 160K / 320K / 480K / 640K IOPS at 4 / 8 / 16 / 24 / 32 hosts. 100% read: 253K / 505K / 1M / 1.5M / 2M IOPS.]
[Chart: number of VDI VMs vs. number of hosts (View Planner benchmark), VSAN vs. all-SSD array: roughly 286–805 VMs across 3–8 host clusters.]
Up to 2M IOPS in a 32-node cluster; VDI density comparable to an all-flash array.
12. Virtual SAN is Deeply Integrated with VMware Stack
Ideal for VMware Environments
vMotion
vSphere HA
DRS
Storage vMotion
vSphere
Snapshots
Linked Clones
VDP Advanced
vSphere Replication
Data Protection
VMware View
Virtual Desktop
vCenter Operations Manager
vCloud Automation Center
IaaS
Cloud Ops and Automation
Site Recovery Manager
Disaster Recovery
Site A Site B
Storage Policy-Based Management
13. Virtual SAN 5.5 – Pricing and Packaging
VSAN Editions and Bundles
• Virtual SAN – standalone edition; no capacity, scale, or workload restrictions; licensed per CPU; $2,495 (USD)
• Virtual SAN with Data Protection – bundle of Virtual SAN and vSphere Data Protection Advanced; licensed per CPU; $2,875 (promo ends Sept 15th 2014)
• Virtual SAN for Desktop – standalone edition; VDI only (VMware or Citrix); concurrent or named users; licensed per user; $50
Features: persistent data store; read/write caching; policy-based management; Virtual Distributed Switch; replication (vSphere Replication); snapshots and clones (vSphere Snapshots & Clones); backup (vSphere Data Protection Advanced)
Not for Public Disclosure
NDA Material only
Do not share with Public until GA
Note: Regional pricing in standard VMware currencies applies. Please check local pricelists for more detail.
14. Virtual SAN – Launch Promotions
Bundle promos (20% discount, ends 9/15/2014):
• Virtual SAN with Data Protection – Virtual SAN (1 CPU) + vSphere Data Protection Advanced (1 CPU); promo price $2,875 / CPU
• VSA to VSAN upgrade – Virtual SAN (6 CPUs per bundle); promo price $9,180 / bundle
Beta promo (20% discount, ends 6/15/2014):
• Register and download promo – Virtual SAN (1 CPU); promo price $1,996 / CPU
Terms: minimum purchase of 10 CPUs; first purchase only
Note: Regional pricing for promotions exist in standard VMware currencies. Please check local pricelists for more detail.
15. Virtual SAN Reduces CAPEX and OPEX for Better TCO
CAPEX
• Server-side economics
• No Fibre Channel network
• Pay-as-you-grow
OPEX
• Simplified storage configuration
• No LUNs
• Managed directly through
vSphere Web Client
• Automated VM provisioning
• Simplified capacity planning
As low as $0.50/GB (2)
As low as $0.25/IOPS
5X lower OPEX (4)
Up to 50% TCO reduction
As low as $50/desktop (1)
1. Full clones
2. Usable capacity
3. Estimated based on 2013 street pricing, Capex (includes storage hardware + Software License costs)
4. Source: Taneja Group
16. Flexibly Configure For Performance And Capacity
Three example host configurations, from performance-oriented to capacity-oriented, each with 2x 8-core CPUs and 128GB memory:
• Performance: 1x 400GB MLC SSD (~15% of usable capacity) + 5x 1.2TB 10K SAS – ~15–20K IOPS (1), 6TB raw capacity, $0.32/IOPS, $2.12/GB
• Balanced: 1x 400GB MLC SSD (~10% of usable capacity) + 7x 2TB 7.2K NL-SAS – ~10–15K IOPS, 14TB raw capacity, $0.57/IOPS, $1.02/GB
• Capacity: 2x 400GB MLC SSD (~4% of usable capacity) + 10x 4TB 7.2K NL-SAS – ~5–10K IOPS, 40TB raw capacity, $1.38/IOPS, $0.52/GB
1. Mixed workload: 70% read, 80% random
Estimated based on 2013 street pricing, CAPEX (includes storage hardware + software license costs)
17. Granular Scaling Eliminates Overprovisioning
Delivers predictable scaling and the ability to control costs
• Compared to external storage at scale
• Estimated based on 2013 street pricing, CAPEX (includes storage hardware + software license costs)
• Additional savings come from reduced OPEX through automation
• Virtual SAN configuration: 9 VMs per core, with 40GB per VM, 2 copies for availability and 10% SSD for performance
[Chart: $/VDI storage cost per desktop ($40–$240) vs. number of desktops (500–3,000), Virtual SAN vs. midrange hybrid array. VSAN enables predictable linear scaling; spikes correspond to scaling out due to IOPS requirements.]
18. Running a Google-like Datacenter
Modular infrastructure; break-replace operations
"From a break-fix perspective, I think there's a huge difference in what needs to be done when a piece of hardware fails. I can have anyone on my team go back and replace a 1U or 2U server. … essentially modularizing my datacenter and delivering a true Software-Defined Storage architecture."
— Ryan Hoenle
Director of IT, DOE Fund
19. Hardware Requirements
Any server on the VMware Compatibility Guide
• SSDs, HDDs, and storage controllers must be listed on the VMware Compatibility Guide for VSAN:
http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
• Minimum 3 ESXi 5.5 hosts; maximum hosts: “I’ll tell you later……”
• 1Gb/10Gb NIC
• SAS/SATA controllers (RAID controllers must work in “pass-through” or “RAID0” mode)
• SAS/SATA/PCIe SSD and SAS/NL-SAS/SATA HDD – at least 1 of each
• 4GB to 8GB USB, SD cards
20. Flash Based Devices
VMware SSD Performance Classes
– Class A: 2,500-5,000 writes per second
– Class B: 5,000-10,000 writes per second
– Class C: 10,000-20,000 writes per second
– Class D: 20,000-30,000 writes per second
– Class E: 30,000+ writes per second
Examples
– Intel DC S3700 SSD: ~36,000 writes per second -> Class E
– Toshiba SAS SSD MK2001GRZB: ~16,000 writes per second -> Class C
Workload Definition
– Queue Depth: 16 or less
– Transfer Length: 4KB
– Operations: write
– Pattern: 100% random
– Latency: less than 5 ms
Endurance
– 10 Drive Writes per Day (DWPD), and
– Random write endurance up to 3.5 PB on 8KB transfer size
per NAND module, or 2.5 PB on 4KB transfer size per
NAND module
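The class boundaries above are simple thresholds, so the mapping from a measured write rate to a class can be sketched as a small helper. This is an illustrative function of my own, not a VMware tool; it assumes the workload definition above (4KB transfers, 100% random writes, queue depth 16 or less).

```python
# Hypothetical helper: map a drive's measured random-write rate to the
# VMware SSD performance classes listed above.

def vsan_ssd_class(writes_per_second: int) -> str:
    """Return the VMware SSD performance class for a measured write rate."""
    classes = [
        ("E", 30_000),
        ("D", 20_000),
        ("C", 10_000),
        ("B", 5_000),
        ("A", 2_500),
    ]
    for name, floor in classes:
        if writes_per_second >= floor:
            return f"Class {name}"
    return "Below Class A"

# Examples from the slide:
print(vsan_ssd_class(36_000))  # Intel DC S3700 -> Class E
print(vsan_ssd_class(16_000))  # Toshiba MK2001GRZB -> Class C
```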
21. Flash Capacity Sizing
The general recommendation for sizing Virtual SAN's flash capacity is to have 10% of the anticipated
consumed storage capacity before the Number of Failures To Tolerate is considered.
Total flash capacity percentage should be based on use case, capacity, and performance requirements.
– 10% is a general recommendation; it could be too much or not enough.
Measurement Requirements Values
Projected VM space usage 20GB
Projected number of VMs 1000
Total projected space consumption per VM 20GB x 1000 = 20,000 GB = 20 TB
Target flash capacity percentage 10%
Total flash capacity required 20TB x .10 = 2 TB
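The worked example in the table can be expressed as a short sizing sketch. The function name and parameters are my own; it simply applies the 10% rule to the anticipated consumed capacity before Number of Failures To Tolerate is considered.

```python
# Minimal sketch of the flash-sizing rule of thumb above (not a VMware tool).

def required_flash_tb(vm_space_gb: float, vm_count: int,
                      flash_pct: float = 0.10) -> float:
    """Total flash capacity (TB) for the projected consumed capacity."""
    consumed_gb = vm_space_gb * vm_count  # before FTT copies
    return consumed_gb * flash_pct / 1000  # GB -> TB

# Worked example from the table: 1000 VMs x 20GB at a 10% target
print(required_flash_tb(20, 1000))  # ~2 TB, matching the table above
```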
22. Two Ways to Build a Virtual SAN Node
Radically Simple Hypervisor-Converged Storage
1. VSAN Ready Node – preconfigured server ready to use VSAN, with 10 different options between multiple 3rd-party vendors available at GA
2. Build your own – choose individual components using the VSAN Compatibility Guide*:
• Any server on the vSphere Hardware Compatibility List
• Multi-level cell SSD (or better) or PCIe SSD
• SAS/NL-SAS HDD; select SATA HDDs
• 6Gb enterprise-grade HBA/RAID controller
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide
23. Virtual SAN Implementation Requirements
• Virtual SAN requires:
– Minimum of 3 hosts in a cluster configuration
– All 3 hosts MUST contribute storage
• vSphere 5.5 U1 or later
– Locally attached disks
• Magnetic disks (HDD)
• Flash-based devices (SSD)
– Network connectivity
• 1Gb Ethernet
• 10Gb Ethernet (preferred)
[Diagram: a vSphere 5.5 U1 cluster of esxi-01, esxi-02, and esxi-03, each contributing local SSD and HDD storage]
24. Virtual SAN Scalable Architecture
• Scale-up and scale-out architecture – granular and linear scaling of storage, performance, and compute:
– Per magnetic disk – for capacity
– Per flash-based device – for performance
– Per disk group – for performance and capacity
– Per node – for compute capacity
[Diagram: hosts on the VSAN network contributing disk groups to the vsanDatastore; scale up by adding disks or disk groups, scale out by adding nodes]
26. Storage Policy-based Management
• SPBM is a storage policy framework built into vSphere that enables policy-driven provisioning of virtual machines.
• Virtual SAN leverages this new framework in conjunction with the VASA APIs to expose storage characteristics to vCenter:
– Storage capabilities
• The underlying storage surfaces up to vCenter what it is capable of offering.
– Virtual machine storage requirements
• Requirements can only be defined against available capabilities.
– VM Storage Policies
• The construct that stores a virtual machine’s storage provisioning requirements, based on storage capabilities.
27. Storage Policy Wizard
Virtual SAN SPBM Object Provisioning Mechanism
[Diagram: a datastore profile from the Storage Policy Wizard flows through SPBM to the VSAN object manager, which provisions virtual disks as VSAN objects]
VSAN objects may be (1) mirrored across hosts and (2) striped across disks/hosts to meet VM storage profile policies.
28. Virtual SAN Disk Groups
• Virtual SAN uses the concept of disk groups to pool together flash devices and magnetic disks
as single management constructs.
• Disk groups are composed of at least 1 flash device and 1 magnetic disk.
– Flash devices are used for performance (read cache + write buffer).
– Magnetic disks are used for storage capacity.
– Disk groups cannot be created without a flash device.
• Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs.
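The disk-group limits above are easy to capture as a validation sketch. The function and constant names are my own illustration, not a VSAN API: one flash device per group, 1 to 7 magnetic disks per group, at most 5 groups per host.

```python
# Illustrative check of the disk-group limits above (names are assumptions).

MAX_DISK_GROUPS_PER_HOST = 5
MAX_HDDS_PER_DISK_GROUP = 7

def valid_disk_group(ssd_count: int, hdd_count: int) -> bool:
    """A disk group needs exactly 1 flash device and 1-7 magnetic disks."""
    return ssd_count == 1 and 1 <= hdd_count <= MAX_HDDS_PER_DISK_GROUP

def valid_host(disk_groups: list[tuple[int, int]]) -> bool:
    """A host may carry at most 5 valid disk groups."""
    return (len(disk_groups) <= MAX_DISK_GROUPS_PER_HOST
            and all(valid_disk_group(s, h) for s, h in disk_groups))

print(valid_host([(1, 7)] * 5))  # True: maximum configuration
print(valid_host([(0, 3)]))      # False: no flash device in the group
```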
29. Virtual SAN Datastore
• Virtual SAN is an object store solution that is presented to vSphere as a file system.
• The object store mounts the VMFS volumes from all hosts in a cluster and presents them as a
single shared datastore.
– Only members of the cluster can access the Virtual SAN datastore.
– Not all hosts need to contribute storage, but it’s recommended.
[Diagram: five hosts, each with up to 5 disk groups (1 SSD + 1 to 7 HDDs each), connected over the VSAN network to a single vsanDatastore]
30. Virtual SAN Network
• New Virtual SAN traffic VMkernel interface.
– Dedicated for Virtual SAN intra-cluster communication and data replication.
• Supports both Standard and Distributed vSwitches.
– Leverage NIOC for QoS in shared scenarios.
• NIC teaming – used for availability, not for bandwidth aggregation.
• Layer 2 multicast must be enabled on physical switches.
– Much easier to manage and implement than Layer 3 multicast.
[Diagram: a Distributed Switch with two uplinks and NIOC shares – Management 20, Virtual Machines 30, vMotion 50, Virtual SAN 100]
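The NIOC share values in the diagram only matter when the uplink is congested: each traffic type then receives bandwidth in proportion to its shares. A rough sketch of that proportional split (my own helper, not a VMware API; it ignores reservations and limits):

```python
# Rough sketch of how NIOC shares divide a congested uplink.
# Share values taken from the diagram above.

def nioc_split(shares: dict[str, int], link_gbps: float) -> dict[str, float]:
    """Bandwidth (Gbps) per traffic type under full contention."""
    total = sum(shares.values())
    return {name: round(link_gbps * s / total, 2) for name, s in shares.items()}

shares = {"Management": 20, "Virtual Machines": 30,
          "vMotion": 50, "Virtual SAN": 100}
print(nioc_split(shares, 10.0))
# Virtual SAN holds 100 of 200 total shares, i.e. 5.0 Gbps of a 10Gb uplink
```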
31. Virtual SAN Network
• NIC teaming and load balancing algorithms:
– Route based on Port ID
• active / passive with explicit failover
– Route based on IP Hash
• active / active with LACP port channel
– Route based on Physical NIC load
• active / active with LACP port channel
[Diagram: a Distributed Switch with two uplinks to multi-chassis link aggregation capable switches; NIOC shares – Management 100, Virtual Machines 150, vMotion 250, Virtual SAN 500]
34. Configuring VMware Virtual SAN
• Radically Simple configuration procedure
1. Set up the Virtual SAN network
2. Enable Virtual SAN on the cluster
3. Select Manual or Automatic mode
4. If Manual, create disk groups
35. Configure Network
• Configure the new dedicated Virtual SAN network
– Use the vSphere Web Client network template configuration feature.
36. Enable Virtual SAN
• One click away!
– With Virtual SAN configured in Automatic mode, all empty local disks are claimed by Virtual SAN for the creation of the distributed vsanDatastore.
– With Virtual SAN configured in Manual mode, the administrator must manually select disks to add to the distributed vsanDatastore by creating disk groups.
37. Virtual SAN Datastore
• A single Virtual SAN datastore is created and mounted, using storage from all hosts and disk groups in the cluster.
• The Virtual SAN datastore is automatically presented to all hosts in the cluster.
• The Virtual SAN datastore enforces thin-provisioned storage allocation by default.
38. Virtual SAN Capabilities
• Virtual SAN currently surfaces five unique storage capabilities to vCenter.
39. Number of Failures to Tolerate
• Number of failures to tolerate
– Defines the number of host, disk, or network failures a storage object can tolerate. For “n” failures tolerated, “n+1” copies of the object are created, and “2n+1” hosts contributing storage are required.
[Diagram: with policy “Number of failures to tolerate = 1”, a vmdk is mirrored (RAID-1) across esxi-01 and esxi-02 (~50% of I/O each), with a witness on esxi-03]
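The n / n+1 / 2n+1 relationship above can be written as a one-line helper. The function and key names are my own illustration:

```python
# The failures-to-tolerate arithmetic above: for "n" tolerated failures,
# "n+1" copies are created and "2n+1" hosts must contribute storage.

def ftt_requirements(failures_to_tolerate: int) -> dict[str, int]:
    n = failures_to_tolerate
    return {
        "copies": n + 1,                   # replicas of the object
        "hosts_with_storage": 2 * n + 1,   # minimum hosts contributing storage
    }

print(ftt_requirements(1))  # {'copies': 2, 'hosts_with_storage': 3}
print(ftt_requirements(2))  # {'copies': 3, 'hosts_with_storage': 5}
```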
40. Number of Disk Stripes Per Object
• Number of disk stripes per object
– The number of HDDs across which each replica of a storage object is distributed. Higher values may
result in better performance.
[Diagram: with policy “Number of failures to tolerate = 1” + “Stripe width = 2”, each RAID-1 replica is striped (RAID-0) across two disks/hosts, plus a witness]
42. Virtual SAN Storage Capabilities
• Force provisioning
– If yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available.
• Flash read cache reservation (%)
– Flash capacity reserved as read cache for the storage object. Specified as a percentage of logical size
of the object.
• Object space reservation (%)
– Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM
provisioning. The rest of the storage object is thin provisioned.
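The two percentage-based capabilities above are simple fractions of the object's logical size. A small illustrative calculation (function and key names are my assumptions, not VSAN APIs):

```python
# Illustrative arithmetic for flash read cache reservation and object
# space reservation, both specified as a percentage of logical size.

def object_reservations(logical_size_gb: float,
                        flash_read_cache_pct: float,
                        object_space_pct: float) -> dict[str, float]:
    return {
        "flash_read_cache_gb": logical_size_gb * flash_read_cache_pct / 100,
        "thick_reserved_gb": logical_size_gb * object_space_pct / 100,
        "thin_gb": logical_size_gb * (100 - object_space_pct) / 100,
    }

# A 100GB object with 5% read cache reserved and 20% thick-provisioned:
print(object_reservations(100, 5, 20))
# {'flash_read_cache_gb': 5.0, 'thick_reserved_gb': 20.0, 'thin_gb': 80.0}
```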
43. VM Storage Policies Recommendations
• Number of Disk Stripes per object
– Should be left at 1, unless the IOPS requirements of the VM are not being met by the flash layer.
• Flash Read Cache Reservation
– Should be left at 0, unless there is a specific performance requirement to be met by a VM.
• Proportional Capacity
– Should be left at 0, unless thick provisioning of virtual machines is required.
• Force Provisioning
– Should be left disabled, unless the VM needs to be provisioned, even if not in compliance.
44. Failure Handling Philosophy
Traditional SANs
– Physical drive needs to be replaced to get back to full redundancy
– Hot-spare disks are set aside to take role of failed disks immediately
– In both cases: 1:1 replacement of disk
Virtual SAN
– The entire cluster is a “hot spare”; we always want to get back to full redundancy
– When a disk fails, many small components (stripes or mirrors of objects) fail
– New copies of these components can be spread around the cluster for balancing
– Replacement of the physical disk just adds back resources
45. Understanding Failure Events
Degraded events trigger the immediate recovery operations.
– Triggers the immediate recovery of affected objects and components
– Not configurable
Any of the following detected I/O errors are always deemed degraded:
– Magnetic disk failures
– Flash based devices failures
– Storage controller failures
Any of the following detected failures are always deemed absent:
– Network failures
– Network Interface Cards (NICs)
– Host failures
46. Maintenance Mode – planned downtime
3 Maintenance mode options:
Ensure accessibility
Full data migration
No data migration
With Software-Defined Storage, we’re taking the operational model we pioneered in compute – and extending that to storage
Software-Defined Storage allows businesses to more efficiently manage their same storage infrastructure with software. How?
CLICK
First, abstracting and pooling physical storage resources to create flexible logical pools of storage in the virtual data plane. We see three main pools going forward: the SAN/NAS pool (enabled by VVOL), hypervisor-converged (enabled by Virtual SAN), and cloud object storage.
CLICK
Second, providing VM-level data services like replication, snapshots, caching, etc. from a broad partner ecosystem
CLICK
Lastly, enabling application-centric approach based on a common policy-based control plane. Storage requirements are captured for each individual VM in simple intuitive policies that follow the VM through its life cycle on any infrastructure. This policy-based management framework allows for seamless automation and orchestration, with the Virtual SAN software dynamically making adjustments to underlying storage pools to ensure application-driven policies are compliant and SLAs are met.
CLICK
Integration and interoperability with our storage ecosystem is a key element of our strategy. Across all elements of SDS we plan to enable integration points through APIs that will allow our partners to enable value-added capabilities on top of our platform.
Above is a list of partners that we have been working with to make the Software-Defined Storage solution a reality for our customers.
For example, EMC’s ViPR technology abstracts and pools third party external storage to create a virtual control plane for heterogeneous external storage. This is a great example of how Software-Defined Storage ecosystem vendors leverage the VMware platform to give customers more choice and the ability to transform their storage model.
Software-Defined Storage is using virtualization software to create a fundamentally new approach to storage that removes unnecessary complexity, puts the application in charge, and delivers many of the same benefits we see from SDDC… including simplicity, high performance, and increased efficiency.
T: Today, we’re excited to announce Virtual SAN…
BEN TALKING:
Abstracts and pools server-side disks and flash => shared datastore
CLICK
Decouples software from hardware // Converts physical to virtual
Embedded in ESXi kernel to create high performance storage tier running on x86 servers
Policy-based management framework automates routine tasks
Creates a resilient, scalable storage tier that is easy to use
Gives users the flexibility to configure the storage they need
T: Virtual SAN is a true Software-Defined Storage product that runs on standard x86 servers, giving users deployment flexibility…
We announced the public beta of Virtual SAN at VMworld last year and it’s been a great success story.
We had over 10,000 registered participants
We’ve seen a lot of excitement and response from customers.
The team has over-achieved. We promised we’d deliver vSAN in the first half of 2014. As you know, that usually means June 32nd.
But I’m glad to announce that we’re almost ready and will be releasing vSAN ahead of schedule in Q1.
We also promised an 8-node solution for the first release, but I’m proud to announce that we’re going to support 16 nodes at GA.
Finally, to thank our Beta Customers, we’re offering a 20% discount on their first purchase.
BEN TALKING:
2 ways to deploy => ready node or component based
VSAN is completely HW independent
Flexibility of configuration to optimize for performance or capacity
Ready Node:
VMW working with OEM server vendors => “VIRTUAL SAN Ready Nodes”
Servers designed to make it easy to run Virtual SAN
Build Your Own:
VMW certifying VSAN to run on many different types of hardware
Servers, magnetic disks, solid state drives and controllers.
Gives you the flexibility to choose… build storage system based on your needs
VMware believes that a true Software-Defined Storage product gives users the flexibility when constructing storage architectures
T: VMware has been working with a broad array of ecosystem vendors to make this a reality…
BEN TALKING:
We have built a robust, global eco-system around Virtual SAN
Includes all major server manufacturers and systems solutions..
Includes a broad range of hardware components such as controllers and disks…
And a variety of data protection solutions.
As part of the SDDC approach Pat laid out, VMware's goal is to offer customers great flexibility of choice
T: In addition to being hardware independent, VSAN has a policy-based management framework built-in to simplify storage
BEN TALKING:
SPBM framework allows you to define storage requirements based on application needs.
CLICK
It is simple => capacity, performance and availability
CLICK
VSAN matches requirements to underlying capabilities.
Unlike traditional external storage => provisioning done at array layer
Automation: policies governed by SLAs
CLICK
Orchestration: software abstracts underlying hardware
End result => No more LUNs or Volumes…
T: To give you a better idea, let me show you how all of this works together (DEMO)
John:
You mentioned policy-based framework. Help me understand how that works as I believe that is a fairly new concept when it comes to storage.
BEN TALKING:
Beyond the big numbers on this page….
…Virtual SAN scales to the needs of your environment
Powerful storage tier running on heterogeneous server hardware
Most importantly…scales to the needs of customers.
32 node VSAN cluster
4.4 PB of capacity
2M IOPs
3,200 VMs
Not a toy
Ideal and viable storage tier for vSphere environments
VSAN is high performance, scalable and resilient… and runs on heterogeneous hardware
JOHN TALKING
That’s great, Ben. Couldn’t you just add more hardware to any other storage technologies in the market today to increase capacity?
T: What is impressive about Virtual SAN is not just its maximum capacity or IOPs… it is its efficiency and how it gets to these numbers…
BEN TALKING
Yes… Virtual SAN scales to 32 nodes and 2M IOPs, but it does so in a predictable and linear fashion
This is particularly helpful if you are trying to forecast storage capacity….
… or have a latency-sensitive application in need of more performance
Virtual SAN gives you the ability to granularly scale-up or scale-out your cluster
Add more resources to achieve an intended outcome
One customer quote I liked from the beta was … “We can customize IO and capacity on demand”
Eliminates costly overprovisioning
Pause…
As customers look for every edge possible about efficiency, Virtual SAN delivers on this
This gives you the control to have Google-like and Amazon-like efficiency within your private cloud
On the left…
Linear and Predictable performance
Scales with your environment
Same functionality across different types of workloads
On the right…
High VM density in VDI environments.
Performance isn’t a constraint
VSAN has VM densities comparable to an all-flash array
(SLIDE AUTOMATICALLY BUILDS)
BEN TALKING:
Interoperability a key differentiator for Virtual SAN
Makes the product easy to use for our customers
[GO AROUND TO TALK THROUGH PRODUCTS]
High degree of convenience … makes storage simple for customers
John:
This is great to hear that Virtual SAN is resilient and interoperates with other VMware products. Could you show me how this works?
BEN:
Sure
T: Let me show you how this works in the product
Drivers on the right – Arrow – Bubbles (with range) $2.5GB
50% tco reduction
5-10x opex
Align Costs with Revenue
Take advantage of decreasing HW prices
Increase the performance
Get better economics
Save on CPU resources
--
So the cost of an I/O, in CPU cycles and overhead, is important. Gray and Shenoy derive some rules of thumb for I/O costs:
A disk I/O costs 5,000 instructions and 0.1 instructions per byte
The CPU cost of a Systems Area Network (SAN) network message is 3,000 clocks and 1 clock per byte
A network message costs 10,000 instructions and 10 instructions per byte
For an 8KB I/O, which is a standard I/O size for Unix systems, the costs are:
Disk: 5,000 + 800 = 5,800 instructions
SAN: 3,000 + 8,000 = 11,000 clocks
Network: 10,000 + 80,000 = 90,000 instructions
Thus it is obvious why IDCs implement local disks in general preference to SANs or networks. Not only is it cheaper economically, it is much cheaper in CPU resources. Looked at another way, this simply confirms what many practitioners already have ample experience with: the EDC architecture doesn’t scale easily or well.
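The rule-of-thumb arithmetic above can be reproduced in a few lines. Note the text treats 8KB as 8,000 bytes, and the SAN figure is in clocks while the others are in instructions, as in the source:

```python
# Gray and Shenoy's rule-of-thumb I/O costs, reproduced for an 8KB transfer.

def io_cost(fixed: int, per_byte: float, size_bytes: int) -> int:
    """Fixed cost plus a per-byte cost for one I/O of the given size."""
    return int(fixed + per_byte * size_bytes)

size = 8_000  # the text rounds 8KB to 8,000 bytes

print(io_cost(5_000, 0.1, size))    # Disk:    5,800 instructions
print(io_cost(3_000, 1.0, size))    # SAN:    11,000 clocks
print(io_cost(10_000, 10.0, size))  # Network: 90,000 instructions
```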
------------------
Two I/O intensive techniques are RAID 5 and RAID 6. In RAID 5, writing a block typically requires four disk accesses: two to read the existing data and parity and two more to write the new data and parity (RAID 6 requires even more). Not surprisingly, Google avoids RAID 5 or RAID 6 and favors mirroring, typically mirroring each chunk of data at least three times and many more times if it is hot. This effectively increases the IOPS per chunk of data at the expense of capacity, which is much cheaper than additional bandwidth or cache.
SSD Interface
PCIe vs SAS vs SATA – not really a decision point for performance, as the corresponding IOPS performance will dictate the interface selection.
Speaker notes:
vCenter is a requirement for management since VSAN is fully integrated into vSphere.
A minimum of 3 nodes and a maximum of 8 nodes (though there is some discussion around a higher node count in later versions).
SSD must make up 10% of all storage, but it could be larger than that.
We are also recommending a dedicated 10Gb network for VSAN too. We are in fact recommending a NIC team of 2 x 10Gb NICs for availability purposes.
vCenter server version 5.5
Central point of management
3 vSphere Hosts minimum
Running ESXi version 5.5 or later
Not all hosts need to have local storage. Some can be just compute nodes
Maximum of 8 nodes in a cluster in version 1.0.
Greater than 8 nodes planned for future releases
Local Storage
Combination of HDD & SSD
SSDs are used as a read cache and write buffer
HDDs are used as the persistent store
SAS/SATA Controller
Raid Controller must work in “pass-thru” or “HBA” mode (no RAID)
1Gb or 10Gb Network (preferred)
Cluster communication/replication
We have not completed any real characterization yet, but it is expected that the overhead of CPU/Memory for VSAN is in the region of 10%.
VSAN supports the concept of compute nodes – ESXi hosts which do not present any storage, but still has access to, and can run VMs on the distributed datastore.
Best Practices:
- min 3 nodes with storage
- have a balanced cluster using identical host configurations
- Regarding boot image: stateless is not supported; prefer SD card/USB/SATADOM
Largest storage capacities:
5 disk groups * 7 HDDs * 4TB * 8 hosts = 1,120 TB ≈ 1.1 PB
5 disk groups * 7 HDDs * 4TB * 16 hosts = 2,240 TB ≈ 2.2 PB
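The capacity arithmetic above is a straight product, sketched here with the defaults from the notes (helper name is my own):

```python
# Maximum raw capacity: disk groups per host x HDDs per group x HDD size x hosts.

def max_raw_capacity_tb(hosts: int, disk_groups_per_host: int = 5,
                        hdds_per_group: int = 7, hdd_tb: int = 4) -> int:
    return hosts * disk_groups_per_host * hdds_per_group * hdd_tb

print(max_raw_capacity_tb(8))   # 1120 TB, roughly 1.1 PB
print(max_raw_capacity_tb(16))  # 2240 TB, roughly 2.2 PB
```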
Enable multicast by either:
– Disabling IGMP snooping, or
– Configuring IGMP snooping for selective traffic
VSAN vmkernel multicast traffic should be isolated to a layer 2 non-routable VLAN.
Layer 2 multicast traffic can be limited to specific port groups using IGMP snooping.
We do not recommend implementing multicast flooding across all ports as a best practice.
We do not require layer 3 multicast.