2. CloudFounders
vRun
Converged infrastructure that
combines the benefits of the
hyperconverged approach yet
offers independent compute
and storage scaling.
Open vStorage
Core Storage Technology
FlexCloud
A hosted private cloud based on the vRun technology, available at multiple data centers worldwide.
A product by CloudFounders
3. 2 Types of Storage
Block Storage:
• EMC, NetApp, ...
• Virtual Machines
• High performance, low latency
• Small capacity, typically fixed size
• Expensive
• Zero-copy snapshots, linked clones
• $/IOPS
Object Storage:
• Swift, Cleversafe, ...
• Unstructured data
• Low performance, high latency
• Large capacity, scalable
• Inexpensive, commodity hardware
• No high-end data management features
• $/GB
What is needed is a technology that offers Virtual Machines the performance and high-end features of a SAN, combined with the low cost and scale-out capabilities of object storage!
4. What is Open vStorage
Open vStorage is an open-source, superfast, scalable, VM-centric block storage solution for OpenStack Virtual Machines on top of object storage or a pool of (Kinetic) drives.
5. The architecture
[Diagram: three scale-out OpenStack hosts, each running VMs backed by local SSDs and Open vStorage, sharing a unified namespace on top of S3-compatible object storage or a pool of (Ethernet) drives]
Tier 1 - Location Based
• Read/Write cache on SSD
• Block based storage
• Thin provisioning
• VM Centric
• Distributed Transaction Log
Tier 2 - Time Based
• Zero Copy Snapshot
• Zero Copy Cloning
• Continuous data protection
• Redundant storage
• Scale-out
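Tier 2's zero-copy snapshot falls out of the append-only design: written blocks are never modified in place, so a snapshot is just a copy of the volume's small LBA-to-location map, with no data blocks read or rewritten. A minimal sketch in Python (all names here are illustrative, not the real Open vStorage API):

```python
# Sketch of a zero-copy snapshot on an append-only volume.
# Volume, snapshot, etc. are illustrative names only.

class Volume:
    def __init__(self):
        # LBA -> (sco_id, offset); data is immutable once appended
        # to a SCO, so the metadata map alone defines a view.
        self.lba_map = {}
        self.snapshots = {}

    def write(self, lba, sco_id, offset):
        self.lba_map[lba] = (sco_id, offset)

    def snapshot(self, name):
        # Zero-copy: only the (small) metadata map is duplicated.
        self.snapshots[name] = dict(self.lba_map)

vol = Volume()
vol.write(1, "sco_1", 0)
vol.write(2, "sco_1", 4096)
vol.snapshot("before-upgrade")
vol.write(1, "sco_2", 0)   # overwrite LBA 1 after the snapshot

print(vol.snapshots["before-upgrade"][1])  # ('sco_1', 0)
print(vol.lba_map[1])                      # ('sco_2', 0)
```

The same metadata-only trick gives zero-copy clones: a clone starts from a snapshot's map and diverges as it writes.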
6. From 4KB writes to 4MB SCOs
[Diagram: new 4k writes (LBA 1-10) arrive on SSD or PCI flash and are appended in order into SCO 1, SCO 2 and SCO 3]
Each write is appended to the current Storage Container Object (SCO).
Once SCOs are full (4MB), they are transferred to the Storage Backend at a slow pace.
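The append-to-SCO write path can be sketched as a toy model (SCO_SIZE, WriteBuffer and friends are illustrative names, not the real implementation):

```python
# Toy model of the write path: every incoming 4k write is appended
# to the current Storage Container Object (SCO); full SCOs are
# queued for transfer to the backend. Illustrative names only.

SCO_SIZE = 4 * 1024 * 1024   # 4 MB per SCO
BLOCK = 4096                 # 4k writes

class WriteBuffer:
    def __init__(self):
        self.current = bytearray()   # SCO being filled on SSD
        self.full_scos = []          # ready to ship to the backend
        self.lba_map = {}            # LBA -> (sco index, offset)

    def write(self, lba, data):
        assert len(data) == BLOCK
        # Append-only: even a rewrite of an existing LBA lands at the
        # tail of the current SCO; the map tracks the latest location.
        self.lba_map[lba] = (len(self.full_scos), len(self.current))
        self.current.extend(data)
        if len(self.current) >= SCO_SIZE:
            self.full_scos.append(bytes(self.current))
            self.current = bytearray()

buf = WriteBuffer()
for i in range(1200):        # ~4.7 MB of writes -> one full SCO
    buf.write(i % 10, bytes([i % 256]) * BLOCK)
print(len(buf.full_scos))    # 1
```

Because every write is a sequential append, random 4k VM writes become large sequential 4MB objects, which is exactly what an object store handles well.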
7. Optimized storage architecture
Powered by memory & SSD
Deduplicated read cache: 50,000-70,000 IOPS
HyperConverged
Thin provisioning & zero-copy cloning
Offload storage maintenance tasks to the Tier 2
8. Unlimited scalability
Grow storage performance
by adding more SSDs
Grow storage capacity
by adding more disks
Asymmetric scalability
of CPU and storage
No bottlenecks
No dual controllers
9. Hyper Reliability
Better than RAID 5 protection
Supports Live Migration
Zero-shared architecture
Synchronized Distributed
Transaction Log
Unlimited snapshots,
longer retentions
10. Changes in Open vStorage 2.1
• Improved performance
– 50-70k IOPS per host
– Multiple caching devices
• HyperConverged!
– Encryption, compression, forward error correction
– Manage a pool of SATA drives as Tier2 storage
• Focus on OpenStack/KVM
• Improved hardening against failure
– Seamless volume migration (no metadata rebuild)
Release date: now!
11. Deduplicated Clustered Tier One (A pool of Flash)
Further down the road ...
• Distributed Clustered Tier One
– Uses SSDs across the environment as 1 big shared, deduplicated Tier 1 read cache.
– Speed comparable with an All-Flash array: almost all VM I/O will be from flash.
– Scale storage performance by adding more SSDs.
– Limits impact of an SSD failure. Hot cache in case of Live Migration.
[Diagram: three scale-out OpenStack hosts running VMs; all their SSDs are pooled into one shared, deduplicated Tier 1 read cache, addressed by the hash of each 4k block]
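The shared deduplicated cache idea can be illustrated with a content-addressed cache: blocks are stored under the hash of their content, so identical 4k blocks, even from different VMs on different hosts, collapse onto a single cache entry. A minimal sketch (names are hypothetical):

```python
# Sketch of a content-addressed (deduplicated) read cache: blocks
# are cached under the hash of their content, so identical blocks
# occupy a single entry. Illustrative only, not the real cache.

import hashlib

class DedupReadCache:
    def __init__(self):
        self.store = {}   # sha1(block) -> block

    def put(self, block):
        key = hashlib.sha1(block).hexdigest()
        self.store[key] = block   # duplicates collapse onto one key
        return key

    def get(self, key):
        return self.store.get(key)

cache = DedupReadCache()
zeros = bytes(4096)
k1 = cache.put(zeros)   # e.g. an empty block read by VM 1
k2 = cache.put(zeros)   # the same content read by VM 2
print(k1 == k2, len(cache.store))   # True 1
```

In a clustered Tier 1 the hash would also decide which host's SSD holds the entry, so any host can find a block cached by its neighbours.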
13. 2 Types of Storage (repeat of slide 3)
14. OpenStack Swift: some highlights
• Designed to store unstructured data in a cost-effective way
– Use low cost, large capacity SATA disks
– Increase capacity by adding more disk/servers when needed
– Increase performance by adding spindles/proxies
• High reliability by distributing content across disks
– 3 way replication
– Erasure coding (on the roadmap)
• Easy to manage (no knowledge needed about RAID or
volumes)
[Diagram: two proxy nodes in front of three storage nodes]
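Swift's 3-way replication can be approximated with a toy ring: the object's hash picks a starting position on a ring of storage nodes and the replicas go to three distinct nodes. The real Swift ring also handles partitions, zones and weights; the sketch below only shows the placement idea (all names are illustrative):

```python
# Simplified sketch of Swift-style 3-way replica placement: an
# object's hash selects a starting node on the ring, and the three
# replicas land on three distinct nodes. Toy model only.

import hashlib

NODES = ["storage01", "storage02", "storage03", "storage04", "storage05"]
REPLICAS = 3

def place(obj_name):
    digest = hashlib.md5(obj_name.encode()).hexdigest()
    start = int(digest, 16) % len(NODES)
    # Walk the ring: three consecutive positions -> three distinct nodes.
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

nodes = place("photos/cat.jpg")
print(nodes)                  # three distinct storage nodes
assert len(set(nodes)) == 3   # replicas never share a node
```

Losing any single node leaves two copies of every object, which is why no RAID or volume management is needed on the storage nodes themselves.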
15. Cinder: some highlights
• Cinder provides an infrastructure/API for managing volumes on OpenStack.
– Volume create, delete, list, show, attach, detach, extend
– Snapshot create, delete, list, show
– Backups create, restore, delete, list, show
– Manage volume types, quotas
– Migration
• By default Cinder uses local disks but plugins allow additional storage solutions to be
used:
– External appliances: EMC, Netapp, SolidFire
– Software solutions: GlusterFS, Ceph, …
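The shape of such a plugin can be sketched as a toy, in-memory volume driver: Cinder calls methods like create_volume, extend_volume and create_snapshot on the driver, which maps them onto the backend. Everything below is illustrative; the real Cinder driver base class has many more hooks:

```python
# Toy in-memory Cinder-style volume driver, sketching the calls a
# plugin must answer. Not real Cinder code; illustrative only.

class ToyVolumeDriver:
    def __init__(self):
        self.volumes = {}     # volume name -> size in GB
        self.snapshots = {}   # snapshot name -> source volume

    def create_volume(self, name, size_gb):
        self.volumes[name] = size_gb

    def extend_volume(self, name, new_size_gb):
        self.volumes[name] = new_size_gb

    def create_snapshot(self, volume_name, snap_name):
        self.snapshots[snap_name] = volume_name

    def delete_volume(self, name):
        del self.volumes[name]

drv = ToyVolumeDriver()
drv.create_volume("vol1", 10)
drv.extend_volume("vol1", 20)
drv.create_snapshot("vol1", "snap1")
print(drv.volumes["vol1"])   # 20
```

A backend that supports zero-copy snapshots natively, like Open vStorage, can answer create_snapshot with a metadata operation instead of a data copy.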
16. Cinder with local disks has some problems ...
[Diagram: four compute nodes (computenode01-04) running vm1-vm20; each VM's virtual disks and their copies (vd1a/vd1b, ...) are scattered across the local disks of the other compute nodes and exported over iSCSI, orchestrated by Nova and Cinder]
Management nightmare!
17. A traditional OpenStack setup
[Diagram: Nova (instance management) provisions the VM; Glance (image store) provides the image for the VM and stores images in Swift (object storage); Cinder (block storage) provides the volume for the VM and stores backups in Swift; a SAN, NAS, ... provides the disk space for Cinder]
2 storage platforms?!
18. “Swift under Cinder”?
• Eventual consistency (the CAP Theorem)
• Latency & performance
– VMs require low latency and high performance
– Object stores are developed to contain lots of data
(large disks, low performance)
– Additional latency, as the object store sits on the LAN instead of being attached to the host like DAS
• Different Management Paradigms
– Object stores understand objects; hypervisors understand blocks and files
19. Open vStorage & OpenStack
[Diagram: same layout as slide 17, but Open vStorage replaces the SAN/NAS: it provides the disk space for Cinder by converting object storage into block storage]
21. Get the software
• The unrestricted open-source version
– Open vStorage as open-source software is released under the Apache License, Version 2.0
– Backends: S3 compatible object storage (Swift, Ceph, ...)
– Free community help-forum : https://groups.google.com/forum/?hl=en#!forum/open-vstorage
– You can contribute: https://bitbucket.org/openvstorage/
• Free community version
– Open-source version + limited hyperconverged backend (max. 49 volumes, 4 nodes, 16 disks)
– Free community help-forum : https://groups.google.com/forum/?hl=en#!forum/open-vstorage
• Paid version with 24/7 support (Open vStorage & OpenStack)
– GA release June 2015
22. Open vStorage Based Converged Solution
• To be released June 2015
– HyperConverged OpenStack solution
– Stackable chassis: 4 nodes, 10-40TB usable storage
– Supported Open vStorage version
– Supported OpenStack version (Mirantis)
– Monitoring, support and maintenance included
– Low cost:
• 50% lower than competitors (EVO:RAIL, Nutanix, ...)
• Starts at $35,000 for 4 nodes (256GB usable RAM, 3.5TB cache, 10TB usable storage)
24. Open vStorage <> distributed file system
[Diagram: three KVM hosts (KVM1-3), each running a VSA (VSA 1-3) that exposes a virtual file system (VFS1-3) built from an Object Router, Volume Drivers and a File Driver; vDisks 1-3 and the VM's xml file live on specific hosts; Arakoon stores config params and metadata; the backend is an internal bucket]
25. Live Motion – In depth (Phase 1)
[Diagram: same layout as slide 24; in phase 1 of Live Motion the VM live-migrates to another KVM host while its vDisk is still served by the Object Router on the original host]
26. Live Motion – In depth (Phase 2)
[Diagram: same layout as slide 24; in phase 2 the Object Router on the original host hands the volume over to the Object Router on the destination host]
27. How does Open vStorage solve the problem?
• Open vStorage is a middleware layer in between the hypervisor and the object store.
(Converts object storage into block storage)
– On the host: location based storage (block storage).
– On the backend: time based storage (ideal for objects stores).
– Open vStorage turns a volume into a single bucket.
• OpenStack Cinder Plugin for easy integration (snapshots, ...).
• Distributed file systems don’t work! Open vStorage is not a distributed file system!
– All hosts ‘think’ they see the same virtual file systems.
– Volume is ‘live’ on 1 host instead of all hosts.
– Only the virtual file system metadata is distributed.
• Caching inside the host fixes the impedance mismatch between the slow, high-latency backend and the fast, low-latency requirements of Virtual Machines.
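That read path can be sketched as: serve from the SSD cache when possible, and only fall through to the slow object store on a miss. A minimal sketch with illustrative names:

```python
# Sketch of how a host-side cache hides backend latency: reads are
# served from the SSD cache when possible and only hit the (slow,
# high-latency) object store on a miss. Names are illustrative.

class ReadPath:
    def __init__(self, backend):
        self.cache = {}        # (sco_id, offset) -> block, on SSD
        self.backend = backend # object store: sco_id -> bytes
        self.hits = 0
        self.misses = 0

    def read(self, sco_id, offset, length=4096):
        key = (sco_id, offset)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1       # slow path: fetch from the backend
        block = self.backend[sco_id][offset:offset + length]
        self.cache[key] = block
        return block

backend = {"sco_1": bytes(range(256)) * 16}   # one 4096-byte SCO slice
rp = ReadPath(backend)
rp.read("sco_1", 0)
rp.read("sco_1", 0)
print(rp.hits, rp.misses)    # 1 1
```

With a working set that fits the SSD cache, most VM reads never touch the object store at all, which is what makes the slow, cheap Tier 2 acceptable.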