2. “Hyperconvergence is a type of infrastructure system
with a software-defined architecture that integrates
compute, storage, networking, virtualization, and
other resources from scratch in a commodity hardware
box supported by a single vendor.”
Hyperconvergence - Definition
4. ● Use of commodity x86 hardware
● Scalability
● Enhanced performance
● Centralised management
● Reliability
● Software focused
● VM focused
● Shared resources
● Data protection
HC - What does it offer?
5. It is a regular server with CPU, RAM, network interfaces,
disk controllers, and drives. As far as drives are concerned,
there are only three manufacturers in the world. There is
really nothing special about the hardware.
It is all about software…
There is nothing special about
storage servers
6. ● Scale out - add compute + storage nodes
● Asymmetric scaling - add only compute nodes
● Asymmetric scaling - add only storage nodes
● Fault tolerance and high availability
● Add / remove drives on the fly
● Take drives out and insert them in any other server
● Drive agnostic - any mix of SSDs and spinning drives
● Add servers on the fly; servers need not be identical
● Performance increases as capacity increases
● Handles the IO blender effect - any application on any server
● No special skills required to manage
Mission impossible?
8. StorPool is storage software installed on every server that
controls the drives (both hard disks and SSDs) in that server.
The servers communicate with each other to aggregate the capacity
and performance of all the drives. StorPool provides standard
block devices, and users can create one or more volumes through
its volume manager.
Data is replicated and striped across all drives in all servers to
provide redundancy and performance. The replication level can be
chosen by the user. There are no central management or
metadata servers; the cluster uses a shared-nothing
architecture. Performance scales with every added server or
drive.
StorPool overview
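The replication-and-striping property described above - every chunk of a volume gets several copies, and no two copies of the same chunk land on the same server - can be illustrated with a toy placement sketch. This is not StorPool's actual placement algorithm; it is a minimal round-robin model of the invariant the slide describes, with made-up server names:

```python
# Toy sketch (NOT StorPool's real algorithm): place replicated chunk
# copies across servers so that no two replicas of the same chunk
# land on the same server.
import itertools

def place_chunks(num_chunks, servers, replication=3):
    """Round-robin placement: each chunk gets `replication` copies,
    each copy on a different server."""
    if replication > len(servers):
        raise ValueError("need at least as many servers as replicas")
    placement = {}
    ring = itertools.cycle(range(len(servers)))
    for chunk in range(num_chunks):
        start = next(ring)
        placement[chunk] = [servers[(start + r) % len(servers)]
                            for r in range(replication)]
    return placement

layout = place_chunks(num_chunks=6, servers=["node1", "node2", "node3"])
for chunk, nodes in layout.items():
    # The invariant from the slide: replicas of a chunk are on distinct servers.
    assert len(set(nodes)) == len(nodes)
```

Losing any single server in this model leaves at least two surviving copies of every chunk, which is why the user-chosen replication level directly sets the fault tolerance.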
9. ● Fully integrated with OpenNebula
● Runs on commodity hardware
● Clean code built from the ground up, not a fork of an existing project
● End-to-end data integrity with a 64-bit checksum for each sector
● No metadata servers to slow down operations
● Own network protocol designed for efficiency and performance
● Suitable for hyperconvergence, as it uses only ~10% of a typical
server's resources
● Shared-nothing architecture for maximum scalability and
performance
● SSD support
● In-service rolling upgrades
● Snapshots, clones, QoS, synchronous replication
StorPool - Features
13. Each test run consists of:
1. Configuring and starting a StorPool or Ceph cluster (3 nodes, 12 HDDs, 3 SSDs)
2. Creating one 200 GB volume
3. Filling the volume with incompressible data
4. Performing all test cases by running fio on a client
Comparison with Ceph
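Step 4 above can be driven with an fio job file along these lines. The parameters and the device path are illustrative assumptions, not the exact job used in the benchmark:

```ini
; randread-4k.fio -- illustrative 4 KiB random-read job against the test volume
[global]
ioengine=libaio
direct=1          ; bypass the page cache so the storage is actually measured
runtime=60
time_based
group_reporting

[randread-4k]
filename=/dev/storpool/test   ; device path is an assumption
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Run with `fio randread-4k.fio`; varying `rw`, `bs`, and `iodepth` across runs produces the usual matrix of random/sequential, small/large-block test cases.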
16. Datastore – all common image operations, including: define the datastore;
create images, import images from the Marketplace, clone images.
Virtual Machines – instantiate a VM with raw disk images on an attached
StorPool block device; stop, suspend, start, migrate, migrate-live, and
hot-snapshot a VM disk.
The add-on is implemented by writing StorPool drivers for datastore_mad and
tm_mad, plus a patch to Sunstone's datastores-tab.js for the UI.
OpenNebula - StorPool
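For illustration, a datastore backed by these drivers would be defined with an OpenNebula datastore template along these lines. The attribute values are assumptions inferred from the driver names above, not copied from the add-on's documentation:

```
# storpool-ds.conf -- illustrative OpenNebula datastore template
NAME   = storpool-images
DS_MAD = storpool      # StorPool datastore driver (datastore_mad)
TM_MAD = storpool      # StorPool transfer-manager driver (tm_mad)
TYPE   = IMAGE_DS
```

Registered with `onedatastore create storpool-ds.conf`, after which images in that datastore are provisioned as StorPool volumes rather than files.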
18. We will try to connect to a working
OpenNebula installation with StorPool integrated.
Live demo
19. ● Wish list of features for a hyperconverged solution
based on OpenNebula
● Search for a scalable software-defined storage solution
for OpenNebula
● Test results with StorPool SDS
● Built-in high availability
● Live demo of the OpenNebula integration
Summary