OpenEBS 101
Container Attached Storage
Updated: August 2021
OpenEBS
• Leading Open Source Container Attached Storage Solution for
simplifying the running of Stateful workloads in Kubernetes.
• GitHub: https://github.com/openebs/openebs
• Website: https://openebs.io/
• Slack: https://slack.k8s.io, #openebs
• Twitter: https://twitter.com/openebs
• 121+ Companies contributing since joining CNCF
– (https://openebs.devstats.cncf.io/)
• 187+ New Contributors since May 2019
• 40+ Public References since May 2019
• Incubation PR: https://github.com/cncf/toc/pull/506
Kubernetes Clusters w/ OpenEBS
Data Platforms and Services (architecture diagram)
● Any end user application: retail, healthcare, automobiles, manufacturing, human resources, ...
● Services (stateless): read/write images, text, objects, tables, messages
● Data platform (stateful - databases, key/value stores, message buses): read/write files and blocks (POSIX)
● Volume (data engine): read/write bits/bytes (block device)
● Storage: HDD, SSD, cloud volumes, NVMe/PCIe/SCSI/SAS
● Requirements: availability, consistency, durability, performance, scalability, security
● Software paradigm shift (2020 CNCF Survey: 91%, 55%): agility, productivity, governance, lock-in, performance, cost - driven by better user experience, hyper-growth, faster delivery (mobile, IoT/edge)
● Hardware paradigm shift: hardware utilization (CPU/RAM, NVMe, DPU/IPU, high-capacity drives)
Challenges with existing Storage
Agility and
Productivity
Monolithic data platform
software is being redesigned
with microservices. Need large
number of smaller volumes,
dynamically provision and
dynamically move with pods
to different nodes.
Connectivity and mounting
issues.
Needs prior design and
planning.
Bottlenecked with Siloed Team
and Storage.
Cost and
Performance
Hardware
Advancements
Improving performance using
Servers with 96 Cores, 1TB Flash,
16 TB Drives, NVMe
Device/Fabric,
IPU/DPU/SmartNICs, ARM
Needs a hardware and software
refresh of the storage (often
better to replace and migrate).
Clouds are moving fast, but will
cause Data Gravity and
Lock-in.
Life-cycle
management with
Higher Availability
and Resiliency
Harder to set up and maintain.
Upgrades have to be
scheduled and coordinated.
Higher blast radius.
Has software layers that are
redundant for refactored
(cloud native) data platforms.
Legacy stacks.
Paradigm Shift. Change is inevitable.
Development and People
Processes have changed
Loosely coupled applications and loosely
coupled teams. Conway’s Law applied at all
layers. Data Mesh and Data as Product.
Examples: CNCF end users like Bloomberg
adopting cloud native and open source for agility.
Improve developer and application team
productivity. Platform Teams standardizing towards
API / Kubernetes.
Hardware Advancements
promise improved
performance and low cost
96 cores, 1 TB flash, 16 TB drives, and NVMe are calling for a
rewrite of system software to fully utilize the capabilities of the
hardware: poll-mode drivers, lockless queues, kernel bypass.
OS and Software
Advancements for building
better performing software
DPDK, SPDK, io_uring, meta languages, user
space performance, huge pages
Build systems with expectation that components
will fail. Rust, Go used to write system software and
control plane software. Cloud native and
container native.
Nimble and Fungible Data
Platforms for meeting
demands from users and
government - Evolving Law
around Data Privacy and
Compliance.
HIPAA, GDPR, CCPA and many more with
stricter guidelines on data retention and
conformance.
Data gravity that leads to lock-in should be
avoided; hybrid clouds help mitigate these
issues.
Needs transparency in data storage, allowing
Application and Platform SREs to quickly comply
and provide proof of implementation. Ability to
switch in phases.
Origins of OpenEBS
(Repeats the "Data Platforms and Services" architecture diagram above; 2020 CNCF Survey figures: 91%, 55%, 20%.)
Why Data on Kubernetes?
• Hybrid Cloud Readiness
• Declarative installation of stateful stacks
for developer environments
• Increased Developer Productivity
• Improved Availability
• Improved resilience with compute storage
separation
DoK Day: Neeraj Bisht & Praveen Kumar GT "eCommerce giant
Flipkart on data on Kubernetes at scale"
(https://youtu.be/D77FLwUN9Oo)
OpenEBS Adopters
(https://github.com/openebs/openebs/blob/main/ADOPTERS.md)
Where does OpenEBS fit?
https://www.cncf.io/blog/2020/07/06/announcing-the-updated-cncf-storage-landscape-whitepaper/
Local and
Distributed
Block Storage
Control Plane
Workloads
(e.g. Databases, Key-Value/Object Stores, MQ, AI/ML, CI/CD)
Container Orchestrators
Data Engines
Framework and Tools
Storage Systems
Control-Plane Interface
(e.g. CSI, Others)
● Availability
● Consistency
● Durability
● Performance
● Scalability
● Security
● Ease of Use
Changing Storage Needs
| Attribute | Standalone (MinIO or MySQL) | Standalone (Prometheus or Jenkins) | Distributed (TiDB, Kafka) |
| --- | --- | --- | --- |
| Availability (access to the data continues during a failure condition) | dependent on storage | dependent on storage | built-in |
| Consistency (strong or weak) | need strong | need strong | need strong |
| Durability (bit-rot, endurance, fat-fingers) | needs protection for long term | not required; easy to recreate | tolerant to partial failures |
| Scalability (clients, capacity, throughput) | capacity and vertical scaling | capacity | scale out by adding more capacity |
| Performance (latency and throughput; avoid noisy neighbour effects) | storage should serve the throughput/IO coming from a single node within acceptable SSD latency limits (< 2 ms); hostpath, HDD, SSD | decent latency/throughput (HDD latency of 2-4 ms is acceptable); hostpath, HDD, SSD | low I/O latency and high throughput; (NVMe) SSD, memory |
Kubernetes as universal control plane
| Functionality | Examples | How do Kubernetes (and containers) help? |
| --- | --- | --- |
| Resource Management and Scheduling | Discover the storage nodes and storage devices. Aggregate and schedule volumes. Scheduling includes locality, fault tolerance, application awareness. | Volumes as services (Pods), leveraging the scheduling capabilities of Kubernetes. |
| Configuration Management | Configuration store, RBAC, disaster recovery | Kubernetes configuration store; Kubernetes Operators for implementing the workflows |
| Usability | Web UI, API | Declarative; Kubernetes API extensions; kubectl plugin |
| High Availability and Scalability | Scale up/down of storage nodes and devices; movement of volume services to the right nodes for high availability; highly available provisioning services | Horizontal scaling with Kubernetes; scale up/down the provisioning deployments; volume high availability via extensions to Kubernetes scheduling and Operators |
| Maintenance / Day 2 Operations | User interface / CLI, software upgrades, telemetry and alert tooling, correlation between application and storage during incidents | Declarative upgrades; standardized monitoring, telemetry and logging |
Container Attached Storage
https://www.cncf.io/blog/2020/09/22/container-attached-storage-is-cloud-native-storage-cas/
https://www.cncf.io/webinars/kubernetes-for-storage-an-overview/
K8s Stateful Stack with OpenEBS
OpenEBS Control Plane
CSI Drivers
Storage Operators, Data
Engine Operators
Prometheus Exporters, Velero
Plugin, ...
Stateful Workloads
( MySQL, PostgreSQL, Kafka, Prometheus, Minio, MongoDB, Cassandra, …)
Kubernetes Storage Control Plane
(SC, PVC, PV, CSI)
OpenEBS Data Engines
Replicated Volumes - Mayastor,
cStor, Jiva
Local Volumes - LVM, ZFS,
hostpath, device
Enterprise Framework / Tools
(Velero, Prometheus, Grafana, EFK/ELK, … )
Any Platform, Any Storage
(On Premise/Cloud, Core/Edge, Bare metal/Virtual, NVMe/SCSI, SSD/HDD)
OpenEBS Persistent Volumes
Storage Devices
NVMe/SCSI, SSD/HDD, Cloud/SAN
Block Devices or Device Aggregation/Pooling using LVM, ZFS
Volume Replica
Jiva, cStor, Mayastor
Volume Target
Jiva, cStor, Mayastor
Stateful Workload
Persistent Volume
Mounted using Ext4, XFS, Btrfs, NFS or RawBlock
(Local
Volumes)
iSCSI/NVMeoF
TCP/NVMeoF
TCP/NVMeoF
Synchronous
Replication to
Volume
replicas on
other nodes.
Storage Layer
Volume Data
Layer
Direct
(Replicated
Volumes)
Volume
Services Layer
Volume
Access Layer
CAS
OpenEBS Persistent Volumes
CAS - Hyperconverged, Kubernetes native. Runs anywhere. Easy to install and manage!

| Local Volumes | Replicated Volumes |
| --- | --- |
| Access from a single node. | Access from multiple nodes. |
| Low overhead on capacity and performance. | Durability with synchronous replication; data services ...; has overhead on capacity and performance. |
| Cloud native and distributed workloads - TiDB, etcd, Kafka, ML Jobs | MySQL, MinIO, GitLab, Postgres and cloud native / distributed workloads - Cassandra, ... |
OpenEBS Persistent Volumes
| Attribute | Local Volumes | Replicated Volumes |
| --- | --- | --- |
| Example engines | Device, Hostpath, LVM, Rawfile, ZFS | cStor, Jiva, Mayastor |
| Availability (access to the data continues during a failure condition) | available from a single node in the cluster | available from multiple nodes - with synchronous replicas |
| Scalability (clients, capacity, throughput) | scale-up on the node; horizontal scaling with the K8s cluster | scale-up on the node; horizontal scaling with the K8s cluster |
| Consistency (strong or weak) | delegated to filesystems - e.g. LVM, ZFS | strong consistency at the block level |
| Durability (bit-rot, endurance, fat-fingers) | delegated to the choice of filesystem - LVM, ZFS, or none | provided via replicas |
| Performance (latency and throughput) | depends on storage type and type of filesystem used; low overhead (except in the case of ZFS) | depends on storage type and compute (CPU/RAM); low latency with Mayastor |
How does OpenEBS work?
Storage Devices
NVMe/SCSI, SSD/HDD, Cloud/SAN
Volume Replica
Jiva, cStor,
Mayastor
Volume Target
Jiva, cStor,
Mayastor
Stateful Workload
Persistent Volume
CAS Storage
Control Plane
1. Platform SREs set up the Kubernetes nodes with the required storage.
2. Platform SREs / K8s administrators (using the K8s API) set up OpenEBS and create Storage Classes.
3. Application developers create stateful workloads with Persistent Volume Claims (PVCs).
4. OpenEBS, using data engines, CSI and K8s extensions, creates the required Persistent Volumes (PVs).
5. Platform and operations teams observe and maintain the system using cloud native tooling.
OpenEBS - User Journey
Developer
SRE / Platform Engineer
Run Stateful with local storage
(ML Job or simple app, local s3)
Run Stateful with “enterprise” storage
(DBaaS, CI/CD, Object Storage, AI/ML
Pipelines)
OpenEBS Advocate or
Contributor
OpenEBS Advocate or
Contributor
Phase 1: Non critical workloads
(CI/CD) or resilient workloads
Phase 2: DBaaS
Phase 3: Volumes as Service to
other Data Platforms.
OpenEBS Adopter
Database Administrators
Platform Providers
OpenEBS Benefits and Limitations
Benefits
• Kubernetes native - ease of use and operations. Integrates
into the standard cloud native tooling
• Lower footprint. Flexible deployment options
• Highly composable. Choice of data engines matching the
node capabilities and storage requirements
• Controlled and predictable blast radius. Easy to visualize
the location of the data of an application or volume
• Horizontally scalable. Scale up/down
• Avoid vendor lock-in with fully functional Open Source
Software
• Optimized to reduce operational costs on cloud or
on-prem.
Limitations
• Scale-out volumes are not supported. Only volumes whose
capacity can be served from a single node are supported.
OpenEBS believes the need for large volumes will diminish
as more and more workloads move into Kubernetes.
• Read-write-many is supported via NFS on top of block
storage volumes. OpenEBS believes that read/write-many
use cases are better served via object, key/value or API
based interfaces that offer more control and efficiency.
Quick Start
OpenEBS Local PV - Hostpath
Kubernetes Cluster
node2
node1
OpenEBS Local PV (hostpath)
Pod
Stateful
Workload
(DB, etc)
Setup OpenEBS
PV1
DevOps
admin
(1) openebs
provisioner,
(2) StorageClass
OS
Developer
Using OpenEBS
(3) StatefulSet with PVC
(4) PV OS
node3
OS
Dir: PV1 Dir: PV2
Pod
Stateful
Workload
(DB, etc)
PV2
PVC PVC
/mnt/openebs
/mnt/openebs /mnt/openebs
kubectl apply -f https://openebs.github.io/charts/hostpath-operator.yaml
OpenEBS Local PV (hostpath)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-hostpath
annotations:
openebs.io/cas-type: local
cas.openebs.io/config: |
- name: StorageType
value: hostpath
- name: BasePath
value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
NAME READY STATUS RESTARTS AGE
openebs-localpv-provisioner-5ff697f967-nb7f4 1/1 Running 0 2m49s
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: local-hostpath-pvc
spec:
storageClassName: local-hostpath
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5G
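Once the claim above is created, a Pod can consume it. A minimal sketch (the Pod name, image and mount path are illustrative, not from the original deck):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod   # illustrative name
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc   # the PVC defined above
  containers:
  - name: app
    image: busybox
    # write something to the hostpath-backed volume and stay alive
    command: ["sh", "-c", "echo hello >> /mnt/store/greet.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage
```

Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until this Pod is scheduled; only then is the hostpath PV carved out on the chosen node.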
OpenEBS Local PV (hostpath)
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-hostpath-pvc Pending local-hostpath 3m7s
apiVersion: v1
kind: PersistentVolume
metadata:
name: pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
...
spec:
capacity:
storage: 5G
claimRef:
kind: PersistentVolumeClaim
name: local-hostpath-pvc
...
local:
fsType: ""
path: /var/local-hostpath/pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- gke-kmova-helm-default-pool-3a63aff5-1tmf
storageClassName: openebs-hostpath
volumeMode: Filesystem
status:
phase: Bound
OpenEBS Local PV (hostpath)
OpenEBS Local PV - FAQ
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-hostpath
annotations:
openebs.io/cas-type: local
cas.openebs.io/config: |
- name: StorageType
value: hostpath
- name: BasePath
value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: kubernetes.io/hostname
values:
- node1
- node2
- node3
How are the sub directories managed?
Can I create Local PVs with mounted storage like VMware
or GPDs?
Can I resize Local PV?
How do I monitor Local PV?
How do I backup Local PV?
Why is my PVC unable to bind to a PV?
How do I tell Kubernetes to schedule pods to nodes where
local storage is available?
Troubleshooting Local PV (hostpath)
Where to find us?
https://slack.k8s.io
#openebs
Join OpenEBS
Channel on
Kubernetes Slack
https://openebs.io/community
OpenEBS 201
Container Attached Storage
Updated: Aug 2021
Where does OpenEBS fit?
https://www.cncf.io/blog/2020/07/06/announcing-the-updated-cncf-storage-landscape-whitepaper/
Local and
(Distributed)
Replicated Block
Storage
Control Plane
Workloads
(e.g. Databases, Key-Value/Object Stores, MQ, AI/ML, CI/CD)
Container Orchestrators
Data Engines
Framework and Tools
Storage Systems
Control-Plane Interface
(e.g. CSI, Others)
● Availability
● Consistency
● Durability
● Performance
● Scalability
● Security
● Ease of Use
CAS
OpenEBS Data Engine Evolution
Local Volumes Replicated Volumes
CAS CAS
OpenEBS 1.0 Hostpath, Device Jiva, cStor
OpenEBS 2.0 Hostpath, Device, ZFS Jiva, cStor, Mayastor (alpha)
OpenEBS 3.0 Hostpath, Device, ZFS, LVM, Rawfile, Partition Jiva (CSI), cStor (CSI), Mayastor (beta)
OpenEBS 3.0
● GA:
a. cStor CSI
b. Local PV ZFS
c. Local PV LVM
d. Local PV Hostpath
● Beta:
a. Dynamic NFS
b. Mayastor
c. Jiva CSI
d. Local PV Rawfile
● Alpha:
a. Device (Partition)
● New management components:
a. Upgrade and Migration Operators
b. OpenEBS CLI,
c. Monitoring Mixins,
d. Kyverno Policy Add-on
● Deprecate cStor and Jiva External Provisioners
OpenEBS Data Engine comparison
Hostpath Device Rawfile LVM ZFS Jiva cStor Mayastor
Dynamic Provisioned Volumes Yes Yes Yes Yes Yes Yes Yes Yes
Capacity Management No No Yes Yes Yes Yes Yes Yes
Snapshots No No No Yes Yes No Yes Yes*
Incremental Backup No No No No Yes No Yes Yes*
Clones No No No No Yes No Yes Yes*
Performance Yes Yes Yes Yes No No No Yes
Node Failure (HA) No No No No No Yes Yes Yes
OpenEBS Data Engine comparison
Hostpath Device Rawfile LVM ZFS Jiva cStor Mayastor
Node Storage Pooling (RAID) Yes Yes Yes Yes Yes Yes Yes Yes
Full Backup/Restore Yes Yes Yes Yes Yes Yes Yes Yes
Capacity Based Scheduling Yes* Yes Yes* Yes Yes Yes Yes Yes
Application Aware Scheduling Yes Yes Yes Yes Yes Yes Yes Yes*
CLI Support Yes* Yes* Yes* Yes* Yes Yes Yes Yes*
Monitoring and Alerts Yes* Yes Yes* Yes Yes* Yes Yes Yes*
Kubernetes Installer (Helm) Yes Yes* Yes Yes Yes Yes Yes Yes
Rolling Upgrades Yes Yes Yes Yes Yes Yes Yes Yes
OpenEBS Local PV - Use cases
node1 node2 node3
Local PVs are great for cloud native workloads (or distributed
systems) that have:
● Built-in proxies to distribute the data
● Built-in backup and migration solutions
● Need for low latency access
Or short-lived stateful workloads that need to save state and
resume after a reboot (e.g. ML jobs).
Or edge nodes running a single-node K8s cluster.
LocalPV HostPath
Node 3
LocalPV Device
Node 1
ZFS or LVM LocalPV
Node 2 Pool
Application
Namespace
Internet
Physical Hard disks
OpenEBS LocalPV options
Stateful
Application
Running
Inside Pod in
Kubernetes
Persistent
Volume for
Application
Create LocalPV
StorageClass
XFS or EXT:
NDM knows if
disk is in use
Creates
volume in
user
defined
pool
1
2
3
OpenEBS 3.0 (Local PV)
OpenEBS Local Storage Operators make it easy to provision Local Volumes with different flavors of
local storage available on nodes.
● OpenEBS Hostpath LocalPV (stable), the first and most widely used LocalPV, now supports enforcing XFS
quotas and using a custom node label for node affinity (instead of the default kubernetes.io/hostname)
● OpenEBS ZFS LocalPV (stable), widely used for production workloads that need direct and resilient storage,
has added new capabilities like:
○ Velero plugin to perform incremental backups that make use of the copy-on-write ZFS snapshots.
○ CSI Capacity based scheduling used with waitForFirstConsumer bound Persistent Volumes.
○ Improvements to inbuilt volume scheduler (used with immediate bound Persistent Volumes) that can
now take into account the capacity and the count of volumes provisioned per node.
● OpenEBS LVM LocalPV (stable) can be used to provision volumes on top of LVM Volume Groups and
supports the following features:
○ Thick (Default) or Thin Provisioned Volumes
○ CSI Capacity based scheduling used with waitForFirstConsumer bound Persistent Volumes.
○ Snapshot that translates into LVM Snapshots
○ Ability to set QoS on the containers using LVM Volumes.
○ Also supports other CSI capabilities like volume expansion, raw or filesystem mode, metrics.
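A StorageClass for the LVM LocalPV driver might look like the sketch below (the volume group name lvmvg is an assumption - it must already exist on the storage nodes; the provisioner name follows the openebs/lvm-localpv project):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"            # assumed pre-created VG on the nodes
provisioner: local.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer   # enables CSI capacity based scheduling
```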
kubectl apply -f https://openebs.github.io/charts/hostpath-operator.yaml
OpenEBS LocalPV (hostpath)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-hostpath
annotations:
openebs.io/cas-type: local
cas.openebs.io/config: |
- name: StorageType
value: hostpath
- name: BasePath
value: /mnt/openebs-storage
- name: XFSQuota
enabled: "true"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
NAME READY STATUS RESTARTS AGE
openebs-localpv-provisioner-5ff697f967-nb7f4 1/1 Running 0 2m49s
$ sudo mount -o rw,pquota /dev/nvme1n1 /mnt/openebs-storage
OpenEBS Replicated PV - Use cases
node1 node2 node3
Replicated PVs are great for cloud native workloads (or
distributed systems) that need:
● Performance
● Resiliency against single-node and/or single-device
failures
● Low latency access
Replicated PVs are great if you would like to:
● Lower your blast radius, while still using bin-packing to
efficiently use your hardware resources.
● Efficiently use Capacity and Performance of NVMe
Devices. (with Mayastor)
Application
Namespace
Internet
OpenEBS CStor
Stateful
Application
Running
Inside Pod in
Kubernetes
Persistent
Volume for
Application
Create CStor StorageClass
Create CStor pools on all storage nodes.
STS or
Deployment
CStor Pool
CStor Pool
CStor Pool
NDM knows if
disk is in use
CStor
Target
OpenEBS 3.0 (Replicated PV)
OpenEBS Replicated Volumes enable users to make use of the local storage available to Kubernetes
nodes to provide durable persistent volumes that are resilient to node failures. The name
"replicated" stems from the fact that OpenEBS uses synchronous replication of volumes instead of
sharding blocks across different nodes.
● OpenEBS Jiva (stable) has added support for a CSI Driver and a Jiva operator that include features like:
○ Enhanced management of the replicas
○ Ability to auto-remount volumes marked read-only due to iSCSI timeouts back to read-write.
○ Faster detection of node failures, helping Kubernetes move the application from the failed node to a
new node.
● OpenEBS CStor (stable) has added support for a CSI Driver and improved custom resources and operators for
managing the lifecycle of CStor Pools. The 3.0 version of CStor includes:
○ An improved schema that allows users to declaratively run operations like replacing disks in mirrored CStor
pools, adding new disks, scaling up replicas, or moving CStor Pools to a new node. The new custom resource for
configuring CStor is called CStorPoolCluster (CSPC), compared to the older StoragePoolCluster (SPC).
○ Ability to auto-remount volumes marked read-only due to iSCSI timeouts back to read-write.
○ Faster detection of node failures, helping Kubernetes move the application from the failed node to a
new node.
● 3.0 also deprecates the older CStor and Jiva volume provisioners that were based on the Kubernetes external
storage provisioner. No more features will be added to the older provisioners, and users are requested to
migrate their Pools and Volumes to the CSI Drivers as soon as possible.
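The CSPC custom resource mentioned above can be declared roughly as follows; a minimal single-node stripe pool sketch (the hostname and block device name are illustrative):

```yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
  - nodeSelector:
      kubernetes.io/hostname: "node1"          # illustrative node name
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: "blockdevice-2eff94561dab533cabfeb6b4ddbbe851"   # discovered by NDM
    poolConfig:
      dataRaidGroupType: "stripe"
```

Scaling up replicas or moving pools is then a matter of editing this declaration and letting the CStor operators reconcile.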
Node 3
Node 1
Node 2
Application
Namespace
Internet
OpenEBS Mayastor
Stateful
Application
Running
Inside Pod in
Kubernetes
Persistent
Volume for
Application
Create Mayastor
StorageClass
Create Mayastor pools on all storage nodes.
STS or
Deployment
Maya
Maya
Maya
OpenEBS Mayastor (Beta In Progress)
Mayastor delivers high performance access to
persistent data and services, using the industry leading
Storage Performance Development Kit (SPDK)
● Uses SPDK for NVMe features
○ Poll-mode and event-loop design for
maximum performance
○ Memory utilization tuned for environments
with limited huge pages
○ Scales within the node and across nodes
● Implemented in Rust for memory safety
guarantees
● Configuration management using secure gRPC
API
● Volume Services
○ Resilient against node failures via
synchronous replication
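Provisioning a Mayastor volume is driven by a StorageClass; a hedged sketch based on the Mayastor beta documentation of this era (the replica count and protocol parameter names may differ in your installed version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-nvmf
parameters:
  repl: "3"            # three synchronous replicas
  protocol: "nvmf"     # expose the volume over NVMe-oF (TCP)
provisioner: io.openebs.csi-mayastor
```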
Control Plane Improvements
● Control plane implements
application aware data placement
● Fine grained control over errors,
restarts and timeouts for
Kubernetes
● Prometheus Metrics exporter
● Integrate Mayastor into OpenEBS
tools - installer, CLI, monitoring
Core Enhancements
● Reduce fail-over time in loss of K8s
node situation
● Support for LVM as backing store
OpenEBS Mayastor (Beta In Progress)
Mayastor (ANA) Volumes
OpenEBS 3.1 Mayastor with ANA
CAS CAS CAS
Mayastor (ANA) Faster HA
CAS CAS CAS
OpenEBS 3.0 (Other Features)
Beyond the improvements to the data engines and their corresponding control plane, there are several new enhancements that will help
with ease of use of OpenEBS engines:
● Several fixes and enhancements to the Node Disk Manager like automatically adding a reservation tag to devices, detecting
filesystem changes and updating the block device CR (without the need for a reboot), metrics exporter and an API service that can
be extended in the future to implement storage pooling or cleanup hooks.
● Dynamic NFS Provisioner that allows users to launch a new NFS server on any RWO volume (called backend volume) and expose an
RWX volume that saves the data to the backend volume.
● Kubernetes Operator, driven by a Kubernetes Job, for automatically upgrading Jiva and CStor volumes
● Kubernetes Operator for automatically migrating CStor Pools and Volumes from older pool schema and legacy (external storage
based) provisioners to the new Pool Schema and CSI volumes respectively.
● OpenEBS CLI (a kubectl plugin) for easily checking the status of the block devices, pools (storage) and volumes (PVs).
● OpenEBS Dashboard (a Prometheus and Grafana mixin) that can be installed via jsonnet or helm chart with a set of default Grafana
dashboards and AlertManager rules for OpenEBS storage engines.
● Enhanced OpenEBS helm chart that can easily enable or disable a data engine of choice. The 3.0 helm chart stops installing the
legacy CStor and Jiva provisioners. If you would like to continue to use them, you have to set the flag “legacy.enabled=true”.
● OpenEBS helm chart includes sample kyverno policies that can be used as an option for PodSecurityPolicies(PSP) replacement.
● OpenEBS images are delivered as multi-arch images with support for AMD64 and ARM64 and hosted on DockerHub, Quay and
GHCR.
● Support for installation in air gapped environments.
● Enhanced Documentation and Troubleshooting guides for each of the engines located in the respective engine repositories.
● A new and improved design for the OpenEBS website.
kubectl apply -f https://openebs.github.io/charts/nfs-operator.yaml
OpenEBS NFS (RWX Volumes)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-rwx
annotations:
openebs.io/cas-type: nfsrwx
cas.openebs.io/config: |
- name: NFSServerType
value: "kernel"
- name: BackendStorageClass
value: "openebs-hostpath"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
NAME READY STATUS RESTARTS AGE
openebs-nfs-provisioner-79b6ccd59-626pd 1/1 Running 0 62s
NFS Server
Backend
PV
Create a NFS Server on
top of Backend PV
Create a NFS PV
pointing to OpenEBS
NFS Server
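With the openebs-rwx StorageClass in place, a shared volume is requested like any other claim, just with ReadWriteMany access (the claim name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-data-pvc         # illustrative name
spec:
  storageClassName: openebs-rwx
  accessModes:
  - ReadWriteMany               # served by the dynamically launched NFS server
  resources:
    requests:
      storage: 5G
```

The provisioner creates the backend RWO volume (here via openebs-hostpath), launches an NFS server on top of it, and binds this claim to an NFS-backed PV.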
kubectl krew install openebs
OpenEBS CLI
$ kubectl openebs version
COMPONENT VERSION
Client v0.4.0
OpenEBS CStor 3.0.0
OpenEBS Jiva Not Installed
OpenEBS LVM LocalPV Not Installed
OpenEBS ZFS LocalPV Not Installed
$ kubectl openebs get bd
NAME PATH SIZE CLAIMSTATE STATUS FSTYPE MOUNTPOINT
gke-kmova-helm-default-pool-595accd4-pgtf
├─blockdevice-2eff94561dab533cabfeb6b4ddbbe851 /dev/sdb 375GiB Unclaimed Active ext4 /mnt/disks/ssd0
├─blockdevice-a2247055ab6c06d27db1de47e61c3ac9 /dev/sdc1 375GiB Unclaimed Active
└─blockdevice-b90456e7143408f1c29738c4d4deafec /dev/sdd 375GiB Unclaimed Active ext4 /mnt/disks/ssd2
gke-kmova-helm-default-pool-595accd4-bwcd
├─blockdevice-3c679953243dfc1344d2a4ac352f4c6e /dev/sdd 375GiB Unclaimed Active ext4 /mnt/disks/ssd2
├─blockdevice-a5158511cf50b507e96fd628dca05af0 /dev/sdc1 375GiB Unclaimed Active
└─blockdevice-bc795daa24fc3589ee2f8b835bcdcba6 /dev/sdb 375GiB Unclaimed Active ext4 /mnt/disks/ssd0
OpenEBS CLI
$ kubectl openebs describe volume pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3
pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 Details :
-----------------
NAME : pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3
ACCESS MODE : ReadWriteOnce
CSI DRIVER : cstor.csi.openebs.io
STORAGE CLASS : cstor-csi-disk
VOLUME PHASE : Bound
VERSION : 3.0.0
CSPC : cstor-disk-pool
SIZE : 10.0GiB
STATUS : Degraded
REPLICA COUNT : 3
Portal Details :
------------------
IQN : iqn.2016-09.com.openebs.cstor:pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3
VOLUME NAME : pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3
TARGET NODE NAME : gke-kmova-helm-default-pool-595accd4-bwcd
PORTAL : 10.3.248.245:3260
TARGET IP : 10.3.248.245
Replica Details :
-----------------
NAME TOTAL USED STATUS AGE
pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-clz4 296.9KiB 5.4MiB Healthy 1m2s
pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-h8b9 296.9KiB 5.6MiB Healthy 1m2s
pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-jznw 300.8KiB 5.5MiB Healthy 1m2s
OpenEBS Monitoring
helm repo add openebs-monitoring https://openebs.github.io/monitoring/
helm repo update
helm install openebs-dashboard openebs-monitoring/openebs-monitoring --namespace openebs --create-namespace
OpenEBS 3.1 ( Planning )
Stateful Operator (STS) with Local PV
● Fault tolerant scheduling for
distributed applications
● Stale PVCs
● Moving Data of Local PV on K8s
upgrade/node-recycle (Data
Populator)
Engineering Optimizations
● CI Infrastructure improvements
● Unified Local CSI Drivers
● NDM - Enclosure / Storage
Management
● Usability Enhancements (based on
user feedback). ( Upgrades, Pool
Creation, …)
● Automated Security Compliance
Checks
Local PV on Shared Device
● Devices visible to multiple nodes
via a shared filesystem, e.g.
clustered LVM.
● Allow pods to move across
nodes that have access to
device.
● Remote access via iSCSI / NVMe
Integration Hooks
● Setting up finalizers or other
metadata on Volume related
objects for add-on operators. Eg:
Billing/Auditing by Platform
operators
Mayastor Beta
OpenEBS 3.1 (LocalPV ++)
Local Volumes
OpenEBS 3.1 Local PV ++ ( Shared Devices)
Local Volumes (HA)
OpenEBS 3.1 (LocalPV ++)
Local Volumes
OpenEBS 3.1 Local PV ++ ( Shared Devices + Remote Access via
NVMe )
CAS
Local Volumes (HA)
CAS
OpenEBS 3.1 (LocalPV ++)
Local Volumes
OpenEBS 3.1 Local PV ++ ( Shared Devices + Remote Access via
NVMe )
CAS CAS
Local Volumes (HA)
CAS
CAS
OpenEBS 3.1 Storage Cohort
A storage cohort is an autonomous storage
unit consisting of a set of storage devices
(grouped together as a storage pool) and the
storage software running on the nodes
attached to those devices.
The storage software (the storage controller,
aka SDS) helps create and manage storage
volumes, along with the corresponding targets
that storage initiators can talk to for any I/O
operations.
OpenEBS 3.1 (Cluster topology)
AZ-A1 AZ-A2 AZ-A3
AZ-A1x AZ-A2y AZ-A3z
FD-A1a FD-A1b FD-A2a FD-A2b FD-A3a FD-A3b
FD-A1a
FD-A2a
FD-A3a
FD-A1b
FD-A2b
FD-A3b
Application Nodes
Storage Nodes
( with JBODs / JBOFs )
OpenEBS 3.1 - Fault Tolerant Scheduling
NVMe NVMe NVMe NVMe NVMe NVMe
AZ-A1 AZ-A2 AZ-A3
AZ-A1x AZ-A2y AZ-A3z
FD-A1a FD-A1b FD-A2a FD-A2b FD-A3a FD-A3b
FD-A1a
FD-A2a
FD-A3a
FD-A1b
FD-A2b
FD-A3b
AZ-A1, AZ-A2, AZ-A3
Application Node
Application Node
Storage Cohort - 1
Storage Node
NVMe
Storage Node
pv
NVMe
Cohort
Manager
OpenEBS 3.1 (w Shared Device + NVMe)
OpenEBS CSI Controller
Volume
Scheduler
CSI Node
Agent
cohort
volume
SC
PVC
PV
Cohort
Controllers
Node
Agent
MTL...
pool
Shared device
Storage Cohort - 2
Storage Node
Storage Node
Cohort
Manager
Node
Agent
MTL...
Shared device
OpenEBS Volume Types (Recap)
NVMe NVMe
pv
OpenEBS Local PV ++
(Shared Local Device)
pv
OpenEBS Replicated
(Mayastor)
(over Node Local Devices)
Maya Maya
pv
OpenEBS Local PV (Shared
Device)
pv
OpenEBS
Local PV
OpenEBS 401
Container Attached Storage
Updated: Aug 2021
CAS
OpenEBS Future Deployments
Any Workload, Any Cluster
CAS CAS
OpenEBS 4.0 (Multipath) NVMe over Local (Multipath) Mayastor with ANA
NVMe NVMe NVMe
NW Fabric for NVMe
OpenEBS Volume Types (Recap)
NVMe NVMe
pv
OpenEBS Local PV ++
(Shared Local Device)
pv
OpenEBS Replicated
(Mayastor)
(over Node Local Devices)
Maya Maya
pv
OpenEBS Local PV (Shared
Device)
pv
OpenEBS
Local PV
OpenEBS Storage Cohort
A storage cohort is an autonomous storage
unit consisting of a set of storage devices
(grouped together as a storage pool) and the
storage software running on the nodes
attached to those devices.
The storage software (the storage controller,
aka SDS) helps create and manage storage
volumes, along with the corresponding targets
that storage initiators can talk to for any I/O
operations.
Storage Cohort Example
● Nodes with PCIe SSDs
● 40-100Gb NICs
● 512 GB RAM
● 32-96 cores
● Horizontally scalable / rack scaled
[Diagram: application nodes connect over a NW Fabric for NVMe to Storage Cohort - 1: storage nodes with NVMe devices serving PVs, coordinated by a Cohort Manager]
OpenEBS 3.1 Local PV ++ (Recap)
[Diagram: the OpenEBS CSI Controller (Volume Scheduler) and CSI Node Agent work with Cohort Controllers and a Node Agent (MTL…) to reconcile the cohort/volume/pool custom resources with SC/PVC/PV; Storage Cohort - 2 runs its own Cohort Manager and Node Agent over a shared device]
OpenEBS FK (w Shared Device + NVMe)
[Diagram: an application cluster (application nodes, OpenEBS CSI Controller, OpenEBS Storage Manager with Volume Scheduler, CSI Node Agent) connects over a NW Fabric for NVMe to a Storage Cohort (storage node with NVMe devices serving PVs, Cohort Manager); Cohort Controllers and a Node Agent (MTL…) reconcile the cohort/volume/pool custom resources with SC/PVC/PV]
OpenEBS 4.0 - Fault Tolerant Scheduling
[Diagrams: NVMe-backed replicas placed across storage availability zones (AZ-S1…AZ-S3) and storage fault domains (FD-S1a…FD-S3b), mapped against application zones AZ-A1…AZ-A3 and application fault domains FD-A1a…FD-A3b; variants show a single storage AZ versus storage spread across three AZs]
OpenEBS 4.0 - Affinity Scheduling
[Diagram: NVMe-backed volumes co-located with their applications within the same zones (AZ-A1…AZ-A3) and fault domains (FD-A1a…FD-A3b)]
Mayastor Control Plane
OpenEBS integration with MayaData
[Diagram: application cluster nodes run the OpenEBS CSI Controller, CSI Node Agent, and Cohort Controller, reconciling the cohort/volume/pool custom resources with SC/PVC/PV; a storage cluster of Mayastor Pool Nodes runs Mayastor (with MTL…) and an API Server, exporting NVMe-backed pools over the NW Fabric for NVMe]
OpenEBS integration with VDA
[Diagram: application cluster nodes run the OpenEBS CSI Controller, CSI Node Agent, and Cohort Controller, reconciling the cohort/volume/pool custom resources with SC/PVC/PV; a storage cluster of VDA Nodes (with MTL…), fronted by a VDA Portal on a VDA Control Node, exports VDA volumes over the NW Fabric for NVMe]
OpenEBS integration with Redfish (RF)
[Diagram: application cluster nodes run the OpenEBS CSI Controller, CSI Node Agent, and Cohort Controller, reconciling the cohort/volume/pool custom resources with SC/PVC/PV; a storage cluster of RF Nodes (with MTL…), fronted by an RF Portal on an RF Control Node, exports RF volumes over the NW Fabric for NVMe]
Enterprise Tools and Operators (Integrations)
[Diagram: an application cluster running OpenEBS (CSI Driver) connects over a NW Fabric for NVMe to a Storage Cohort (storage nodes with NVMe devices serving PVs, a Cohort Manager, and a Storage Cohort Controller)]
OpenEBS with Enterprise Integrations
[Diagram: the OpenEBS CSI Controller and OpenEBS Storage Manager (Volume Scheduler), CSI Node Agent, Cohort Controllers, and LVM-backed Node Agent (MTL…) reconcile the cohort/volume/pool custom resources with SC/PVC/PV; Platform Ops (K8s Cluster Operator), SRE Ops (MTL), Infra Operators (RBAC, Compliance, BCP), and App Operators integrate around the storage cohort]
OpenEBS 4.0 (Features)
CSI Driver (with Application and Platform Awareness)
● Application and Storage proximity
● Application high availability
● Volume IO Fencing
● Volume Access control
● Scale up/down application replicas (volume cleanup)
● Volume Migration (for local or single-replica volumes)
Storage (Cohort) Control Plane
● RBAC and Security
● Device Management
● Pool Management
○ RAID
● Volume Management
○ Fault Tolerant Scheduling
○ Durability
○ Snapshot
○ Backup / Restore
○ Migration
● High Availability
● MTL
● API Driven for integrating with Infra Operators
○ Rook
○ Crossplane
○ MicroK8s
Mayastor
(Enhance, Optimize and Productise SPDK for Block Storage)
● Pluggable Storage Layers (beyond blobstore/lvol)
● Scale and Performance
● Security
● High Availability
Thank you!
OpenEBS 3.0 (Hyperconvergence Achieved)
Cluster Components: Helm Chart / YAML, Data Engine Operator, CSI Driver, Plugins (Velero, Metrics Exporter, …), CLI, Kubernetes API Server ++ (OpenEBS Custom Resources), etcd ++ (OpenEBS configuration store)
Node Components (Node 1 … Node n): CSI Driver, Data Engine, Plugins (Velero, Metrics Exporter, …)
OpenEBS 3.1 Release Timeline
● Nov 30th 2021: 3.1 POC
● Mar 31st 2022: 3.1 Alpha Release (Virtual SAS Array, VDA)
OpenEBS 4.0 (SCP for Any NVMe Target)
● OpenEBS NVMe over Shared Local Device
● Mayastor over Node Local Devices
● OpenEBS NVMe over NVMe (remote) Device
● Storage Array(s) exposing NVMe Targets
OpenEBS Mayastor
A stateful application running inside a Pod in Kubernetes, with a Persistent Volume provisioned for it:
1. Install the Mayastor control plane.
2. Create Mayastor pools (MSP) on all storage nodes.
3. Create a Mayastor StorageClass.
4. Deploy the StatefulSet (STS) with (MSP) node selectors.
[Diagram: Nodes 1-3 run Mayastor over local SSDs; Nodes 4-5 run the Mayastor control plane; the application namespace is reachable from the Internet]
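The pool and StorageClass steps above can be sketched with two manifests. The node name, device path, and class name below are illustrative, and the field layout follows the Mayastor documentation of this era (v1alpha1 MayastorPool), so verify them against your release:

```yaml
# Step 2: a Mayastor pool (MSP) per storage node, over a local NVMe SSD.
apiVersion: openebs.io/v1alpha1
kind: MayastorPool
metadata:
  name: pool-on-node-1          # illustrative name
  namespace: mayastor
spec:
  node: node-1                  # illustrative storage node
  disks: ["/dev/nvme0n1"]       # illustrative device
---
# Step 3: a StorageClass provisioning 3-way replicated volumes over NVMe-oF.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-nvmf-3
parameters:
  repl: "3"
  protocol: nvmf
provisioner: io.openebs.csi-mayastor
```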
Italy Agriculture Equipment Market Outlook to 2027
 
Bài tập unit 1 English in the world.docx
Bài tập unit 1 English in the world.docxBài tập unit 1 English in the world.docx
Bài tập unit 1 English in the world.docx
 
Ready to Unlock the Power of Blockchain!
Ready to Unlock the Power of Blockchain!Ready to Unlock the Power of Blockchain!
Ready to Unlock the Power of Blockchain!
 
7 Best Cloud Hosting Services to Try Out in 2024
7 Best Cloud Hosting Services to Try Out in 20247 Best Cloud Hosting Services to Try Out in 2024
7 Best Cloud Hosting Services to Try Out in 2024
 
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
 
制作毕业证书(ANU毕业证)莫纳什大学毕业证成绩单官方原版办理
制作毕业证书(ANU毕业证)莫纳什大学毕业证成绩单官方原版办理制作毕业证书(ANU毕业证)莫纳什大学毕业证成绩单官方原版办理
制作毕业证书(ANU毕业证)莫纳什大学毕业证成绩单官方原版办理
 

OpenEBS 101

  • 1. OpenEBS 101 Container Attached Storage Updated: August 2021
  • 2. OpenEBS • Leading Open Source Container Attached Storage Solution for simplifying the running of Stateful workloads in Kubernetes. • GitHub: https://github.com/openebs/openebs • Website: https://openebs.io/ • Slack: https://slack.k8s.io, #openebs • Twitter: https://twitter.com/openebs • 121+ Companies contributing since joining CNCF – (https://openebs.devstats.cncf.io/) • 187+ New Contributors since May 2019 • 40+ Public References since May 2019 • Incubation PR: https://github.com/cncf/toc/pull/506
  • 4. Data Platforms and Services Data Platform (Stateful) (Databases, Key/Value Stores, Message Bus) read/write files, blocks (POSIX) Volume (Data Engine) read/write bits/bytes (Block Device) Services (Stateless) read/write image, text, ... read/write object read/write table read/write message Any End User Application Retail, Healthcare, Automobiles, Manufacturing, Human Resources, ... Availability Consistency Durability Performance Scalability Security 91% 55% Better user experience Hyper-growth Faster delivery (Mobile, IoT/Edge) Storage HDD, SSD, Cloud Volumes, NVMe/PCIe/SCSI/SAS Hardware Utilization (CPU/RAM, NVMe, DPU/IPU, High Capacity Drives) Software Paradigm Shift (2020 CNCF Survey) Agility Productivity Governance Lock-in Performance Cost Hardware Paradigm Shift
  • 5. Challenges with existing Storage Agility and Productivity Monolithic data platform software is being redesigned with microservices. Needs a large number of smaller volumes, dynamically provisioned, that move dynamically with pods to different nodes. Connectivity and mounting issues. Needs prior design and planning. Bottlenecked by siloed teams and storage. Cost and Performance Hardware Advancements Improving performance using servers with 96 Cores, 1TB Flash, 16 TB Drives, NVMe Device/Fabric, IPU/DPU/SmartNICs, ARM. Needs hardware and software refresh of the storage (better to replace and migrate). Clouds are moving fast, but cause data gravity and lock-in. Life-cycle management with Higher Availability and Resiliency Harder to set up and maintain. Upgrades have to be scheduled and coordinated. Higher blast radius. Has software layers that are redundant for refactored (cloud native) data platforms. Legacy stacks.
  • 6. Paradigm Shift. Change is inevitable. Development and People Processes have changed Loosely coupled applications and loosely coupled teams. Conway’s Law applied at all layers. Data Mesh and Data as Product. Examples: CNCF end users like Bloomberg adopting cloud native for agility and open source. Improve developer and application team productivity. Platform Teams standardizing towards API / Kubernetes. Hardware Advancements promise improved performance and low cost 96 Cores, 1TB Flash, 16 TB Drives, NVMe Calling for a rewrite of the system software to fully utilize the capabilities of the hardware. Poll Mode Drivers/Lockless queues, kernel bypass OS and Software Advancements for building better performing software DPDK, SPDK, io_uring, meta languages, user space performance, huge pages Build systems with the expectation that components will fail. Rust, Go used to write system software and control plane software. Cloud native and container native. Nimble and Fungible Data Platforms for meeting demands from users and government - Evolving laws around Data Privacy and Compliance. HIPAA, GDPR, CCPA and many more with stricter guidelines on data retention and conformance. Data gravity should be avoided to prevent lock-in. Hybrid Clouds to mitigate the issues. Needs transparency in data storage, allowing Application and Platform SREs to quickly comply and provide proof of implementation. Ability to switch in phases.
  • 7. Origins of OpenEBS Data Platform (Stateful) (Databases, Key/Value Stores, Message Bus) read/write files, blocks (POSIX) Volume (Data Engine) read/write bits/bytes (Block Device) Services (Stateless) read/write image, text, ... read/write object read/write table read/write message Any End User Application Retail, Healthcare, Automobiles, Manufacturing, Human Resources, ... Availability Consistency Durability Performance Scalability Security 91% 55% Better user experience Hyper-growth Faster delivery Storage HDD, SSD, Cloud Volumes, NVMe/PCIe/SCSI/SAS Hardware Utilization CPU/RAM, NVMe, DPU/IPU, High Capacity Drives Software Paradigm Shift (2020 CNCF Survey) Agility Cost Governance Lock-in Productivity Hardware Paradigm Shift 20%
  • 8. Why Data on Kubernetes? • Hybrid Cloud Readiness • Declarative installation of stateful stacks for developer environments • Increased Developer Productivity • Improved Availability • Improved resilience with compute storage separation DoK Day: Neeraj Bisht & Praveen Kumar GT "eCommerce giant Flipkart on data on Kubernetes at scale" (https://youtu.be/D77FLwUN9Oo) OpenEBS Adopters (https://github.com/openebs/openebs/blob/main/ADOPTERS.md)
  • 9. Where does OpenEBS fit? https://www.cncf.io/blog/2020/07/06/announcing-the-updated-cncf-storage-landscape-whitepaper/ Local and Distributed Block Storage Control Plane Workloads (e.g. Databases, Key-Value/Object Stores, MQ, AI/ML, CI/CD) Container Orchestrators Data Engines Framework and Tools Storage Systems Control-Plane Interface (e.g. CSI, Others) C A B B ● Availability ● Consistency ● Durability ● Performance ● Scalability ● Security ● Ease of Use
  • 10. Changing Storage Needs

| Workload Type | Standalone (MinIO or MySQL) | Standalone (Prometheus or Jenkins) | Distributed (TiDB, Kafka) |
|---|---|---|---|
| Availability (access to the data continues during a failure condition) | dependent on storage | dependent on storage | built-in |
| Consistency (strong or weak) | need strong | need strong | need strong |
| Durability (bit-rot, endurance, fat-fingers) | needs protection for long term | not required; easy to recreate | tolerant to partial failures |
| Scalability (clients, capacity, throughput) | capacity and vertical scaling | capacity | scale out by adding more capacity |
| Performance (latency and throughput; avoid noisy neighbour effects) | storage should serve throughput/IO coming from a single node within acceptable (SSD) latency limits - < 2 ms: Hostpath, HDD, SSD | decent latency/throughput (HDD latency of 2-4 ms is acceptable): Hostpath, HDD, SSD | low I/O latency and high throughput (NVMe): SSD, Memory |
  • 11. Kubernetes as universal control plane

| Functionality | How do Kubernetes (and containers) help? |
|---|---|
| Resource Management and Scheduling: discover the storage nodes and storage devices; aggregate and schedule volumes; scheduling includes locality, fault tolerance, application awareness | Volumes as services (Pods), leveraging the capabilities of Kubernetes for scheduling |
| Configuration Management: configuration store, RBAC, disaster recovery | Kubernetes configuration store; Kubernetes Operators for implementing the workflows |
| Usability: Web UI, API | Declarative; Kubernetes API extensions; kubectl plugin |
| High Availability and Scalability: scale up/down of storage nodes and devices; movement of volume services to the right nodes for high availability; highly available provisioning services | Horizontal scaling with Kubernetes; scale up/down the provisioning deployments; volume high availability via extensions to Kubernetes scheduling and Operators |
| Maintenance / Day 2 Operations: user interface / CLI, software upgrades, telemetry and alert tooling, correlation between application and storage during incidents | Declarative upgrades; standardized monitoring, telemetry and logging |
  • 13. K8s Stateful Stack with OpenEBS OpenEBS Control Plane CSI Drivers Storage Operators, Data Engine Operators Prometheus Exporters, Velero Plugin, ... Stateful Workloads ( MySQL, PostgreSQL, Kafka, Prometheus, Minio, MongoDB, Cassandra, …) Kubernetes Storage Control Plane (SC, PVC, PV, CSI) OpenEBS Data Engines Replicated Volumes - Mayastor, cStor, Jiva Local Volumes - LVM, ZFS, hostpath, device Enterprise Framework / Tools (Velero, Prometheus, Grafana, EFK/ELK, … ) Any Platform, Any Storage (On Premise/Cloud, Core/Edge, Bare metal/Virtual, NVMe/SCSI, SSD/HDD)
  • 14. OpenEBS Persistent Volumes Storage Devices NVMe/SCSI, SSD/HDD, Cloud/SAN Block Devices or Device Aggregation/Pooling using LVM, ZFS Volume Replica Jiva, cStor, Mayastor Volume Target Jiva, cStor, Mayastor Stateful Workload Persistent Volume Mounted using Ext4, XFS, Btrfs, NFS or RawBlock (Local Volumes) iSCSI/NVMeoF TCP/NVMeoF TCP/NVMeoF Synchronous Replication to Volume replicas on other nodes. Storage Layer Volume Data Layer Direct (Replicated Volumes) Volume Services Layer Volume Access Layer
  • 15. OpenEBS Persistent Volumes CAS (Container Attached Storage) - hyperconverged, Kubernetes native. Runs anywhere. Easy to install and manage! Local Volumes: access from a single node; low overhead on capacity and performance; suited for cloud native and distributed workloads such as TiDB, etcd, Kafka, Cassandra and ML jobs. Replicated Volumes: access from multiple nodes; durability with synchronous replication; data services; has overhead on capacity and performance; suited for MySQL, MinIO, GitLab, Postgres.
  • 16. OpenEBS Persistent Volumes

| Engine Type | Local Volumes | Replicated Volumes |
|---|---|---|
| Example | Device, Hostpath, LVM, Rawfile, ZFS | cStor, Jiva, Mayastor |
| Availability (access to the data continues during a failure condition) | available from a single node in the cluster | available from multiple nodes, with sync replicas |
| Scalability (clients, capacity, throughput) | scale-up on node; horizontal scaling with the K8s cluster | scale-up on node; horizontal scaling with the K8s cluster |
| Consistency (strong or weak) | delegated to filesystems, e.g. LVM, ZFS | strong consistency at block level |
| Durability (bit-rot, endurance, fat-fingers) | delegated to choice of filesystem (LVM, ZFS) or none | provided via replicas |
| Performance (latency and throughput) | depends on storage type and filesystem used; low overhead (except ZFS) | depends on storage type and compute (CPU/RAM); low latency with Mayastor |
  • 17. How does OpenEBS work? Storage Devices NVMe/SCSI, SSD/HDD, Cloud/SAN Volume Replica Jiva, cStor, Mayastor Volume Target Jiva, cStor, Mayastor Stateful Workload Persistent Volume CAS Storage Control Plane Platform SREs will setup the Kubernetes nodes with required Storage 1 Platform SREs/K8s administrators (using K8s API) will setup OpenEBS and create Storage Classes 2 Application Developers will create stateful workloads with Persistent Volume Claims (PVCs) 3 OpenEBS, using Data Engines, CSI and K8s extensions will create the required Persistent Volumes (PVs) 4 Platform and Operations team will observe and maintain the system using cloud native tooling. 5
  • 18. OpenEBS - User Journey Developer SRE / Platform Engineer Run Stateful with local storage (ML Job or simple app, local s3) Run Stateful with “enterprise” storage (DBaaS, CI/CD, Object Storage, AI/ML Pipelines) OpenEBS Advocate or Contributor OpenEBS Advocate or Contributor Phase 1: Non critical workloads (CI/CD) or resilient workloads Phase 2: DBaaS Phase 3: Volumes as Service to other Data Platforms. OpenEBS Adopter Database Administrators Platform Providers
  • 19. OpenEBS Benefits and Limitations Benefits • Kubernetes native - ease of use and operations. Integrates into the standard cloud native tooling • Lower footprint. Flexible deployment options • Highly composable. Choice of data engines matching the node capabilities and storage requirements • Controlled and predictable blast radius. Easy to visualize the location of the data of an application or volume • Horizontally scalable. Scale up/down • Avoid vendor lock-in with fully functional Open Source Software • Optimized to reduce operational costs on cloud or on-prem. Limitations • Scale-out volumes are not supported; only volumes whose capacity can be served from a single node are supported. OpenEBS believes the need for large volumes will reduce as more and more workloads move into Kubernetes. • Read-write many is supported via NFS on top of Block Storage volumes. OpenEBS believes that read/write many use cases are served better via Object, Key/Value or API based interfaces that offer more control and efficiency.
  • 20. Quick Start OpenEBS Local PV - Hostpath
  • 21. Kubernetes Cluster node2 node1 OpenEBS Local PV (hostpath) Pod Stateful Workload (DB, etc) Setup OpenEBS PV1 DevOps admin (1) openebs-localpv-provisioner, (2) StorageClass OS Developer Using OpenEBS (3) StatefulSet with PVC (4) PV OS node3 OS Dir: PV1 Dir: PV2 Pod Stateful Workload (DB, etc) PV2 PVC PVC /mnt/openebs /mnt/openebs /mnt/openebs
  • 22. kubectl apply -f https://openebs.github.io/charts/hostpath-operator.yaml OpenEBS Local PV (hostpath) apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-hostpath annotations: openebs.io/cas-type: local cas.openebs.io/config: | - name: StorageType value: hostpath - name: BasePath value: /var/local-hostpath provisioner: openebs.io/local reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer NAME READY STATUS RESTARTS AGE openebs-localpv-provisioner-5ff697f967-nb7f4 1/1 Running 0 2m49s
  • 23. kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-hostpath-pvc spec: storageClassName: local-hostpath accessModes: - ReadWriteOnce resources: requests: storage: 5G OpenEBS Local PV (hostpath) NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE local-hostpath-pvc Pending local-hostpath 3m7s
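The claim above reports Pending because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, so the volume is only provisioned once a pod consumes the claim. A minimal consumer pod sketch (the pod name, image and command are illustrative, not from the deck):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-hostpath-pod   # hypothetical name
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc   # the PVC defined above
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/store/greet.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage
```

Once the pod is scheduled, the PVC transitions to Bound and the PV is created on that node.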
  • 24. apiVersion: v1 kind: PersistentVolume metadata: name: pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 ... spec: capacity: storage: 5G claimRef: kind: PersistentVolumeClaim name: local-hostpath-pvc ... local: fsType: "" path: /var/local-hostpath/pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - gke-kmova-helm-default-pool-3a63aff5-1tmf storageClassName: local-hostpath volumeMode: Filesystem status: phase: Bound OpenEBS Local PV (hostpath)
  • 25. OpenEBS Local PV - FAQ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-hostpath annotations: openebs.io/cas-type: local cas.openebs.io/config: | - name: StorageType value: hostpath - name: BasePath value: /var/local-hostpath provisioner: openebs.io/local reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowedTopologies: - matchLabelExpressions: - key: kubernetes.io/hostname values: - node1 - node2 - node3 How are the sub directories managed? Can I create Local PVs with mounted storage like VMware or GPDs? Can I resize Local PV? How do I monitor Local PV? How do I backup Local PV? Why is my PVC unable to bind to a PV? How do I tell Kubernetes to schedule pods to nodes where local storage is available?
  • 27. Where to find us? https://slack.k8s.io #openebs Join OpenEBS Channel on Kubernetes Slack https://openebs.io/community
  • 28. OpenEBS 201 Container Attached Storage Updated: Aug 2021
  • 29. Where does OpenEBS fit? https://www.cncf.io/blog/2020/07/06/announcing-the-updated-cncf-storage-landscape-whitepaper/ Local and (Distributed) Replicated Block Storage Control Plane Workloads (e.g. Databases, Key-Value/Object Stores, MQ, AI/ML, CI/CD) Container Orchestrators Data Engines Framework and Tools Storage Systems Control-Plane Interface (e.g. CSI, Others) C A B B ● Availability ● Consistency ● Durability ● Performance ● Scalability ● Security ● Ease of Use
  • 30. OpenEBS Data Engine Evolution (Local Volumes / Replicated Volumes): OpenEBS 1.0 - Hostpath, Device | Jiva, cStor. OpenEBS 2.0 - Hostpath, Device, ZFS | Jiva, cStor, Mayastor (alpha). OpenEBS 3.0 - Hostpath, Device, ZFS, LVM, Rawfile, Partition | Jiva (CSI), cStor (CSI), Mayastor (beta).
  • 31. OpenEBS 3.0 ● GA: a. cStor CSI b. Local PV ZFS c. Local PV LVM d. Local PV Hostpath ● Beta: a. Dynamic NFS b. Mayastor c. Jiva CSI d. Local PV Rawfile ● Alpha: a. Device (Partition) ● New management components: a. Upgrade and Migration Operators b. OpenEBS CLI, c. Monitoring Mixins, d. Kyverno Policy Add-on ● Deprecate cStor and Jiva External Provisioners
  • 32. OpenEBS Data Engine comparison

| | Hostpath | Device | Rawfile | LVM | ZFS | Jiva | cStor | Mayastor |
|---|---|---|---|---|---|---|---|---|
| Dynamic Provisioned Volumes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Capacity Management | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Snapshots | No | No | No | Yes | Yes | No | Yes | Yes* |
| Incremental Backup | No | No | No | No | Yes | No | Yes | Yes* |
| Clones | No | No | No | No | Yes | No | Yes | Yes* |
| Performance | Yes | Yes | Yes | Yes | No | No | No | Yes |
| Node Failure (HA) | No | No | No | No | No | Yes | Yes | Yes |
  • 33. OpenEBS Data Engine comparison

| | Hostpath | Device | Rawfile | LVM | ZFS | Jiva | cStor | Mayastor |
|---|---|---|---|---|---|---|---|---|
| Node Storage Pooling (RAID) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Full Backup/Restore | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Capacity Based Scheduling | Yes* | Yes | Yes* | Yes | Yes | Yes | Yes | Yes |
| Application Aware Scheduling | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes* |
| CLI Support | Yes* | Yes* | Yes* | Yes* | Yes | Yes | Yes | Yes* |
| Monitoring and Alerts | Yes* | Yes | Yes* | Yes | Yes* | Yes | Yes | Yes* |
| Kubernetes Installer (Helm) | Yes | Yes* | Yes | Yes | Yes | Yes | Yes | Yes |
| Rolling Upgrades | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
  • 34. OpenEBS Local PV - Use cases node1 node2 node3 Local PVs are great for Cloud Native Workloads (or distributed system) that have: ● Built in Proxies to distribute the data ● Built in Backup and Migration solutions ● Need low latency access. Or short lived Stateful workloads that need to save the state and resume after reboot. ( ML Jobs) Or edge nodes with single node K8s cluster.
  • 35. LocalPV HostPath Node 3 LocalPV Device Node 1 ZFS or LVM LocalPV Node 2 Pool Application Namespace Internet Physical Hard disks OpenEBS LocalPV options Stateful Application Running Inside Pod in Kubernetes Persistent Volume for Application Create LocalPV StorageClass XFS or EXT: NDM knows if disk is in use Creates volume in user defined pool 1 2 3
  • 36. OpenEBS 3.0 (Local PV) OpenEBS Local Storage Operators make it easy to provision Local Volumes with different flavors of local storage available on nodes. ● OpenEBS Hostpath LocalPV (stable), the first and most widely used LocalPV, now supports enforcing XFS quotas and using a custom node label for node affinity (instead of the default kubernetes.io/hostname) ● OpenEBS ZFS LocalPV (stable), used widely for production workloads that need direct and resilient storage, has added new capabilities like: ○ Velero plugin to perform incremental backups that make use of copy-on-write ZFS snapshots. ○ CSI capacity based scheduling used with waitForFirstConsumer bound Persistent Volumes. ○ Improvements to the inbuilt volume scheduler (used with immediately bound Persistent Volumes) that can now take into account the capacity and the count of volumes provisioned per node. ● OpenEBS LVM LocalPV (stable), can be used to provision volumes on top of LVM Volume Groups and supports the following features: ○ Thick (default) or thin provisioned volumes ○ CSI capacity based scheduling used with waitForFirstConsumer bound Persistent Volumes. ○ Snapshots that translate into LVM snapshots ○ Ability to set QoS on the containers using LVM volumes. ○ Other CSI capabilities like volume expansion, raw or filesystem mode, and metrics.
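As a concrete illustration, a StorageClass for the LVM LocalPV driver can be sketched as follows, following the openebs/lvm-localpv conventions; the volume group name lvmvg is an assumption and must match a volume group pre-created on the node:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"   # assumed: name of a pre-created LVM volume group
volumeBindingMode: WaitForFirstConsumer   # enables CSI capacity based scheduling
```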
  • 37. kubectl apply -f https://openebs.github.io/charts/hostpath-operator.yaml OpenEBS LocalPV (hostpath) apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-hostpath annotations: openebs.io/cas-type: local cas.openebs.io/config: | - name: StorageType value: hostpath - name: BasePath value: /mnt/openebs-storage - name: XFSQuota enabled: "true" provisioner: openebs.io/local reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer NAME READY STATUS RESTARTS AGE openebs-localpv-provisioner-5ff697f967-nb7f4 1/1 Running 0 2m49s $ sudo mount -o rw,pquota /dev/nvme1n1 /mnt/openebs-storage
  • 38. OpenEBS Replicated PV - Use cases node1 node2 node3 Replicated PVs are great for Cloud Native Workloads (or distributed system) that need: ● Performance ● Resiliency against single node and/or single device failure. ● Need low latency access. Replicated PVs are great if you would like to: ● Lower your blast radius, while still using bin-packing to efficiently use your hardware resources. ● Efficiently use Capacity and Performance of NVMe Devices. (with Mayastor)
  • 39. Application Namespace Internet OpenEBS CStor Stateful Application Running Inside Pod in Kubernetes Persistent Volume for Application Create CStor StorageClass Create CStor pools on all storage node. STS or Deployment CStor Pool CStor Pool CStor Pool NDM knows if disk is in use CStor Target
  • 40. OpenEBS 3.0 (Replicated PV) OpenEBS Replicated Volumes enable users to make use of the local storage available to Kubernetes nodes to provide durable persistent volumes that are resilient to node failures. The name replicated stems from the fact that OpenEBS uses synchronous replication of volumes instead of sharding blocks across different nodes. ● OpenEBS Jiva (stable), has added support for a CSI Driver and Jiva operator that include features like: ○ Enhanced management of the replicas ○ Ability to auto-remount volumes marked read-only due to iSCSI timeouts back to read-write. ○ Faster detection of node failure, helping Kubernetes move the application from the failed node to a new node. ● OpenEBS CStor (stable), has added support for a CSI Driver and also improved custom resources and operators for managing the lifecycle of CStor Pools. This 3.0 version of CStor includes: ○ An improved schema that allows users to declaratively run operations like replacing the disks in mirrored CStor pools, adding new disks, scaling up replicas, or moving CStor Pools to a new node. The new custom resource for configuring CStor is called CStorPoolCluster (CSPC), compared to the older StoragePoolCluster (SPC). ○ Ability to auto-remount volumes marked read-only due to iSCSI timeouts back to read-write. ○ Faster detection of node failure, helping Kubernetes move the application from the failed node to a new node. ● 3.0 also deprecates the older CStor and Jiva volume provisioners that were based on the Kubernetes external storage provisioner. No more features will be added to the older provisioners, and users are requested to migrate their Pools and Volumes to the CSI Drivers as soon as possible.
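A CStorPoolCluster (CSPC) can be sketched as below, assuming a single stripe pool on one node; the hostname is hypothetical, and the block device name must match a blockdevice custom resource discovered by NDM on that node:

```yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
  - nodeSelector:
      kubernetes.io/hostname: "node1"   # hypothetical node name
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: "blockdevice-2eff94561dab533cabfeb6b4ddbbe851"
    poolConfig:
      dataRaidGroupType: "stripe"
```

Adding a disk, scaling up replicas or moving a pool is then a matter of editing this resource declaratively.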
  • 41. Node 3 Node 1 Node 2 Application Namespace Internet OpenEBS Mayastor Stateful Application Running Inside Pod in Kubernetes Persistent Volume for Application Create Mayastor StorageClass Create Mayastor pools on all storage node. STS or Deployment Maya Maya Maya
  • 42. OpenEBS Mayastor (Beta In Progress) Mayastor delivers high performance access to persistent data and services, using the industry leading Storage Performance Development Kit (SPDK) ● Uses SPDK for NVMe features ○ Poll-mode and event-loop design for maximum performance ○ Memory utilization tuned for environments with limited huge pages ○ Scales within the node and across nodes ● Implemented in Rust for memory safety guarantees ● Configuration management using secure gRPC API ● Volume Services ○ Resilient against node failures via synchronous replication Control Plane Improvements ● Control plane implements application aware data placement ● Fine grained control over errors, restarts and timeouts for Kubernetes ● Prometheus Metrics exporter ● Integrate Mayastor into OpenEBS tools - installer, CLI, monitoring Core Enhancements ● Reduce fail-over time in loss of K8s node situation ● Support for LVM as backing store
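In the beta timeframe, a Mayastor pool was declared with a custom resource roughly like the one below; this is a sketch, and the API group/version, namespace, node name and disk path are assumptions tied to the beta releases:

```yaml
apiVersion: openebs.io/v1alpha1
kind: MayastorPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: node1               # hypothetical node name
  disks: ["/dev/nvme1n1"]   # device consumed whole by the pool
```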
  • 43. OpenEBS Mayastor (Beta In Progress) OpenEBS 3.1: Mayastor with ANA (Asymmetric Namespace Access) volumes, enabling faster HA.
  • 44. OpenEBS 3.0 (Other Features) Beyond the improvements to the data engines and their corresponding control plane, there are several new enhancements that will help with ease of use of OpenEBS engines: ● Several fixes and enhancements to the Node Disk Manager like automatically adding a reservation tag to devices, detecting filesystem changes and updating the block device CR (without the need for a reboot), metrics exporter and an API service that can be extended in the future to implement storage pooling or cleanup hooks. ● Dynamic NFS Provisioner that allows users to launch a new NFS server on any RWO volume (called backend volume) and expose an RWX volume that saves the data to the backend volume. ● Kubernetes Operator for automatically upgrading Jiva and CStor volumes that are driven by a Kubernetes Job ● Kubernetes Operator for automatically migrating CStor Pools and Volumes from older pool schema and legacy (external storage based) provisioners to the new Pool Schema and CSI volumes respectively. ● OpenEBS CLI (a kubectl plugin) for easily checking the status of the block devices, pools (storage) and volumes (PVs). ● OpenEBS Dashboard (a prometheus and grafana mixin) that can be installed via jsonnet or helm chart with a set of default Grafana Dashboards and AlertManager rules for OpenEBS storage engines. ● Enhanced OpenEBS helm chart that can easily enable or disable a data engine of choice. The 3.0 helm chart stops installing the legacy CStor and Jiva provisioners. If you would like to continue to use them, you have to set the flag “legacy.enabled=true”. ● OpenEBS helm chart includes sample kyverno policies that can be used as an option for PodSecurityPolicies(PSP) replacement. ● OpenEBS images are delivered as multi-arch images with support for AMD64 and ARM64 and hosted on DockerHub, Quay and GHCR. ● Support for installation in air gapped environments. 
● Enhanced Documentation and Troubleshooting guides for each of the engines located in the respective engine repositories. ● A new and improved design for the OpenEBS website.
  • 45. kubectl apply -f https://openebs.github.io/charts/nfs-operator.yaml OpenEBS NFS (RWX Volumes) apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: openebs-rwx annotations: openebs.io/cas-type: nfsrwx cas.openebs.io/config: | - name: NFSServerType value: "kernel" - name: BackendStorageClass value: "openebs-hostpath" provisioner: openebs.io/nfsrwx reclaimPolicy: Delete NAME READY STATUS RESTARTS AGE openebs-nfs-provisioner-79b6ccd59-626pd 1/1 Running 0 62s NFS Server Backend PV Create a NFS Server on top of Backend PV Create a NFS PV pointing to OpenEBS NFS Server
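A workload then requests a ReadWriteMany claim against the openebs-rwx class; the provisioner creates the backend RWO volume and the NFS server behind the scenes (the PVC name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-rwx-pvc   # hypothetical name
spec:
  storageClassName: openebs-rwx
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5G
```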
  • 46. kubectl krew install openebs OpenEBS CLI $ kubectl openebs version COMPONENT VERSION Client v0.4.0 OpenEBS CStor 3.0.0 OpenEBS Jiva Not Installed OpenEBS LVM LocalPV Not Installed OpenEBS ZFS LocalPV Not Installed $ kubectl openebs get bd NAME PATH SIZE CLAIMSTATE STATUS FSTYPE MOUNTPOINT gke-kmova-helm-default-pool-595accd4-pgtf ├─blockdevice-2eff94561dab533cabfeb6b4ddbbe851 /dev/sdb 375GiB Unclaimed Active ext4 /mnt/disks/ssd0 ├─blockdevice-a2247055ab6c06d27db1de47e61c3ac9 /dev/sdc1 375GiB Unclaimed Active └─blockdevice-b90456e7143408f1c29738c4d4deafec /dev/sdd 375GiB Unclaimed Active ext4 /mnt/disks/ssd2 gke-kmova-helm-default-pool-595accd4-bwcd ├─blockdevice-3c679953243dfc1344d2a4ac352f4c6e /dev/sdd 375GiB Unclaimed Active ext4 /mnt/disks/ssd2 ├─blockdevice-a5158511cf50b507e96fd628dca05af0 /dev/sdc1 375GiB Unclaimed Active └─blockdevice-bc795daa24fc3589ee2f8b835bcdcba6 /dev/sdb 375GiB Unclaimed Active ext4 /mnt/disks/ssd0
  • 47. OpenEBS CLI $ kubectl openebs describe volume pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 Details : ----------------- NAME : pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 ACCESS MODE : ReadWriteOnce CSI DRIVER : cstor.csi.openebs.io STORAGE CLASS : cstor-csi-disk VOLUME PHASE : Bound VERSION : 3.0.0 CSPC : cstor-disk-pool SIZE : 10.0GiB STATUS : Degraded REPLICA COUNT : 3 Portal Details : ------------------ IQN : iqn.2016-09.com.openebs.cstor:pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 VOLUME NAME : pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3 TARGET NODE NAME : gke-kmova-helm-default-pool-595accd4-bwcd PORTAL : 10.3.248.245:3260 TARGET IP : 10.3.248.245 Replica Details : ----------------- NAME TOTAL USED STATUS AGE pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-clz4 296.9KiB 5.4MiB Healthy 1m2s pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-h8b9 296.9KiB 5.6MiB Healthy 1m2s pvc-dea356c2-7bd0-442e-a92f-98d503c65fb3-cstor-disk-pool-jznw 300.8KiB 5.5MiB Healthy 1m2s
  • 48. OpenEBS Monitoring helm repo add openebs-monitoring https://openebs.github.io/monitoring/ helm repo update helm install openebs-dashboard openebs-monitoring/openebs-monitoring --namespace openebs --create-namespace
  • 50. OpenEBS 3.1 ( Planning ) Stateful Operator (STS) with Local PV ● Fault tolerant scheduling for distributed applications ● Stale PVCs ● Moving Data of Local PV on K8s upgrade/node-recycle (Data Populator) Engineering Optimizations ● CI Infrastructure improvements ● Unified Local CSI Drivers ● NDM - Enclosure / Storage Management ● Usability Enhancements (based on user feedback). ( Upgrades, Pool Creation, …) ● Automated Security Compliance Checks Local PV on Shared Device ● Devices visible to multiple nodes - shared filesystem. Eg. Cluster LVM. ● Allow pods to move across nodes that have access to device. ● Remote access via iSCSI / NVMe Integration Hooks ● Setting up finalizers or other metadata on Volume related objects for add-on operators. Eg: Billing/Auditing by Platform operators Mayastor Beta
  • 51. OpenEBS 3.1 (LocalPV ++) Local Volumes OpenEBS 3.1 Local PV ++ ( Shared Devices) Local Volumes (HA)
  • 52. OpenEBS 3.1 (LocalPV ++) Local Volumes OpenEBS 3.1 Local PV ++ ( Shared Devices + Remote Access via NVMe ) CAS Local Volumes (HA) CAS
  • 53. OpenEBS 3.1 (LocalPV ++) Local Volumes OpenEBS 3.1 Local PV ++ ( Shared Devices + Remote Access via NVMe ) CAS CAS Local Volumes (HA) CAS CAS
  • 54. OpenEBS 3.1 Storage Cohort A storage cohort is an autonomous storage unit consisting of a set of storage devices (grouped together as a storage pool) and storage software running on the nodes attached to those devices. The storage software (the storage controller, also known as SDS) creates and manages storage volumes, along with the corresponding targets that storage initiators talk to for I/O operations.
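The cohort definition above can be sketched as a minimal data model — illustrative Python only, not OpenEBS source: the pool aggregates device capacity, and creating a volume also creates the target an initiator connects to (names and the portal format are assumptions):

```python
# Toy model of a storage cohort: pool = grouped devices; the controller
# carves volumes from the pool and exposes one target per volume.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    capacity_gib: int

@dataclass
class StorageCohort:
    devices: list                                  # grouped as the storage pool
    volumes: dict = field(default_factory=dict)    # volume name -> size (GiB)
    targets: dict = field(default_factory=dict)    # volume name -> target address

    def pool_capacity(self):
        return sum(d.capacity_gib for d in self.devices)

    def create_volume(self, name, size_gib, portal):
        # The cohort's controller (SDS) creates the volume and the
        # corresponding target that initiators talk to for I/O.
        if size_gib > self.pool_capacity() - sum(self.volumes.values()):
            raise ValueError("pool exhausted")
        self.volumes[name] = size_gib
        self.targets[name] = f"{portal}/{name}"
        return self.targets[name]

cohort = StorageCohort(devices=[Device("nvme0n1", 100), Device("nvme1n1", 100)])
target = cohort.create_volume("pvc-1234", 10, "nvmf://10.0.0.5:8420")
print(target)  # nvmf://10.0.0.5:8420/pvc-1234
```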
  • 55. OpenEBS 3.1 (Cluster topology) AZ-A1 AZ-A2 AZ-A3 AZ-A1x AZ-A2y AZ-A3z FD-A1a FD-A1b FD-A2a FD-A2b FD-A3a FD-A3b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b Application Nodes Storage Nodes ( with JBODs / JBOFs )
  • 56. OpenEBS 3.1 - Fault Tolerant Scheduling NVMe NVMe NVMe NVMe NVMe NVMe AZ-A1 AZ-A2 AZ-A3 AZ-A1x AZ-A2y AZ-A3z FD-A1a FD-A1b FD-A2a FD-A2b FD-A3a FD-A3b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b AZ-A1, AZ-A2, AZ-A3
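The placement pictured on this slide — replicas spread across availability zones first, then across fault domains — can be sketched as a toy scheduler. This is illustrative only; the node names and the round-robin algorithm are assumptions, not the OpenEBS implementation:

```python
# Toy fault-tolerant placement: one replica per AZ before reusing an AZ,
# and within an AZ prefer a fault domain (FD) not already used.
from collections import defaultdict

def place_replicas(nodes, replica_count):
    """nodes: list of (node_name, az, fd). Returns chosen node names."""
    by_az = defaultdict(list)
    for name, az, fd in nodes:
        by_az[az].append((name, fd))

    chosen, used_fds = [], set()
    azs = sorted(by_az)
    # Round-robin over AZs so no two replicas share an AZ until all AZs are used.
    while len(chosen) < replica_count and any(by_az.values()):
        for az in azs:
            if len(chosen) == replica_count:
                break
            candidates = by_az[az]
            if not candidates:
                continue
            # Prefer a node whose fault domain has not been used yet.
            pick = next((c for c in candidates if c[1] not in used_fds),
                        candidates[0])
            candidates.remove(pick)
            chosen.append(pick[0])
            used_fds.add(pick[1])
    return chosen

nodes = [
    ("node-1", "AZ-A1", "FD-A1a"), ("node-2", "AZ-A1", "FD-A1b"),
    ("node-3", "AZ-A2", "FD-A2a"), ("node-4", "AZ-A2", "FD-A2b"),
    ("node-5", "AZ-A3", "FD-A3a"), ("node-6", "AZ-A3", "FD-A3b"),
]
print(place_replicas(nodes, 3))  # ['node-1', 'node-3', 'node-5']
```

With three replicas, each lands in a distinct AZ (AZ-A1, AZ-A2, AZ-A3); a fourth replica reuses an AZ but a fresh fault domain.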
  • 57. Application Node Application Node Storage Cohort - 1 Storage Node NVMe Storage Node pv NVMe Cohort Manager OpenEBS 3.1 (w Shared Device + NVMe) OpenEBS CSI Controller Volume Scheduler CSI Node Agent cohort volume SC PVC PV Cohort Controllers Node Agent MTL... pool Shared device Storage Cohort - 2 Storage Node Storage Node Cohort Manager Node Agent MTL... Shared device
  • 58. OpenEBS Volume Types (Recap) NVMe NVMe pv OpenEBS Local PV ++ (Shared Local Device) pv OpenEBS Replicated (Mayastor) (over Node Local Devices) Maya Maya pv OpenEBS Local PV (Shared Device) pv OpenEBS Local PV
  • 59. OpenEBS 401 Container Attached Storage Updated: Aug 2021
  • 60. CAS OpenEBS Future Deployments Any Workload, Any Cluster CAS CAS OpenEBS 4.0 (Multipath) NVMe over Local (Multipath) Mayastor with ANA NVMe NVMe NVMe NW Fabric for NVMe
  • 61. OpenEBS Volume Types (Recap) NVMe NVMe pv OpenEBS Local PV ++ (Shared Local Device) pv OpenEBS Replicated (Mayastor) (over Node Local Devices) Maya Maya pv OpenEBS Local PV (Shared Device) pv OpenEBS Local PV
  • 62. OpenEBS Storage Cohort A storage cohort is an autonomous storage unit consisting of a set of storage devices (grouped together as a storage pool) and storage software running on the nodes attached to those devices. The storage software (the storage controller, also known as SDS) creates and manages storage volumes, along with the corresponding targets that storage initiators talk to for I/O operations.
  • 66. Storage Cohort Example Nodes with PCIe SSDs 40-100Gb NICs 512 GB RAM 32-96 cores (Horizontally scalable / Rack Scaled) NW Fabric for NVMe
  • 67. Application Node Application Node Storage Cohort - 1 Storage Node NVMe Storage Node pv NVMe Cohort Manager OpenEBS 3.1 Local PV ++ (Recap) OpenEBS CSI Controller Volume Scheduler CSI Node Agent cohort volume SC PVC PV Cohort Controllers Node Agent MTL... pool Shared device Storage Cohort - 2 Storage Node Storage Node Cohort Manager Node Agent MTL... Shared device
  • 68. NW Fabric for NVMe Application Node Application Node Storage Cohort NVMe Storage Node pv NVMe Cohort Manager Application Cluster OpenEBS FK (w Shared Device + NVMe) OpenEBS CSI Controller OpenEBS Storage Manager Volume Scheduler CSI Node Agent Cohort Controllers Node Agent MTL... cohort volume SC PVC PV pool (x) volume
  • 69. OpenEBS 4.0 - Fault Tolerant Scheduling NVMe NVMe NVMe NVMe NVMe NVMe AZ-A1 AZ-A2 AZ-A3 AZ-S1 AZ-S2 AZ-S3 FD-S1a FD-S1b FD-S2a FD-S2b FD-S3a FD-S3b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b AZ-S1x AZ-S2x AZ-S2z
  • 70. OpenEBS 4.0 - Fault Tolerant Scheduling NVMe NVMe NVMe NVMe NVMe NVMe AZ-A1 AZ-A2 AZ-A3 AZ-S1x AZ-S1y AZ-S1z FD-S1a FD-S1b FD-S1a FD-S1b FD-S1a FD-S1b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b AZ-S1
  • 71. OpenEBS 4.0 - Fault Tolerant Scheduling NVMe NVMe NVMe NVMe NVMe NVMe AZ-A1 AZ-A2 AZ-A3 AZ-S1x AZ-S1y AZ-S1z FD-S1a FD-S1b FD-S2a FD-S2b FD-S3a FD-S3b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b AZ-S1, AZ-S2, AZ-S3
  • 72. OpenEBS 4.0 - Affinity Scheduling NVMe NVMe NVMe NVMe NVMe NVMe AZ-A1 AZ-A2 AZ-A3 AZ-A1x AZ-A2y AZ-A3z FD-A1a FD-A1b FD-A2a FD-A2b FD-A3a FD-A3b FD-A1a FD-A2a FD-A3a FD-A1b FD-A2b FD-A3b AZ-A1, AZ-A2, AZ-A3
  • 73. Mayastor Control Plane OpenEBS integration with MayaData Application Cluster Node(s) Application Node Application Node Mayastor Pool Node OpenEBS CSI Controller NW Fabric for NVMe Cohort Controller pv NVMe API Server CSI Node Agent Mayastor Application Cluster Storage Cluster Mayastor Pool Node Mayastor Pool Node MTL... cohort volume SC PVC PV pool Mayastor Mayastor NVMe
  • 74. OpenEBS integration with VDA Application Cluster Node(s) Application Node Application Node VDA Node OpenEBS CSI Controller NW Fabric for NVMe Cohort Controller pv NVMe VDA Vol VDA Portal CSI Node Agent VDA ... Application Cluster Storage Cluster MTL... VDA Control Node VDA Node VDA ... MTL... VDA Node VDA ... MTL... cohort volume SC PVC PV pool
  • 75. OpenEBS integration with RedFish (RF) Application Cluster Node(s) Application Node Application Node RF Node OpenEBS CSI Controller NW Fabric for NVMe Cohort Controller pv NVMe RF Vol RF Portal CSI Node Agent RF ... Application Cluster Storage Cluster MTL... RF Control Node RF Node RF ... MTL... RF Node RF ... MTL... cohort volume SC PVC PV pool
  • 76. Enterprise Tools and Operators (Integrations) NW Fabric for NVMe Application Node Application Node Storage Cohort Storage Node NVMe Storage Node pv NVMe Cohort Manager OpenEBS (CSI Driver) Application Cluster Storage Cohort Controller OpenEBS with Enterprise Integrations
  • 77. NW Fabric for NVMe Application Node Application Node Storage Cohort Storage Node NVMe Storage Node pv NVMe Cohort Manager Application Cluster OpenEBS with Enterprise Integrations OpenEBS CSI Controller OpenEBS Storage Manager Volume Scheduler CSI Node Agent Cohort Controllers LVM Node Agent MTL... cohort volume SC PVC PV pool Platform Ops K8s Cluster Operator SRE Ops (MTL) Infra Operator RBAC Compliance BCP App Operators
  • 78. OpenEBS 4.0 ( Features ) CSI Driver (with Application and Platform Awareness) ● Application and Storage proximity ● Application high availability ● Volume IO Fencing ● Volume Access control ● Scale up/down application replicas (volume cleanup ) ● Volume Migration (for local or single replica volumes) Storage (Cohort) Control Plane ● RBAC and Security ● Device Management ● Pool Management ○ RAID ● Volume Management ○ Fault Tolerant Scheduling ○ Durability ○ Snapshot ○ Backup / Restore ○ Migration ● High Availability ● MTL ● API Driven for integrating with Infra Operators ○ Rook ○ Crossplane ○ MicroK8s Mayastor (Enhance, Optimize and Productise SPDK for Block Storage) ● Pluggable Storage Layers (beyond blobstore/lvol) ● Scale and Performance ● Secure ● High Availability
  • 81. OpenEBS 3.0 (Hyperconvergence Achieved) Cluster Components Helm Chart / YAML Data Engine Operator CSI Driver CSI Driver Plugins (Velero, Metrics Exporter, … ) Node Components Node n Node 1 CSI Driver Plugins (Velero, Metrics Exporter, … ) Plugins (Velero, Metrics Exporter, … ) Data Engine Data Engine CLI Kubernetes API Server ++ (OpenEBS Custom Resources) etcd ++ (OpenEBS configuration store)
  • 82. OpenEBS 3.1 Release Timeline Nov 30th 2021 3.1 POC Mar 31st 2022 3.1 Alpha Release ● Virtual SAS Array ● VDA
  • 83. OpenEBS 4.0 ( SCP for Any NVMe Target) NVMe NVMe pv OpenEBS NVMe over Shared Local Device pv Storage Array(s) with exposing NVMe Targets pv Mayastor over Node Local Devices Maya Maya OpenEBS NVMe over NVMe (remote) Device NVMe NVMe pv
  • 84. Node 3 Node 1 Node 2 Application Namespace Internet SSD OpenEBS Mayastor Stateful Application Running Inside Pod in Kubernetes Persistent Volume for Application Create Mayastor StorageClass Create Mayastor pools (MSP) on all storage nodes. STS with (MSP) Node Selectors Mayastor Mayastor Mayastor SSD SSD Mayastor Control plane Mayastor Control plane Install Mayastor control plane Node 4 Node 5 Mayastor Control plane Mayastor Control plane Mayastor Control plane
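The setup steps called out on this slide — install the Mayastor control plane, create Mayastor pools (MSP) on the storage nodes, create a Mayastor StorageClass — map to manifests roughly like the sketch below. The CRD and parameter names follow the Mayastor documentation of this era (`openebs.io/v1alpha1` MayastorPool, `io.openebs.csi-mayastor` provisioner); the node and device names are placeholders:

```yaml
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: node-1              # placeholder storage-node name
  disks: ["/dev/sdb"]       # placeholder local SSD device
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-nvmf
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "3"                 # replicate across three pools
  protocol: "nvmf"          # expose the volume over NVMe-oF
```

A StatefulSet then requests volumes from `mayastor-nvmf`, with node selectors keeping its pods on nodes that can reach the pools.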