Or Friedmann, Cloud Consultant
Deploying and Managing Ceph on the OpenShift Container Platform using Rook
Ceph Storage for OCP
What is Ceph?

Red Hat Ceph Storage is:
● Software-defined storage
● An open, massively scalable storage solution for modern workloads
● A unified storage solution supporting block, file, and object storage
● A very mature and stable solution (development started in 2003)
● The de facto standard storage backend for OpenStack deployments
● Designed with no single point of failure
Ceph Cluster Daemons
A Ceph cluster is managed by five types of daemons:
● MONs - Maintain the state of the cluster
● MGRs - Gather and send metrics, and expose a RESTful API
● OSDs - Serve read (get) and write (put) I/O for the cluster
● RGWs - Implement the AWS S3 API
● MDSs - Implement CephFS
Monitor (MON)
MONs are the microservice daemons that hold the cluster state:
● A production cluster runs more than one MON to prevent a single point of failure (SPoF).
● MONs maintain quorum using the Paxos algorithm
Manager (MGR)
MGRs are the microservice daemons that gather and send metrics about the cluster:
● A production cluster runs more than one MGR to prevent a SPoF.
● The MGR can send metrics to Telegraf, Prometheus, Zabbix, etc.
● The dashboard UI runs on top of the MGR
Object Storage Daemon (OSD)
OSDs are the microservice daemons that serve read (get) and write (put) I/O for the cluster:
● A production cluster runs at least three OSDs to prevent a SPoF.
● Each OSD process provides access to a single block device (HDD or SSD). A cluster can have more than 10K OSD processes!
● Data written to an OSD is always safe: the OSD replicates the data to two more OSDs on different physical nodes before sending an acknowledgement to the client
RADOS Gateway (RGW)
RGWs are the microservice daemons that provide an object storage API, such as AWS S3:
● A production cluster runs more than one RGW to prevent a SPoF.
● All requests to the RGWs are stateless
● Each RGW process is independent of the other RGWs in the cluster
MetaData Server (MDS)
MDSs are the microservice daemons that provide file access through CephFS:
● A production cluster runs more than one MDS to prevent a SPoF.
● CephFS is a POSIX-like distributed file system
● As a NAS-like solution, it can be attached to multiple pods at the same time
● MDSs are not in the data path; they handle only metadata
What is Rook?

Rook is
● A Cloud Native Computing Foundation (CNCF) project
● A storage orchestrator for Kubernetes
● An operator for deploying storage (Ceph) on top of Kubernetes
● A provider of new Kubernetes Custom Resource Definitions (CRDs) for easy cluster deployment and easy provisioning
Rook's CRDs
CephCluster CRD
● Defines a Ceph cluster as a custom resource in the OpenShift cluster
● Makes Ceph deployment on OpenShift much easier
● For example, it specifies:
○ The version of the Ceph container images
○ The number of MONs to deploy
○ Whether the OSD pods use SSDs or HDDs
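As a sketch, a minimal CephCluster manifest might look like the following (the image tag, namespace, and device filter are illustrative assumptions, not values from the slides):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph        # assumed namespace
spec:
  cephVersion:
    image: ceph/ceph:v14      # illustrative image tag
  mon:
    count: 3                  # an odd number, for Paxos quorum
  dashboard:
    enabled: true             # dashboard UI is served by the MGR
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[b-z]"  # illustrative: skip the root disk
```

The Rook operator watches this resource and creates the MON, MGR, and OSD pods accordingly.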
CephBlockPool CRD
● Defines a Ceph data pool as a custom resource in the OpenShift cluster
● This pool is used to store the pods' volumes
● Configurable options include:
○ The number of copies of the data to keep
○ The failure domain for the volumes (host or rack, for example)
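The two configurable options above map directly to fields in a CephBlockPool manifest; a minimal sketch (the pool name is an assumption):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool           # assumed pool name
  namespace: rook-ceph
spec:
  failureDomain: host         # place replicas on different hosts (or: rack)
  replicated:
    size: 3                   # number of copies of the data
```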
CephObjectStore CRD
● Defines a Ceph object store as a custom resource in the OpenShift cluster
● Creates a pool for storing the object store data
● Creates S3 endpoint pods (RGW pods)
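A sketch of a CephObjectStore manifest, assuming the store name, port, and replica counts (not taken from the slides):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store              # assumed store name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    port: 80                  # assumed S3 endpoint port
    instances: 2              # more than one RGW, to avoid a SPoF
```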
CephFileSystem CRD
● Defines a Ceph file system as a custom resource in the OpenShift cluster
● Creates a pool for storing the file system data
● Creates MDS pods
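A minimal sketch of the corresponding manifest (Rook spells the CRD kind CephFilesystem; the file system name is an assumption):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs                  # assumed file system name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true       # keep a standby MDS, to avoid a SPoF
```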
Ceph Container Storage Interface (CSI) Drivers
● There are two Ceph CSI drivers for easily consuming Ceph block storage and CephFS:
○ RBD - This driver is optimized for RWO (ReadWriteOnce) access, where only one pod may access the storage at a time, since it provides block-level access
○ CephFS - This driver allows RWX (ReadWriteMany) access, with one or more pods accessing the same storage
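As a sketch, a PersistentVolumeClaim for an RWX CephFS volume might look like this (the StorageClass name is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany           # RWX: served by the CephFS CSI driver
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-cephfs   # assumed StorageClass name
```

For block storage, the same claim with ReadWriteOnce and an RBD-backed StorageClass would go through the RBD driver instead.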
DEMO
Q&A
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat
Or Friedmann, Cloud consultant
ofriedma@redhat.com
Thank you