3. Rook.io
● An open source, cloud-native storage orchestrator
● Various storage providers supported (2 stable)
● Apache 2.0 license
● CSI support
● CRD configuration
Provisioning
● Helm.sh deployment of operator
● Kubernetes v1.10 minimum
● RBAC & Flexvolume config needed
● RBD and LVM2 kernel modules required in the host OS
4. Rook.io - Overview
● Incubator state in CNCF
● Framework for various Storage providers
○ Ceph, CockroachDB, Minio, Nexenta, Cassandra
● The most feasible way to add a storage provider that lacks native Kubernetes storage integration
● Testing suite in Rook.io
● Operator paradigm
6. Rook Ceph - Overview
● Ceph is a highly scalable distributed storage solution
● block storage, object storage, and shared file systems
● production tested solution for distributed storage
● The Ceph system runs on top of Kubernetes primitives
● Encryption support for underlying storage
● Monitoring support for Prometheus
7. Ceph Terminology 1
● Object storage (RGW)
○ S3 and Swift API compatible (for the most part)
○ User management, snapshots, atomic transaction, partial/complete RW
○ object level key-value mappings
● Block storage (RBD)
○ Automatic replication, Image import/export, Read-only snapshots, Resizable images
○ Ability to mount with Linux or QEMU KVM clients!
● CephFS
○ POSIX-compliant network file system
○ Automatically balances the file system to deliver maximum performance
○ Virtually unlimited storage to file systems
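A CephFS instance can be described to Rook through its `CephFilesystem` CRD. A minimal sketch, assuming the `ceph.rook.io/v1` API of the Rook v1.2 era; the name `myfs` and the pool sizes are illustrative, not from the slides:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs            # hypothetical filesystem name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3           # example replication factor
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1      # number of active MDS daemons
    activeStandby: true # keep a warm standby MDS
```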
8. Ceph Terminology 2
● Monitors (MONs)
○ The most important components of the Ceph architecture
○ Cluster state, OSD map, CRUSH map, MGR map, MON map
○ Critical cluster state (including all maps) required for Ceph daemons to coordinate with each other
○ Managing authentication between daemons and clients
● Managers (MGRs)
○ Keeping track of runtime metrics, current state of the cluster
○ Provides Ceph Admin UI Dashboard/REST API (subset of actions from CLI present in UI)
○ Integration with Prometheus is directly possible (but not advisable for PROD yet)
● Object Storage Daemons (OSDs) and Metadata Servers (MDSs)
○ stores data, handles data replication, recovery, rebalancing
○ MDSs store metadata for operations such as `find` and `ls`, to unburden the OSDs and speed up responses (especially on HDDs) for CephFS
11. Live Demo - Rook.io v1.2.1
● Configure Helm.sh deployment for Rook Ceph Operator
● Deploy Rook.io operator for Ceph
● Create CRDs for the Ceph structures, referencing the underlying hardware
● UC: MySQL DB deployment
● UC: Add another server
● UC: Format one drive and recover
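The MySQL use case above would typically start from a PersistentVolumeClaim against a Rook-backed StorageClass. A minimal sketch; the claim name and the StorageClass name `rook-ceph-block` are assumptions, not taken from the demo:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim            # hypothetical claim name
spec:
  storageClassName: rook-ceph-block  # assumed Rook/Ceph RBD StorageClass
  accessModes:
  - ReadWriteOnce                 # RBD block volumes are single-writer
  resources:
    requests:
      storage: 20Gi               # example size
```

The MySQL Deployment then mounts this claim as the data directory volume; Ceph provisions the backing RBD image automatically.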
12. Rook Ceph Operator
● Helm based deployment
○ values.yaml for configuration of the deployment
More in live demo...
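A hedged excerpt of what the operator chart's `values.yaml` might contain; the keys below follow the `rook-ceph` Helm chart of roughly the v1.2 era, but exact names can differ between chart versions:

```yaml
# Example values.yaml fragment for the rook-ceph operator chart (illustrative)
image:
  repository: rook/ceph
  tag: v1.2.1          # operator image version used in the demo
csi:
  enableRbdDriver: true      # enable the RBD CSI driver
  enableCephfsDriver: true   # enable the CephFS CSI driver
rbacEnable: true             # create the RBAC resources the operator needs
```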
13. CRD - Ceph Cluster
● Main CRD to configure Ceph Cluster
● Host based
○ Specify target hosts and raw devices
○ Configure specifics per host or globally
○ dataDirHostPath - host path where component config and data are stored (!)
● PVC based
○ Specify Storage class for Rook
○ Volume Claim Templates for specifying storage requirements
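A host-based CephCluster can be sketched as below, assuming the `ceph.rook.io/v1` API; the node name, device name, and Ceph image tag are hypothetical placeholders:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5   # example Ceph release
  dataDirHostPath: /var/lib/rook  # host path for component config and data
  mon:
    count: 3                   # odd number of MONs for quorum
    allowMultiplePerNode: false
  storage:
    useAllNodes: false         # host-based: list nodes explicitly
    nodes:
    - name: node-a             # hypothetical hostname
      devices:
      - name: sdb              # raw device on that host
```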
14. CRD - Ceph Block Pool
● Object representation for block storage
● Block storage supports connection from Ceph StorageClass
● Replication based pool
● Erasure coded pool (storage overhead as low as 1.25x)
○ Requires the BlueStore storage backend
○ Performance overhead from creating and distributing the chunks across the cluster
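A replication-based pool can be sketched with the `CephBlockPool` CRD as follows; the pool name and sizes are illustrative, and the commented-out section shows how an erasure-coded pool would be declared instead (2 data + 1 coding chunk gives 1.5x overhead; larger data-to-coding ratios get closer to the 1.25x minimum mentioned above):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # hypothetical pool name
  namespace: rook-ceph
spec:
  failureDomain: host      # spread replicas across hosts
  replicated:
    size: 3                # 3x storage overhead
# Erasure-coded alternative (mutually exclusive with `replicated`):
#  erasureCoded:
#    dataChunks: 2
#    codingChunks: 1       # 1.5x overhead with this example ratio
```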
15. CRD - StorageClass
● Connection through Ceph CSI driver
● Be aware of “reclaimPolicy”
● Parameters configuration in demo
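A StorageClass wired to the Ceph CSI RBD driver might look like the sketch below; the class name, pool name, and secret names are assumptions modelled on the Rook v1.2 examples:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # Ceph CSI RBD driver
parameters:
  clusterID: rook-ceph                    # namespace of the Rook cluster
  pool: replicapool                       # hypothetical CephBlockPool name
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain   # "Delete" would remove the RBD image with the PVC
```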
16. Difference between standalone Ceph and Rook
● MONs are failed over automatically by the operator, with configurable timeouts - i.e. health-check management
● Even if one of your nodes gets its hardware wiped (e.g. ephemeral SSDs), the operator can recover the state and “copy” the data back
● Ceph as a first-class citizen through CRD objects
● Storage options (metadata, OSDs, file types) are managed from one place per cluster
● Partially automated updates between versions, some even worry-free
● No vendor lock-in
● Rook also offers other managed storage options (NFS, EdgeFS, Minio, CockroachDB, ...)
17. We are hiring! Looking for DevOps engineer!
● If you’d like to work with us on multiple K8s clusters
● We are using many CNCF solutions
○ Rook.io, Harbor.io, Fluentd
○ Prometheus
○ Jaeger Tracing
○ Vault by HashiCorp
○ Consul Templates
○ NGiNX Ingress
○ and many more.
● Create and manage CI/CD pipelines
● Automate infrastructure
● Security oriented work
● Ansible for provisioning outside k8s
● Helm.sh charts
● Dockerfiles
● Help improve and grow our stack!