Managing Ceph operational complexity using Juju
James Page, Principal Engineer, OpenStack Engineering
Ceph Day London 2019
$ whois jamespage
[ ceph | ubuntu | debian | openstack | juju | charms ]
Ceph? It’s just servers and disks, right?
Deployment Considerations
Block Devices
SATA
SSD
NVMe
Networks
10G
25G
40G
100G
Leaf/Spine
(Clos)
Physical Zoning
Racks
ToR
Switches
Power
Cluster/Public
Servers
RAM
CPU
Disaster Recovery
An auto-magic deployment tool for an
auto-magic SDS platform?
Juju
Controllers
Applications
Machines
Relations
Networking
Storage
Model driven, re-usable
open source operations
Charms
Installation
Configuration
Connection
Upgrades and Updates
Scale-out (and in)
Health
Operations
Encapsulation of
operational knowledge of
applications
Ceph Charms
MON and OSD
RADOS Gateway
RBD mirror
CephFS
Upgrades
Operations
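The MON and OSD charms above can be stood up in a few commands; this is a minimal sketch, where the unit counts and the `osd-devices` path are illustrative assumptions, not recommendations:

```shell
# Deploy a small Ceph cluster from the charms; counts and device
# paths are examples only and depend on your hardware.
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --config osd-devices=/dev/sdb
juju add-relation ceph-osd ceph-mon

# Optionally add object access via the RADOS Gateway charm.
juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph-mon
```

Relating `ceph-osd` to `ceph-mon` is what triggers the charms to exchange keys and bootstrap the OSDs, so no manual `ceph.conf` editing is needed.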
Deploying Ceph
since 2011
MAAS
Automated physical server provisioning
Dynamic allocation of workloads
IPAM
Zones
Web UI and REST API
Open source bare-metal
automation
LXD
Machine containers
Resource management
REST API
Juju integration
Faster, denser, lower
latency Linux machine
containers
Ceph + Juju
Application Model
Machine View
Cross Model Relations
us-east us-west
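Cross model relations are what let an rbd-mirror application in one model consume a Ceph cluster in another. A hedged sketch of the wiring, where the model names follow the us-east/us-west example and the offer and endpoint names are assumptions:

```shell
# Offer the us-west monitors' rbd-mirror endpoint for consumption
# by other models (endpoint name assumed here).
juju offer us-west.ceph-mon:rbd-mirror

# From the us-east model, consume the offer and relate to it.
juju consume us-west.ceph-mon -m us-east
juju add-relation -m us-east rbd-mirror-us-east ceph-mon
```

Once related, the charms handle key distribution between the two clusters, so the mirror daemon can authenticate to the remote monitors without manual keyring copying.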
Operations
juju run-action -m us-west --wait ceph-mon/leader create-pool name=another-rbd-pool app-name=rbd
juju run-action -m us-west --wait rbd-mirror-us-west/leader refresh-pools
Operations
[ Demo ]
Operations
juju run-action -m us-west --wait rbd-mirror-us-west/leader demote
juju run-action -m us-east --wait rbd-mirror-us-east/leader promote
Operations - Upgrades
juju config ceph-mon source=cloud:bionic-train
juju config ceph-osd source=cloud:bionic-train
Encryption at Rest
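The ceph-osd charm exposes dm-crypt encryption of OSD block devices as configuration. A sketch, assuming the `osd-encrypt` and `osd-encrypt-keymanager` options of the era's charm; note that encryption must be enabled before devices are initialised, as it does not retroactively encrypt existing OSDs:

```shell
# Enable dm-crypt for OSD devices at deploy time (device path is
# an example; set this before any OSDs are initialised).
juju deploy -n 3 ceph-osd --config osd-encrypt=true --config osd-devices=/dev/sdb

# Optionally store the encryption keys in Vault rather than the
# local Ceph key store.
juju config ceph-osd osd-encrypt-keymanager=vault
```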
https://github.com/javacruft/ceph-day-london-2019
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/
irc: #openstack-charms
Thank you! Questions?
