2. Hi, I’m Haomai Wang
❖ Joined the Ceph community in 2013
❖ GSOC 2015 Ceph Mentor
❖ Maintains KeyValueStore and AsyncMessenger
❖ Active in RBD, performance, and ObjectStore work
❖ New to containers!
❖ haomaiwang@gmail.com
8. VM + Block(RBD)
❖ Model
❖ Nova → libvirt → KVM → librbd.so
❖ Cinder → rbd.py → librbd.so
❖ Glance → rbd.py → librbd.so
❖ Pros
❖ proven
❖ decent performance, good security
❖ Cons
❖ performance could be better
❖ Status
❖ most common deployment model today (~44% in the latest survey)
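The flow above can be sketched in shell terms; the pool and image names below are illustrative, and this assumes a working Ceph cluster:

```shell
# What Cinder effectively does when creating a volume
# (pool/image names are hypothetical):
rbd create --size 10240 volumes/volume-1234          # 10 GiB RBD image

# What the Nova → libvirt → KVM chain effectively does when booting from it
# (librbd.so is loaded inside the qemu process; no kernel client involved):
qemu-system-x86_64 -drive format=raw,file=rbd:volumes/volume-1234
```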
9. Container + Block(RBD)
❖ The model
❖ libvirt-based LXC containers (or Docker)
❖ map kernel RBD on host
❖ pass the host device through libvirt to the container
❖ Pros
❖ fast and efficient
❖ implements the existing Nova API
❖ Cons
❖ weaker security than VM
❖ Status
❖ LXC is maintained
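A host-side sketch of this model, with a hypothetical image name and assuming the rbd kernel module is available:

```shell
# Map the image through the kernel RBD client on the host:
rbd map volumes/volume-1234        # creates a device such as /dev/rbd0

# With Docker, the block device can be passed straight into the container:
docker run --device /dev/rbd0:/dev/rbd0 myimage

# With libvirt-lxc, the same device goes into the domain XML, roughly:
#   <hostdev mode='capabilities' type='storage'>
#     <source><block>/dev/rbd0</block></source>
#   </hostdev>
```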
11. Different App Provision Model
❖ Containers vs. virtualization
❖ Hardware abstraction
❖ Application Centric
❖ Per-VM isolation; guest environment and lifecycle defined by the application
❖ Application Isolation
❖ Density
❖ New Provision
❖ Micro-Service
❖ Multi-instance, multi-version, maximal flexibility, minimal overhead
❖ Block
❖ Physical block abstraction
❖ Unknown user data layout
❖ Difficult to bind a block device to container(s)
16. File Storage
❖ Familiar POSIX semantics (POSIX is a lingua franca)
❖ Fully shared volume – many clients can mount and share data
❖ Elastic storage – the amount of data can grow/shrink without explicit provisioning
CephFS
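Mounting CephFS from many clients is a one-liner per host; the MON address and paths below are illustrative, and a client keyring is assumed at /etc/ceph/admin.secret:

```shell
# Mount via the kernel client:
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Or via the userspace FUSE client:
ceph-fuse /mnt/cephfs
```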
18. Detecting failures
❖ MDS
❖ sends “beacon” pings to the RADOS MONs. Logic on the MONs decides when to mark an MDS failed and promote another daemon to take its place
❖ Clients:
❖ send “RenewCaps” pings to each MDS with which they have a session. Each MDS individually decides to drop a client's session (and release its capabilities) if its renewals arrive too late.
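The timing on both sides is tunable; the commands below inspect the relevant knobs via the admin socket (the daemon name "mds.a" is illustrative):

```shell
ceph daemon mds.a config get mds_beacon_interval   # how often the MDS beacons the MONs
ceph daemon mds.a config get mds_beacon_grace      # how late before the MONs mark it failed
ceph daemon mds.a config get mds_session_timeout   # how late a client can be renewing caps
ceph mds stat                                      # current active/standby MDS state
```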
19. The Now
❖ Priority
❖ Complete FSCK & repair tools
❖ Tenant Security/Auth
❖ Other work:
❖ Multi-MDS hardening
❖ Snapshot hardening
22. Nova-Docker & CephFS
❖ Model
❖ host mounts CephFS directly
❖ mount --bind the share into the container namespace
❖ Pros
❖ best performance
❖ full CephFS semantics
❖ Cons
❖ relies on the container for security
❖ Status
❖ no prototype
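Since there is no prototype yet, here is only a host-side sketch of the model, with illustrative paths and assuming CephFS is already mounted at /mnt/cephfs:

```shell
# Pick a per-application share on the CephFS mount:
mkdir -p /mnt/cephfs/shares/app1 /var/lib/shares/app1

# Bind the share to a stable host path, then hand it to the container:
mount --bind /mnt/cephfs/shares/app1 /var/lib/shares/app1
docker run -v /var/lib/shares/app1:/data myapp
```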
23. Kubernetes & CephFS
❖ Pure Kubernetes
❖ Volume Driver
❖ AWS EBS, Google Block
❖ CephFS
❖ NFS
❖ …
❖ Status
❖ Under review (https://github.com/GoogleCloudPlatform/kubernetes/pull/6649)
❖ Drivers expect pre-existing volumes
❖ Expected deploy mode
❖ Pod(Shared File Volume)
❖ Makes micro-services easy with shared storage
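A pod consuming a pre-existing CephFS volume might look like the sketch below; the field names follow the plugin as proposed and could change while the PR is under review, and the MON address and secret path are illustrative:

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pod
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    cephfs:                          # driver expects a pre-existing volume
      monitors:
      - 192.168.0.1:6789
      user: admin
      secretFile: /etc/ceph/admin.secret
EOF
```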
24. Kubernetes on OpenStack
❖ Provision Nova VMs
❖ KVM or Ironic
❖ Atomic or CoreOS
❖ Kubernetes per tenant
❖ Provision storage devices
❖ Cinder for volumes
❖ Manila for shares
❖ Kubernetes binds into pod/container
❖ Status
❖ Prototype Cinder plugin for Kubernetes (https://github.com/spothanis/kubernetes/tree/cinder-vol-plugin)
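The provisioning flow above, sketched with the OpenStack CLIs; the IDs are placeholders and this assumes configured credentials:

```shell
# Provision a volume for a pod:
cinder create --display-name k8s-vol1 10          # 10 GiB Cinder volume

# Attach it to the Nova VM acting as a Kubernetes node:
nova volume-attach <instance-id> <volume-id>

# The prototype Cinder volume plugin inside Kubernetes then formats and
# mounts the attached device into the requesting pod/container.
```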