3 ubuntu open_stack_ceph
3 ubuntu open_stack_ceph Presentation Transcript

  • 1. Ubuntu OpenStack & Ceph
  • 2. Agenda ● Canonical? ● What is Ceph? ● How does it work? ● How is Ceph supported in OpenStack? ● How can I try it out? ● Q&A
  • 3. IaaS ● OpenStack → Swift → Nova → Quantum → Horizon → Glance Core areas for scale out with Ubuntu SDN ● Nicira ● BigSwitch ● NEC ● MidoNet PaaS ● Cloud Foundry ● Engine Yard Storage ● Block | Object ● Ceph All share Ubuntu as a common platform
  • 4. IaaS ● Openstack → Swift → Nova → Quantum → Horizon → Glance Core areas for scale out with Ubuntu Storage ● Ceph
  • 5. In OpenStack before it was OpenStack....
  • 6. Ceph + Ubuntu OpenStack: Ceph is a fully supported option as part of Ubuntu OpenStack (Cinder, Glance). Support is backed by Inktank.
  • 7. What is Ceph? ● Free to develop and use ● Backed by Inktank ● Solves most OpenStack storage requirements ● Provides block, object and file (!) storage ● Fully supported in Ubuntu
  • 8. Ceph is based on a distributed, autonomic, redundant native object store called RADOS Reliable Autonomic Distributed Object Store
  • 9. What is Ceph? ● Object Store (RADOS) ● Block Storage (RBD) ● Swift/S3 REST API (RGW) ● Distributed File System (CephFS) ● Librados (Python/C/C++/Java/PHP)
  • 10. What is Ceph? Object Store ● Self healing and replicated across failure domains ● No single point of failure ● Runs on commodity hardware ● No RAID or expensive disks needed
  • 11. How does it work? 2 types of daemon ● Monitor (MON) – odd number in quorum – Maintains cluster status ● Object Storage daemon – Typically uses XFS – Typically 1:1 with disks – Maintain data and replicas
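The monitor/OSD split above is what a cluster's configuration file describes. A minimal ceph.conf sketch, purely illustrative; the fsid, hostnames, and addresses are placeholders, not values from this deck:

```ini
[global]
fsid = 00000000-0000-0000-0000-000000000000   ; placeholder cluster id
mon initial members = mon1, mon2, mon3        ; odd number of monitors for quorum
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

[osd]
; typically one OSD daemon per disk, backed by XFS
osd mkfs type = xfs
```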
  • 12. How does it work? CRUSH Algorithm (Controlled Replication Under Scalable Hashing) ● OSDs, buckets and rules ● Defines failure domains ● Provides deterministic object placement
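Real CRUSH is considerably more elaborate, but its key property, deterministic placement computed from the object name rather than looked up in a central table, can be sketched in a few lines of Python. This is only an illustration in the style of rendezvous hashing; the device weighting, bucket hierarchy, and failure-domain rules of actual CRUSH are omitted, and `place` and `osds` are names invented here:

```python
import hashlib

def place(obj_name: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` distinct OSDs for an object, deterministically.

    Every client that knows the OSD list computes the same answer,
    so no central metadata server needs to be consulted -- the core
    idea behind CRUSH's deterministic object placement.
    """
    # Score each OSD by hashing (object, OSD) together, then take the
    # top-scoring OSDs; the ranking is stable for a given object name.
    scored = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return scored[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
# The same object name always maps to the same set of OSDs:
assert place("rbd_volume_1", osds) == place("rbd_volume_1", osds)
```

Because placement is a pure function of the object name and the cluster map, any node can locate any object independently, which is what lets the cluster avoid a single point of failure at the metadata layer.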
  • 13. How does it work? Pools ● Logically partition cluster based on storage type/performance ● Provide client access to specific pool using cephx
  • 14. How does it work?
  • 15. Ceph Support in OpenStack ● Supported in OpenStack since Cactus ● Supported in: – Nova (Compute) – KVM + Xen – Cinder (Storage) – Glance (images)
  • 16. Ceph support in OpenStack ● RBD driver for volumes, snapshots, clones ● Bootable volumes using copy-on-write clones of Glance images ● Incremental differential backup of Cinder volumes
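In configuration terms, wiring Cinder and Glance to Ceph looks roughly like the following. The pool names and cephx users are illustrative, and the exact option names vary between OpenStack releases, so treat this as a sketch rather than a drop-in configuration:

```ini
; cinder.conf: serve volumes, snapshots and clones from an RBD pool
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
glance_api_version = 2   ; required for copy-on-write clones of images

; glance-api.conf: store images in their own RBD pool
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
```

Storing Glance images in RBD is what makes the copy-on-write bootable volumes on the slide possible: Cinder can clone an image into a volume without copying the data.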
  • 17. Ceph support in OpenStack
  • 18. How can I try it out? ● Simplest is to use Juju
  • 19. Juju Charms ● Juju utilizes service formulas called "Charms" ● Charms are building blocks ● Charms contain instructions: deploy, install, and configure ● Charms can be instantiated one or many times Database Ceph Juju environment
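A charm is essentially a small package of metadata plus hook scripts. As a sketch, the metadata of a hypothetical ceph charm declares what it provides to other services and how its units relate to each other; the field values here are illustrative, not copied from the real charm:

```yaml
# metadata.yaml of a hypothetical ceph charm
name: ceph
summary: Distributed object, block and file storage
provides:
  client:               # consumed by e.g. cinder or glance units
    interface: ceph-client
  radosgw:
    interface: ceph-radosgw
peers:
  mon:
    interface: ceph     # monitors form a quorum with each other
```

It is these declared interfaces that Juju matches up when you run `juju add-relation`, which is how it maintains the relations between services without hand-written configuration management.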
  • 20. ● Juju maintains the relations between the services ● Eliminates complex configuration management Ceph Ceph-Radosgw Juju relation Juju relation Ceph-OSD
  • 21. ● Multiple charms can provide the same service and can be easily switched Cloud app HAProxy Depends Provides Depends Provides Ceph
  • 22. Juju maintains the relations between the services Eliminates complex configuration management Ceph Ceph-Radosgw Juju relation Juju relation Ceph-OSD
  • 23. juju deploy -n 3 --config ceph.yaml ceph (deploy the Ceph monitors and Ceph OSDs) ● juju set ceph-osd "osd-devices=/dev/xvdf" (set Ceph to use volumes) ● juju add-relation ceph-osd ceph (build the relationship between the Ceph monitors and OSDs) http://ceph.com/dev-notes/deploying-ceph-with-juju/
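The `--config ceph.yaml` file in the deploy command above carries the charm's options. A sketch of what it might contain, assuming the option names of the ceph charm of that era; the fsid and monitor secret are placeholders you would generate yourself (e.g. with `uuidgen` and `ceph-authtool`):

```yaml
ceph:
  fsid: 00000000-0000-0000-0000-000000000000   # placeholder; generate with uuidgen
  monitor-secret: <output of ceph-authtool>    # shared key for the monitor quorum
  monitor-count: 3                             # odd number, matching -n 3 above
```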
  • 24. Questions please. Thank you. Mark Baker, Mark.baker@canonical.com