Ubuntu OpenStack & Ceph

  1. Ubuntu OpenStack & Ceph
  2. Agenda ● Canonical? ● What is Ceph? ● How does it work? ● How is Ceph supported in OpenStack? ● How can I try it out? ● Q&A
  3. IaaS ● OpenStack → Swift → Nova → Quantum → Horizon → Glance (core areas for scale-out with Ubuntu) SDN ● Nicira ● BigSwitch ● NEC ● MidoNet PaaS ● Cloud Foundry ● Engine Yard Storage ● Block | Object ● Ceph All share Ubuntu as a common platform
  4. IaaS ● OpenStack → Swift → Nova → Quantum → Horizon → Glance (core areas for scale-out with Ubuntu) Storage ● Ceph
  5. In OpenStack before it was OpenStack....
  6. Ceph + Ubuntu OpenStack ● Ceph is a fully supported option as part of Ubuntu OpenStack (Cinder, Glance) ● Support is backed by Inktank
  7. What is Ceph? ● Free to develop and use ● Backed by Inktank ● Solves most OpenStack storage requirements ● Provides block, object and file (!) storage ● Fully supported in Ubuntu
  8. Ceph is based on a distributed, autonomic, redundant native object store called RADOS (Reliable Autonomic Distributed Object Store)
  9. What is Ceph? ● Object Store (RADOS) ● Block Storage (RBD) ● Swift/S3 REST API (RGW) ● Distributed File System (CephFS) ● Librados (Python/C/C++/Java/PHP)
  10. What is Ceph? Object Store ● Self-healing and replicated across failure domains ● No single point of failure ● Runs on commodity hardware ● No RAID or expensive disks needed
  11. How does it work? Two types of daemon ● Monitor (MON) – odd number in quorum – maintains cluster status ● Object Storage Daemon (OSD) – typically uses XFS – typically 1:1 with disks – maintains data and replicas
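The "odd number in quorum" rule follows from majority voting. A minimal sketch (not Ceph code) of how many monitor failures a cluster of n monitors survives:

```python
def quorum_size(monitors):
    """Smallest number of monitors that forms a strict majority."""
    return monitors // 2 + 1

def failures_tolerated(monitors):
    """Monitor failures the cluster survives while keeping quorum."""
    return monitors - quorum_size(monitors)

for n in (3, 4, 5):
    print(n, quorum_size(n), failures_tolerated(n))
```

Note that 4 monitors tolerate no more failures than 3, which is why even-sized quorums are avoided.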
  12. How does it work? CRUSH algorithm (Controlled Replication Under Scalable Hashing) ● OSDs, buckets and rules ● Defines failure domains ● Provides deterministic object placement
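To illustrate "deterministic object placement", here is a toy rendezvous-hash sketch in Python. It is not the real CRUSH algorithm (CRUSH additionally walks a hierarchy of buckets such as hosts and racks to honour failure domains), but it shows the key property: every client computes the same object-to-OSD mapping from the object name alone, with no central lookup table.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Toy stand-in for CRUSH: deterministically pick `replicas` OSDs."""
    def weight(osd):
        # Stable per-(object, OSD) score derived from a hash.
        h = hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(h, 16)
    # Highest-scoring OSDs win; the same inputs always give the same answer.
    return sorted(osds, key=weight, reverse=True)[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
print(place("rbd_data.1234", osds))  # identical on every client
```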
  13. How does it work? Pools ● Logically partition the cluster based on storage type/performance ● Provide client access to a specific pool using cephx
  14. How does it work?
  15. Ceph Support in OpenStack ● Supported in OpenStack since Cactus ● Supported in: – Nova (Compute) – KVM + Xen – Cinder (Storage) – Glance (images)
  16. Ceph support in OpenStack ● RBD driver for volumes, snapshots, clones ● Bootable volumes using copy-on-write clones of Glance images ● Incremental differential backup of Cinder volumes
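As a rough illustration of how Cinder and Glance are pointed at Ceph RBD, a configuration sketch is shown below. Option names varied between OpenStack releases of this era, and the pool and user names (`volumes`, `images`, `cinder`, `glance`) are illustrative conventions, not requirements; treat this as a sketch, not a reference.

```ini
# cinder.conf (sketch): back Cinder volumes with Ceph RBD
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder

# glance-api.conf (sketch): store Glance images in Ceph RBD
[DEFAULT]
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
```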
  17. Ceph support in OpenStack
  18. How can I try it out? ● The simplest way is to use Juju
  19. Juju Charms ● Juju utilizes service formulas called "Charms" ● Charms are building blocks ● Charms contain instructions: deploy, install, and configure ● Charms can be instantiated one or many times (diagram: Database and Ceph charms inside a Juju environment)
  20. ● Juju maintains the relations between the services ● Eliminates complex configuration management (diagram: Ceph, Ceph-Radosgw and Ceph-OSD connected by Juju relations)
  21. ● Multiple charms can provide the same service and can be easily switched (diagram: Cloud app with depends/provides interfaces to HAProxy and Ceph)
  22. Juju maintains the relations between the services; eliminates complex configuration management (diagram: Ceph, Ceph-Radosgw and Ceph-OSD connected by Juju relations)
  23. juju deploy -n 3 --config ceph.yaml ceph (deploy the Ceph monitors, then the Ceph OSDs) juju set ceph-osd "osd-devices=/dev/xvdf" (set Ceph to use the volumes) juju add-relation ceph-osd ceph (build the relationship between the Ceph monitors and OSDs) http://ceph.com/dev-notes/deploying-ceph-with-juju/
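The `--config ceph.yaml` file passed to `juju deploy` supplies the charm's required options. A minimal sketch is shown below; the option names are from the Ceph charm of this era, and all values are illustrative placeholders you would generate yourself.

```yaml
ceph:
  # Any UUID works; generate one with `uuidgen` (illustrative value below)
  fsid: 6547bd3e-1397-11e2-82e5-53567c8d32dc
  # Shared monitor secret; generate with ceph-authtool (illustrative value)
  monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
  # Block devices the OSDs should use (matches the `juju set` command above)
  osd-devices: /dev/xvdf
```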
  24. Questions please. Thank you. Mark Baker, Mark.baker@canonical.com
