Ubuntu OpenStack & Ceph
Presentation Transcript

  • Ubuntu OpenStack & Ceph
  • Agenda ● Canonical? ● What is Ceph? ● How does it work? ● How is Ceph supported in OpenStack? ● How can I try it out? ● Q&A
  • Core areas for scale out with Ubuntu: IaaS (OpenStack → Swift → Nova → Quantum → Horizon → Glance); SDN (Nicira, BigSwitch, NEC, MidoNet); PaaS (Cloud Foundry, Engine Yard); Storage (block | object: Ceph). All share Ubuntu as a common platform.
  • Core areas for scale out with Ubuntu: IaaS (OpenStack → Swift → Nova → Quantum → Horizon → Glance); Storage (Ceph).
  • In OpenStack before it was OpenStack....
  • Ceph + Ubuntu OpenStack ● Ceph is a fully supported option as part of Ubuntu OpenStack (Cinder and Glance) ● Support is backed by Inktank
  • What is Ceph? ● Free to develop and use ● Backed by Inktank ● Solves most OpenStack storage requirements ● Provides block, object and file (!) storage ● Fully supported in Ubuntu
  • Ceph is based on a distributed, autonomic, redundant native object store called RADOS (Reliable Autonomic Distributed Object Store)
  • What is Ceph? ● Object Store (RADOS) ● Block Storage (RBD) ● Swift/S3 REST API (RGW) ● Distributed File System (CephFS) ● librados bindings (Python/C/C++/Java/PHP)
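These layers can be exercised directly from the command line. A minimal sketch, assuming a running cluster and a pool named "data" (the pool, object and image names here are illustrative):

    # write and read an object straight into RADOS
    echo "hello ceph" > hello.txt
    rados -p data put hello-object hello.txt
    rados -p data get hello-object /tmp/hello-object.txt

    # create and list a 10 GB RBD block device image in the same pool
    rbd create data/test-volume --size 10240
    rbd ls data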
  • What is Ceph? Object Store ● Self healing and replicated across failure domains ● No single point of failure ● Runs on commodity hardware ● No RAID or expensive disks needed
  • How does it work? Two types of daemon ● Monitor (MON) – odd number in quorum – maintains cluster status ● Object Storage Daemon (OSD) – typically uses XFS – typically 1:1 with disks – maintains data and replicas
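A quick way to see both daemon types on a running cluster, as a sketch (assumes an admin keyring is available on the node):

    # overall cluster health and monitor quorum
    ceph status
    ceph quorum_status

    # OSDs and how they map onto hosts and disks
    ceph osd tree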
  • How does it work? CRUSH Algorithm (Controlled Replication Under Scalable Hashing) ● OSDs, buckets and rules ● Defines failure domains ● Provides deterministic object placement
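For illustration, this is roughly what a replicated rule looks like inside a decompiled CRUSH map (names are made up); it places each replica on a different host, making the host the failure domain:

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # example rule inside crushmap.txt
    rule replicated_across_hosts {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }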
  • How does it work? Pools ● Logically partition the cluster based on storage type/performance ● Provide client access to a specific pool using cephx
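A minimal sketch of creating pools and matching cephx keys; the pool names and the placement-group count (128) are illustrative and should be sized for the cluster:

    # one pool for Cinder volumes, one for Glance images
    ceph osd pool create volumes 128
    ceph osd pool create images 128

    # cephx keys that restrict each OpenStack client to its own pool
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes'
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images'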
  • How does it work?
  • Ceph Support in OpenStack ● Supported in OpenStack since Cactus ● Supported in: – Nova (Compute) – KVM + Xen – Cinder (Storage) – Glance (images)
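On the Nova/KVM side, libvirt needs the cephx key wrapped in a secret so qemu can attach RBD volumes. A rough sketch, assuming the hypothetical client.cinder key from the pool example above (the secret UUID is whatever virsh reports):

    # define a libvirt secret to hold the cephx key
    cat > ceph-secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file ceph-secret.xml

    # load the key into the secret (use the UUID printed by secret-define)
    virsh secret-set-value --secret <uuid> --base64 $(ceph auth get-key client.cinder)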
  • Ceph support in OpenStack ● RBD driver for volumes, snapshots, clones ● Bootable volumes using copy-on-write clones of Glance images ● Incremental differential backup of Cinder volumes
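Roughly how that wiring looks in the OpenStack configuration files; the exact driver path and option names vary between OpenStack releases, so treat this as a sketch rather than a drop-in config:

    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret holding the cinder key>

    # /etc/glance/glance-api.conf
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf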
  • Ceph support in OpenStack
  • How can I try it out? ● The simplest way is to use Juju
  • Juju Charms ● Juju utilizes service formulas called "Charms" ● Charms are building blocks ● Charms contain instructions: deploy, install and configure ● Charms can be instantiated one or many times [Diagram: Database and Ceph services in a Juju environment]
  • Juju maintains the relations between the services ● Eliminates complex configuration management [Diagram: Juju relations between Ceph, Ceph-Radosgw and Ceph-OSD]
  • Multiple charms can provide the same service and can be easily switched [Diagram: a cloud app with Depends/Provides relations to HAProxy and Ceph]
  • juju deploy -n 3 --config ceph.yaml ceph → Deploy the Ceph monitors ● Deploy the Ceph OSDs ● juju set ceph-osd "osd-devices=/dev/xvdf" → Set Ceph to use the volumes ● juju add-relation ceph-osd ceph → Build the relationship between the Ceph monitors and OSDs ● http://ceph.com/dev-notes/deploying-ceph-with-juju/
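For reference, the ceph.yaml passed to juju deploy above carries the charm's configuration. A hedged example, assuming the Ubuntu ceph charm of the time; option names may differ in newer charm revisions and the values are placeholders:

    ceph:
      fsid: "<uuid generated with uuidgen>"
      monitor-secret: "<key generated with ceph-authtool --gen-print-key>"
      osd-devices: "/dev/vdb /dev/vdc"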
  • Questions, please. Thank you! Mark Baker, Mark.baker@canonical.com