Ceph: de facto storage backend for OpenStack

Sébastien Han's presentation at FOSDEM 2014.

  • Ceph provides numerous features:
    Self-healing: if something breaks, the cluster reacts and triggers a recovery process.
    Self-balancing: as soon as you add a new disk or a new node, the cluster moves and re-balances data.
    Self-managing: periodic tasks such as scrubbing check object consistency, and if something is wrong Ceph repairs the object.
    Painless scaling: it is fairly easy to add a new disk or node, especially with all the tools out there to deploy Ceph (Puppet, Chef, ceph-deploy).
    Intelligent data placement: you can logically reflect your physical infrastructure and build placement rules; objects are automatically placed, balanced, and migrated in a dynamic cluster.
    CRUSH (Controlled Replication Under Scalable Hashing): a pseudo-random placement algorithm with fast calculation and no lookup; repeatable and deterministic; rule-based configuration; infrastructure topology aware; adjustable replication.
    The way CRUSH is configured is somewhat unique. Instead of defining pools for different data types, workgroups, subnets, or applications, CRUSH is configured with the physical topology of your storage network. You tell it how many buildings, rooms, shelves, racks, and nodes you have, and you tell it how you want data placed. For example, you could tell CRUSH that it is okay to have two replicas in the same building, but not on the same power circuit. You also tell it how many copies to keep. (See the CRUSH rule sketch after these notes.)
  • RADOS is a distributed object store. On top of RADOS we have built three systems that allow us to store data, giving several ways to access it (see the librbd sketch after these notes):
    RGW: a native RESTful gateway; S3- and Swift-compatible; multi-tenant with quotas; multi-site capabilities; disaster recovery.
    RBD: thinly provisioned block devices; full and incremental snapshots; copy-on-write cloning; native Linux kernel driver support; supported by KVM and Xen.
    CephFS: POSIX-compliant semantics; subdirectory snapshots.
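As a rough illustration of the rule-based, topology-aware placement described in the notes above, here is a minimal sketch of a replicated rule as it appears in a decompiled CRUSH map. The rule name, the "default" root bucket, and the host failure domain are assumptions for this sketch; a real map also declares devices, bucket types, and the bucket hierarchy.

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default                   # start at the root of the hierarchy
        step chooseleaf firstn 0 type host  # place each replica on a different host
        step emit
    }

A rule like this is how constraints such as "replicas may share a building but not a power circuit" are expressed: you model the relevant failure domain as a bucket type in the hierarchy and choose across that type instead of "host".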
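To make the RBD feature list concrete, here is a minimal sketch using the python-rados and python-rbd bindings to create a thin-provisioned image, snapshot it, and take a copy-on-write clone. The configuration path, the "rbd" pool, and the image and snapshot names are assumptions for this sketch.

    import rados
    import rbd

    # Connect to the cluster and open an I/O context on a pool.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    try:
        rbd_inst = rbd.RBD()
        # Thin-provisioned 10 GiB image; format 2 with layering so it can be cloned.
        rbd_inst.create(ioctx, 'demo-image', 10 * 1024 ** 3,
                        old_format=False, features=rbd.RBD_FEATURE_LAYERING)
        image = rbd.Image(ioctx, 'demo-image')
        image.create_snap('base')
        image.protect_snap('base')  # clones require a protected snapshot
        image.close()
        # Copy-on-write clone of the protected snapshot.
        rbd_inst.clone(ioctx, 'demo-image', 'base', ioctx, 'demo-clone',
                       features=rbd.RBD_FEATURE_LAYERING)
    finally:
        ioctx.close()
        cluster.shutdown()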

Presentation Transcript

  • Ceph: de facto storage backend for OpenStack FOSDEM 2014 - Sébastien Han - French Cloud Engineer working for eNovance - Daily job focused on Ceph and OpenStack
  • Ceph: what is it?
  • Unified distributed storage system ➜ Started in 2006 | Open Source LGPL | Written in C++ ➜ Self managing/healing ➜ Self balancing (uniform distribution) ➜ Painless scaling ➜ Data placement with CRUSH ➜ Pseudo-random placement algorithm ➜ Rule-based configuration
  • Overview
  • State of the integration: OpenStack Havana
  • Today’s Havana integration
  • Havana is not the perfect stack… ➜ Nova RBD ephemeral backend is buggy: https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd (a minimal Cinder and Glance RBD configuration sketch follows this transcript)
  • Icehouse status: Future
  • Tomorrow’s integration
  • Icehouse progress (blueprints / bugs and their status):
    Swift RADOS backend: in progress
    DevStack Ceph: in progress
    RBD TGT for other hypervisors: not started
    Enable cloning for rbd-backed ephemeral disks: in progress
    Clone non-raw images in Glance RBD backend: implemented
    Nova ephemeral backend dedicated pool and user: implemented
    Volume migration support: not started
    Use RBD snapshot instead of qemu-img: not started
  • Ceph, what’s coming up? Roadmap
  • Firefly ➜ Tiering - cache pool overlay ➜ Erasure code ➜ Ceph OSD ZFS ➜ Filestore multi-backend
  • Many thanks! Questions? Contact: sebastien@enovance.com Twitter: @sebastien_han IRC: leseb Company blog: http://techs.enovance.com/ Personal blog: http://www.sebastien-han.fr/blog/
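The "Today's Havana integration" slides above wire Glance images and Cinder volumes to RBD. As a rough sketch of what that Havana-era configuration looks like, the snippet below shows the relevant options; the pool names ("images", "volumes"), the Cephx user names, and the libvirt secret UUID placeholder are assumptions for this sketch, and exact option names can vary between releases.

    # glance-api.conf (store images in the assumed "images" pool)
    default_store = rbd
    show_image_direct_url = True
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance
    rbd_store_pool = images

    # cinder.conf (store volumes in the assumed "volumes" pool)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID>
    glance_api_version = 2

With show_image_direct_url enabled in Glance and glance_api_version = 2 in Cinder, volumes can be created as copy-on-write clones of RBD-backed images rather than full copies.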