
Openstack Summit HK - Ceph defacto - eNovance

by Sebastien Han


  1. Ceph: de facto storage backend for OpenStack (OpenStack Summit 2013, Hong Kong)
  2. Whoami ➜ Sébastien Han ➜ French Cloud Engineer working for eNovance ➜ Daily job focused on Ceph and OpenStack ➜ Blogger • Personal blog: http://www.sebastien-han.fr/blog/ • Company blog: http://techs.enovance.com/ ➜ Worldwide offices • We design, build and run clouds – anytime coverage
  3. Ceph: what is it?
  4. The project ➜ Unified distributed storage system ➜ Started in 2006 as a PhD project by Sage Weil ➜ Open source under the LGPL license ➜ Written in C++ ➜ Build the future of storage on commodity hardware
  5. Key features ➜ Self-managing/healing ➜ Self-balancing ➜ Painless scaling ➜ Data placement with CRUSH
  6. Controlled Replication Under Scalable Hashing (CRUSH) ➜ Pseudo-random placement algorithm ➜ Statistically uniform distribution ➜ Rule-based configuration
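A toy sketch may help make the CRUSH idea concrete. This is not the real algorithm (CRUSH walks a compiled cluster map with bucket types and failure-domain rules, none of which appear here); it only illustrates the principle from the slide: placement is a pseudo-random but deterministic function of the object name, so every client computes the same statistically uniform mapping without a central lookup table. All names below are made up for the example.

```python
import hashlib

def toy_placement(obj_name, osds, replicas=3):
    """Pick `replicas` distinct OSDs for an object, deterministically.

    Illustration only: real CRUSH also walks the cluster hierarchy and
    applies rules such as "one replica per host/rack".
    """
    chosen = []
    attempt = 0
    while len(chosen) < replicas:
        key = "%s/%d" % (obj_name, attempt)
        digest = hashlib.md5(key.encode()).hexdigest()
        osd = osds[int(digest, 16) % len(osds)]
        if osd not in chosen:
            chosen.append(osd)
        attempt += 1
    return chosen

osds = ["osd.%d" % i for i in range(12)]
# Same input always yields the same placement, with no lookup table involved.
print(toy_placement("rbd_data.1234.0000000000000005", osds))
```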
  7. Overview
  8. Building a Ceph cluster: general considerations
  9. How to start? ➜ Use case • IO profile: bandwidth? IOPS? Mixed? • Guaranteed IO: how many IOPS or how much bandwidth do I want to deliver per client? • Usage: is Ceph standalone or combined with another software solution? ➜ Amount of data (usable, not raw) • Replica count • Failure ratio: how much data am I willing to rebalance if a node fails? • Do I have a data growth plan? ➜ Budget :-)
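The capacity questions on this slide boil down to simple arithmetic; the sketch below works through an example with assumed numbers (the replica count, node count and growth rate are placeholders, not recommendations): raw capacity is usable capacity times the replica count, and the data to re-balance after a node failure is roughly that node's share of the raw data.

```python
# Back-of-the-envelope sizing; every number here is an assumed example.
usable_tb   = 100.0   # usable capacity to offer to clients
replicas    = 3       # replica count: raw = usable * replicas
nodes       = 10      # OSD nodes in the cluster
growth_rate = 0.30    # expected yearly data growth

raw_tb = usable_tb * replicas
raw_per_node_tb = raw_tb / nodes
# If one node fails, roughly its share of the raw data is re-replicated
# across the surviving nodes.
rebalance_tb = raw_per_node_tb
raw_next_year_tb = raw_tb * (1 + growth_rate)

print("raw capacity today:      %.0f TB" % raw_tb)
print("raw per node:            %.0f TB" % raw_per_node_tb)
print("moved if a node fails:  ~%.0f TB" % rebalance_tb)
print("raw capacity in a year:  %.0f TB" % raw_next_year_tb)
```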
  10. Things you must not do ➜ Don't put RAID underneath your OSDs • Ceph already manages replication • A degraded RAID hurts performance • It reduces usable space on the cluster ➜ Don't build high-density nodes with a tiny cluster • Consider failures and the amount of data to re-balance • A single failure could fill the cluster ➜ Don't run Ceph on your hypervisors (unless you're broke)
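The warning about high-density nodes in a tiny cluster can be quantified with the same kind of arithmetic. The sketch below (illustrative numbers only) compares a 3-node and a 12-node cluster of equal raw capacity: when a node dies, the dense cluster has to move a much larger share of the data and may not have enough free space left on the survivors, which is the "potential full cluster" case from the slide.

```python
# Illustrative comparison only; 144 TB raw and 70% utilisation are assumptions.
def node_failure_impact(nodes, raw_tb, used_fraction):
    per_node = raw_tb / nodes
    to_move = per_node * used_fraction                 # data to re-replicate
    free_elsewhere = (nodes - 1) * per_node * (1 - used_fraction)
    return to_move, free_elsewhere

for nodes in (3, 12):
    moved, free = node_failure_impact(nodes, raw_tb=144.0, used_fraction=0.7)
    verdict = "fits" if free > moved else "cluster would fill up"
    print("%2d nodes: move %5.1f TB, %5.1f TB free on survivors -> %s"
          % (nodes, moved, free, verdict))
```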
  11. State of the integration, including Havana's best additions
  12. Why is Ceph so good? It unifies OpenStack components
  13. Havana’s additions ➜ Complete refactor of the Cinder driver: • librados and librbd usage • Flatten volumes created from snapshots • Clone depth ➜ Cinder backup with a Ceph backend: • Back up within the same Ceph pool (not recommended) • Back up between different Ceph pools • Back up between different Ceph clusters • RBD stripe support • Differentials ➜ Nova libvirt_images_type = rbd • Boot all the VMs directly in Ceph • Volume QoS
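To give a feel for what "librados and librbd usage", cloning and flattening mean in practice, here is a minimal sketch with the Python bindings. It is not the Cinder driver itself; the pool and volume names are assumptions, and error handling is omitted.

```python
import rados
import rbd

# Connect to the cluster and open a pool (names are assumed for the example).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

rbd_inst = rbd.RBD()
# Format-2 image with layering enabled, so it can be snapshotted and cloned.
rbd_inst.create(ioctx, 'volume-0001', 10 * 1024 ** 3,
                old_format=False, features=rbd.RBD_FEATURE_LAYERING)

with rbd.Image(ioctx, 'volume-0001') as image:
    image.create_snap('snap-0001')
    image.protect_snap('snap-0001')        # clones require a protected snapshot

# Copy-on-write clone from the snapshot (this is what adds "clone depth").
rbd_inst.clone(ioctx, 'volume-0001', 'snap-0001', ioctx, 'volume-0002',
               features=rbd.RBD_FEATURE_LAYERING)

with rbd.Image(ioctx, 'volume-0002') as clone:
    clone.flatten()                        # detach the clone from its parent

ioctx.close()
cluster.shutdown()
```

Flattening copies the parent's data into the clone so the chain does not grow indefinitely, which is what the "flatten volumes created from snapshots" item refers to.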
  14. Today’s Havana integration
  15. Is Havana the perfect stack? …
  16. Well, almost…
  17. What’s missing? ➜ Direct URL download for Nova • Already in the pipeline, probably for 2013.2.1 ➜ Nova snapshot integration • Ceph snapshots: https://github.com/jdurgin/nova/commits/havana-ephemeralrbd
  18. Icehouse and beyond: the future
  19. Tomorrow’s integration
  20. Icehouse roadmap ➜ Implement “bricks” for RBD ➜ Re-implement the snapshotting function to use RBD snapshots ➜ RBD on Nova bare metal ➜ Volume migration support ➜ RBD stripe support; potential « J » roadmap ➜ Manila support
  21. Ceph, what’s coming up? Roadmap
  22. Firefly ➜ Tiering: cache pool overlay ➜ Erasure coding ➜ Ceph OSD on ZFS ➜ Full support of OpenStack Icehouse
  23. Many thanks! Questions? Contact: sebastien@enovance.com Twitter: @sebastien_han IRC: leseb
