Ceph: de facto storage backend for OpenStack
OpenStack in Action 4! Paris, 5th December 2013
Whoami

💥 Sébastien Han
💥 French Cloud Engineer working for eNovance
💥 Daily job focused on Ceph and OpenStack
💥 Blogger
Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/
Worldwide offices coverage
We design, build and run clouds, anytime, anywhere
Ceph
What is it?
The project

➜ Unified distributed storage system
➜ Started in 2006 as a PhD by Sage Weil
➜ Open source under the LGPL license
➜ Written in C++
➜ Build the future of storage on commodity hardware
Key features

➜ Self-managing/healing
➜ Self-balancing
➜ Painless scaling
➜ Data placement with CRUSH
CRUSH: Controlled Replication Under Scalable Hashing

➜ Pseudo-random placement algorithm
➜ Statistically uniform distribution
➜ Rule-based configuration (see the rule sketch below)
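
As a sketch of what the rule-based configuration looks like, a replicated rule in a decompiled CRUSH map reads roughly like this; the bucket and rule names are illustrative assumptions, not taken from the deck:

    # Illustrative replicated CRUSH rule (assumed names and sizes)
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default                    # start from the assumed "default" root bucket
        step chooseleaf firstn 0 type host   # place each replica on a distinct host
        step emit
    }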
Overview

State of the integration
Including Havana's best additions
Why is Ceph so good?
It unifies OpenStack components
Havana's additions

➜ Complete refactor of the Cinder driver:
  • Librados and librbd usage
  • Flatten volumes created from snapshots
  • Clone depth limit
➜ Cinder backup with a Ceph backend:
  • Backing up within the same Ceph pool (not recommended)
  • Backing up between different Ceph pools
  • Backing up between different Ceph clusters
  • RBD stripes support
  • Differential backups
➜ Nova libvirt_images_type = rbd (configuration sketched below):
  • Directly boot all the VMs in Ceph
  • Volume QoS
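
A minimal Havana-era configuration sketch tying these additions together; the pool and user names (volumes, backups, vms, cinder) are assumptions, not from the deck:

    # cinder.conf -- RBD volume driver using librados/librbd
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false   # keep volumes as thin clones
    rbd_max_clone_depth = 5                    # flatten once the clone chain gets this deep

    # cinder.conf -- Ceph backup driver
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_pool = backups                 # prefer a different pool (or cluster) than volumes

    # nova.conf -- boot all the VMs directly in Ceph
    libvirt_images_type = rbd
    libvirt_images_rbd_pool = vms
    libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf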
Today's Havana integration
Is Havana the perfect stack?
…
Well, almost…
What's missing?

➜ Direct URL download for Nova (see the Glance sketch below)
  • Already in the pipeline, probably for 2013.2.1
➜ Nova's snapshots integration
  • Ceph snapshots: https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
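
For context, direct URL download depends on Glance exposing image locations so Nova can clone an RBD image instead of streaming it over HTTP; a sketch, assuming Glance already uses the RBD store (pool and user names are assumptions):

    # glance-api.conf -- expose RBD locations for zero-copy cloning
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    show_image_direct_url = True   # lets Nova/Cinder see the rbd:// location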
Icehouse and beyond
Future

Tomorrow's integration
Icehouse roadmap

➜ Implement “bricks” for RBD
➜ Re-implement the snapshotting function to use RBD snapshots (see the sketch below)
➜ RBD on Nova bare metal
➜ Volume migration support
➜ RBD stripes support

« J » potential roadmap

➜ Manila support
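
To illustrate what native RBD snapshotting buys, snapshots are instant and copy-on-write inside the cluster rather than a full image copy through the hypervisor; the image and snapshot names below are assumptions:

    rbd snap create volumes/volume-1234@snap1                  # instant, copy-on-write snapshot
    rbd snap protect volumes/volume-1234@snap1                 # must be protected before cloning
    rbd clone volumes/volume-1234@snap1 volumes/volume-5678    # thin clone from the snapshot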
Ceph, what's coming up?
Roadmap
Firefly

➜ Tiering: cache pool overlay (see the sketch below)
➜ Erasure coding
➜ Ceph OSD on ZFS
➜ Full support of OpenStack Icehouse
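
As a preview of the tiering feature, Firefly attaches a fast cache pool in front of a base pool roughly like this; pool names are assumptions, and the exact commands may still change before release:

    ceph osd tier add cold-pool hot-pool           # make hot-pool a tier of cold-pool
    ceph osd tier cache-mode hot-pool writeback    # absorb writes in the cache pool
    ceph osd tier set-overlay cold-pool hot-pool   # route client I/O through the cache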
Many thanks!

Questions?

Contact: sebastien@enovance.com
Twitter: @sebastien_han
IRC: leseb