Ceph: de facto storage backend for OpenStack
OpenStack in Action 4! Paris, 5th December
Whoami
💥  Sébastien Han
💥  French Cloud Engineer working for eNovance
💥  Daily job focused on Ceph and OpenStack
💥  Blogger

Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/

Worldwide office coverage
We design, build and run clouds – anytime, anywhere
Ceph
  
What is it?
  
The project
➜  Unified distributed storage system
➜  Started in 2006 as Sage Weil's PhD project
➜  Open source under the LGPL license
➜  Written in C++
➜  Builds the future of storage on commodity hardware
Key features
➜  Self-managing/healing
➜  Self-balancing
➜  Painless scaling
➜  Data placement with CRUSH
Controlled Replication Under Scalable Hashing
➜  Pseudo-random placement algorithm
➜  Statistically uniform distribution
➜  Rule-based configuration
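Rules live as editable text in the cluster's CRUSH map. As a sketch of what "rule-based configuration" means in practice, a typical replicated rule from a decompiled CRUSH map looks roughly like this (names illustrative):

```
# Decompiled CRUSH map excerpt (rule and bucket names are illustrative)
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default                   # start at the root of the hierarchy
    step chooseleaf firstn 0 type host  # pick one OSD per distinct host
    step emit
}
```

Changing `type host` to `type rack`, for example, tells CRUSH to spread replicas across racks instead of hosts.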
Overview  
State of the integration

Including Havana's best additions

Why is Ceph so good?
It unifies OpenStack components
Havana's additions
➜  Complete refactor of the Cinder driver:
•  Librados and librbd usage
•  Flatten volumes created from snapshots
•  Clone depth
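For reference, wiring Cinder to the refactored RBD driver is a few lines of `cinder.conf`; the options below are Havana-era names, and the pool/user values are assumptions:

```
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# The two bullets above map to these knobs:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
```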


➜  Cinder backup with a Ceph backend:
•  Backing up within the same Ceph pool (not recommended)
•  Backing up between different Ceph pools
•  Backing up between different Ceph clusters
•  Support for RBD stripes
•  Differential backups



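Enabling the Ceph backup driver is likewise a `cinder.conf` fragment; options are Havana-era names and the user/pool values are assumptions:

```
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups
# Stripe settings for the backup images (0 = defaults)
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
```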
➜  Nova libvirt_images_type = rbd
•  Directly boot all the VMs in Ceph
•  Volume QoS
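In `nova.conf` terms this amounts to the following (Havana option names; the pool value is an assumption):

```
[DEFAULT]
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
```

With this set, every ephemeral disk is created as an RBD image instead of a local file.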
Today's Havana integration
Is Havana the perfect stack?
…
  
Well, almost…
What's missing?
➜  Direct URL download for Nova
•  Already in the pipeline, probably for 2013.2.1

➜  Nova’s snapshots integration
•  Ceph snapshot

https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
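The idea behind the branch above is to drive snapshots through RBD's own snapshot/clone machinery rather than QEMU-side snapshots. Roughly, the underlying operations look like this (image and pool names hypothetical):

```shell
# Snapshot an instance disk that lives in RBD, then protect and clone it
rbd snap create vms/instance-0001_disk@snap
rbd snap protect vms/instance-0001_disk@snap
rbd clone vms/instance-0001_disk@snap images/my-snapshot
```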
Icehouse and beyond

Future

Tomorrow's integration
Icehouse roadmap
➜  Implement “bricks” for RBD
➜  Re-implement the snapshotting function to use RBD snapshots
➜  RBD on Nova bare metal
➜  Volume migration support
➜  RBD stripes support

« J » potential roadmap
➜  Manila support
Ceph, what's coming up?
  
Roadmap
  
Firefly  
➜  Tiering: cache pool overlay
➜  Erasure code
➜  ZFS backend for Ceph OSDs
➜  Full support of OpenStack Icehouse
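The cache pool overlay layers a fast pool (e.g. SSD-backed) in front of a slower base pool. The expected admin workflow is roughly the following sketch (pool names hypothetical):

```shell
# Attach a fast pool as a cache tier in front of a base pool
ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
ceph osd tier set-overlay cold-pool hot-pool
```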
Many thanks!


Questions?



Contact: sebastien@enovance.com
Twitter: @sebastien_han
IRC: leseb

