OpenStack in Action 4! Sebastien Han - Ceph: de facto storage backend for OpenStack
Published on

Paris, 5th December 2013: OpenStack in Action 4!, organized by eNovance, brings together members of the OpenStack community.

Published in: Technology, Education
Transcript

  • 1. Ceph: de facto storage backend for OpenStack. OpenStack in Action 4! Paris, 5th December 2013
  • 2. Whoami: 💥 Sébastien Han 💥 French Cloud Engineer working for eNovance 💥 Daily job focused on Ceph and OpenStack 💥 Blogger. Personal blog: http://www.sebastien-han.fr/blog/ Company blog: http://techs.enovance.com/ Worldwide offices coverage. We design, build and run clouds, anytime, anywhere.
  • 3. Ceph   What  is  it?  
  • 4. The  project   ➜  Unified distributed storage system ➜  Started in 2006 as a PhD by Sage Weil ➜  Open source under LGPL license ➜  Written in C++ ➜  Build the future of storage on commodity hardware
  • 5. Key  features   ➜  Self managing/healing ➜  Self balancing   ➜  Painless scaling ➜  Data placement with CRUSH
  • 6. Controlled Replication Under Scalable Hashing ➜ Pseudo-random placement algorithm ➜ Statistically uniform distribution ➜ Rule-based configuration
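"Rule-based configuration" means placement policy lives in the cluster's CRUSH map, which admins can decompile, edit, and recompile. A minimal sketch of a replicated rule (rule name and hierarchy are illustrative, not from the talk):

```
# Decompiled CRUSH map excerpt (illustrative): place each
# object's replicas on OSDs under distinct hosts.
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default                    # start at the root bucket
    step chooseleaf firstn 0 type host   # one OSD per distinct host
    step emit
}
```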
  • 7. Overview  
  • 8. State of the integration: including Havana's best additions
  • 9. Why  is  Ceph  so  good?   It unifies OpenStack components
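One concrete example of that unification: Glance can store and serve images straight out of RADOS. A hedged Havana-era glance-api.conf sketch (the pool and user names are assumptions, not taken from the slides):

```ini
# glance-api.conf (illustrative values)
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
```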
  • 10. Havana's additions ➜ Complete refactor of the Cinder driver: • librados and librbd usage • flatten volumes created from snapshots • clone depth ➜ Cinder backup with a Ceph backend: • backing up within the same Ceph pool (not recommended) • backing up between different Ceph pools • backing up between different Ceph clusters • support for RBD stripes • differentials ➜ Nova: libvirt_image_type = rbd • directly boot all the VMs in Ceph • volume QoS
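These additions are enabled mostly through configuration. A hedged sketch of the relevant Havana-era options (pool names, user names, and the secret UUID are placeholders; the Nova option is usually spelled libvirt_images_type in Havana's nova.conf):

```ini
# cinder.conf (illustrative): RBD volume driver via librados/librbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5

# cinder.conf (illustrative): backup to a Ceph backend
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

# nova.conf (illustrative): boot all VMs directly in Ceph
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
```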
  • 11. Today's Havana integration
  • 12. Is  Havana  the  perfect  stack?   …  
  • 13. Well, almost…
  • 14. What's missing? ➜ Direct URL download for Nova • already in the pipeline, probably for 2013.2.1 ➜ Nova's snapshot integration • Ceph snapshots: https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
  • 15. Icehouse  and  beyond   Future  
  • 16. Tomorrow's integration
  • 17. Icehouse roadmap ➜ Implement "bricks" for RBD ➜ Re-implement the snapshotting function to use RBD snapshots ➜ RBD on Nova bare metal ➜ Volume migration support ➜ RBD stripes support. « J » potential roadmap ➜ Manila support
  • 18. Ceph,  what’s  coming  up?   Roadmap  
  • 19. Firefly ➜ Tiering: cache pool overlay ➜ Erasure coding ➜ Ceph OSD on ZFS ➜ Full support of OpenStack Icehouse
  • 20. Many thanks! Questions? Contact: sebastien@enovance.com Twitter: @sebastien_han IRC: leseb
