OpenStack with Ceph

Transcript

  • 1. Inktank: OpenStack with Ceph
  • 2. Who is this guy? Ian Colle, Ceph Program Manager, Inktank. ian@inktank.com | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode | inktank.com | ceph.com
  • 3. Selecting the Best Cloud Storage System
    People need storage solutions that…
    - …are open
    - …are easy to manage
    - …satisfy their requirements: performance, functional, financial (cha-ching!)
  • 4. Hard Drives Are Tiny Record Players and They Fail Often (image: jon_a_ross, Flickr / CC BY 2.0)
  • 5. (Diagram: a single drive's failure rate, multiplied by one million drives, works out to roughly 55 failures per day.)
  • 6. I got it! "That's why I use Swift in my OpenStack implementation." Hmmm, what about block storage?
  • 7. Benefits of Block Storage
    - Persistent: more familiar to users
    - Not tied to a single host: decouples compute and storage, enables live migration
    - Extra capabilities of the storage system: efficient snapshots, different types of storage available, cloning for fast restore or scaling
  • 8. Ceph over Swift
    Ceph has reduced administration costs: "intelligent devices" use a peer-to-peer mechanism to detect failures and react automatically, rapidly ensuring replication policies are still honored if a node becomes unavailable. Swift requires an operator to notice a failure and update the ring configuration before redistribution of data is started.
    Ceph guarantees the consistency of your data: even with large volumes of data, Ceph ensures clients get a consistent copy from any node within a region. Swift's replication system means that users may get stale data, even within a single site, due to slow asynchronous replication as the volume of data builds up.
  • 9. Swift over Ceph
    Swift has quotas; we do not (coming this fall). Swift has object expiration; we do not (coming this fall).
  • 10. Total Solution Comparison
    Ceph: provides object AND block storage in a single system that is compatible with the Swift and Cinder APIs and is self-healing without operator intervention.
    Swift: if you use Swift, you still have to provision and manage a totally separate system to handle your block storage (in addition to paying the poor guy to go update the ring configuration).
  • 11. OpenStack I know, but what is Ceph?
  • 12. Philosophy: open source, community-focused, software-based. Design: scalable, no single point of failure, self-managing.
  • 13. The Ceph stack:
    LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
    RGW (RADOS Gateway): a bucket-based REST gateway, compatible with S3 and Swift
    RBD (RADOS Block Device): a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
    CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
    RADOS: a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
  • 14. (Ceph stack diagram repeated from slide 13.)
  • 15. Monitors:
    - Maintain the cluster map
    - Provide consensus for distributed decision-making
    - Must be an odd number
    - Do not serve stored objects to clients
    OSDs:
    - One per disk (recommended)
    - At least three in a cluster
    - Serve stored objects to clients
    - Intelligently peer to perform replication tasks
    - Support object classes
  • 16. (Diagram: each OSD runs on top of a filesystem (btrfs, xfs, or ext4) on its own disk, alongside three monitors.)
  • 17. (Diagram: a human administrator interacting with the monitors.)
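Since the monitors hold the authoritative cluster map, a client can query cluster state straight from them. A minimal sketch using the librados Python bindings' admin call (mon_command); the ceph.conf path is an assumption and a reachable cluster is required:

    import json
    import rados

    # Connect using a local ceph.conf (assumed path) and its keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Ask the monitors for cluster status; the command is a JSON document.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'status', 'format': 'json'}), b'')
        if ret == 0:
            print(json.loads(outbuf))
    finally:
        cluster.shutdown()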
  • 18. (Ceph stack diagram repeated from slide 13.)
  • 19. LIBRADOS:
    - Provides direct access to RADOS for applications
    - C, C++, Python, PHP, Java
    - No HTTP overhead
  • 20. (Diagram: an app links LIBRADOS and talks to the cluster over the native protocol.)
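Slide 19's claim is easy to see in code. A hedged sketch of the librados Python bindings; the pool name 'data' and the conf path are assumptions:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')   # assumes a pool named 'data'
        try:
            # Write and read an object directly; no HTTP gateway involved.
            ioctx.write_full('hello-object', b'Hello, RADOS!')
            print(ioctx.read('hello-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()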
  • 21. (Ceph stack diagram repeated from slide 13.)
  • 22. (Diagram: apps speak REST to RGW instances, which use LIBRADOS to talk natively to the cluster.)
  • 23. RADOS Gateway:
    - REST-based interface to RADOS
    - Supports buckets and accounting
    - Compatible with S3 and Swift applications
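Because RGW speaks the S3 dialect, a stock S3 client can talk to it unchanged. A sketch using the classic boto library; the endpoint host and credentials are placeholders:

    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',          # placeholder credentials
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',                  # placeholder RGW endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    # Buckets and objects behave as they would against Amazon S3.
    bucket = conn.create_bucket('my-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('Hello, RGW!')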
  • 24. (Ceph stack diagram repeated from slide 13.)
  • 25. (Diagram: a VM's virtualization container uses LIBRBD, on top of LIBRADOS, to reach the cluster.)
  • 26. RADOS Block Device:
    - Storage of virtual disks in RADOS
    - Allows decoupling of VMs and containers
    - Live migration!
    - Images are striped across the cluster
    - Boot support in QEMU, KVM, and OpenStack Nova (more on that later!)
    - Mount support in the Linux kernel
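For a feel of the API, a sketch using the librbd Python bindings; the 'rbd' pool and the image name are illustrative:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')      # assumes the default 'rbd' pool
    try:
        # Create a 10 GiB virtual disk; blocks are allocated lazily,
        # so this consumes almost no space up front.
        rbd.RBD().create(ioctx, 'vm-disk-1', 10 * 1024 ** 3)
        with rbd.Image(ioctx, 'vm-disk-1') as image:
            image.write(b'boot data', 0)   # write at offset 0
    finally:
        ioctx.close()
        cluster.shutdown()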
  • 27. (Ceph stack diagram repeated from slide 13.)
  • 28. What Makes Ceph Unique? Part one: CRUSH
  • 29-31. (Diagrams: an app must decide which of many storage nodes should hold each object; a naive approach statically partitions objects by name range, e.g. A-G, H-N, O-T, U-Z.)
  • 32. (Diagram: objects are mapped to placement groups via hash(object name) % num_pg, and placement groups are mapped to OSDs via CRUSH(pg, cluster state, rule set).)
  • 33. (Placement-group mapping diagram, continued.)
  • 34. CRUSH:
    - Pseudo-random placement algorithm
    - Ensures even distribution
    - Repeatable and deterministic
    - Rule-based configuration: replica count, infrastructure topology, weighting
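To make slide 32's two-step mapping concrete, here is a toy Python illustration. It is emphatically not the real CRUSH algorithm, just a seeded, repeatable stand-in with made-up cluster numbers:

    import hashlib
    import random

    NUM_PGS = 64               # made-up placement-group count
    OSDS = list(range(12))     # pretend cluster of 12 OSDs
    REPLICAS = 3

    def object_to_pg(name):
        # Step 1: hash(object name) % num_pg
        h = int(hashlib.md5(name.encode()).hexdigest(), 16)
        return h % NUM_PGS

    def pg_to_osds(pg):
        # Step 2: stand-in for CRUSH(pg, cluster state, rule set), a
        # deterministic pseudo-random choice of REPLICAS distinct OSDs.
        return random.Random(pg).sample(OSDS, REPLICAS)

    pg = object_to_pg('my-object')
    print(pg, pg_to_osds(pg))  # same inputs always yield the same placement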
  • 35-38. (Diagram-only slides.)
  • 39. What Makes Ceph Unique? Part two: thin provisioning
  • 40. (Diagram-only slide.)
  • 41. HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?
  • 42-44. (Diagram-only slides.)
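This is where RBD's copy-on-write cloning comes in: snapshot a golden image once, then stamp out thin clones. A hedged sketch with the librbd Python bindings; pool, image, and snapshot names are illustrative, and 'golden-image' is assumed to be a format-2 image with layering enabled:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'golden-image') as img:
            img.create_snap('base')
            img.protect_snap('base')  # clones require a protected snapshot
        for i in range(1000):
            # Each clone shares the parent's blocks until it writes to them,
            # so a thousand VM disks appear almost instantly.
            rbd.RBD().clone(ioctx, 'golden-image', 'base', ioctx,
                            'vm-disk-%d' % i,
                            features=rbd.RBD_FEATURE_LAYERING)
    finally:
        ioctx.close()
        cluster.shutdown()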
  • 45. How Does Ceph Work with OpenStack?
  • 46. Ceph / OpenStack Integration
    - RBD support initially added in Cactus; features and integration have increased with each subsequent release
    - You can use both the Swift (object/blob store) and Keystone (identity service) APIs to talk to RGW
    - Cinder, block storage as a service, talks directly to RBD
    - Nova, the cloud computing controller, talks to RBD via the hypervisor
    - Coming in Havana: the ability to create a volume from an RBD image via the Horizon UI
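For context, wiring Cinder to RBD in this era came down to a few driver settings in cinder.conf. This is a sketch with placeholder pool, user, and secret values, not a complete configuration:

    [DEFAULT]
    # Create Cinder volumes as RBD images in the cluster
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes                  # placeholder pool name
    rbd_user = cinder                   # placeholder cephx user
    rbd_secret_uuid = <your-libvirt-secret-uuid>
    glance_api_version = 2              # enables copy-on-write from Glance images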
  • 47. What is Inktank? I really like your polo shirt, please tell me what it means!
  • 48. Who? The majority of Ceph contributors. Formed by Sage Weil (CTO), the creator of Ceph, in 2011. Funded by DreamHost and other investors (Mark Shuttleworth, etc.).
  • 49. Why? To ensure the long-term success of Ceph, and to help companies adopt Ceph through services, support, training, and consulting.
  • 50. What?
    Guide the Ceph roadmap: hosting a virtual Ceph Design Summit in early May.
    Standardize the Ceph development and release schedule: quarterly stable releases, interim releases every 2 weeks.
      * May 2013: Cuttlefish (RBD incremental snapshots!)
      * Aug 2013: Dumpling (disaster recovery / multisite, admin API)
      * Nov 2013: some really cool cephalopod name that starts with an E
    Ensure quality: maintain the Teuthology test suite; harden each stable release via extensive manual and automated testing.
    Develop reference and custom architectures for implementation.
  • 51. Inktank/Dell Partnership
    - Inktank is a strategic partner for Dell in Emerging Solutions
    - The Emerging Solutions Ecosystem Partner Program is designed to deliver complementary cloud components
    - As part of this program, Dell and Inktank provide:
      > Ceph storage software: adds scalable cloud storage to the Dell OpenStack-powered cloud; uses Crowbar to provision and configure a Ceph cluster (yeah, Crowbar!)
      > Professional services, support, and training: collaborative support for Dell hardware customers
      > Joint solution: validated against Dell Reference Architectures via the Technology Partner program
  • 52. What do we want from you?
    Try Ceph and tell us what you think!
    http://ceph.com/resources/downloads/
    http://ceph.com/resources/mailing-list-irc/ (ask if you need help; help others if you can!)
    Ask your company to start dedicating dev resources to the project! http://github.com/ceph
    Find a bug (http://tracker.ceph.com) and fix it!
    Participate in our Ceph Design Summit!
  • 53. One final request… We're planning the next release of Ceph and would love your input. What features would you like us to include? iSCSI? Live migration?
  • 54. Questions? Ian Colle, Ceph Program Manager, Inktank. ian@inktank.com | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode | inktank.com | ceph.com
