2014 Ceph NYLUG Talk

Talk from the 05 June 2014 NYLUG meeting at Bloomberg in NYC: a short history of where Ceph came from, an architectural overview, and the current state of the community.

  1. Ceph @ NYLUG (New York, NY, 2014)
  2. WHO?
  3. AGENDA
  4. THE FORECAST: By 2020 over 15 ZB of data will be stored. 1.5 ZB are stored today.
  5. THE PROBLEM
     • Existing systems don't scale
     • Increasing cost and complexity
     • Need to invest in new platforms ahead of time
     (Chart: growth of data vs. IT storage budget, 2010-2020)
  6. THE SOLUTION: PAST: SCALE UP / FUTURE: SCALE OUT
  7. INTRO TO CEPH
  8. HISTORICAL TIMELINE
     • 2004: Project starts at UCSC
     • 2006: Open source
     • 2010: Mainline Linux kernel
     • 2011: OpenStack integration
     • MAY 2012: Launch of Inktank
     • SEPT 2012: Production-ready Ceph
     • 2012: CloudStack integration
     • 2013: Xen integration
     • OCT 2013: Inktank Ceph Enterprise launch
     • FEB 2014: RHEL-OSP & RHEV support
  9. A STORAGE REVOLUTION
  10. ARCHITECTURE
  11. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  12. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  13. OBJECT STORAGE DAEMONS (btrfs, xfs, ext4)
  14. RADOS CLUSTER
  15. RADOS COMPONENTS
      OSDs:
      • 10s to 10000s in a cluster
      • One per disk (or one per SSD, RAID group…)
      • Serve stored objects to clients
      • Intelligently peer for replication & recovery
      Monitors:
      • Maintain cluster membership and state
      • Provide consensus for distributed decision-making
      • Small, odd number
      • Do not serve stored objects to clients
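     (Illustration, not from the slides: a minimal Python sketch of how a client sees the OSD/monitor split. The client pulls cluster maps and state from the monitors, while object I/O goes straight to the OSDs. The config path is a placeholder and the python-rados bindings are assumed to be installed.)

        import rados

        # Contact the monitors listed in ceph.conf to fetch cluster maps and state.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
        cluster.connect()

        # Cluster-wide usage as tracked by the monitors; actual object I/O
        # would go directly to the OSDs, not through the monitors.
        stats = cluster.get_cluster_stats()
        print("kB used: %d, objects: %d" % (stats['kb_used'], stats['num_objects']))

        cluster.shutdown()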
  16. WHERE DO OBJECTS LIVE?
  17. A METADATA SERVER?
  18. CALCULATED PLACEMENT (A-G, H-N, O-T, U-Z)
  19. EVEN BETTER: CRUSH!
  20. CRUSH IS A QUICK CALCULATION
  21. CRUSH: DYNAMIC DATA PLACEMENT
      • Pseudo-random placement algorithm
      • Fast calculation, no lookup
      • Repeatable, deterministic
      • Statistically uniform distribution
      • Stable mapping
      • Limited data migration on change
      • Rule-based configuration
      • Infrastructure topology aware
      • Adjustable replication
      • Weighting
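     (Illustration, not from the slides: a toy rendezvous-hash placement function in Python, standing in for CRUSH. It is not the real algorithm, but it shows the properties the slide lists: every client computes the same answer from the object name and the OSD list, with no lookup table, and removing one OSD only remaps the objects that had it among their top picks.)

        import hashlib

        def toy_place(obj_name, osds, replicas=3):
            # Rank OSDs by a hash of (object, osd) and keep the top `replicas`.
            # Deterministic, repeatable, and statistically uniform.
            def score(osd):
                digest = hashlib.sha1(("%s/%s" % (obj_name, osd)).encode()).hexdigest()
                return int(digest, 16)
            return sorted(osds, key=score, reverse=True)[:replicas]

        osds = ["osd.%d" % i for i in range(12)]
        print(toy_place("my-object", osds))   # same answer on every client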
  22. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  23. ACCESSING A RADOS CLUSTER (socket)
  24. LIBRADOS: RADOS ACCESS FOR APPS
      • Direct access to RADOS for applications
      • C, C++, Python, PHP, Java, Erlang
      • Direct access to storage nodes
      • No HTTP overhead
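     (Illustration, not from the slides: writing and reading an object with the python-rados bindings. The pool name 'data' and the config path are placeholders.)

        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
        cluster.connect()

        # An ioctx is bound to a pool; objects are addressed by name and the
        # I/O goes directly to the OSDs that CRUSH selects, with no HTTP layer.
        ioctx = cluster.open_ioctx('data')                     # placeholder pool
        ioctx.write_full('hello-object', b'Hello, RADOS')
        print(ioctx.read('hello-object'))

        ioctx.close()
        cluster.shutdown()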
  25. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  26. THE RADOS GATEWAY (socket, REST)
  27. RADOSGW MAKES RADOS WEBBY
      • REST-based object storage proxy
      • Uses RADOS to store objects
      • API supports buckets, accounts
      • Usage accounting for billing
      • Compatible with S3 and Swift applications
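     (Illustration, not from the slides: talking to the RADOS Gateway through its S3-compatible API with the boto library. The host name and credentials are placeholders for a real radosgw endpoint and user.)

        import boto
        import boto.s3.connection

        conn = boto.connect_s3(
            aws_access_key_id='ACCESS_KEY',          # placeholder credentials
            aws_secret_access_key='SECRET_KEY',
            host='radosgw.example.com',              # placeholder endpoint
            is_secure=False,
            calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

        # Buckets and objects end up as RADOS objects behind the gateway.
        bucket = conn.create_bucket('demo-bucket')
        key = bucket.new_key('hello.txt')
        key.set_contents_from_string('Hello from the RADOS Gateway')
        print([k.name for k in bucket.list()])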
  28. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  29. STORING VIRTUAL DISKS
  30. SEPARATE COMPUTE FROM STORAGE
  31. KERNEL MODULE FOR MAX FLEXIBLE!
  32. RBD STORES VIRTUAL DISKS
      RADOS BLOCK DEVICE:
      • Storage of disk images in RADOS
      • Decouples VMs from host
      • Images are striped across the cluster (pool)
      • Snapshots
      • Copy-on-write clones
      • Support in:
        • Mainline Linux kernel (2.6.39+)
        • Qemu/KVM, native Xen coming soon
        • OpenStack, CloudStack, Nebula, Proxmox
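     (Illustration, not from the slides: creating and writing an RBD image with the python-rbd bindings. Pool name, image name, and config path are placeholders.)

        import rados
        import rbd

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
        cluster.connect()
        ioctx = cluster.open_ioctx('rbd')                      # placeholder pool

        # Create a 1 GiB image; its data is striped across many RADOS objects.
        rbd.RBD().create(ioctx, 'vm-disk-0', 1024 ** 3)

        # Read and write the image as if it were a block device.
        with rbd.Image(ioctx, 'vm-disk-0') as image:
            image.write(b'boot sector bytes', 0)
            print(image.size())

        ioctx.close()
        cluster.shutdown()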
  33. ARCHITECTURAL COMPONENTS (APP, HOST/VM, CLIENT)
  34. SEPARATE METADATA SERVER (metadata and data paths)
  35. SCALABLE METADATA SERVERS
      METADATA SERVER:
      • Manages metadata for a POSIX-compliant shared filesystem
        • Directory hierarchy
        • File metadata (owner, timestamps, mode, etc.)
      • Stores metadata in RADOS
      • Does not serve file data to clients
      • Only required for the shared filesystem
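     (Illustration, not from the slides: a rough sketch assuming the libcephfs Python bindings (the cephfs module). Directory operations such as mkdir and stat are namespace requests handled by the MDS; file data itself flows directly between the client and the OSDs. Path, mode, and config path are placeholders, and exact binding signatures may differ by release.)

        import cephfs

        fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')  # placeholder path
        fs.mount()

        fs.mkdir('/demo', 0o755)    # namespace update handled by the MDS
        print(fs.stat('/demo'))

        fs.shutdown()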
  36. CEPH AND OPENSTACK
  37. Ceph Developer Summit
      • Recent: "Giant"
      • March 04-05
      • wiki.ceph.com
      • Virtual (IRC, hangout, pad, blueprint, YouTube)
      • 2 days (soon to be 3?)
      • Discuss all work
      • Recruit for your projects!
  38. New Contribute Page
      • http://ceph.com/community/contribute
      • Source tree
      • Issues
      • Share experiences
      • Standups
      • One-stop shop
  39. New Ceph Wiki
  40. Google Summer of Code 2014
      • Accepted as a mentoring organization
      • 8 mentors from Inktank & community
      • http://ceph.com/gsoc2014/
      • 2 student proposals accepted
      • Hope to turn this into academic outreach
  41. Ceph Days
      • inktank.com/cephdays
      • Recently: London, Frankfurt, NYC, Santa Clara
      • Aggressive program
      • Upcoming: Sunnyvale, Austin, Boston, Kuala Lumpur
  42. Meetups
      • Community organized
      • Worldwide
      • Wiki
      • Ceph-community
      • Goodies available
      • Logistical support
      • Drinkup to tradeshow
  43. Ceph Foundation
      • We haven't forgotten!
      • Looking for potential founding members
      • Especially important to keep the IP clean
  44. Coordinated Efforts
      • Always need help
      • CentOS SIG
      • OCP
      • Xen
      • Hadoop
      • OpenStack
      • CloudStack
      • Ganeti
      • Many more!
  45. http://metrics.ceph.com
  46. THE PRODUCT
  47. INKTANK CEPH ENTERPRISE: WHAT'S INSIDE?
      • Ceph Object and Ceph Block
      • Calamari
      • Enterprise plugins (2014)
      • Support services
  48. ROADMAP: INKTANK CEPH ENTERPRISE (April 2014, September 2014, 2015)
  49. RELEASE SCHEDULE (2013 Q3 through 2015 Q2)
  50. GETTING STARTED WITH CEPH
      • Read about the latest version of Ceph. The latest stuff is always at http://ceph.com/get
      • Deploy a test cluster using ceph-deploy. Read the quick-start guide at http://ceph.com/qsg
      • Read the rest of the docs! Find docs for the latest release at http://ceph.com/docs
      • Ask for help when you get stuck! Community volunteers are waiting for you at http://ceph.com/help
  51. THANK YOU!
      Patrick McGarry
      Director, Community, Red Hat
      pmcgarry@redhat.com
      @scuttlemonkey
