Ceph: a decade in the making and still going strong
Sage Weil
Today the part of Sage Weil will be played by...
Timeline: Research → Incubation → Inktank
Research beginnings
UCSC research grant
● “Petascale object storage”
● DOE: LANL, LLNL, Sandia
● Scalability, reliability, performance
● HPC file system workloads
● Scalable metadata management
● First line of Ceph code
● Summer internship at LLNL
● High security national lab environment
● Could write anything, as long as it was OSS
The rest of Ceph
● RADOS – distributed object storage cluster (2005)
● EBOFS – local object storage (2004/2006)
● CRUSH – hashing for the real world (2005) – see the placement sketch below
● Paxos monitors – cluster consensus (2006)
→ emphasis on consistent, reliable storage
→ scale by pushing intelligence to the edges
→ a different but compelling architecture
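CRUSH is the piece that lets any client compute where an object lives from a hash, with no central lookup table. The sketch below illustrates that idea with weighted rendezvous (highest-random-weight) hashing in Python; it is only a conceptual stand-in, not the actual CRUSH algorithm, which adds hierarchy, bucket types, and failure-domain rules. The OSD names and weights are made up.

    # Illustration of hash-based, table-free placement in the spirit of CRUSH.
    # Not the real CRUSH algorithm: no hierarchy, buckets, or failure domains.
    import hashlib
    import math

    def _score(obj, osd, weight):
        # Deterministic pseudo-random draw per (object, OSD), biased by weight.
        h = hashlib.sha256(("%s:%s" % (obj, osd)).encode()).digest()
        u = (int.from_bytes(h[:8], "big") + 0.5) / 2**64   # uniform in (0, 1)
        return -weight / math.log(u)                       # weighted rendezvous score

    def place(obj, osds, replicas=3):
        # Every client ranks OSDs the same way, so no placement table is needed.
        ranked = sorted(osds, key=lambda o: _score(obj, o, osds[o]), reverse=True)
        return ranked[:replicas]

    osds = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 2.0, "osd.3": 1.0}
    print(place("rbd_data.1234", osds))   # same 3 OSDs on every client, every time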
Industry black hole
● Many large storage vendors
● Proprietary solutions that don't scale well
● Few open source alternatives (2006)
● Very limited scale, or
● Limited community and architecture (Lustre)
● No enterprise feature sets (snapshots, quotas)
● PhD grads all built interesting systems...
● ...and then went to work for NetApp, DDN, EMC, Veritas.
● They want you, not your project
A different path?
● Change the storage world with open source
● Do what Linux did to Solaris, Irix, Ultrix, etc.
● License
● LGPL: share changes, okay to link to proprietary code
● Avoid unfriendly practices
● Dual licensing
● Copyright assignment
● Platform
● Remember sourceforge.net?
Incubation
DreamHost!
● Move back to LA, continue hacking
● Hired a few developers
● Pure development
● No deliverables
Ambitious feature set
● Native Linux kernel client (2007-)
● Per-directory snapshots (2008)
● Recursive accounting (2008)
● Object classes (2009)
● librados (2009) – see the example below
● radosgw (2009)
● strong authentication (2009)
● RBD: rados block device (2010)
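librados is the piece applications talk to directly. As a rough illustration of the API surface, the python-rados sketch below connects to a cluster, writes and reads a named object in a pool, and sets an extended attribute; the config path, pool name, and object name are placeholders for whatever your cluster actually uses.

    # Minimal python-rados sketch; conffile path, pool, and object names are examples.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("mypool")                       # pool must already exist
        try:
            ioctx.write_full("greeting", b"hello from librados")   # whole-object write
            print(ioctx.read("greeting"))                          # b'hello from librados'
            ioctx.set_xattr("greeting", "lang", b"en")             # per-object metadata
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()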
The kernel client
● ceph-fuse was limited, not very fast
● Build native Linux kernel implementation
● Began attending Linux file system developer events (LSF)
● Early words of encouragement from ex-Lustre dev
● Engage Linux fs developer community as peer
● Initial merge attempts rejected by Linus
● Not sufficient evidence of user demand
● A few fans and would-be users chimed in...
● Eventually merged for v2.6.34 (early 2010)
Part of a larger ecosystem
● Ceph need not solve all problems as monolithic stack
● Replaced ebofs object file system with btrfs
● Same design goals; avoid reinventing the wheel
● Robust, supported, well-optimized
● Kernel-level cache management
● Copy-on-write, checksumming, other goodness
● Contributed some early functionality
● Cloning files
● Async snapshots
Budding community
● #ceph on irc.oftc.net, ceph-devel@vger.kernel.org
● Many interested users
● A few developers
● Many fans
● Too unstable for any real deployments
● Still mostly focused on the right architecture and technical solutions
Road to product
● DreamHost decides to build an S3-compatible object storage service with Ceph
● Stability
● Focus on core RADOS, RBD, radosgw
● Paying back some technical debt
● Build testing automation
● Code review!
● Expand engineering team
The reality
● Growing incoming commercial interest
● Early attempts from organizations large and small
● Difficult to engage with a web hosting company
● No means to support commercial deployments
● Project needed a company to back it
● Fund the engineering effort
● Build and test a product
● Support users
● Orchestrated a spin out of DreamHost in 2012
Inktank
Do it right
● How do we build a strong open source company?
● How do we build a strong open source community?
● Models?
● Red Hat, SUSE, Cloudera, MySQL, Canonical, …
● Initial funding from DreamHost, Mark Shuttleworth
Goals
● A stable Ceph release for production deployment
● DreamObjects
● Lay foundation for widespread adoption
● Platform support (Ubuntu, Red Hat, SUSE)
● Documentation
● Build and test infrastructure
● Build a sales and support organization
● Expand engineering organization
Branding
● Early decision to engage professional agency
● Terms like
● “Brand core”
● “Design system”
● Company vs Project
● Inktank != Ceph
● Establish a healthy relationship with the community
● Aspirational messaging: The Future of Storage
Slick graphics
● broken PowerPoint template
Traction
● Too many production deployments to count
● We don't know about most of them!
● Too many customers (for me) to count
● Growing partner list
● Lots of buzz
● OpenStack
Quality
● Increased adoption means increased demands on robust testing
● Across multiple platforms
● Include platforms we don't use
● Upgrades
● Rolling upgrades
● Inter-version compatibility
Developer community
● Significant external contributors
● First-class feature contributions from contributors
● Non-Inktank participants in daily stand-ups
● External access to build/test lab infrastructure
● Common toolset
● GitHub
● Email (kernel.org)
● IRC (oftc.net)
● Linux distros
CDS: Ceph Developer Summit
● Community process for building project roadmap
● 100% online
● Google hangouts
● Wikis
● Etherpad
● First was in Spring 2013, 7th just completed
● Great feedback, growing participation
● Indoctrinating our own developers to an open development model
And then...
s/Red Hat of Storage/Storage of Red Hat/
Calamari
● Inktank strategy was to package Ceph for the Enterprise
● Inktank Ceph Enterprise (ICE)
● Ceph: a hardened, tested, validated version
● Calamari: management layer and GUI (proprietary!)
● Enterprise integrations: SNMP, Hyper-V, VMware
● Support SLAs
● Red Hat model is pure open source
● Open sourced Calamari
The Present
Tiering
● Client-side caches are great, but only buy so much.
● Can we separate hot and cold data onto different storage devices?
● Cache pools: promote hot objects from an existing pool into a fast (e.g., FusionIO) pool
● Cold pools: demote cold data to a slow, archival pool (e.g., erasure coding; not yet implemented)
● Very cold pools (efficient erasure coding, compression, OSD spin-down to save power) or tape/public cloud
● How do you identify what is hot and cold? (see the sketch below)
● Common in enterprise solutions; not found in open source scale-out systems
→ cache pools new in Firefly, better in Giant, continued in Hammer
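On the hot/cold question: one common answer, and roughly the shape of the hit sets cache pools use, is to remember which objects were touched in each recent interval and call an object hot if it shows up in enough of them. The class below is a simplified illustration of that idea, not the OSD's actual hit-set code; the interval count and threshold are arbitrary.

    # Simplified hot/cold tracking: an object counts as "hot" if it was accessed in
    # at least `min_hits` of the last `history` intervals. Illustration only.
    from collections import deque

    class HitSetTracker:
        def __init__(self, history=4, min_hits=2):
            self.min_hits = min_hits
            self.intervals = deque([set()], maxlen=history)   # newest interval is last

        def record_access(self, obj):
            self.intervals[-1].add(obj)

        def rotate(self):
            # Call at the end of each interval (e.g. every N seconds).
            self.intervals.append(set())

        def is_hot(self, obj):
            return sum(obj in s for s in self.intervals) >= self.min_hits

    tracker = HitSetTracker()
    tracker.record_access("obj.a"); tracker.rotate()
    tracker.record_access("obj.a"); tracker.record_access("obj.b")
    print(tracker.is_hot("obj.a"), tracker.is_hot("obj.b"))   # True False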
Erasure coding
● Replication for redundancy is flexible and fast
● For larger clusters, it can be expensive
● We can trade recovery performance for storage
● Erasure coded data is hard to modify, but ideal for cold or read-only objects (see the sketch after the table)
● Cold storage tiering
● Will be used directly by radosgw
                    Storage overhead   Repair traffic   MTTDL (days)
    3x replication       3x                 1x            2.3 E10
    RS (10, 4)           1.4x               10x           3.3 E13
    LRC (10, 6, 5)       1.6x               5x            1.2 E15
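For intuition on the table above: an erasure code splits an object into k data chunks plus m coding chunks, so the storage overhead is (k + m) / k instead of the replica count. The sketch below shows the simplest possible case, a single XOR parity chunk (m = 1), and how any one lost chunk is rebuilt from the survivors; real Ceph plugins (jerasure, ISA-L, LRC) implement Reed-Solomon and locally repairable codes for m > 1.

    # Toy erasure code: k data chunks + one XOR parity chunk (the m = 1 case).
    # Real plugins (jerasure, ISA-L, LRC) implement m > 1; this only shows the idea.

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(data, k):
        chunk_len = -(-len(data) // k)                        # ceiling division
        padded = data.ljust(k * chunk_len, b"\0")
        chunks = [padded[i*chunk_len:(i+1)*chunk_len] for i in range(k)]
        parity = chunks[0]
        for c in chunks[1:]:
            parity = xor(parity, c)
        return chunks, parity

    def recover(chunks, parity, lost):
        # XOR of the parity with all surviving data chunks rebuilds the lost chunk.
        result = parity
        for i, c in enumerate(chunks):
            if i != lost:
                result = xor(result, c)
        return result

    chunks, parity = encode(b"hello erasure coded world!", k=4)
    assert recover(chunks, parity, lost=2) == chunks[2]
    print("k=4, m=1 overhead: %.2fx" % ((4 + 1) / 4))         # 1.25x here; RS(10,4) gives 1.4x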
Erasure coding (cont'd)
● In Firefly
● LRC in Giant
● Intel ISA-L (optimized library) in Giant, maybe backported to Firefly
● Talk of ARM-optimized (NEON) jerasure
Async Replication in RADOS
● Clinic project with Harvey Mudd
● Group of students working on a real-world project
● Reason about the bounds on clock drift so we can achieve point-in-time consistency across a distributed set of nodes (see the sketch below)
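The crux of that reasoning, in a minimal sketch: if every pair of clocks in the cluster is known to agree within a bound ε, then a timestamp taken on one node provably precedes a timestamp taken on another only when they differ by more than ε; anything closer falls in an ambiguity window and cannot be ordered by timestamps alone. The 50 ms bound below is an assumed example, not a Ceph default.

    # Cross-node event ordering under a known clock-drift bound (eps, in seconds).
    def definitely_before(t1, t2, eps):
        # True only if t1 precedes t2 no matter which way the two clocks drifted.
        return (t2 - t1) > eps

    EPS = 0.050                                    # assumed bound, e.g. enforced via NTP
    print(definitely_before(10.000, 10.100, EPS))  # True: 100 ms apart, outside the window
    print(definitely_before(10.000, 10.030, EPS))  # False: inside the ambiguity window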
CephFS
● Dogfooding for internal QA infrastructure
● Learning lots
● Many rough edges, but working quite well!
● We want to hear from you!
The Future
CephFS
→ This is where it all started – let's get there
● Today
● QA coverage and bug squashing continues
● NFS and CIFS now largely complete and robust
● Multi-MDS stability continues to improve
● Need
● QA investment
● Snapshot work
● Amazing community effort
The larger ecosystem
Storage backends
● Backends are pluggable (interface sketch below)
● Recent work to use RocksDB everywhere LevelDB can be used (mon/osd); can easily plug in other key/value store libraries
● Other possibilities include LMDB or NVMKV (from FusionIO)
● Prototype Kinetic backend
● Alternative OSD backends
● KeyValueStore – put all data in a k/v db (Haomai @ UnitedStack)
● KeyFileStore initial plans (2nd gen?)
● Some partners looking at backends tuned to their hardware
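What "pluggable" means in practice: the OSD codes against a narrow object-store interface, and the backend behind it can be swapped. The Python sketch below is a deliberately tiny, hypothetical rendering of that shape (Ceph's real ObjectStore interface is C++ and far richer, with transactions, collections, and omap); a plain dict stands in for a memory-backed store and for a key/value library such as RocksDB or LMDB.

    # Hypothetical, simplified sketch of a pluggable object-store backend interface.
    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        @abstractmethod
        def write(self, oid, data): ...
        @abstractmethod
        def read(self, oid): ...
        @abstractmethod
        def delete(self, oid): ...

    class MemStore(ObjectStore):
        # Everything in a dict; handy for tests, similar in spirit to Ceph's MemStore.
        def __init__(self):
            self._objs = {}
        def write(self, oid, data):
            self._objs[oid] = bytes(data)
        def read(self, oid):
            return self._objs[oid]
        def delete(self, oid):
            del self._objs[oid]

    class KVStore(ObjectStore):
        # Objects as key/value pairs in any dict-like k/v handle (RocksDB, LMDB, ...).
        def __init__(self, kv):
            self._kv = kv
        def write(self, oid, data):
            self._kv["obj:" + oid] = bytes(data)
        def read(self, oid):
            return self._kv["obj:" + oid]
        def delete(self, oid):
            del self._kv["obj:" + oid]

    def smoke_test(store):
        store.write("rb.0.1", b"payload")
        assert store.read("rb.0.1") == b"payload"
        store.delete("rb.0.1")

    smoke_test(MemStore())
    smoke_test(KVStore({}))    # a plain dict stands in for a real k/v library here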
Governance
How do we strengthen the project community?
● Acknowledge Sage's role as maintainer / BDL
● Recognize project leads
● RBD, RGW, RADOS, CephFS, Calamari, etc.
● Formalize processes around CDS, community roadmap
● Formal foundation?
● Community build and test lab infrastructure (getting IPs this week!)
● Build and test for broad range of OSs, distros, hardware
Technical roadmap
● How do we reach new use-cases and users?
● How do we better satisfy existing users?
● How do we ensure Ceph can succeed in enough markets for business investment to thrive?
● Enough breadth to expand and grow the community
● Enough focus to do well
Performance
● Lots of work with partners to improve performance
● High-end flash back ends: optimize hot paths to limit CPU usage, drive up IOPS
● Improve threading, fine-grained locks
● Low-power processors: run well on small ARM devices (including those new-fangled ethernet drives)
Ethernet Drives
● Multiple vendors are building 'ethernet drives'
● Normal hard drives w/ small ARM host on board
● Could run OSD natively on the drive, completely removing the “host” from the deployment
● Many different implementations; some vendors need help w/ open architecture and ecosystem concepts
● Current devices are hard disks; no reason they couldn't also be flash-based, or hybrid
● This is exactly what we were thinking when Ceph was originally designed!
Big data
Why is “big data” built on such a weak storage model?
● Move computation to the data
● Evangelize RADOS classes
● librados case studies and proof points
● Build a general-purpose compute and storage platform
The enterprise
How do we pay for all our toys?
● Support legacy and transitional interfaces
● iSCSI, NFS, pNFS, CIFS
● VMware, Hyper-V
● Identify the beachhead use-cases
● Only takes one use-case to get in the door
● Single platform – shared storage resource
● Bottom-up: earn respect of engineers and admins
● Top-down: strong brand and compelling product
Why we can beat the old guard
● It is hard to compete with free and open source software
● Unbeatable value proposition
● Ultimately a more efficient development model
● It is hard to manufacture community
● Strong foundational architecture
● Native protocols, Linux kernel support
● Unencumbered by legacy protocols like NFS
● Move beyond traditional client/server model
● Ongoing paradigm shift
● Software-defined infrastructure, data center
Thanks!