Ceph Project Update
Sage Weil & Josh Durgin
Ceph Month - 2021.06.01
AGENDA
● Ceph background
● Ceph Month
● Ceph Foundation update
● Sepia lab update
● Telemetry
● The future
WHAT IS CEPH?
The buzzwords
● “Software defined storage”
● “Unified storage system”
● “Scalable distributed storage”
● “The future of storage”
● “The Linux of storage”
The substance
● Ceph is open source software
● Runs on commodity hardware
○ Commodity servers
○ IP networks
○ HDDs, SSDs, NVMe, NV-DIMMs, ...
● A single cluster can serve object, block, and file workloads
CEPH IS FREE AND OPEN SOURCE
● Freedom to use (free as in beer)
● Freedom to introspect, modify, and share (free as in speech)
● Freedom from vendor lock-in
● Freedom to innovate
CEPH IS RELIABLE
● Reliable storage service out of unreliable components
○ No single point of failure
○ Data durability via replication or erasure coding
○ No interruption of service from rolling upgrades, online expansion, etc.
● Favor consistency and correctness over performance
CEPH IS SCALABLE
● Ceph is elastic storage infrastructure
○ Storage cluster may grow or shrink
○ Add or remove hardware while the system is online and under load
● Scale up with bigger, faster hardware
● Scale out within a single cluster for capacity and performance
● Federate multiple clusters across sites with asynchronous replication and disaster recovery capabilities
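As a concrete taste of cross-site federation, here is a minimal sketch of pool-level RBD mirroring between two clusters; the pool and site names are placeholders:

    # On the primary cluster: enable per-image mirroring for a pool
    $ rbd mirror pool enable mypool image
    # Create a bootstrap token for the peer
    $ rbd mirror pool peer bootstrap create --site-name site-a mypool > token
    # On the secondary cluster (which runs an rbd-mirror daemon):
    $ rbd mirror pool peer bootstrap import --site-name site-b mypool token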
CEPH IS A UNIFIED STORAGE SYSTEM
● OBJECT: RGW — S3 and Swift object storage
● BLOCK: RBD — virtual block device
● FILE: CEPHFS — distributed network file system
● LIBRADOS: low-level storage API
● RADOS: reliable, elastic, distributed storage layer with replication and erasure coding
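All three interfaces sit on the same RADOS cluster, which is easy to see from the command line. A minimal sketch, assuming a pool "mypool" and an RBD image "myimage" already exist and the client has a ceph.conf and keyring (note the rados tool talks to the RADOS layer directly; S3/Swift access goes through RGW):

    # RADOS layer: store and fetch an object directly
    $ rados -p mypool put greeting ./hello.txt
    $ rados -p mypool get greeting /tmp/greeting.txt
    # Block: map an RBD image as a local block device
    $ rbd map mypool/myimage
    # File: mount CephFS with the kernel client
    $ mount -t ceph :/ /mnt/cephfs -o name=admin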
RELEASE SCHEDULE
● Nautilus (14.2.z): Mar 2019
● Octopus (15.2.z): Mar 2020
● Pacific (16.2.z): Mar 2021 ← WE ARE HERE
● Quincy (17.2.z): Mar 2022
● Stable, named release every 12 months
● Backports for 2 releases
○ Bug fixes and security updates
○ Nautilus reaches EOL shortly after Pacific is released
● Upgrade up to 2 releases at a time
○ Nautilus → Pacific, Octopus → Quincy
● Released as packages (deb, rpm) and container images
● Process improvements (security hotfixes; regular cadence)
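For cephadm-managed clusters, moving between releases is a single orchestrator command; a minimal sketch, with the target version as a placeholder:

    # Upgrade a cephadm-managed cluster to a specific release
    $ ceph orch upgrade start --ceph-version 16.2.4
    # Monitor progress
    $ ceph orch upgrade status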
CEPH MONTH - JUNE 2021
CEPH MONTH
● Goals
○ More interactive
○ Bite-sized
● Format
○ 1-2 hrs
○ ~2 blocks per week
○ A few planned talks
○ Unstructured and semi-structured discussion time
○ Lightning talks sprinkled throughout
● Etherpads
○ Add your questions, or ask them verbally
○ Add any discussion topics
● Week of June 1 - 4
○ RADOS
○ Windows
● Week of June 7 - 11
○ RGW
○ Performance
● Week of June 14 - 18
○ RBD
○ Dashboard
○ Lightning talks
● Week of June 21 - 25
○ CephFS
○ cephadm
https://pad.ceph.com/p/ceph-month-june-2021
CEPHALOCON 2022
● It will be in March 2022…
● No location yet
○ Seoul?
○ North America? (Portland?)
○ ???
● Expected to be in-person
○ Possibly with hybrid elements?
● We are very interested in community feedback!
CEPH FOUNDATION UPDATE
PREMIER MEMBERS, GENERAL MEMBERS, ASSOCIATE MEMBERS (member logo slides)
CURRENT PROJECTS
● Ceph documentation
○ Zac Dover, full-time technical writer
● ceph.io web site update
○ Spearheaded by SoftIron
○ Static site generator; GitHub; no more WordPress
○ https://github.com/ceph/ceph.io
○ Planned launch next month!
● Training materials
○ Working with Linux Foundation’s training group
○ Building out initial free course material (w/ JC Lopez)
○ edX and/or LF hosted; can support both self-paced and instructor-led formats
○ Potential in future for advanced material, paid courses, and/or certifications
○ LF training group is revenue neutral; collaborative development process with community
CURRENT PROJECTS
● Reducing cloud spend with OVH
○ Build and CI hardware purchases for Sepia lab
○ We are now only hosting public-facing infra in OVH
● Lab hardware
○ Build machines
○ Expanding the lab’s Ceph cluster (more storage for test results, etc.)
● Windows support
○ Contract with CloudBase to finish initial development, build sustainable CI infrastructure
○ RBD, CephFS
● New marketing committee
SEPIA LAB UPDATE
● More hardware from the Ceph Foundation
○ Expanding the lab’s Ceph cluster
○ More build machines (braggi)
○ More test nodes (gibba)
● Improved teuthology test infrastructure
○ Moved to a single process dispatcher (Shraddha Agrawal)
○ Replaced the feature-limited in-memory queue with Postgres (Aishwarya Mathuria)
○ Enables larger scale test clusters
○ Ability to prioritize and use lab more efficiently
● Downgrade testing (WIP)
○ Downgrade within a major release (e.g., 16.2.4 → 16.2.3)
○ Now feasible with cephadm
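Since this is work in progress, the mechanics may change; the working assumption is that a cephadm downgrade reuses the same orchestrator flow as an upgrade, e.g.:

    # Hypothetical, pending downgrade support: step back one point release
    $ ceph orch upgrade start --ceph-version 16.2.3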
ARM AARCH64 SUPPORT
● Hardware donated by Ampere
● CI builds for teuthology, releases
○ CentOS 8 RPMs, Ubuntu 20.04 (Focal) packages
○ Container images (based on CentOS)
● Addressing some issues with the bleeding edge of podman/quay and multi-arch support
TELEMETRY UPDATE
https://telemetry-public.ceph.com/
TELEMETRY AND CRASH REPORTS
● Opt-in
○ Will require re-opt-in if telemetry content
is expanded in the future
○ Explicitly acknowledge data sharing
license
● Basic channel
○ Cluster size, version
○ Which features are enabled
● Crash channel
○ Anonymized crash metadata
○ Where in the code the problem happened,
what version, etc.
○ Extensive (private) dashboard
○ Integration into tracker.ceph.com WIP
● Device channel
○ HDD vs SSD, vendors, models
○ Health metrics (e.g., SMART)
○ Extensive dashboard (link from top right)
● Ident channel
○ Off by default
○ Optional contact information
● Future performance channel
○ Planned for Quincy
○ Optional, more granular (but still anonymized) data about workloads, IO sizes, IO rates, cache hit rates, etc.
○ Helps developers optimize Ceph
○ Possibly tuning suggestions for users
● Transparency!
https://telemetry-public.ceph.com/
IS TELEMETRY ENABLED?
WHY IS TELEMETRY NOT ENABLED?
IT’S EASY!
● Review and opt-in
● Enable a SOCKS proxy if the cluster cannot reach the telemetry endpoint directly
● https://docs.ceph.com/en/latest/mgr/telemetry/
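The opt-in flow from the CLI, per the telemetry docs linked above (the ident channel line is optional):

    # Preview exactly what would be reported before opting in
    $ ceph telemetry show
    # Opt in, explicitly acknowledging the data sharing license
    $ ceph telemetry on --license sharing-1-0
    # Optionally enable the ident channel (off by default)
    $ ceph config set mgr mgr/telemetry/channel_ident true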
THE FUTURE...
OUT OF THE BOX EXPERIENCE
● Cephadm has brought end-to-end management of Ceph deployments
● Cluster management via the Ceph dashboard
● Simple experience for non-enterprise deployments
○ Small/medium businesses, remote offices, etc.
○ NAS replacement
● Turn-key support for NFS, object (see the sketch below)
○ SMB coming in Quincy
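A minimal sketch of that out-of-the-box flow on a fresh host; the IP address and service name are placeholders:

    # Bootstrap a new cluster on the first host
    $ cephadm bootstrap --mon-ip 10.0.0.1
    # Turn all available disks into OSDs
    $ ceph orch apply osd --all-available-devices
    # Turn-key object storage: deploy an RGW service
    $ ceph orch apply rgw myrgw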
NEW DEVICES
● ZNS SSDs
○ 3D NAND … dense, but the erase blocks are huge
○ Zone-based write interface
○ Combines capacity, low cost, and good performance
○ Key focus of Crimson’s SeaStore!
● Multi-actuator HDDs
○ Recent devices double IOPS in existing HDD package
○ Ceph treats them as two OSDs with shared failure domain
● Persistent memory
○ Will be well-supported (but not required) by Crimson
○ Recent support in RBD client-side write-back cache
NVMe FABRICS
● Client-side
○ NVMe-oF target that presents an RBD device (see the sketch below)
○ Alternative to iSCSI
○ Can be combined with new hardware (e.g., SmartNICs like Nvidia’s BlueField) to present an NVMe device on the PCI bus while running gateway/librbd code on the card’s “DPU”
○ Useful for “metal as a service” cloud infrastructure
● Server-side
○ Some discussion around Crimson “phase 2”
○ Enable the primary OSD to write directly to replica OSDs’ devices
○ Mechanism to reduce CPU cost per IO
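The gateway described above is dedicated software, but the shape of the idea can be sketched with stock Linux pieces: map an RBD image with krbd and export it through the kernel NVMe-oF target (nvmet) over TCP. The NQN, address, and names are placeholders, and this is an illustration, not the project’s gateway design:

    # Map the RBD image locally (it appears as /dev/rbd0)
    $ rbd map mypool/myimage
    # Load the kernel NVMe-oF target with the TCP transport
    $ modprobe nvmet nvmet-tcp
    # Create a subsystem and back a namespace with the RBD device
    $ SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2021-06.io.ceph:rbd0
    $ mkdir $SUBSYS && echo 1 > $SUBSYS/attr_allow_any_host
    $ mkdir $SUBSYS/namespaces/1
    $ echo /dev/rbd0 > $SUBSYS/namespaces/1/device_path
    $ echo 1 > $SUBSYS/namespaces/1/enable
    # Expose the subsystem on a TCP port
    $ PORT=/sys/kernel/config/nvmet/ports/1
    $ mkdir $PORT
    $ echo tcp > $PORT/addr_trtype
    $ echo ipv4 > $PORT/addr_adrfam
    $ echo 10.0.0.1 > $PORT/addr_traddr
    $ echo 4420 > $PORT/addr_trsvcid
    $ ln -s $SUBSYS $PORT/subsystems/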
INTEGRATIONS / ECOSYSTEMS
● Maturing
○ Rook
■ Key focus: Ceph orchestrator / dashboard integration with Rook
○ Knative
○ Spark
■ S3 Select (see the sketch below)
○ Multisite
■ Interop with public cloud
● New
○ Apache Arrow / Parquet
■ Data interchange formats for data pipelines
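RGW’s S3 Select support (CSV in Pacific) can be exercised with the standard AWS CLI; the endpoint, bucket, and key here are placeholders:

    # Push a SQL filter down to RGW instead of downloading the whole object
    $ aws --endpoint-url http://rgw.example.com:8000 s3api select-object-content \
        --bucket mybucket --key data.csv \
        --expression "select * from s3object s where int(s._1) > 100" \
        --expression-type SQL \
        --input-serialization '{"CSV": {}}' \
        --output-serialization '{"CSV": {}}' \
        /dev/stdout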
NAMING THE R RELEASE
https://pad.ceph.com/p/r
Questions?
Up next: RADOS
https://pad.ceph.com/p/ceph-month-june-2021
