Inktank
OpenStack with Ceph
Who is this guy?

 Ian Colle
 Ceph Program Manager, Inktank

 ian@inktank.com
 @ircolle
 www.linkedin.com/in/ircolle
 ircolle on freenode

 inktank.com | ceph.com
Selecting the Best Cloud Storage System
People need storage solutions that…

•  …are open

•  …are easy to manage

•  …satisfy their requirements
        - performance
        - functionality
        - cost (cha-ching!)
Hard Drives Are Tiny Record Players and They Fail Often
jon_a_ross, Flickr / CC BY 2.0
[Slide graphic: a handful of drives multiplied by 1 MILLION equals drive failures roughly 55 times per day]
I got it!
“That’s why I use Swift in my OpenStack implementation”


Hmmm, what about block storage?
Benefits of Block Storage
• Persistent
        - More familiar to users

• Not tied to a single host
        - Decouples compute and storage
        - Enables live migration

• Extra capabilities of storage system
        - Efficient snapshots
        - Different types of storage available
        - Cloning for fast restore or scaling
Ceph over Swift
Ceph has reduced administration costs
       - “Intelligent Devices” that use a peer-to-peer mechanism to
       detect failures and react automatically – rapidly ensuring
       replication policies are still honored if a node becomes
       unavailable.
       - Swift requires an operator to notice a failure and update the
       ring configuration before redistribution of data is started.

Ceph guarantees the consistency of your data
       - Even with large volumes of data, Ceph ensures clients get a
       consistent copy from any node within a region.
       - Swift’s replication system means that users may get stale
       data, even with a single site, due to slow asynchronous
       replication as the volume of data builds up.
Swift over Ceph
Swift has quotas, we do not (coming this Fall)

Swift has object expiration, we do not (coming this Fall)
Total Solution Comparison
Ceph
        Ceph provides object AND block storage in a single system that
        is compatible with the Swift and Cinder APIs and is self-healing
        without operator intervention.

Swift
        If you use Swift, you still have to provision and manage a totally
        separate system to handle your block storage (in addition to
        paying the poor guy to go update the ring configuration)
OpenStack I know, but what is Ceph?
philosophy: OPEN SOURCE, COMMUNITY-FOCUSED
design: SCALABLE, NO SINGLE POINT OF FAILURE, SOFTWARE BASED, SELF-MANAGING
[Ceph architecture overview: apps, hosts/VMs, and clients reach the cluster through four interfaces, all built on RADOS]

LIBRADOS: A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP

RGW (RADOS Gateway): A bucket-based REST gateway, compatible with S3 and Swift

RBD (RADOS Block Device): A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

CEPH FS: A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RADOS: A Reliable, Autonomous, Distributed Object Store comprised of self-healing, self-managing, intelligent storage nodes
[Ceph architecture overview repeated]
Monitors (M):
       • Maintain the cluster map
       • Provide consensus for distributed decision-making
       • Must have an odd number
       • Do not serve stored objects to clients

OSDs:
       • One per disk (recommended)
       • At least three in a cluster
       • Serve stored objects to clients
       • Intelligently peer to perform replication tasks
       • Support object classes
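To make these roles concrete, here is a minimal, hypothetical ceph.conf sketch in the old-style format for three monitors and three OSDs; the host names, addresses, and data paths are placeholders, not a recommended layout.

# Hypothetical ceph.conf sketch -- hosts, addresses, and paths are placeholders
[global]
        auth supported = cephx              # authenticate daemons and clients with cephx

[mon.a]
        host = mon-host-1
        mon addr = 192.168.0.10:6789        # monitors need a quorum, so run an odd number

[mon.b]
        host = mon-host-2
        mon addr = 192.168.0.11:6789

[mon.c]
        host = mon-host-3
        mon addr = 192.168.0.12:6789

[osd.0]
        host = osd-host-1                   # one OSD per disk is the recommended layout
        osd data = /var/lib/ceph/osd/ceph-0

[osd.1]
        host = osd-host-2
        osd data = /var/lib/ceph/osd/ceph-1

[osd.2]
        host = osd-host-3
        osd data = /var/lib/ceph/osd/ceph-2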
[Diagram: five OSDs, each on its own filesystem (btrfs, xfs, or ext4) on its own disk, with three monitors (M) alongside]
[Diagram: a HUMAN administrator interacting with the three monitors (M)]
[Ceph architecture overview repeated]
LIBRADOS (L):
       • Provides direct access to RADOS for applications
       • C, C++, Python, PHP, Java
       • No HTTP overhead
[Diagram: an APP links LIBRADOS and speaks the native protocol directly to the monitors and OSDs]
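To illustrate the "no HTTP overhead" point, a minimal sketch using the Python librados bindings; it assumes a readable /etc/ceph/ceph.conf with client credentials and an existing pool named 'data'.

import rados

# Connect to the cluster described by the local ceph.conf (assumed readable)
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on a pool -- 'data' is an assumed, pre-existing pool name
ioctx = cluster.open_ioctx('data')

# Write an object straight into RADOS and read it back -- no gateway, no HTTP
ioctx.write_full('hello-object', b'Hello, RADOS!')
print(ioctx.read('hello-object'))

ioctx.close()
cluster.shutdown()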
[Ceph architecture overview repeated]
[Diagram: applications speak REST to RGW instances, and each RGW uses LIBRADOS to talk natively to the monitors and OSDs]
RADOS Gateway (RGW):
   • REST-based interface to RADOS
   • Supports buckets, accounting
   • Compatible with S3 and Swift applications
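Because RGW speaks the S3 dialect, a stock S3 client can talk to it. A minimal sketch with the boto library of that era; the gateway host and the access/secret keys are placeholders you would normally create with radosgw-admin.

import boto
import boto.s3.connection

# Access key, secret, and gateway host are placeholders, not real credentials
conn = boto.connect_s3(
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Buckets and objects behave just like they would against S3
bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello, RGW!')
print(key.get_contents_as_string())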
[Ceph architecture overview repeated]
[Diagram: a VM whose virtual disk is provided through LIBRBD and LIBRADOS in the virtualization container, backed by the monitors and OSDs]
RADOS Block Device (RBD):
   • Storage of virtual disks in RADOS
   • Allows decoupling of VMs and containers
   • Live migration!
   • Images are striped across the cluster
   • Boot support in QEMU, KVM, and OpenStack Nova (more on that later!)
   • Mount support in the Linux kernel
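A minimal sketch of creating a virtual disk image with the Python rbd bindings; the 'rbd' pool name, image name, and size are assumptions.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # 'rbd' is the assumed pool for block images

# Create a 10 GiB image; its data is striped across the cluster as RADOS objects
rbd.RBD().create(ioctx, 'vm-disk-01', 10 * 1024 ** 3)

# Open the image and write to it like a block device
image = rbd.Image(ioctx, 'vm-disk-01')
image.write(b'boot sector goes here', 0)   # (data, offset)
print(image.size())

image.close()
ioctx.close()
cluster.shutdown()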
[Ceph architecture overview repeated]
What Makes Ceph Unique?
Part one: CRUSH
[Diagrams: an application (APP) faces a column of storage nodes, each a controller (C) with a disk (D), and must decide where its data should go (??); one approach shown maps name ranges (A-G, H-N, O-T, U-Z) to nodes, so an object such as F* is looked up by range]
Placement happens in two deterministic steps (placement group IDs shown as bit strings on the slide):

       pg   = hash(object name) % num_pg
       osds = CRUSH(pg, cluster state, rule set)
CRUSH
  • Pseudo-random placement algorithm
  • Ensures even distribution
  • Repeatable, deterministic
  • Rule-based configuration
       • Replica count
       • Infrastructure topology
       • Weighting
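Real CRUSH walks a weighted hierarchy described by the cluster map, but its key property, that any client can compute an object's location with no lookup table, can be shown with a deliberately simplified Python toy (this is an illustration, not the actual algorithm):

import hashlib

NUM_PGS = 64                                       # toy placement-group count
OSDS = ['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4']
REPLICAS = 3

def object_to_pg(name):
    # pg = hash(object name) % num_pg, as on the previous slide
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % NUM_PGS

def pg_to_osds(pg):
    # Stand-in for CRUSH(pg, cluster state, rule set): deterministic and
    # repeatable, so no central lookup table is needed. Real CRUSH also
    # honors infrastructure topology and weights, which this toy ignores.
    h = int(hashlib.md5(('pg-%d' % pg).encode()).hexdigest(), 16)
    start = h % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]

pg = object_to_pg('my-object')
print(pg, pg_to_osds(pg))   # every client computes the same answer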
What Makes Ceph Unique?
Part two: thin provisioning
HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?
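One way RBD answers this is copy-on-write cloning: snapshot a golden image once, protect the snapshot, and stamp out clones that only store data once a VM writes. A minimal sketch with the Python rbd bindings; the pool and image names are assumptions, and the golden image is assumed to be a format-2 image with layering enabled.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')            # assumed pool holding the golden image

# Snapshot and protect the golden image (done once)
golden = rbd.Image(ioctx, 'golden-image')
golden.create_snap('base')
golden.protect_snap('base')
golden.close()

# Stamp out copy-on-write clones -- each is usable immediately and only
# consumes space as its VM diverges from the parent snapshot
for i in range(1000):
    rbd.RBD().clone(ioctx, 'golden-image', 'base', ioctx, 'vm-%04d' % i,
                    features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()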
How Does Ceph Work with OpenStack?
Ceph / OpenStack Integration
RBD support was initially added in Cactus

Features and integration have increased with each subsequent release

You can use both the Swift (object/blob store) and Keystone (identity
service) APIs to talk to RGW

Cinder (block storage as a service) talks directly to RBD

Nova (the cloud computing controller) talks to RBD via the hypervisor

Coming in Havana: the ability to create a volume from an RBD image via
the Horizon UI
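As an illustration of the Cinder side, a hedged sketch of the RBD backend settings in cinder.conf from roughly this era; the pool, user, and secret UUID are placeholders that must match the Ceph and libvirt configuration.

# cinder.conf (illustrative excerpt -- values are placeholders)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                      # Ceph pool backing Cinder volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder                       # cephx client used by Cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000   # libvirt secret holding that client's key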
What is Inktank?
I really like your polo shirt, please tell me what it means!
Who?
The majority of Ceph contributors

Formed by Sage Weil (CTO), the creator of Ceph, in 2011

Funded by DreamHost and other investors (Mark Shuttleworth, etc.)
Why?
To ensure the long-term success of Ceph

To help companies adopt Ceph through services, support, training, and
consulting
What?
Guide the Ceph roadmap
        - Hosting a virtual Ceph Design Summit in early May

Standardize the Ceph development and release schedule
        - Quarterly stable releases, interim releases every 2 weeks
                * May 2013 – Cuttlefish: RBD incremental snapshots!
                * Aug 2013 – Dumpling: disaster recovery (multisite), Admin API
                * Nov 2013 – some really cool cephalopod name that starts with an E

Ensure quality
        - Maintain the Teuthology test suite
        - Harden each stable release via extensive manual and automated testing

Develop reference and custom architectures for implementation
Inktank/Dell Partnership

• Inktank is a strategic partner for Dell in Emerging Solutions
• The Emerging Solutions Ecosystem Partner Program is designed to
deliver complementary cloud components
• As part of this program, Dell and Inktank provide:
      > Ceph Storage Software
        - Adds scalable cloud storage to the Dell OpenStack-powered cloud
        - Uses Crowbar to provision and configure a Ceph cluster (Yeah
        Crowbar!)
      > Professional Services, Support, and Training
        - Collaborative Support for Dell hardware customers
      > Joint Solution
        - Validated against Dell Reference Architectures via the
        Technology Partner program
What do we want from you??
Try Ceph and tell us what you think!
http://ceph.com/resources/downloads/

http://ceph.com/resources/mailing-list-irc/
        - Ask, if you need help.
        - Help others, if you can!

Ask your company to start dedicating dev resources to the project!
http://github.com/ceph

Find a bug (http://tracker.ceph.com) and fix it!

Participate in our Ceph Design Summit!
One final request…
We’re planning the next release of Ceph and would love your input.

What features would you like us to include?

       iSCSI?

       Live Migration?



     Questions?

     Ian Colle
     Ceph Program Manager, Inktank

     ian@inktank.com
     @ircolle
     www.linkedin.com/in/ircolle
     ircolle on freenode

     inktank.com | ceph.com
