Inktank
Delivering the Future of Storage


Getting Started with Ceph
January 17, 2013
Agenda
•    Inktank and Ceph Introduction

•    Ceph Technology

•    Getting Started Walk-through

•    Resources

•    Next steps
Inktank:
•    Company that provides professional services and support for Ceph
•    Founded in 2011
•    Funded by DreamHost
•    Mark Shuttleworth invested $1M
•    Sage Weil, CTO and creator of Ceph

Ceph:
•    Distributed unified object, block, and file storage platform
•    Created by storage experts
•    Open source
•    In the Linux kernel
•    Integrated into cloud platforms
Ceph Technological Foundations

Ceph was built with the following goals:

   l    Every component must scale

   l    There can be no single point of failure

   l    The solution must be software-based, not an appliance

   l    Should run on readily-available, commodity hardware

   l    Everything must self-manage wherever possible

   l    Must be open source

Key Differences
•  CRUSH data placement algorithm (Object)
       Intelligent storage nodes


•  Unified storage platform (Object + Block + File)
       All use cases (cloud, big data, legacy, web apps,
       archival, etc.) satisfied in a single cluster


•  Thinly provisioned virtual block device (Block)
       Cloud storage block for VM images


•  Distributed scalable metadata servers (CephFS)
Ceph Use Cases
Object
  •  Archival and backup storage
  •  Primary data storage
  •  S3-like storage
  •  Web services and platforms
  •  Application development
Block
  •  SAN replacement
  •  Virtual block device, VM images
File
 •  HPC
 •  POSIX-compatible applications
Ceph Technology Overview
Ceph
  •  Ceph Object Library (LIBRADOS), used by applications (APP):
     A library allowing applications to directly access Ceph Object Storage
  •  Ceph Object Gateway (RADOS Gateway), used by applications (APP):
     A RESTful gateway for object storage
  •  Ceph Block (RBD), used by hosts and VMs (HOST/VM):
     A reliable and fully-distributed block device
  •  Ceph Distributed File System (CephFS), used by clients (CLIENT):
     A POSIX-compliant distributed file system

All four are built on top of:
Ceph Object Storage
(RADOS)
A reliable, autonomous, distributed object store comprised of self-healing, self-managing,
intelligent storage nodes
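
To make the LIBRADOS layer concrete, the rados command-line tool (installed with ceph-common) talks to RADOS directly and can store and fetch objects in a pool. A minimal sketch, not part of the original deck; the object and file names are illustrative:

    rados lspools                             # list pools (Bobtail's defaults: data, metadata, rbd)
    echo "hello ceph" > hello.txt
    rados -p rbd put hello-object hello.txt   # store the file as an object in the rbd pool
    rados -p rbd ls                           # list objects in the pool
    rados -p rbd get hello-object out.txt     # read the object back into a local file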
RADOS Components
        Monitors (M):
         • Maintain the cluster map
         • Provide consensus for distributed decision-making
         • Must be deployed in an odd number
         • Do not serve stored objects to clients

        RADOS Storage Nodes containing Object Storage Daemons (OSDs):
         •  One OSD per disk (recommended)
         •  At least three nodes in a cluster
         •  Serve stored objects to clients
         •  Intelligently peer to perform replication tasks
         •  Support object classes

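Once the cluster from the walk-through below is running, both component types can be inspected with standard ceph subcommands (a quick sketch, not from the slide):

    ceph mon stat     # monitor map: how many monitors exist and which are in quorum
    ceph osd stat     # OSD map summary: how many OSDs exist, are up, and are in
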
RADOS Cluster Makeup

   (Diagram.) A RADOS node runs one OSD per disk, with each OSD sitting on a
   local filesystem (btrfs, xfs, or ext4) on its own disk. A RADOS cluster is a
   collection of such nodes plus a small, odd number of monitors (M).
VOTE
Using the Votes button at the top of the presentation panel,
   please take 30 seconds to answer the following questions to
   help us better understand you.

1.  Are you exploring Ceph for a current project?

2.  Are you looking to implement Ceph within the next 6
    months?

3.  Do you need help deploying Ceph?
Getting Started Walk-through
Overview

•  This tutorial and walk-through is based on VirtualBox, but
   other hypervisor platforms will work just as well.
•  We relaxed security best practices to speed things up, and
   will omit some of the security setup steps here.
•  We will:
 1.    Create the VirtualBox VMs
 2.    Prepare the VMs for Creating the Ceph Cluster
 3.    Install Ceph on all VMs from the Client
 4.    Configure Ceph on all the server nodes and the client
 5.    Experiment with Ceph’s Virtual Block Device (RBD)
 6.    Experiment with the Ceph Distributed Filesystem
 7.    Unmount, stop Ceph, and shut down the VMs safely
Create the VMs

•     1 or more CPU cores
•     512MB or more memory
•     Ubuntu 12.04 with latest updates
•     VirtualBox Guest Additions
•     Three virtual disks (dynamically allocated):
     • 28GB OS disk with boot partition
     • 8GB disk for Ceph data
     • 8GB disk for Ceph data
•  Two virtual network interfaces:
     • eth0 Host-Only interface for Ceph
     • eth1 NAT interface for updates

Consider creating a template based on the above, and then
cloning the template to save time creating all four VMs
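
If you prefer scripting over the VirtualBox GUI, VBoxManage can create the same VM. A rough sketch for one node, assuming a host-only network named vboxnet0 and the sizes listed above; names and paths are placeholders:

    VBoxManage createvm --name ceph-node1 --ostype Ubuntu_64 --register
    VBoxManage modifyvm ceph-node1 --cpus 1 --memory 512 \
        --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat
    VBoxManage storagectl ceph-node1 --name SATA --add sata
    VBoxManage createhd --filename ceph-node1-os.vdi   --size 28672    # 28GB OS disk
    VBoxManage createhd --filename ceph-node1-osd0.vdi --size 8192     # 8GB Ceph data
    VBoxManage createhd --filename ceph-node1-osd1.vdi --size 8192     # 8GB Ceph data
    VBoxManage storageattach ceph-node1 --storagectl SATA --port 0 --device 0 --type hdd --medium ceph-node1-os.vdi
    VBoxManage storageattach ceph-node1 --storagectl SATA --port 1 --device 0 --type hdd --medium ceph-node1-osd0.vdi
    VBoxManage storageattach ceph-node1 --storagectl SATA --port 2 --device 0 --type hdd --medium ceph-node1-osd1.vdi

The template-and-clone approach suggested above maps to VBoxManage clonevm, e.g. cloning a registered template VM with --name ceph-node2 --register.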
Adjust Networking in the VM OS

•  Edit /etc/network/interfaces
   # The primary network interface
   auto eth0
   iface eth0 inet static
   address 192.168.56.20
   netmask 255.255.255.0
   # The secondary NAT interface with outside access
   auto eth1
   iface eth1 inet dhcp
   gateway 10.0.3.2



•  Edit /etc/udev/rules.d/70-persistent-net.rules
 If the VMs were cloned from a template, the MAC addresses for the
 virtual NICs should have been regenerated to stay unique. Edit this
 file to make sure that the right NIC is mapped as eth0 and eth1.
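
For reference, the entries in that file on Ubuntu 12.04 look roughly like the following (abbreviated; the MAC addresses are placeholders, so compare against ip link or VBoxManage showvminfo for each VM and adjust the NAME= values so the host-only NIC becomes eth0 and the NAT NIC eth1):

    # placeholder MAC addresses -- yours will differ
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:01", KERNEL=="eth*", NAME="eth0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:02", KERNEL=="eth*", NAME="eth1"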
Security Shortcuts

To streamline and simplify access for this tutorial, we:
 •  Configured the user “ubuntu” to SSH between hosts using
    authorized keys instead of a password.
 •  Added “ubuntu” to /etc/sudoers with full access.
 •  Configured root on the server nodes to SSH between nodes
    using authorized keys without a password set.
 •  Relaxed SSH checking of known hosts to avoid interactive
    confirmation when accessing a new host.
 •  Disabled cephx authentication for the Ceph cluster.
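
The deck doesn't show the commands behind these shortcuts; one way to set up the SSH portion for the ubuntu user looks roughly like this (lab use only, host names as in this tutorial):

    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # passwordless key: acceptable only in a throwaway lab
    for h in ceph-node1 ceph-node2 ceph-node3; do
        ssh-copy-id ubuntu@$h                     # install the public key on each node
    done
    # avoid interactive host-key prompts on the lab network
    printf 'Host ceph-node* ceph-client\n    StrictHostKeyChecking no\n' >> ~/.ssh/config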
Edit /etc/hosts to resolve names

•  Use the /etc/hosts file for simple name resolution
   for all the VMs on the Host-Only network.
•  Create a portable /etc/hosts file on the client
    127.0.0.1         localhost

    192.168.56.20     ceph-client
    192.168.56.21     ceph-node1
    192.168.56.22     ceph-node2
    192.168.56.23     ceph-node3


•  Copy the file to all the VMs so that names are
   consistently resolved across all machines.
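
One way to push the file out, reusing the SSH keys and passwordless sudo configured earlier (a sketch, not shown in the deck):

    for h in ceph-node1 ceph-node2 ceph-node3; do
        scp /etc/hosts ubuntu@$h:/tmp/hosts       # stage the file on each VM
        ssh $h sudo cp /tmp/hosts /etc/hosts      # move it into place with sudo
    done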
Install the Ceph Bobtail release
ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
OK

ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-bobtail/ precise main

ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
...
Setting up librados2 (0.56.1-1precise) ...
Setting up librbd1 (0.56.1-1precise) ...
Setting up ceph-common (0.56.1-1precise) ...
Installing new version of config file /etc/bash_completion.d/rbd ...
Setting up ceph (0.56.1-1precise) ...
Setting up ceph-fs-common (0.56.1-1precise) ...
Setting up ceph-fuse (0.56.1-1precise) ...
Setting up ceph-mds (0.56.1-1precise) ...
Setting up libcephfs1 (0.56.1-1precise) ...
...
ldconfig deferred processing now taking place
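
The transcript only shows ceph-node1. The same three steps need to run on the other server nodes, and on the client itself, which also needs the Ceph packages for the rbd and mount.ceph commands used later. A sketch under those assumptions:

    for h in ceph-node2 ceph-node3; do
        wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh $h sudo apt-key add -
        echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh $h sudo tee /etc/apt/sources.list.d/ceph.list
        ssh $h "sudo apt-get update && sudo apt-get -y install ceph"
    done
    # then run the same wget / echo / apt-get steps locally on ceph-client (without the ssh prefix)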
Create the Ceph Configuration File
~$ sudo cat <<! > /etc/ceph/ceph.conf
[global]
   auth cluster required = none
   auth service required = none
   auth client required = none
[osd]
   osd journal size = 1000
   filestore xattr use omap = true
   osd mkfs type = ext4
   osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime
[mon.a]
   host = ceph-node1
   mon addr = 192.168.56.21:6789
[mon.b]
   host = ceph-node2
   mon addr = 192.168.56.22:6789
[mon.c]
   host = ceph-node3
   mon addr = 192.168.56.23:6789
[osd.0]
   host = ceph-node1
   devs = /dev/sdb
[osd.1]
   host = ceph-node1
   devs = /dev/sdc
…
[osd.5]
   host = ceph-node3
   devs = /dev/sdc
[mds.a]
   host = ceph-node1
!
Complete Ceph Cluster Creation
•  Copy the /etc/ceph/ceph.conf file to all nodes (see the loop sketched after this slide)
•  Create the Ceph daemon working directories:
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-1
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-3
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-5
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/mon/ceph-b
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/mon/ceph-c
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mds/ceph-a
•  Run the mkcephfs command from a server node:
    ubuntu@ceph-client:~$ ssh ceph-node1
    Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64)
    ...
    ubuntu@ceph-node1:~$ sudo -i
    root@ceph-node1:~# cd /etc/ceph
    root@ceph-node1:/etc/ceph# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
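
The first bullet (copying ceph.conf to all nodes) can reuse the tee-over-SSH pattern from the install step; a sketch assuming the file was created on the client and the tutorial's host names:

    for h in ceph-node1 ceph-node2 ceph-node3; do
        cat /etc/ceph/ceph.conf | ssh $h sudo tee /etc/ceph/ceph.conf > /dev/null
    done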
Start the Ceph Cluster
On a server node, start the Ceph service:
    root@ceph-node1:/etc/ceph# service ceph -a start
    === mon.a ===
    Starting Ceph mon.a on ceph-node1...
    starting mon.a rank 0 at 192.168.56.21:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 11309f36-9955-413c-9463-efae6c293fd6
    === mon.b ===
    === mon.c ===
    === mds.a ===
    Starting Ceph mds.a on ceph-node1...
    starting mds.a at :/0
    === osd.0 ===
    Mounting ext4 on ceph-node1:/var/lib/ceph/osd/ceph-0
    Starting Ceph osd.0 on ceph-node1...
    starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
    === osd.1 ===
    === osd.2 ===
    === osd.3 ===
    === osd.4 ===
    === osd.5 ===
Verify Cluster Health
root@ceph-node1:/etc/ceph# ceph status
   health HEALTH_OK
   monmap e1: 3 mons at
{a=192.168.56.21:6789/0,b=192.168.56.22:6789/0,c=192.168.56.23:6789/0},
election epoch 6, quorum 0,1,2 a,b,c
   osdmap e17: 6 osds: 6 up, 6 in
         pgmap v473: 1344 pgs: 1344 active+clean; 8730 bytes data, 7525 MB used, 39015 MB / 48997 MB avail
   mdsmap e9: 1/1/1 up {0=a=up:active}

root@ceph-node1:/etc/ceph# ceph osd tree
# id     weight type name            up/down reweight
-1       6        root default
-3       6                  rack unknownrack
-2       2                           host ceph-node1
0        1                                    osd.0     up      1
1        1                                    osd.1     up      1
-4       2                           host ceph-node2
2        1                                    osd.2     up      1
3        1                                    osd.3     up      1
-5       2                           host ceph-node3
4        1                                    osd.4     up      1
5        1                                    osd.5     up      1
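
Beyond ceph status and ceph osd tree, a few other standard subcommands are handy while experimenting (a sketch; not shown in the deck):

    ceph health       # one-line summary: HEALTH_OK, HEALTH_WARN, or HEALTH_ERR
    ceph -w           # stream cluster events continuously (Ctrl-C to stop)
    ceph osd dump     # full OSD map, including pools and replication settings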
Access Ceph’s Virtual Block Device
ubuntu@ceph-client:~$ rbd ls
rbd: pool rbd doesn't contain rbd images
ubuntu@ceph-client:~$ rbd create myLun --size 4096
ubuntu@ceph-client:~$ rbd ls -l
NAME   SIZE PARENT FMT PROT LOCK
myLun 4096M      1
ubuntu@ceph-client:~$ sudo modprobe rbd
ubuntu@ceph-client:~$ sudo rbd map myLun --pool rbd
ubuntu@ceph-client:~$ sudo rbd showmapped
id pool image snap device
0 rbd myLun - /dev/rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd
rbd/ rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd/rbd/myLun
… 1 root root 10 Jan 16 21:15 /dev/rbd/rbd/myLun -> ../../rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Jan 16 21:15 /dev/rbd0
Format RBD image and use it
ubuntu@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/myLun
mke2fs 1.42 (29-Nov-2011)
...
Writing superblocks and filesystem accounting information: done
ubuntu@ceph-client:~$ sudo mkdir /mnt/myLun
ubuntu@ceph-client:~$ sudo mount /dev/rbd/rbd/myLun /mnt/myLun
ubuntu@ceph-client:~$ df -h | grep myLun
/dev/rbd0                         4.0G 190M 3.9G      5% /mnt/myLun
ubuntu@ceph-client:~$ sudo dd if=/dev/zero of=/mnt/myLun/testfile
bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000431868 s, 1.2 GB/s
ubuntu@ceph-client:~$ ls -lh /mnt/myLun/
total 528K
drwx------ 2 root root 16K Jan 16 21:24 lost+found
-rw-r--r-- 1 root root 512K Jan 16 21:29 testfile
Access Ceph Distributed Filesystem
~$ sudo mkdir /mnt/myCephFS
~$ sudo mount.ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS
~$ df -h | grep my
192.168.56.21,192.168.56.22,192.168.56.23:/    48G    11G    38G   22% /mnt/myCephFS
/dev/rbd0                                     4.0G   190M   3.9G    5% /mnt/myLun


~$ sudo dd if=/dev/zero of=/mnt/myCephFS/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000439191 s, 1.2 GB/s
~$ ls -lh /mnt/myCephFS/
total 512K
-rw-r--r-- 1 root root 512K Jan 16 23:04 testfile
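
If the kernel CephFS client is unavailable (for example, on an older kernel), ceph-fuse, installed earlier with the Ceph packages, is a user-space alternative. A sketch assuming one of this tutorial's monitor addresses:

    sudo ceph-fuse -m 192.168.56.21:6789 /mnt/myCephFS
    # a FUSE mount is released with fusermount rather than umount:
    sudo fusermount -u /mnt/myCephFS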
Unmount, Stop Ceph, and Halt
ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
=== mon.a ===
Stopping Ceph mon.a on ceph-node1...kill 19863...done
=== mon.b ===
=== mon.c ===
=== mds.a ===
=== osd.0 ===
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ sudo service halt stop
 * Will now halt
Review
We:
 1.    Created the VirtualBox VMs
 2.    Prepared the VMs for Creating the Ceph Cluster
 3.    Installed Ceph on all VMs from the Client
 4.    Configured Ceph on all the server nodes and the client
 5.    Experimented with Ceph’s Virtual Block Device (RBD)
 6.    Experimented with the Ceph Distributed Filesystem
 7.    Unmounted, stopped Ceph, and shut down the VMs safely

•  Based on VirtualBox; other hypervisors work too.
•  Relaxed security best practices to speed things up, but
   recommend following them in most circumstances.
Resources for Learning More
Leverage great online resources

Documentation on the Ceph web site:
 •  http://ceph.com/docs/master/

Blogs from Inktank and the Ceph community:
 •  http://www.inktank.com/news-events/blog/
 •  http://ceph.com/community/blog/

Developer resources:
 •  http://ceph.com/resources/development/
 •  http://ceph.com/resources/mailing-list-irc/
 •  http://dir.gmane.org/gmane.comp.file-systems.ceph.devel
What Next?




Try it yourself!

•  Use the information in this webinar as a starting point
•  Consult the Ceph documentation online:
  http://ceph.com/docs/master/
  http://ceph.com/docs/master/start/
Inktank’s Professional Services
Consulting Services:
 •    Technical Overview
 •    Infrastructure Assessment
 •    Proof of Concept
 •    Implementation Support
 •    Performance Tuning

Support Subscriptions:
 •    Pre-Production Support
 •    Production Support

A full description of our services can be found at the following:

Consulting Services: http://www.inktank.com/consulting-services/

Support Subscriptions: http://www.inktank.com/support-services/



Check out our upcoming webinars

1.  Introduction to Ceph with OpenStack
    January 24, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63177


2.  DreamHost Case Study: DreamObjects with Ceph
    February 7, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63181


3.  Advanced Features of Ceph Distributed Storage
    (delivered by Sage Weil, creator of Ceph)
    February 12, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63179
Contact Us
Info@inktank.com
1-855-INKTANK

Don’t forget to follow us on:

Twitter: https://twitter.com/inktank

Facebook: http://www.facebook.com/inktank

YouTube: http://www.youtube.com/inktankstorage
