Inktank
Delivering the Future of Storage


Getting Started with Ceph
January 17, 2013
Agenda
•    Inktank and Ceph Introduction

•    Ceph Technology

•    Getting Started Walk-through

•    Resources

•    Next steps
•    Company that provides       •  Distributed unified object,
     professional services and      block and file storage
     support for Ceph
                                    platform
•    Founded in 2011
                                 •    Created by storage
•    Funded by DreamHost              experts

•    Mark Shuttleworth           •    Open source
     invested $1M
                                 •    In the Linux Kernel
•    Sage Weil, CTO and
     creator of Ceph             •    Integrated into Cloud
                                      Platforms
Ceph Technological Foundations

Ceph was built with the following goals:

   l    Every component must scale

   l    There can be no single point of failure

   l    The solution must be software-based, not an appliance

   l    Should run on readily-available, commodity hardware

   l    Everything must self-manage wherever possible

   l    Must be open source

Key Differences
•  CRUSH data placement algorithm (Object)
       Intelligent storage nodes


•  Unified storage platform (Object + Block + File)
       All use cases (cloud, big data, legacy, web app,
       archival, etc.) satisfied in a single cluster


•  Thinly provisioned virtual block device (Block)
       Cloud storage block for VM images


•  Distributed scalable metadata servers (CephFS)
Ceph Use Cases
Object
  •  Archival and backup storage
  •  Primary data storage
  •  S3-like storage
  •  Web services and platforms
  •  Application development
Block
  •  SAN replacement
  •  Virtual block device, VM images
File
 •  HPC
 •  POSIX-compatible applications
Ceph Technology Overview
Ceph
Four ways to access the cluster, and who uses each (APP, HOST/VM, CLIENT):

•  Ceph Object Library (LIBRADOS) - a library allowing applications to
   directly access Ceph Object Storage (used by applications)
•  Ceph Object Gateway (RADOS Gateway) - a RESTful gateway for object
   storage (used by applications)
•  Ceph Block (RBD) - a reliable and fully-distributed block device
   (used by hosts/VMs)
•  Ceph Distributed File System (CephFS) - a POSIX-compliant distributed
   file system (used by clients)

All of these are built on top of:
Ceph Object Storage
(RADOS)
A reliable, autonomous, distributed object store comprised of self-healing, self-managing,
intelligent storage nodes
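
As a quick illustration of direct object access (what LIBRADOS and the gateway build on), the rados command-line tool can write and read objects once the cluster from the walk-through below is running. A minimal sketch, assuming the default "data" pool and example object/file names:

~$ echo "hello rados" > hello.txt
~$ rados -p data put hello-object hello.txt        # store the file as an object in the 'data' pool
~$ rados -p data ls                                # list objects in the pool
~$ rados -p data get hello-object /tmp/hello.out   # read the object back
~$ rados df                                        # per-pool object and space usage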
RADOS Components
        Monitors (M):
         • Maintain the cluster map
         • Provide consensus for distributed decision-making
         • Must be deployed in an odd number
         • Do not serve stored objects to clients

        RADOS Storage Nodes containing Object Storage
        Daemons (OSDs):
        •  One OSD per disk (recommended)
        •  At least three nodes in a cluster
        •  Serve stored objects to clients
        •  Intelligently peer to perform replication tasks
        •  Support object classes

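Once the cluster is up (see the walk-through below), the monitors and OSDs described above can be inspected with standard ceph subcommands; for example:

~$ ceph mon stat     # monitor addresses and current quorum
~$ ceph osd stat     # number of OSDs and how many are up/in
~$ ceph osd tree     # OSDs arranged by the CRUSH hierarchy (hosts, racks)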
RADOS Cluster Makeup


[Diagram] A RADOS node stacks one OSD per disk, each OSD on its own local
filesystem (btrfs, xfs, or ext4). A RADOS cluster combines many such nodes
with a set of monitors (M).
VOTE
Using the Votes button at the top of the presentation panel, please take
   30 seconds to answer the following questions to help us better
   understand you.

1.  Are you exploring Ceph for a current project?

2.  Are you looking to implement Ceph within the next 6
    months?

3.  Do you need help deploying Ceph?
Getting Started Walk-through
Overview

•  This tutorial and walk-through are based on VirtualBox, but
   other hypervisor platforms will work just as well.
•  We relaxed security best practices to speed things up, and
   will omit some of the security setup steps here.
•  We will:
 1.    Create the VirtualBox VMs
 2.    Prepare the VMs for Creating the Ceph Cluster
 3.    Install Ceph on all VMs from the Client
 4.    Configure Ceph on all the server nodes and the client
 5.    Experiment with Ceph’s Virtual Block Device (RBD)
 6.    Experiment with the Ceph Distributed Filesystem
 7.    Unmount, stop Ceph, and shut down the VMs safely
Create the VMs

•     1 or more CPU cores
•     512MB or more memory
•     Ubuntu 12.04 with latest updates
•     VirtualBox Guest Additions
•     Three virtual disks (dynamically allocated):
     • 28GB OS disk with boot partition
     • 8GB disk for Ceph data
     • 8GB disk for Ceph data
•  Two virtual network interfaces:
     • eth0 Host-Only interface for Ceph
     • eth1 NAT interface for updates

Consider creating a template based on the above, and then
cloning the template to save time creating all four VMs
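
If you prefer scripting the VM creation over the VirtualBox GUI, a VBoxManage sketch along these lines can clone the template and attach the data disks. The names "ceph-template", the "SATA" controller, and the "vboxnet0" host-only network are assumptions to adapt to your own setup:

~$ VBoxManage clonevm ceph-template --name ceph-node1 --register
~$ VBoxManage modifyvm ceph-node1 --memory 512 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat
~$ VBoxManage createhd --filename ceph-node1-data1.vdi --size 8192     # only if the template lacks the data disks
~$ VBoxManage createhd --filename ceph-node1-data2.vdi --size 8192
~$ VBoxManage storageattach ceph-node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium ceph-node1-data1.vdi
~$ VBoxManage storageattach ceph-node1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium ceph-node1-data2.vdi

Repeat for ceph-node2, ceph-node3, and ceph-client (the client does not need the data disks).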
Adjust Networking in the VM OS

•  Edit /etc/network/interfaces
   # The primary network interface
   auto eth0
   iface eth0 inet static
   address 192.168.56.20
   netmask 255.255.255.0
   # The secondary NAT interface with outside access
   auto eth1
   iface eth1 inet dhcp
   gateway 10.0.3.2



•  Edit /etc/udev/rules.d/70-persistent-net.rules
 If the VMs were cloned from a template, the MAC addresses for the
 virtual NICs should have been regenerated to stay unique. Edit this
 file to make sure that the right NIC is mapped as eth0 and eth1.
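
After editing both files, reboot the VM or restart the interfaces and confirm the addresses; for example:

~$ sudo ifdown eth0; sudo ifup eth0    # re-read /etc/network/interfaces (a reboot works too)
~$ ip addr show eth0                   # should show the static 192.168.56.x address
~$ ip addr show eth1                   # should show the DHCP address on the NAT network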
Security Shortcuts

To streamline and simplify access for this tutorial, we:
 •  Configured the user “ubuntu” to SSH between hosts using
    authorized keys instead of a password.
 •  Added “ubuntu” to /etc/sudoers with full access.
 •  Configured root on the server nodes to SSH between nodes
    using authorized keys without a password set.
 •  Relaxed SSH checking of known hosts to avoid interactive
    confirmation when accessing a new host.
 •  Disabled cephx authentication for the Ceph cluster.
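
A minimal sketch of how the password-less SSH and relaxed host-key checking for the "ubuntu" user could look (repeat ssh-copy-id for every node; the exact setup is up to you):

~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # key without a passphrase
~$ ssh-copy-id ubuntu@ceph-node1               # repeat for ceph-node2, ceph-node3, ceph-client
~$ printf "Host ceph-*\n    StrictHostKeyChecking no\n" >> ~/.ssh/config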
Edit /etc/hosts to resolve names

•  Use the /etc/hosts file for simple name resolution
   for all the VMs on the Host-Only network.
•  Create a portable /etc/hosts file on the client
    127.0.0.1         localhost

    192.168.56.20     ceph-client
    192.168.56.21     ceph-node1
    192.168.56.22     ceph-node2
    192.168.56.23     ceph-node3


•  Copy the file to all the VMs so that names are
   consistently resolved across all machines.
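
One way to push the file out from the client, assuming the SSH setup from the previous slide (writing /etc/hosts on the remote side needs sudo, hence tee):

~$ for h in ceph-node1 ceph-node2 ceph-node3; do cat /etc/hosts | ssh $h "sudo tee /etc/hosts >/dev/null"; done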
Install the Ceph Bobtail release
ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
OK

ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-bobtail/ precise main

ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
...
Setting up librados2 (0.56.1-1precise) ...
Setting up librbd1 (0.56.1-1precise) ...
Setting up ceph-common (0.56.1-1precise) ...
Installing new version of config file /etc/bash_completion.d/rbd ...
Setting up ceph (0.56.1-1precise) ...
Setting up ceph-fs-common (0.56.1-1precise) ...
Setting up ceph-fuse (0.56.1-1precise) ...
Setting up ceph-mds (0.56.1-1precise) ...
Setting up libcephfs1 (0.56.1-1precise) ...
...
ldconfig deferred processing now taking place
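
The slide shows the key, APT source, and install for ceph-node1; the other server nodes need the same three steps, and the client needs the packages locally for the rbd and mount.ceph tools used later. A compact sketch (assumes every node runs the same Ubuntu release):

~$ for node in ceph-node2 ceph-node3; do \
     wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh $node sudo apt-key add -; \
     echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh $node sudo tee /etc/apt/sources.list.d/ceph.list; \
     ssh $node "sudo apt-get update && sudo apt-get -y install ceph"; \
   done
~$ sudo apt-get update && sudo apt-get -y install ceph    # on the client itself, after adding the same key and APT source locally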
Create the Ceph Configuration File
~$ sudo cat <<! > /etc/ceph/ceph.conf
[global]
   auth cluster required = none
   auth service required = none
   auth client required = none
[osd]
   osd journal size = 1000
   filestore xattr use omap = true
   osd mkfs type = ext4
   osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime
[mon.a]
   host = ceph-node1
   mon addr = 192.168.56.21:6789
[mon.b]
   host = ceph-node2
   mon addr = 192.168.56.22:6789
[mon.c]
   host = ceph-node3
   mon addr = 192.168.56.23:6789
[osd.0]
   host = ceph-node1
   devs = /dev/sdb
[osd.1]
   host = ceph-node1
   devs = /dev/sdc
…
[osd.5]
   host = ceph-node3
   devs = /dev/sdc
[mds.a]
   host = ceph-node1
!
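
Note that in the "sudo cat <<! > /etc/ceph/ceph.conf" form the redirection is performed by the calling shell, so unless that shell is already root the write to /etc/ceph can fail; piping the here-document through sudo tee avoids this:

~$ cat <<! | sudo tee /etc/ceph/ceph.conf >/dev/null
...same contents as above...
!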
Complete Ceph Cluster Creation
•  Copy the /etc/ceph/ceph.conf file to all nodes (one way is sketched after this slide)
•  Create the Ceph daemon working directories:
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-1
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-3
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-5
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a
    ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/mon/ceph-b
    ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/mon/ceph-c
    ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mds/ceph-a
•  Run the mkcephfs command from a server node:
    ubuntu@ceph-client:~$ ssh ceph-node1
    Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic
    x86_64)
    ...
    ubuntu@ceph-node1:~$ sudo -i
    root@ceph-node1:~# cd /etc/ceph
    root@ceph-node1:/etc/ceph# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
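
One way to carry out the first bullet above and copy the configuration to every server node (assumes the SSH setup described earlier and that the file was created on the client):

~$ for node in ceph-node1 ceph-node2 ceph-node3; do cat /etc/ceph/ceph.conf | ssh $node "sudo tee /etc/ceph/ceph.conf >/dev/null"; done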
Start the Ceph Cluster
On a server node, start the Ceph service:
    root@ceph-node1:/etc/ceph# service ceph -a start
    === mon.a ===
    Starting Ceph mon.a on ceph-node1...
    starting mon.a rank 0 at 192.168.56.21:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 11309f36-9955-413c-9463-efae6c293fd6
    === mon.b ===
    === mon.c ===
    === mds.a ===
    Starting Ceph mds.a on ceph-node1...
    starting mds.a at :/0
    === osd.0 ===
    Mounting ext4 on ceph-node1:/var/lib/ceph/osd/ceph-0
    Starting Ceph osd.0 on ceph-node1...
    starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
    === osd.1 ===
    === osd.2 ===
    === osd.3 ===
    === osd.4 ===
    === osd.5 ===
Verify Cluster Health
root@ceph-node1:/etc/ceph# ceph status
   health HEALTH_OK
   monmap e1: 3 mons at {a=192.168.56.21:6789/0,b=192.168.56.22:6789/0,c=192.168.56.23:6789/0}, election epoch 6, quorum 0,1,2 a,b,c
   osdmap e17: 6 osds: 6 up, 6 in
   pgmap v473: 1344 pgs: 1344 active+clean; 8730 bytes data, 7525 MB used, 39015 MB / 48997 MB avail
   mdsmap e9: 1/1/1 up {0=a=up:active}

root@ceph-node1:/etc/ceph# ceph osd tree
# id  weight  type name             up/down  reweight
-1    6       root default
-3    6           rack unknownrack
-2    2               host ceph-node1
0     1                   osd.0     up       1
1     1                   osd.1     up       1
-4    2               host ceph-node2
2     1                   osd.2     up       1
3     1                   osd.3     up       1
-5    2               host ceph-node3
4     1                   osd.4     up       1
5     1                   osd.5     up       1
Access Ceph’s Virtual Block Device
ubuntu@ceph-client:~$ rbd ls
rbd: pool rbd doesn't contain rbd images
ubuntu@ceph-client:~$ rbd create myLun --size 4096
ubuntu@ceph-client:~$ rbd ls -l
NAME   SIZE PARENT FMT PROT LOCK
myLun 4096M      1
ubuntu@ceph-client:~$ sudo modprobe rbd
ubuntu@ceph-client:~$ sudo rbd map myLun --pool rbd
ubuntu@ceph-client:~$ sudo rbd showmapped
id pool image snap device
0 rbd myLun - /dev/rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd
rbd/ rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd/rbd/myLun
… 1 root root 10 Jan 16 21:15 /dev/rbd/rbd/myLun -> ../../rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Jan 16 21:15 /dev/rbd0
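
Before formatting, the image can also be inspected; rbd info reports its size, object layout, and image format:

ubuntu@ceph-client:~$ rbd info myLun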
Format RBD image and use it
ubuntu@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/myLun
mke2fs 1.42 (29-Nov-2011)
...
Writing superblocks and filesystem accounting information: done
ubuntu@ceph-client:~$ sudo mkdir /mnt/myLun
ubuntu@ceph-client:~$ sudo mount /dev/rbd/rbd/myLun /mnt/myLun
ubuntu@ceph-client:~$ df -h | grep myLun
/dev/rbd0                         4.0G 190M 3.9G      5% /mnt/myLun
ubuntu@ceph-client:~$ sudo dd if=/dev/zero of=/mnt/myLun/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000431868 s, 1.2 GB/s
ubuntu@ceph-client:~$ ls -lh /mnt/myLun/
total 528K
drwx------ 2 root root 16K Jan 16 21:24 lost+found
-rw-r--r-- 1 root root 512K Jan 16 21:29 testfile
Access Ceph Distributed Filesystem
~$ sudo mkdir /mnt/myCephFS
~$ sudo mount.ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS
~$ df -h | grep my
192.168.56.21,192.168.56.22,192.168.56.23:/    48G    11G    38G   22% /mnt/myCephFS
/dev/rbd0                                     4.0G   190M   3.9G    5% /mnt/myLun


~$ sudo dd if=/dev/zero of=/mnt/myCephFS/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000439191 s, 1.2 GB/s
~$ ls -lh /mnt/myCephFS/
total 512K
-rw-r--r-- 1 root root 512K Jan 16 23:04 testfile
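
The mount above works without credentials only because cephx was disabled for this tutorial. With cephx enabled, the kernel client needs a name and secret, typically via a secret file holding the key from the cluster keyring (the /etc/ceph/admin.secret path here is just an example):

~$ sudo mount -t ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS -o name=admin,secretfile=/etc/ceph/admin.secret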
Unmount, Stop Ceph, and Halt
ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
=== mon.a ===
Stopping Ceph mon.a on ceph-node1...kill 19863...done
=== mon.b ===
=== mon.c ===
=== mds.a ===
=== osd.0 ===
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ sudo service halt stop
 * Will now halt
Review
We:
 1.    Created the VirtualBox VMs
 2.    Prepared the VMs for Creating the Ceph Cluster
 3.    Installed Ceph on all VMs from the Client
 4.    Configured Ceph on all the server nodes and the client
 5.    Experimented with Ceph’s Virtual Block Device (RBD)
 6.    Experimented with the Ceph Distributed Filesystem
 7.    Unmounted, stopped Ceph, and shut down the VMs safely

•  Based on VirtualBox; other hypervisors work too.
•  Relaxed security best practices to speed things up, but we
   recommend following them in most circumstances.
Resources for Learning More
Leverage great online resources

Documentation on the Ceph web site:
 •  http://ceph.com/docs/master/

Blogs from Inktank and the Ceph community:
 •  http://www.inktank.com/news-events/blog/
 •  http://ceph.com/community/blog/

Developer resources:
 •  http://ceph.com/resources/development/
 •  http://ceph.com/resources/mailing-list-irc/
 •  http://dir.gmane.org/gmane.comp.file-systems.ceph.devel
What Next?




Try it yourself!

•  Use the information in this webinar as a starting point
•  Consult the Ceph documentation online:
  http://ceph.com/docs/master/
  http://ceph.com/docs/master/start/
Inktank’s Professional Services
Consulting Services:
 •    Technical Overview
 •    Infrastructure Assessment
 •    Proof of Concept
 •    Implementation Support
 •    Performance Tuning

Support Subscriptions:
 •    Pre-Production Support
 •    Production Support

A full description of our services can be found at the following:

Consulting Services: http://www.inktank.com/consulting-services/

Support Subscriptions: http://www.inktank.com/support-services/



Check out our upcoming webinars

1.  Introduction to Ceph with OpenStack
    January 24, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63177


2.  DreamHost Case Study: DreamObjects with Ceph
    February 7, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63181


3.  Advanced Features of Ceph Distributed Storage
    (delivered by Sage Weil, creator of Ceph)
    February 12, 2013
    10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63179
Contact Us
Info@inktank.com
1-855-INKTANK

Don’t forget to follow us on:

Twitter: https://twitter.com/inktank

Facebook: http://www.facebook.com/inktank

YouTube: http://www.youtube.com/inktankstorage
