Webinar - Getting Started With Ceph

The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand from http://www.inktank.com/news-events/webinars/. Enjoy!

Transcript

  • 1. Inktank: Delivering the Future of Storage. Getting Started with Ceph, January 17, 2013
  • 2. Agenda
       •  Inktank and Ceph Introduction
       •  Ceph Technology
       •  Getting Started Walk-through
       •  Resources
       •  Next Steps
  • 3. Inktank:
       •  Company that provides professional services and support for Ceph
       •  Founded in 2011
       •  Funded by DreamHost
       •  Mark Shuttleworth invested $1M
       •  Sage Weil, CTO and creator of Ceph
       Ceph:
       •  Distributed unified object, block and file storage platform
       •  Created by storage experts
       •  Open source
       •  In the Linux kernel
       •  Integrated into cloud platforms
  • 4. Ceph Technological Foundations
       Ceph was built with the following goals:
       •  Every component must scale
       •  There can be no single point of failure
       •  The solution must be software-based, not an appliance
       •  Should run on readily available, commodity hardware
       •  Everything must self-manage wherever possible
       •  Must be open source
  • 5. Key Differences
       •  CRUSH data placement algorithm (Object): intelligent storage nodes
       •  Unified storage platform (Object + Block + File): all use cases (cloud, big data, legacy, web app, archival, etc.) satisfied in a single cluster
       •  Thinly provisioned virtual block device (Block): cloud storage block for VM images
       •  Distributed scalable metadata servers (CephFS)
  • 6. Ceph Use Cases
       Object:
       •  Archival and backup storage
       •  Primary data storage
       •  S3-like storage
       •  Web services and platforms
       •  Application development
       Block:
       •  SAN replacement
       •  Virtual block device, VM images
       File:
       •  HPC
       •  POSIX-compatible applications
  • 7. Ceph Technology Overview
  • 8. Ceph (architecture overview)
       •  Ceph Object Library (LIBRADOS), used by applications: a library allowing applications to directly access Ceph Object Storage
       •  Ceph Object Gateway (RADOS Gateway), used by applications: a RESTful gateway for object storage
       •  Ceph Block (RBD), used by hosts/VMs: a reliable and fully-distributed block device
       •  Ceph Distributed File System (CephFS), used by clients: a POSIX-compliant distributed file system
       •  Ceph Object Storage (RADOS): a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
  • 9. RADOS Components
       Monitors:
       •  Maintain the cluster map
       •  Provide consensus for distributed decision-making
       •  Must have an odd number
       •  Do not serve stored objects to clients
       RADOS storage nodes containing Object Storage Daemons (OSDs):
       •  One OSD per disk (recommended)
       •  At least three nodes in a cluster
       •  Serve stored objects to clients
       •  Intelligently peer to perform replication tasks
       •  Support object classes
  • 10. RADOS Cluster Makeup (diagram)
       A RADOS node runs one OSD per disk, each OSD backed by a local filesystem (btrfs, xfs, or ext4) on its own disk; a RADOS cluster is made up of many such nodes plus the monitors.
  • 11. VOTE
       Using the voting button at the top of the presentation panel, please take 30 seconds to answer the following questions to help us better understand you:
       1.  Are you exploring Ceph for a current project?
       2.  Are you looking to implement Ceph within the next 6 months?
       3.  Do you need help deploying Ceph?
  • 12. Getting Started Walk-through
  • 13. Overview
       •  This tutorial and walk-through is based on VirtualBox, but other hypervisor platforms will work just as well.
       •  We relaxed security best practices to speed things up and will omit some of the security setup steps here.
       •  We will:
           1.  Create the VirtualBox VMs
           2.  Prepare the VMs for creating the Ceph cluster
           3.  Install Ceph on all VMs from the client
           4.  Configure Ceph on all the server nodes and the client
           5.  Experiment with Ceph's virtual block device (RBD)
           6.  Experiment with the Ceph distributed filesystem
           7.  Unmount, stop Ceph, and shut down the VMs safely
  • 14. Create the VMs
       •  1 or more CPU cores
       •  512MB or more memory
       •  Ubuntu 12.04 with latest updates
       •  VirtualBox Guest Additions
       •  Three virtual disks (dynamically allocated):
           •  28GB OS disk with boot partition
           •  8GB disk for Ceph data
           •  8GB disk for Ceph data
       •  Two virtual network interfaces:
           •  eth0: host-only interface for Ceph
           •  eth1: NAT interface for updates
       Consider creating a template based on the above, and then cloning the template to save time creating all four VMs (a cloning sketch follows below).
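       A minimal cloning sketch, not from the original deck; the template name "ceph-template" is an assumption:
            # Clone the prepared template once per VM; clones normally get fresh NIC MAC
            # addresses, which is why the udev mapping is checked on the next slide
            for vm in ceph-client ceph-node1 ceph-node2 ceph-node3; do
                VBoxManage clonevm "ceph-template" --name "$vm" --register
            done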
  • 15. Adjust Networking in the VM OS
       •  Edit /etc/network/interfaces:
            # The primary network interface
            auto eth0
            iface eth0 inet static
                address 192.168.56.20
                netmask 255.255.255.0
            # The secondary NAT interface with outside access
            auto eth1
            iface eth1 inet dhcp
                gateway 10.0.3.2
       •  Edit /etc/udev/rules.d/70-persistent-net.rules: if the VMs were cloned from a template, the MAC addresses for the virtual NICs should have been regenerated to stay unique. Edit this file to make sure that the right NIC is mapped as eth0 and eth1 (a sample rule appears below).
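       The rules file itself is not shown in the deck; a hypothetical entry on Ubuntu 12.04 looks roughly like this (the MAC addresses are placeholders, taken from each VM's NIC settings):
            # /etc/udev/rules.d/70-persistent-net.rules
            # Pin the host-only NIC to eth0 and the NAT NIC to eth1 by MAC address
            SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:01", KERNEL=="eth*", NAME="eth0"
            SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:02", KERNEL=="eth*", NAME="eth1"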
  • 16. Security Shortcuts
       To streamline and simplify access for this tutorial, we (sketched below):
       •  Configured the user "ubuntu" to SSH between hosts using authorized keys instead of a password.
       •  Added "ubuntu" to /etc/sudoers with full access.
       •  Configured root on the server nodes to SSH between nodes using authorized keys without a password set.
       •  Relaxed SSH checking of known hosts to avoid interactive confirmation when accessing a new host.
       •  Disabled cephx authentication for the Ceph cluster.
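       A sketch of how these shortcuts might be applied (assumed commands, not shown in the webinar):
            # Key-based SSH for the "ubuntu" user to every server node
            ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
            for h in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id ubuntu@$h; done
            # Passwordless sudo for "ubuntu" (the deck edits /etc/sudoers; a sudoers.d drop-in works the same way)
            echo "ubuntu ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ubuntu
            sudo chmod 0440 /etc/sudoers.d/ubuntu
            # Skip interactive known-host confirmation for the lab hosts only
            printf "Host ceph-*\n    StrictHostKeyChecking no\n" >> ~/.ssh/config
       cephx itself is disabled through the "auth ... required = none" settings in the ceph.conf shown on slide 19, so no extra command is needed for that step.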
  • 17. Edit /etc/hosts to resolve names
       •  Use the /etc/hosts file for simple name resolution for all the VMs on the host-only network.
       •  Create a portable /etc/hosts file on the client:
            127.0.0.1      localhost
            192.168.56.20  ceph-client
            192.168.56.21  ceph-node1
            192.168.56.22  ceph-node2
            192.168.56.23  ceph-node3
       •  Copy the file to all the VMs so that names are consistently resolved across all machines (see the copy loop below).
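       One possible way to push the file out (not shown on the slide), assuming the SSH setup from the previous step:
            # Copy the client's /etc/hosts to every server node
            for h in ceph-node1 ceph-node2 ceph-node3; do
                scp /etc/hosts ubuntu@$h:/tmp/hosts
                ssh $h "sudo cp /tmp/hosts /etc/hosts"
            done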
  • 18. Install the Ceph Bobtail release
       ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
       OK
       ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
       deb http://ceph.com/debian-bobtail/ precise main
       ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
       ...
       Setting up librados2 (0.56.1-1precise) ...
       Setting up librbd1 (0.56.1-1precise) ...
       Setting up ceph-common (0.56.1-1precise) ...
       Installing new version of config file /etc/bash_completion.d/rbd ...
       Setting up ceph (0.56.1-1precise) ...
       Setting up ceph-fs-common (0.56.1-1precise) ...
       Setting up ceph-fuse (0.56.1-1precise) ...
       Setting up ceph-mds (0.56.1-1precise) ...
       Setting up libcephfs1 (0.56.1-1precise) ...
       ...
       ldconfig deferred processing now taking place
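       The slide installs the packages on ceph-node1 only; repeating the same steps for the remaining machines could look like this sketch (assumed, in the spirit of "install Ceph on all VMs from the client"):
            # Repeat the key, repository, and package installation on the remaining server nodes
            for h in ceph-node2 ceph-node3; do
                wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh $h sudo apt-key add -
                echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh $h sudo tee /etc/apt/sources.list.d/ceph.list
                ssh $h "sudo apt-get update && sudo apt-get -y install ceph"
            done
            # ...then run the same three commands locally on ceph-client, without the ssh wrapper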
  • 19. Create the Ceph Configuration File
       ~$ sudo cat <<! > /etc/ceph/ceph.conf
       [global]
           auth cluster required = none
           auth service required = none
           auth client required = none
       [osd]
           osd journal size = 1000
           filestore xattr use omap = true
           osd mkfs type = ext4
           osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime
       [mon.a]
           host = ceph-node1
           mon addr = 192.168.56.21:6789
       [mon.b]
           host = ceph-node2
           mon addr = 192.168.56.22:6789
       [mon.c]
           host = ceph-node3
           mon addr = 192.168.56.23:6789
       [osd.0]
           host = ceph-node1
           devs = /dev/sdb
       [osd.1]
           host = ceph-node1
           devs = /dev/sdc
       …
       [osd.5]
           host = ceph-node3
           devs = /dev/sdc
       [mds.a]
           host = ceph-node1
       !
  • 20. Complete Ceph Cluster Creation
       •  Copy the /etc/ceph/ceph.conf file to all nodes (one way to do this is sketched below).
       •  Create the Ceph daemon working directories:
            ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0
            ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-1
            ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2
            ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-3
            ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4
            ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-5
            ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a
            ~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/mon/ceph-b
            ~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/mon/ceph-c
            ~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mds/ceph-a
       •  Run the mkcephfs command from a server node:
            ubuntu@ceph-client:~$ ssh ceph-node1
            Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64)
            ...
            ubuntu@ceph-node1:~$ sudo -i
            root@ceph-node1:~# cd /etc/ceph
            root@ceph-node1:/etc/ceph# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
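       The deck does not show how the configuration file is copied; a possible sketch from the machine where ceph.conf was written, assuming the passwordless SSH configured earlier:
            # Distribute /etc/ceph/ceph.conf to every server node
            for h in ceph-node1 ceph-node2 ceph-node3; do
                scp /etc/ceph/ceph.conf ubuntu@$h:/tmp/ceph.conf
                ssh $h "sudo mv /tmp/ceph.conf /etc/ceph/ceph.conf"
            done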
  • 21. Start the Ceph Cluster
       On a server node, start the Ceph service:
            root@ceph-node1:/etc/ceph# service ceph -a start
            === mon.a ===
            Starting Ceph mon.a on ceph-node1...
            starting mon.a rank 0 at 192.168.56.21:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 11309f36-9955-413c-9463-efae6c293fd6
            === mon.b ===
            === mon.c ===
            === mds.a ===
            Starting Ceph mds.a on ceph-node1...
            starting mds.a at :/0
            === osd.0 ===
            Mounting ext4 on ceph-node1:/var/lib/ceph/osd/ceph-0
            Starting Ceph osd.0 on ceph-node1...
            starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
            === osd.1 ===
            === osd.2 ===
            === osd.3 ===
            === osd.4 ===
            === osd.5 ===
  • 22. Verify Cluster Health
       root@ceph-node1:/etc/ceph# ceph status
          health HEALTH_OK
          monmap e1: 3 mons at {a=192.168.56.21:6789/0,b=192.168.56.22:6789/0,c=192.168.56.23:6789/0}, election epoch 6, quorum 0,1,2 a,b,c
          osdmap e17: 6 osds: 6 up, 6 in
          pgmap v473: 1344 pgs: 1344 active+clean; 8730 bytes data, 7525 MB used, 39015 MB / 48997 MB avail
          mdsmap e9: 1/1/1 up {0=a=up:active}
       root@ceph-node1:/etc/ceph# ceph osd tree
       # id  weight  type name            up/down  reweight
       -1    6       root default
       -3    6         rack unknownrack
       -2    2           host ceph-node1
       0     1             osd.0          up       1
       1     1             osd.1          up       1
       -4    2           host ceph-node2
       2     1             osd.2          up       1
       3     1             osd.3          up       1
       -5    2           host ceph-node3
       4     1             osd.4          up       1
       5     1             osd.5          up       1
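       Two other standard Ceph commands (not shown in the deck) are useful while experimenting:
            root@ceph-node1:/etc/ceph# ceph health    # one-line summary: HEALTH_OK, HEALTH_WARN, or HEALTH_ERR
            root@ceph-node1:/etc/ceph# ceph -w        # keep watching cluster status and log events as they change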
  • 23. Access Ceph’s Virtual Block Device
       ubuntu@ceph-client:~$ rbd ls
       rbd: pool rbd doesnt contain rbd images
       ubuntu@ceph-client:~$ rbd create myLun --size 4096
       ubuntu@ceph-client:~$ rbd ls -l
       NAME   SIZE   PARENT  FMT  PROT  LOCK
       myLun  4096M          1
       ubuntu@ceph-client:~$ sudo modprobe rbd
       ubuntu@ceph-client:~$ sudo rbd map myLun --pool rbd
       ubuntu@ceph-client:~$ sudo rbd showmapped
       id  pool  image  snap  device
       0   rbd   myLun  -     /dev/rbd0
       ubuntu@ceph-client:~$ ls -l /dev/rbd
       rbd/  rbd0
       ubuntu@ceph-client:~$ ls -l /dev/rbd/rbd/myLun
       … 1 root root 10 Jan 16 21:15 /dev/rbd/rbd/myLun -> ../../rbd0
       ubuntu@ceph-client:~$ ls -l /dev/rbd0
       brw-rw---- 1 root disk 251, 0 Jan 16 21:15 /dev/rbd0
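       As an aside not on the slide, the standard rbd CLI can also describe a single image:
            ubuntu@ceph-client:~$ rbd info myLun      # reports the image's size, object layout, and format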
  • 24. Format RBD image and use it
       ubuntu@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/myLun
       mke2fs 1.42 (29-Nov-2011)
       ...
       Writing superblocks and filesystem accounting information: done
       ubuntu@ceph-client:~$ sudo mkdir /mnt/myLun
       ubuntu@ceph-client:~$ sudo mount /dev/rbd/rbd/myLun /mnt/myLun
       ubuntu@ceph-client:~$ df -h | grep myLun
       /dev/rbd0       4.0G  190M  3.9G  5%  /mnt/myLun
       ubuntu@ceph-client:~$ sudo dd if=/dev/zero of=/mnt/myLun/testfile bs=4K count=128
       128+0 records in
       128+0 records out
       524288 bytes (524 kB) copied, 0.000431868 s, 1.2 GB/s
       ubuntu@ceph-client:~$ ls -lh /mnt/myLun/
       total 528K
       drwx------ 2 root root  16K Jan 16 21:24 lost+found
       -rw-r--r-- 1 root root 512K Jan 16 21:29 testfile
  • 25. Access Ceph Distributed Filesystem
       ~$ sudo mkdir /mnt/myCephFS
       ~$ sudo mount.ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS
       ~$ df -h | grep my
       192.168.56.21,192.168.56.22,192.168.56.23:/   48G   11G   38G  22%  /mnt/myCephFS
       /dev/rbd0                                    4.0G  190M  3.9G   5%  /mnt/myLun
       ~$ sudo dd if=/dev/zero of=/mnt/myCephFS/testfile bs=4K count=128
       128+0 records in
       128+0 records out
       524288 bytes (524 kB) copied, 0.000439191 s, 1.2 GB/s
       ~$ ls -lh /mnt/myCephFS/
       total 512K
       -rw-r--r-- 1 root root 512K Jan 16 23:04 testfile
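       The ceph-fuse package installed on slide 18 offers an alternative, userspace way to mount the same filesystem (not demonstrated in the webinar); a sketch:
            ~$ sudo mkdir /mnt/myCephFS-fuse
            ~$ sudo ceph-fuse -m 192.168.56.21:6789 /mnt/myCephFS-fuse    # mount via FUSE through one of the monitors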
  • 26. Unmount, Stop Ceph, and Halt
       ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
       ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
       ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
       ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
       === mon.a ===
       Stopping Ceph mon.a on ceph-node1...kill 19863...done
       === mon.b ===
       === mon.c ===
       === mds.a ===
       === osd.0 ===
       === osd.1 ===
       === osd.2 ===
       === osd.3 ===
       === osd.4 ===
       === osd.5 ===
       ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
        * Will now halt
       ^C
       ubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
        * Will now halt
       ^C
       ubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
        * Will now halt
       ^C
       ubuntu@ceph-client:~$ sudo service halt stop
        * Will now halt
  • 27. Review
       We:
       1.  Created the VirtualBox VMs
       2.  Prepared the VMs for creating the Ceph cluster
       3.  Installed Ceph on all VMs from the client
       4.  Configured Ceph on all the server nodes and the client
       5.  Experimented with Ceph's virtual block device (RBD)
       6.  Experimented with the Ceph distributed filesystem
       7.  Unmounted, stopped Ceph, and shut down the VMs safely
       •  Based on VirtualBox; other hypervisors work too.
       •  We relaxed security best practices to speed things up, but recommend following them in most circumstances.
  • 28. Resources for Learning More
  • 29. Leverage great online resources
       Documentation on the Ceph web site:
       •  http://ceph.com/docs/master/
       Blogs from Inktank and the Ceph community:
       •  http://www.inktank.com/news-events/blog/
       •  http://ceph.com/community/blog/
       Developer resources:
       •  http://ceph.com/resources/development/
       •  http://ceph.com/resources/mailing-list-irc/
       •  http://dir.gmane.org/gmane.comp.file-systems.ceph.devel
  • 30. What Next?
  • 31. Try it yourself!
       •  Use the information in this webinar as a starting point
       •  Consult the Ceph documentation online:
           •  http://ceph.com/docs/master/
           •  http://ceph.com/docs/master/start/
  • 32. Inktank’s Professional Services
       Consulting services:
       •  Technical Overview
       •  Infrastructure Assessment
       •  Proof of Concept
       •  Implementation Support
       •  Performance Tuning
       Support subscriptions:
       •  Pre-Production Support
       •  Production Support
       A full description of our services can be found at the following:
       •  Consulting services: http://www.inktank.com/consulting-services/
       •  Support subscriptions: http://www.inktank.com/support-services/
  • 33. Check out our upcoming webinars
       1.  Introduction to Ceph with OpenStack
           January 24, 2013 at 10:00AM PT, 12:00PM CT, 1:00PM ET
           https://www.brighttalk.com/webcast/8847/63177
       2.  DreamHost Case Study: DreamObjects with Ceph
           February 7, 2013 at 10:00AM PT, 12:00PM CT, 1:00PM ET
           https://www.brighttalk.com/webcast/8847/63181
       3.  Advanced Features of Ceph Distributed Storage (delivered by Sage Weil, creator of Ceph)
           February 12, 2013 at 10:00AM PT, 12:00PM CT, 1:00PM ET
           https://www.brighttalk.com/webcast/8847/63179
  • 34. Contact Us
       Info@inktank.com
       1-855-INKTANK
       Don’t forget to follow us on:
       •  Twitter: https://twitter.com/inktank
       •  Facebook: http://www.facebook.com/inktank
       •  YouTube: http://www.youtube.com/inktankstorage
