Inktank
Delivering the Future of Storage

Getting Started with Ceph
January 17, 2013
Agenda
• Inktank and Ceph Introduction
• Ceph Technology
• Getting Started Walk-through
• Resources
• Next steps
Inktank
• Company that provides professional services and support for Ceph
• Founded in 2011
• Funded by DreamHost
• Mark Shuttleworth invested $1M
• Sage Weil, CTO and creator of Ceph

Ceph
• Distributed unified object, block and file storage platform
• Created by storage experts
• Open source
• In the Linux kernel
• Integrated into cloud platforms
Ceph Technological Foundations
Ceph was built with the following goals:
• Every component must scale
• There can be no single point of failure
• The solution must be software-based, not an appliance
• Should run on readily available, commodity hardware
• Everything must self-manage wherever possible
• Must be open source
Key Differences
• CRUSH data placement algorithm (Object)
  Intelligent storage nodes
• Unified storage platform (Object + Block + File)
  All use cases (cloud, big data, legacy, web app, archival, etc.) satisfied in a single cluster
• Thinly provisioned virtual block device (Block)
  Cloud storage block for VM images
• Distributed scalable metadata servers (CephFS)
Ceph Use Cases
Object
• Archival and backup storage
• Primary data storage
• S3-like storage
• Web services and platforms
• Application development
Block
• SAN replacement
• Virtual block device, VM images
File
• HPC
• POSIX-compatible applications
Ceph
• Ceph Object Library (LIBRADOS): a library allowing applications (APP) to directly access Ceph Object Storage
• Ceph Object Gateway (RADOS Gateway): a RESTful gateway for object storage (APP)
• Ceph Block (RBD): a reliable and fully distributed block device (HOST/VM)
• Ceph Distributed File System (CephFS): a POSIX-compliant distributed file system (CLIENT)
All of the above are built on Ceph Object Storage (RADOS): a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
RADOS Components
Monitors (M):
• Maintain the cluster map
• Provide consensus for distributed decision-making
• Must have an odd number
• Do not serve stored objects to clients
RADOS storage nodes containing Object Storage Daemons (OSDs):
• One OSD per disk (recommended)
• At least three nodes in a cluster
• Serve stored objects to clients
• Intelligently peer to perform replication tasks
• Support object classes
RADOS Cluster Makeup
[Diagram] A RADOS node runs one OSD per disk, each OSD on its own filesystem (btrfs, xfs, or ext4). A RADOS cluster consists of several such nodes plus three monitors (M).
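This layout (three server nodes with two data disks each, three monitors, one metadata server) could map to a bobtail-era ceph.conf along the following lines. This is a sketch, not the tutorial's actual file: the daemon IDs match those seen in the shutdown transcript later (mon.a-c, mds.a, osd.0-5), the addresses follow the Host-Only network used in this walk-through, and osd.2 through osd.5 would repeat the same pattern on ceph-node2 and ceph-node3.

```ini
; Sketch of a minimal ceph.conf for this tutorial's layout (assumed, not verbatim)
[global]
    auth cluster required = none    ; cephx disabled for this tutorial
    auth service required = none
    auth client required = none

[mon.a]
    host = ceph-node1
    mon addr = 192.168.56.21:6789
[mon.b]
    host = ceph-node2
    mon addr = 192.168.56.22:6789
[mon.c]
    host = ceph-node3
    mon addr = 192.168.56.23:6789

[mds.a]
    host = ceph-node1

[osd.0]
    host = ceph-node1
[osd.1]
    host = ceph-node1
; osd.2/osd.3 on ceph-node2, osd.4/osd.5 on ceph-node3 follow the same pattern
```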
VOTE
Using the Votes button at the top of the presentation panel, please take 30 seconds to answer the following questions to help us better understand you:
1. Are you exploring Ceph for a current project?
2. Are you looking to implement Ceph within the next 6 months?
3. Do you need help deploying Ceph?
Overview
• This tutorial and walk-through are based on VirtualBox, but other hypervisor platforms will work just as well.
• We relaxed security best practices to speed things up, and will omit some of the security setup steps here.
• We will:
  1. Create the VirtualBox VMs
  2. Prepare the VMs for Creating the Ceph Cluster
  3. Install Ceph on all VMs from the Client
  4. Configure Ceph on all the server nodes and the client
  5. Experiment with Ceph’s Virtual Block Device (RBD)
  6. Experiment with the Ceph Distributed Filesystem
  7. Unmount, stop Ceph, and shut down the VMs safely
Create the VMs
• 1 or more CPU cores
• 512MB or more memory
• Ubuntu 12.04 with latest updates
• VirtualBox Guest Additions
• Three virtual disks (dynamically allocated):
  • 28GB OS disk with boot partition
  • 8GB disk for Ceph data
  • 8GB disk for Ceph data
• Two virtual network interfaces:
  • eth0: Host-Only interface for Ceph
  • eth1: NAT interface for updates
Consider creating a template based on the above, and then cloning the template to save time creating all four VMs.
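The cloning step can be scripted with VirtualBox's VBoxManage CLI. The sketch below assumes the template VM is named "ceph-template" (a name chosen here, not from the walk-through); the actual clone call is left commented so the loop can be dry-run first.

```shell
# Clone one prepared template into the four tutorial VMs.
# "ceph-template" is an assumed name; adjust to your own template VM.
for vm in ceph-client ceph-node1 ceph-node2 ceph-node3; do
  echo "would run: VBoxManage clonevm ceph-template --name $vm --register"
  # Uncomment to actually clone:
  # VBoxManage clonevm ceph-template --name "$vm" --register
done
```

After cloning, remember to let VirtualBox regenerate each clone's MAC addresses so the udev step on the next slide maps the NICs correctly.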
Adjust Networking in the VM OS
• Edit /etc/network/interfaces (shown for ceph-client; use each VM’s own address):

  # The primary network interface
  auto eth0
  iface eth0 inet static
      address 192.168.56.20
      netmask 255.255.255.0

  # The secondary NAT interface with outside access
  auto eth1
  iface eth1 inet dhcp
      gateway 10.0.3.2

• Edit /etc/udev/rules.d/70-persistent-net.rules:
  If the VMs were cloned from a template, the MAC addresses for the virtual NICs should have been regenerated to stay unique. Edit this file to make sure that the right NIC is mapped as eth0 and eth1.
Security Shortcuts
To streamline and simplify access for this tutorial, we:
• Configured the user “ubuntu” to SSH between hosts using authorized keys instead of a password.
• Added “ubuntu” to /etc/sudoers with full access.
• Configured root on the server nodes to SSH between nodes using authorized keys without a password set.
• Relaxed SSH checking of known hosts to avoid interactive confirmation when accessing a new host.
• Disabled cephx authentication for the Ceph cluster.
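The SSH shortcuts above can be sketched as follows. The key generation and distribution commands are commented because they modify real accounts; the known-hosts relaxation is written to a local snippet file ("ssh_config.tutorial", a name chosen here) for review before appending it to ~/.ssh/config. None of this is appropriate for production.

```shell
# Tutorial-only SSH shortcuts; do not use in production.
# Key generation and distribution (commented; modifies real accounts):
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# for n in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id "ubuntu@$n"; done

# Relax known-hosts checking for the tutorial hosts;
# review this snippet, then append it to ~/.ssh/config.
printf 'Host ceph-*\n    StrictHostKeyChecking no\n' > ssh_config.tutorial
cat ssh_config.tutorial
```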
Edit /etc/hosts to resolve names
• Use the /etc/hosts file for simple name resolution for all the VMs on the Host-Only network.
• Create a portable /etc/hosts file on the client:

  127.0.0.1     localhost
  192.168.56.20 ceph-client
  192.168.56.21 ceph-node1
  192.168.56.22 ceph-node2
  192.168.56.23 ceph-node3

• Copy the file to all the VMs so that names are consistently resolved across all machines.
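Building and distributing the file can be scripted; a sketch follows (the filename "hosts.tutorial" is chosen here, and the copy loop is commented since it assumes the key-based SSH set up earlier):

```shell
# Write the tutorial's hosts file locally, then push it to every VM.
cat > hosts.tutorial <<'EOF'
127.0.0.1     localhost
192.168.56.20 ceph-client
192.168.56.21 ceph-node1
192.168.56.22 ceph-node2
192.168.56.23 ceph-node3
EOF
# Distribute to the server nodes (commented; requires the SSH setup above):
# for n in ceph-node1 ceph-node2 ceph-node3; do
#   scp hosts.tutorial "ubuntu@$n:/tmp/hosts" && ssh "$n" 'sudo cp /tmp/hosts /etc/hosts'
# done
```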
Install the Ceph Bobtail release

ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
OK
ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-bobtail/ precise main
ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
...
Setting up librados2 (0.56.1-1precise) ...
Setting up librbd1 (0.56.1-1precise) ...
Setting up ceph-common (0.56.1-1precise) ...
Installing new version of config file /etc/bash_completion.d/rbd ...
Setting up ceph (0.56.1-1precise) ...
Setting up ceph-fs-common (0.56.1-1precise) ...
Setting up ceph-fuse (0.56.1-1precise) ...
Setting up ceph-mds (0.56.1-1precise) ...
Setting up libcephfs1 (0.56.1-1precise) ...
...
ldconfig deferred processing now taking place
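After installation and configuration, the walk-through experiments with RBD and CephFS (steps 5 and 6), which create the mounts removed in the shutdown step. A sketch of commands along these lines is below; the names myLun and myCephFS and the device /dev/rbd0 appear in the shutdown transcript, while the image size, filesystem choice, and exact invocations are assumptions. Everything is commented because it requires a running Ceph cluster.

```shell
# Sketch only: requires a running Ceph cluster, so all commands are commented.
# RBD: create a thin-provisioned image, map it, and mount it.
# rbd create myLun --size 4096                  # 4 GB image (size assumed)
# sudo rbd map myLun                            # appears as /dev/rbd0
# sudo mkfs.ext4 /dev/rbd0                      # filesystem choice assumed
# sudo mkdir -p /mnt/myLun && sudo mount /dev/rbd0 /mnt/myLun

# CephFS: mount the distributed filesystem from a monitor
# (no secret needed here because cephx was disabled for this tutorial).
# sudo mkdir -p /mnt/myCephFS
# sudo mount -t ceph ceph-node1:6789:/ /mnt/myCephFS
echo "sketch: myLun via /dev/rbd0 -> /mnt/myLun ; CephFS -> /mnt/myCephFS"
```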
Unmount, Stop Ceph, and Halt

ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
=== mon.a ===
Stopping Ceph mon.a on ceph-node1...kill 19863...done
=== mon.b ===
=== mon.c ===
=== mds.a ===
=== osd.0 ===
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
 * Will now halt
^C
ubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
 * Will now halt
^C
ubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
 * Will now halt
^C
ubuntu@ceph-client:~$ sudo service halt stop
 * Will now halt
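The per-node halts above can be collapsed into a loop; a sketch follows, with the remote commands commented because they power off real VMs. The ^C in the transcript shows the SSH session hanging as the remote host halts, which is why backgrounding the call helps.

```shell
# Halt the server nodes, then the client itself (commented: powers off real VMs).
for n in ceph-node1 ceph-node2 ceph-node3; do
  echo "would run: ssh $n sudo poweroff"
  # ssh -f "$n" sudo poweroff   # -f backgrounds ssh so the halt doesn't hang the session
done
# sudo poweroff                  # finally, halt the client
```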
Review
We:
1. Created the VirtualBox VMs
2. Prepared the VMs for Creating the Ceph Cluster
3. Installed Ceph on all VMs from the Client
4. Configured Ceph on all the server nodes and the client
5. Experimented with Ceph’s Virtual Block Device (RBD)
6. Experimented with the Ceph Distributed Filesystem
7. Unmounted, stopped Ceph, and shut down the VMs safely
• Based on VirtualBox; other hypervisors work too.
• We relaxed security best practices to speed things up, but recommend following them in most circumstances.
Leverage great online resources
Documentation on the Ceph web site:
• http://ceph.com/docs/master/
Blogs from Inktank and the Ceph community:
• http://www.inktank.com/news-events/blog/
• http://ceph.com/community/blog/
Developer resources:
• http://ceph.com/resources/development/
• http://ceph.com/resources/mailing-list-irc/
• http://dir.gmane.org/gmane.comp.file-systems.ceph.devel
Try it yourself!
• Use the information in this webinar as a starting point
• Consult the Ceph documentation online:
  http://ceph.com/docs/master/
  http://ceph.com/docs/master/start/
Inktank’s Professional Services
Consulting Services:
• Technical Overview
• Infrastructure Assessment
• Proof of Concept
• Implementation Support
• Performance Tuning
Support Subscriptions:
• Pre-Production Support
• Production Support
A full description of our services can be found at the following:
• Consulting Services: http://www.inktank.com/consulting-services/
• Support Subscriptions: http://www.inktank.com/support-services/
Check out our upcoming webinars
1. Introduction to Ceph with OpenStack
   January 24, 2013, 10:00AM PT / 12:00PM CT / 1:00PM ET
   https://www.brighttalk.com/webcast/8847/63177
2. DreamHost Case Study: DreamObjects with Ceph
   February 7, 2013, 10:00AM PT / 12:00PM CT / 1:00PM ET
   https://www.brighttalk.com/webcast/8847/63181
3. Advanced Features of Ceph Distributed Storage (delivered by Sage Weil, creator of Ceph)
   February 12, 2013, 10:00AM PT / 12:00PM CT / 1:00PM ET
   https://www.brighttalk.com/webcast/8847/63179
Contact Us
Info@inktank.com
1-855-INKTANK

Don’t forget to follow us on:
Twitter: https://twitter.com/inktank
Facebook: http://www.facebook.com/inktank
YouTube: http://www.youtube.com/inktankstorage