Red Hat Summit 2015 – Ceph and OpenStack
Table of Contents
1. LAB – Conventions.
2. LAB – Setup.
3. LAB – Creating a Ceph RBD client.
4. LAB – Setting up the Ceph Object Gateway (RADOSGW).
5. LAB – Integrating Ceph with OpenStack Glance.
6. LAB – Integrating Ceph with OpenStack Cinder.
7. LAB – Integrating Ceph with OpenStack Keystone.
1. LAB – Conventions.
In this document, we will be using the following marking conventions:
• normal text Lab text and notes (proportional font).
• command text Command input (bold highlighted font).
• output text Command output (italic highlighted font).
• {variable} Some input to be entered based on your proceedings.
• […Truncated…] Some output has been truncated.
• […] Some output has been truncated.
2. LAB – Setup.
To work through this lab material, you will need to make sure you have performed the following tasks and have the following environment:
• Download the virtual machine images:
• https://objects.dreamhost.com/ceph-training/0.80.6/c7_daisy.ova
• https://objects.dreamhost.com/ceph-training/0.80.6/c7_bob.ova
• Download the lab helper files for the RADOSGW lab in case you corrupt the pre-loaded copies stored on the daisy virtual machine in $HOME/HelperFiles:
• https://objects.dreamhost.com/ceph-
training/0.80.6/radosgw.conf.httpd.txt
• https://objects.dreamhost.com/ceph-training/0.80.6/radosgw.fcgi.txt
• https://objects.dreamhost.com/ceph-
training/0.80.6/ceph.conf.radosgw.c7.txt
• https://objects.dreamhost.com/ceph-training/0.80.6/s3cfg.txt
• https://objects.dreamhost.com/ceph-training/0.80.6/s3curl.pl.txt
• Make sure you have at least 4GB RAM (8GB recommended).
• Make sure you have at least 30GB disk space available.
To download files to your virtual machine image, you can use the following command lines as a base example:
daisy$ cd $HOME/HelperFiles
daisy$ wget -O [filename] [file_url]
2.1 LAB - Deploying Ceph using ceph-deploy.
This lab exercise is about deploying a Ceph cluster on the following three nodes:
daisy, eric and frank. They’ll all run OSDs (on top of /dev/sdb) and MONs.
Note: VM can be accessed with user ceph and password ceph.
2.2 Create the cluster.
2.2.1 Ceph.conf creation.
Daisy, frank and eric will act as both monitors and OSDs.
On daisy:
daisy$ mkdir $HOME/ceph-deploy
daisy$ cd $HOME/ceph-deploy
daisy$ ceph-deploy new daisy
Creating new cluster named ceph
Resolving host daisy
Monitor daisy at 192.168.122.114
Monitor initial members are ['daisy']
Monitor addrs are ['192.168.122.114']
Creating a random mon key...
Writing initial config to ceph.conf...
Writing monitor keyring to ceph.mon.keyring...
Let’s look at the generated $HOME/ceph-deploy/ceph.conf:
daisy$ cat ceph.conf
[global]
fsid = 66f5ffc0-035b-4c1c-823f-36250e5091b7
mon initial members = daisy
mon host = 192.168.122.114
auth supported = cephx
filestore xattr use omap = true
Before deploying the OSDs, we need to update the configuration file to use some
specific parameters:
• Set the OSD journal size to 1024MB.
• Set the default replication size to 2 for our small test cluster.
• Allow dynamic update of the primary OSD affinity.
• Allow for object copies to reside on the same host.
daisy$ vi $HOME/ceph-deploy/ceph.conf
In the file, add the following lines at the end of the existing file:
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
mon osd allow primary affinity = 1
osd crush chooseleaf type = 0
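Once saved, the complete $HOME/ceph-deploy/ceph.conf should read approximately as follows (your fsid will differ):
[global]
fsid = 66f5ffc0-035b-4c1c-823f-36250e5091b7
mon initial members = daisy
mon host = 192.168.122.114
auth supported = cephx
filestore xattr use omap = true
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
mon osd allow primary affinity = 1
osd crush chooseleaf type = 0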
Note that the earlier ceph-deploy new command also created the Ceph monitor keyring (ceph.mon.keyring), which will be used when deploying the monitors.
2.2.1.1 Deploying monitors.
On daisy:
daisy$ ceph-deploy mon create daisy
Deploying mon, cluster ceph hosts daisy
Deploying mon to daisy
After a few seconds, the monitor should be in quorum. If we run:
daisy$ sudo ceph -s
cluster 90c07d40-8dea-4b06-88a8-fa09c07aaf16
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds;
monmap e1: 1 mons at {daisy=192.168.122.114:6789/0}, election epoch 6, quorum
0 daisy
osdmap e1: 0 osds: 0 up, 0 in pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0
KB used, 0 KB / 0 KB avail
mdsmap e1: 0/0/1 up
As you can see, the single monitor is in quorum, but the cluster is unhealthy since we do not have any OSDs deployed yet.
2.2.2 Deploying OSDs.
Before deploying OSDs, we need to get the bootstrap keys generated by the monitors.
On daisy:
daisy$ ceph-deploy gatherkeys daisy
[ceph_deploy.gatherkeys][DEBUG ] Checking daisy for
/etc/ceph/ceph.client.admin.keyring
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from daisy.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking daisy for /var/lib/ceph/bootstrap-
osd/ceph.keyring
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from daisy.
[ceph_deploy.gatherkeys][DEBUG ] Checking daisy for /var/lib/ceph/bootstrap-
mds/ceph.keyring
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from daisy.
We can have a look at which disks are present in a machine, and how they are partitioned, with:
daisy$ ceph-deploy disk list daisy
[daisy][INFO ] Running command: ceph-disk list
[daisy][INFO ] /dev/sda :
[daisy][INFO ] /dev/sda1 other, ext2, mounted on /boot
[daisy][INFO ] /dev/sda2 other
[daisy][INFO ] /dev/sda5 other, LVM2_member
[daisy][INFO ] /dev/sdb other, unknown
[daisy][INFO ] /dev/sdc other, unknown
[daisy][INFO ] /dev/sdd other, unknown
[daisy][INFO ] /dev/sr0 other, unknown
[daisy][INFO ] /dev/sr1 other, unknown
Make sure your current working directory is $HOME/ceph-deploy.
daisy$ cd $HOME/ceph-deploy
Then use ceph-deploy to deploy the OSDs.
daisy$ ceph-deploy osd create daisy:sdb daisy:sdc daisy:sdd
Preparing cluster ceph disks daisy:/dev/sdb: daisy:/dev/sdc: daisy:/dev/sdd:
Deploying osd to daisy
Host daisy is now ready for osd use.
Preparing host daisy disk /dev/sdb journal None activate True
Deploying osd to daisy
Host daisy is now ready for osd use.
Preparing host daisy disk /dev/sdc journal None activate True
Deploying osd to daisy
Host daisy is now ready for osd use.
Preparing host daisy disk /dev/sdd journal None activate True
2.3 Checking the cluster health.
daisy$ sudo ceph -s
health HEALTH_OK
monmap e1: 1 mons at {daisy=192.168.122.114:6789/0}, election epoch 10, quorum
0 daisy
osdmap e13: 3 osds: 3 up, 3 in
pgmap v59: 192 pgs: 192 active+clean; 0 bytes data, 103 MB used, 21367 MB /
21470 MB avail
mdsmap e1: 0/0/1 up
At this point, the Ceph cluster is:
• In good health (HEALTH_OK).
• At MON election epoch 10.
• Running one MON in quorum (rank 0).
• At OSD map epoch 13.
• Running 3 OSDs, all of them UP and IN.
2.4 Monitoring Cluster Events.
There is a simple way to check on the events that take place in the life of the Ceph cluster by using the ceph -w command.
This command, as illustrated below, first displays the health of the cluster and then prints a new line each time an event occurs in the cluster.
daisy$ sudo ceph -w
cluster 2e5f14a2-a374-463b-82eb-58227e179591
health HEALTH_WARN 25 pgs peering
[…Truncated…]
mdsmap e1: 0/0/1 up
2014-01-09 08:32:07.201445 mon.0 [WRN] message from mon.2 was stamped 1.179327s
in the future, clocks not synchronized
2014-01-09 08:32:38.542240 mon.0 [INF] mon.daisy calling new monitor election
2014-01-09 08:32:38.544043 mon.0 [INF] mon.daisy@0 won leader election with
quorum 0,1,2
2014-01-09 08:32:38.548805 mon.0 [WRN] mon.2 192.168.122.116:6789/0 clock skew
1.32637s > max 1s
2014-01-09 08:32:38.556024 mon.0 [INF] pgmap v1926: 520 pgs: 495 active+clean,
25 peering; 80694 KB data, 564 MB used, 82280 MB / 82844 MB avail
2014-01-09 08:32:38.556078 mon.0 [INF] mdsmap e1: 0/0/1 up
2014-01-09 08:32:38.556136 mon.0 [INF] osdmap e319: 9 osds: 9 up, 9 in
2014-01-09 08:32:38.556239 mon.0 [INF] monmap e1: 3 mons at
{daisy=192.168.122.114:6789/0,eric=192.168.122.115:6789/0,frank=192.168.122.116:
6789/0}
2014-01-09 08:32:39.861357 mon.1 [INF] mon.eric calling new monitor election
2014-01-09 08:32:38.556420 mon.0 [WRN] mon.1 192.168.122.115:6789/0 clock skew
1.31247s > max 1s
2014-01-09 08:33:13.713197 mon.0 [INF] mon.daisy calling new monitor election
2014-01-09 08:33:13.715200 mon.0 [INF] mon.daisy@0 won leader election with
quorum 0,1,2
2014-01-09 08:33:13.717671 mon.0 [WRN] mon.1 192.168.122.115:6789/0 clock skew
1.50833s > max 1s
2014-01-09 08:33:13.725304 mon.0 [INF] pgmap v1926: 520 pgs: 495 active+clean,
25 peering; 80694 KB data, 564 MB used, 82280 MB / 82844 MB avail
[output truncated]
2.5 Ceph basic maintenance operation.
Each Ceph node runs a certain number of daemons you can interact with. To do so, the following commands and syntax are available.
2.5.1 Starting and Stopping the OSDs.
You can start, stop or recycle the Ceph OSD daemons on the host you are connected to:
• sudo /etc/init.d/ceph stop osd Will stop the OSD daemons.
• sudo /etc/init.d/ceph start osd Will start the OSD daemons.
• sudo /etc/init.d/ceph restart osd Will recycle the OSD daemons.
2.5.1.1 Using the commands.
On daisy, perform the following operations:
daisy$ ps -ef | grep ceph-osd
Can you see the OSD daemons running as processes? …............... (Y/N)
How many OSD daemons are running as processes? …............... [1]
daisy$ sudo [use your platform STOP command]
daisy$ ps -ef | grep ceph-osd
Can you see the OSD daemons running as processes? …............... (Y/N)
daisy$ sudo ceph -s
cluster 2e5f14a2-a374-463b-82eb-58227e179591
[…Truncated…]
osdmap e352: 3 osds: 2 up, 3 in
[…Truncated…]
How many OSDs are participating in the cluster? …............... [3]
How many OSDs are UP in the cluster? …............... [2]
How many OSDs are DOWN in the cluster? …............... [1]
To find out about which OSDs are up or down, you can use the ceph osd tree
command.
daisy$ sudo ceph osd tree
#id weight type name up/down reweight
-1 0.03 root default
-2 0.03 host daisy
0 0.009995 osd.0 down 1
1 0.009995 osd.1 up 1
2 0.009995 osd.2 up 1
daisy$ sudo [use your platform START command]
daisy$ sudo ceph osd tree
#id weight type name up/down reweight
-1 0.03 root default
-2 0.03 host daisy
0 0.009995 osd.0 up 1
1 0.009995 osd.1 up 1
2 0.009995 osd.2 up 1
To act on a particular OSD daemon, you can input the following command.
daisy$ sudo /etc/init.d/ceph [stop|start|restart] osd.{id}
2.5.1.2 Using the commands.
Let’s try this out.
daisy$ sudo {use_your_platform_STOP_command_for_osd_id_0}
daisy$ sudo ceph osd tree
#id weight type name up/down reweight
-1 0.03 root default
-2 0.03 host daisy
0 0.009995 osd.0 down 1
1 0.009995 osd.1 up 1
2 0.009995 osd.2 up 1
Can you see the OSD daemon with ID=0 down? …............... (Y/N)
daisy$ ps -ef | grep ceph-osd
How many OSD daemons are running as processes? …............... [0]
daisy$ sudo {use_your_platform_START_command_for_osd_id_0}
Check all OSD daemons are now up and running across the cluster.
daisy$ sudo ceph osd tree
#id weight type name up/down reweight
-1 0.03 root default
-2 0.03 host daisy
0 0.009995 osd.0 up 1
1 0.009995 osd.1 up 1
2 0.009995 osd.2 up 1
daisy$ sudo ceph -s
cluster 2e5f14a2-a374-463b-82eb-58227e179591
health HEALTH_OK
[…Truncated…]
osdmap e364: 3 osds: 3 up, 3 in
pgmap v2059: 520 pgs: 520 active+clean; 80694 KB data, 536 MB used, 82308 MB
/ 82844 MB avail
mdsmap e1: 0/0/1 up
2.5.2 Starting and Stopping all Ceph daemons.
You can start, stop or recycle all Ceph daemons on the host you are connected to:
• sudo /etc/init.d/ceph stop Will stop all Ceph daemons.
• sudo /etc/init.d/ceph start Will start all Ceph daemons.
• sudo /etc/init.d/ceph restart Will recycle all Ceph daemons.
2.5.3 Checking the installed Ceph version on a host.
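A simple way to check, shown here on daisy, is to run the ceph binary with the -v option:
daisy$ ceph -v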
ceph version 0.80.6 (f93610a4421cb670b08e974c6550ee715ac528ae)
2.6 Using RADOS.
On daisy: Create a 10MB file:
daisy$ sudo dd if=/dev/zero of=/tmp/test bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied
daisy$ sudo rados -p data put test /tmp/test
daisy$ sudo rados df
Write down the number of objects used on data: …...............
Write down the amount of bytes used on data: …...............
N.B: The number of objects for the pool named data is the 4th column of the display, and the amount of data, expressed in kilobytes, is the 3rd column.
pool name category KB objects clones degraded …
[output truncated]
data - 0 0 0 0
[output truncated]
daisy$ sudo rados -p data put test1 /tmp/test
daisy$ sudo rados -p data put test2 /tmp/test
daisy$ sudo rados df
Write down the number of objects used on data: …...............
Write down the amount of bytes used on data: …...............
What can you say about the difference in the figures you wrote down first and
second?
................................................................................................................................................
You can also use the ceph df command to perform these checks:
daisy$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
82844M 82464M 380M 0.46
POOLS:
NAME ID USED %USED OBJECTS
data 0 0 0 0
metadata 1 0 0 0
rbd 2 8 0 1
[output truncated]
Now we need to clean up the environment after our tests:
daisy$ sudo rados -p data rm test
daisy$ sudo rados -p data rm test1
daisy$ sudo rados -p data rm test2
Lab 2 - This is the end of this lab.
3. LAB – Creating a Ceph RBD client.
The Ceph cluster that is currently running within your virtual machines is built out of three nodes: daisy, eric and frank. All cluster nodes are running OSDs and MONs, hence we can use this Ceph cluster as a target for RBD access. And of course, you can also upload data into the object store directly.
In our lab environment, Ceph packages are pre-installed on each VM as we cannot assume Internet connectivity on the network we will be using during our trainings.
In your environment, you will need to make sure the Ceph packages are installed on the machine you wish to use as an RBD client (for example apt-get update && apt-get install ceph-common on Debian-based systems; see the note below for the CentOS equivalent). Make sure you use the official Ceph repositories to do so.
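As an illustration only, on a Red Hat based client such as the CentOS 7 lab images (c7), the equivalent would be something along these lines, assuming the official Ceph yum repository is already configured:
sudo yum install -y ceph-common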
In a production environment, you should never use the RBD kernel module on a host
running an OSD daemon.
3.1 RADOS Block Device (RBD).
On daisy, first, you’ll have to create client credentials for the RBD client:
daisy$ sudo ceph auth get-or-create client.rbd.daisy \
osd 'allow rwx pool=rbd' mon 'allow r' \
-o /etc/ceph/ceph.client.rbd.daisy.keyring
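If desired, you can check that the client and its capabilities were created as expected:
daisy$ sudo ceph auth get client.rbd.daisy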
Create an RBD image in Ceph named test and 128MB large:
daisy$ sudo rbd create test --size 128
Check the RBD image has been successfully created with the following command:
daisy$ sudo rbd info test
rbd image 'test':
size 128 MB in 32 objects
order 22 (4096 KB objects)
block_name_prefix: rb.0.239e.238e1f29
format: 1
On daisy, make sure that the RBD kernel driver is loaded:
daisy$ sudo modprobe rbd
Map the image on your local server:
daisy$ sudo rbd --id rbd.daisy map test
Get a list of all mapped RBD images like this:
daisy$ rbd --id rbd.daisy showmapped
id pool image snap device
0 rbd test - /dev/rbd0
Finally, create a File System on the RBD and mount it just like you would do for a
regular disk device:
daisy$ sudo mkfs.ext4 /dev/rbd0
daisy$ sudo mkdir /mnt/rbd
daisy$ sudo mount /dev/rbd0 /mnt/rbd
3.2 Storing data in Ceph.
Create a 10MB file:
daisy$ sudo dd if=/dev/zero of=/tmp/test bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied
3.2.1 Use the mounted File System.
daisy$ df
Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............
daisy$ ls /mnt/rbd
Write down the number of files and directories present: …............... …...............
daisy$ sudo cp /tmp/test /mnt/rbd/test1
daisy$ df
Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............
daisy$ ls /mnt/rbd
Write down the number of files and directories present: …............... …...............
daisy$ sudo rados --id rbd.daisy df
Write down the number of objects and bytes used for rbd: …............... …...............
Repeat the 3.2.1 sequence of operations, but replace sudo cp /tmp/test /mnt/rbd/test1 with sudo cp /tmp/test /mnt/rbd/test2 and use the second column of this document to write down a second set of values. Then proceed to section 3.2.2.
3.2.2 Analyzing figures
What can you observe for the system df commands?
................................................................................................................................................
What can you observe for the ceph df commands?
................................................................................................................................................
3.2.3 Use RADOS.
Upload an object into RADOS:
daisy$ sudo rados --id rbd.daisy -p rbd put test /tmp/test
daisy$ df
Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............
daisy$ ls /mnt/rbd
Write down the number of files and directories present: …............... …...............
How do you explain that the number of bytes used on /mnt/rbd is not changing?
................................................................................................................................................
daisy$ sudo rados --id rbd.daisy df
Write down the number of objects and bytes used for rbd: …............... …...............
Repeat the 3.2.3 sequence of operations but replace sudo rados --id rbd.daisy -p rbd
put test /tmp/test with sudo rados --id rbd.daisy -p rbd put test1 /tmp/test and
use the second column of this document to write down a second set of values.
Is the number of objects and the number of bytes used in the pool changing (Y or N)?
................................................................................................................................................
N.B: It may be necessary to repeat the sequence of commands more than once to see
a significant difference.
Request the stats for this file:
daisy$ sudo rados --id rbd.daisy -p rbd stat test
rbd/test mtime 1348960511, size 10485760
3.2.4 Checking cephx in action.
Issue the following RADOS command:
daisy$ sudo rados --id rbd.daisy -p data put test /tmp/test
What message do you obtain? Why do you receive this message?
................................................................................................................................................
3.3 Cleanup.
Unmount drives:
daisy$ cd $HOME
daisy$ sudo umount /mnt/rbd
daisy$ sudo rbd --id rbd.daisy unmap /dev/rbd0
On daisy:
Remove the data you stored from the default RADOS pool:
daisy$ sudo rados -p rbd rm test
daisy$ sudo rados -p rbd rm test1
daisy$ sudo rbd rm test
daisy$ sudo ceph df
Lab 3 - This is the end of this lab.
4. LAB – Setting up the Ceph Object Gateway (RADOSGW).
4.1 Update /etc/ceph/ceph.conf for RADOSGW.
On daisy:
Open /etc/ceph/ceph.conf and add the following user entry for radosgw:
[client.radosgw.daisy]
host = daisy
rgw socket path = /var/run/ceph/radosgw.daisy.fastcgi.sock
keyring = /etc/ceph/keyring.radosgw.daisy
rgw print continue = false
rgw dns name = daisy
nss db path = /var/ceph/nss
N.B: Use $HOME/HelperFiles/ceph.conf.radosgw.c7.txt as a template to make it easier
and avoid typos.
4.2 Create the RADOSGW Ceph client.
Create a keyring for the radosgw.daisy user:
daisy$ sudo ceph auth get-or-create client.radosgw.daisy osd 'allow rwx' \
mon 'allow rwx' -o /etc/ceph/keyring.radosgw.daisy
4.3 Create the RADOSGW HTTP access point.
Then, create /etc/httpd/conf.d/radosgw.conf by copying the template file located in
the $HOME/HelperFiles folder.
N.B: Use $HOME/HelperFiles/radosgw.conf.httpd.txt as a template to make it easier and avoid typos.
4.4 Create the RADOSGW FastCGI wrapper script.
Create the radosgw.fcgi script in /var/www/html and add these lines:
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.daisy
N.B: Use $HOME/HelperFiles/radosgw.fcgi.txt as a template to make it easier and
avoid typos.
Then save the file and close it. Make it executable:
daisy$ sudo chmod +x /var/www/html/radosgw.fcgi
Make sure that all folders have appropriate permissions:
daisy$ $HOME/HelperFiles/setperms.sh
Reload systemd and start the Apache and RADOS Gateway services:
daisy$ sudo systemctl daemon-reload
daisy$ sudo systemctl start httpd
daisy$ sudo systemctl start ceph-radosgw
If desired, you can start the Apache daemon at boot time by entering this command:
daisy$ sudo chkconfig httpd on
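On CentOS 7 the native systemd equivalent also works:
daisy$ sudo systemctl enable httpd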
4.5 Create the RADOSGW Region Map.
Create the default region map:
daisy$ sudo radosgw-admin regionmap update
4.6 Create a RADOSGW S3 user.
Add a user to the radosgw:
daisy$ sudo radosgw-admin \
-n client.radosgw.daisy \
user create \
--uid=johndoe \
--display-name="John Doe" \
--email=john@example.com \
--access-key=12345 \
--secret=67890
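If desired, you can verify the user and review its access keys with:
daisy$ sudo radosgw-admin -n client.radosgw.daisy user info --uid=johndoe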
4.7 Verify S3 access through RADOSGW.
You can now access the radosgw via the S3 API.
4.7.1 RADOSGW with s3cmd.
This lab exercise will let you interact with the RADOS Gateway you have just configured using another available tool, s3cmd.
On daisy:
If your home directory does not contain the $HOME/.s3cfg file, check for it in the $HOME/HelperFiles folder or ask your instructor for a copy of it.
daisy$ mv $HOME/HelperFiles/s3cfg.txt $HOME/.s3cfg
And check that we can access the S3 “cloud” by listing the existing buckets.
daisy$ s3cmd ls
Create a bucket:
daisy$ s3cmd mb s3://bucket1
daisy$ s3cmd ls
Now, create a test file that we shall upload:
daisy$ sudo dd if=/dev/zero of=/tmp/10MB.bin bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied
Then, upload the file through the RADOS Gateway
daisy$ s3cmd put --acl-public /tmp/10MB.bin s3://bucket1/10MB.bin
And finally, verify we can access the file in the cloud
daisy$ wget -O /dev/null http://bucket1.daisy/10MB.bin
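If you prefer to stay within s3cmd, you can also list the bucket contents and fetch the object back as an additional check (the local file name /tmp/10MB.check is just an example):
daisy$ s3cmd ls s3://bucket1
daisy$ s3cmd get s3://bucket1/10MB.bin /tmp/10MB.check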
4.7.2 RADOSGW with s3curl and RADOSGW admin API.
A radosgw admin will have special privileges to access users, buckets and usage
information through the RadosGW Admin API.
daisy$ sudo radosgw-admin user create --uid=admin \
--display-name="Admin user" \
--caps="users=read, write; usage=read, write; buckets=read, write; zone=read, write" \
--access-key=abcde --secret=qwerty
If your home directory does not contain the $HOME/s3curl.pl file, check for it in the $HOME/HelperFiles folder or ask your instructor for a copy of it.
daisy$ mv $HOME/HelperFiles/s3curl.pl.txt $HOME/s3curl.pl
daisy$ chmod +x $HOME/s3curl.pl
Then create a ~/.s3curl file on daisy with the following:
%awsSecretAccessKeys = (
admin => {
id => 'abcde',
key => 'qwerty',
},
);
Then restrict the permissions on the file:
daisy$ chmod 400 ~/.s3curl
Finally, you will need to modify the s3curl.pl script so that 'daisy' is included in the @endpoints list.
List all the buckets of a user:
daisy$ ./s3curl.pl --id=admin \
-- 'http://daisy/admin/bucket?uid=johndoe'
["bucket1"]
You can have a full description of the Admin API at this address:
http://ceph.com/docs/master/radosgw/adminops/
Lab 4 - This is the end of this lab.
5. LAB – Integrating Ceph with OpenStack Glance.
Ceph can easily be integrated with Glance, OpenStack’s Image Service. Glance has a native backend to talk to RBD; the following steps enable it.
5.1 Ceph Configuration.
Add daisy's public key to bob's authorized keys:
daisy$ ssh-copy-id bob
Start by adding an images pool to Ceph:
daisy$ sudo ceph osd pool create images 128
Then, add a user to Ceph called client.images:
daisy$ sudo ceph auth get-or-create client.images mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
-o /etc/ceph/ceph.client.images.keyring
Copy the keyring to bob:
daisy$ cat /etc/ceph/ceph.client.images.keyring | ssh bob \
"sudo tee /etc/ceph/ceph.client.images.keyring"
From bob:
bob$ sudo chgrp glance /etc/ceph/ceph.client.images.keyring
bob$ sudo chmod 0640 /etc/ceph/ceph.client.images.keyring
Copy /etc/ceph/ceph.conf to bob:
daisy$ cat /etc/ceph/ceph.conf | ssh bob "sudo tee /etc/ceph/ceph.conf"
On bob, edit /etc/ceph/ceph.conf and add:
[client.images]
keyring = /etc/ceph/ceph.client.images.keyring
5.2 Glance Configuration.
Adapt /etc/glance/glance-api.conf to make Glance use Ceph as its image store.
Locate line:
default_store = file
And adapt to read:
default_store = rbd
Search for RBD Store Options.
Uncomment the following line:
#rbd_store_ceph_conf=/etc/ceph/ceph.conf
Adapt the following line:
rbd_store_user = <None>
With:
rbd_store_user = images
Uncomment the following line
#rbd_store_pool = images
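After these edits, the RBD-related settings in /etc/glance/glance-api.conf should read roughly as follows (only the lines touched above are shown):
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images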
Restart the glance-api service:
bob$ sudo service openstack-glance-api restart
5.3 Verify Integration.
Load the keystone environment and upload a test image:
bob$ source /home/ceph/openstack.env
bob$ glance image-create --name="Cirros 0.3.2" --disk-format=raw \
--container-format=bare \
</home/ceph/cirros-0.3.2-x86_64-disk.img
From daisy, check if the image has been created:
daisy$ sudo rbd -p images ls
c8e400b-77f0-41ff-8ec4-26eaad77957d
daisy$ sudo rbd -p images info $(sudo rbd -p images ls)
size 255 bytes in 1 objects
order 23 (8192 KB objects)
block_name_prefix: rbd_data.12e64b364f03
format: 2
features: layering
5.4 Cleanup.
We shall delete the image we created; deleting the image in Glance will trigger the deletion of the RBD image in Ceph.
On bob:
bob$ glance image-delete {image_unique_id}
bob$ glance image-list
On daisy, check that the image has been deleted:
daisy$ sudo rbd -p images ls
Lab 5 - This is the end of this lab.
6. LAB – Integrating Ceph with OpenStack Cinder.
OpenStack’s volume service, Cinder, can access Ceph RBD images directly and use
them as backing devices for the volumes it exports. To make this work, only a few
configuration changes are required. This lab explains what needs to be done.
6.1 Ceph Configuration.
On daisy:
Start by adding a volume pool to Ceph:
daisy$ sudo ceph osd pool create volumes 128
Then, add a user to Ceph called client.volumes:
daisy$ sudo ceph auth get-or-create client.volumes mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' \
-o /etc/ceph/ceph.client.volumes.keyring
Copy the keyring to bob:
daisy$ cat /etc/ceph/ceph.client.volumes.keyring | ssh bob \
"sudo tee /etc/ceph/ceph.client.volumes.keyring"
daisy$ sudo ceph auth get-key client.volumes | ssh bob tee \
client.volumes.key
From bob:
bob$ sudo chgrp cinder /etc/ceph/ceph.client.volumes.keyring
bob$ sudo chmod 0640 /etc/ceph/ceph.client.volumes.keyring
On bob, edit /etc/ceph/ceph.conf and add:
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring
6.2 Cinder Configuration.
On bob:
Generate a UUID that we will need for the Ceph integration with Libvirt (which Cinder uses to attach block devices to VMs):
bob$ uuidgen | tee $HOME/myuuid.txt
{Your Personal UUID Is Displayed}
Then, create a file called ceph.xml with the following contents:
<secret ephemeral="no" private="no">
<uuid>{Type In Your UUID}</uuid>
<usage type="ceph">
<name>client.volumes secret</name>
</usage>
</secret>
bob$ sudo virsh secret-define --file ceph.xml
Secret {Your UUID Displayed Here} created
bob$ sudo virsh secret-set-value --secret {Type In Your UUID} \
--base64 $(cat client.volumes.key) \
&& rm client.volumes.key ceph.xml
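If you want to confirm that Libvirt has registered the secret, you can list the defined secrets:
bob$ sudo virsh secret-list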
On bob, open /etc/cinder/cinder.conf and add these lines under the [DEFAULT]
section:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
glance_api_version=2
In the /etc/cinder/cinder.conf file locate the lines below and modify the parameters to
match the values of our lab environment:
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
rbd_user=volumes
rbd_secret_uuid={Type In Your UUID}
bob$ sudo service openstack-cinder-api restart
bob$ sudo service openstack-cinder-volume restart
Additional documentation for configuring NOVA compute nodes can be found at:
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova
6.3 Verify Integration.
On bob:
Create a Cinder volume:
bob$ source openstack.env
bob$ cinder create --display_name="test" 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2013-07-17T04:08:25.217224 |
| display_description | None |
| display_name | test |
| id | 001a6a69-4276-4608-908e-bb991a2a51e0 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
Verify that the Cinder volume has been created:
bob$ cinder list
If the creation of the Cinder volume was a success, its status should be: available
On daisy:
Verify that the RBD image has been created:
daisy$ sudo rbd -p volumes ls
{A Unique Volume ID is Displayed here}
daisy$ sudo rbd -p volumes info $(sudo rbd -p volumes ls)
rbd image 'volume-998c8370-1bd7-4425-b246-b3d405a07f01':
size 1024 MB in 256 objects
order 22 (4096 KB objects)
block_name_prefix: rbd_data.13aa2ae8944a
format: 2
features: layering, striping
stripe unit: 4096 KB
stripe count: 1
6.4 Cleanup.
We shall delete the volume we created; deleting the volume in Cinder will trigger the deletion of the RBD image in Ceph.
On bob:
bob$ cinder delete {volume_unique_id}
bob$ cinder list
On daisy, check that the RBD image has been deleted:
daisy$ sudo rbd -p volumes ls
If successful, delete the files containing sensitive information:
bob$ sudo rm client.volumes.key
bob$ sudo rm ceph.xml
bob$ sudo rm myuuid.txt
Lab 6 - This is the end of this lab.
7. LAB – Integrating Ceph with OpenStack Keystone.
The Ceph RadosGW can be integrated with OpenStack Keystone to authenticate users against Keystone rather than creating them within the radosgw.
On daisy, open /etc/ceph/ceph.conf and extend the [client.radosgw.daisy] entry created earlier so that it reads as follows:
[client.radosgw.daisy]
host = daisy
rgw socket path = /var/run/ceph/radosgw.daisy.fastcgi.sock
keyring = /etc/ceph/keyring.radosgw.daisy
rgw log file = /var/log/ceph/radosgw.log
rgw print continue = false
rgw dns name = daisy
nss db path = /var/ceph/nss
rgw keystone url = http://bob:35357
rgw keystone admin token = ADMIN
rgw keystone accepted role = admin
Restart the radosgw:
daisy$ sudo service ceph-radosgw restart
Then from daisy, try to access the radosgw with the admin user from OpenStack:
daisy:~$ swift -v -V 2.0 -A http://bob:5000/v2.0/ \
-U admin:admin -K admin stat
StorageURL: http://daisy/swift/v1
Auth Token:
MIIJTwYJKoZIhvcNAQcCoIIJQDCCCTwCAQExCTAHBgUrDgMCGjCCB6UGCSqGSIb3DQEHAaCCB5YEggeS
eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0x[…Output Truncated…]
MXg+gwaln9iaFG2Kgw2zPUFVArNlHIrBVLUwvnBd2aZ0IzygkdyB01HxgDOyftr2PGdmaZ5h819kik0S
w3r7e7kLiybScUt5lFZa6YKgzaFkhwigp+C32oxFBqoBBRxcyxyF+WA25T1oISRcMvUzutb3CTlA-
oFEQ5aI+JWGVQoKyyIKaDrxUONBsY8QV4=
Account: v1
Containers: 1
Objects: 0
Bytes: 0
Vary: Accept-Encoding
Server: Apache/2.2.22 (Ubuntu)
X-Account-Bytes-Used-Actual: 0
Content-Type: text/plain; charset=utf-8
Then from bob, try to access the radosgw using a regular OpenStack environment:
bob:~$ source openstack.env
bob:~$ swift -v -V 2.0 -A http://bob:5000/v2.0/ stat
StorageURL: http://daisy/swift/v1
Auth Token:
MIIJTwYJKoZIhvcNAQcCoIIJQDCCCTwCAQExCTAHBgUrDgMCGjCCB6UGCSqGSIb3DQEHAaCCB5YEggeS
eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0x[…Output Truncated…]
MXg+gwaln9iaFG2Kgw2zPUFVArNlHIrBVLUwvnBd2aZ0IzygkdyB01HxgDOyftr2PGdmaZ5h819kik0S
w3r7e7kLiybScUt5lFZa6YKgzaFkhwigp+C32oxFBqoBBRxcyxyF+WA25T1oISRcMvUzutb3CTlA-
oFEQ5aI+JWGVQoKyyIKaDrxUONBsY8QV4=
Account: v1
Containers: 1
Objects: 0
Bytes: 0
Vary: Accept-Encoding
Server: Apache/2.2.22 (Ubuntu)
X-Account-Bytes-Used-Actual: 0
Content-Type: text/plain; charset=utf-8
Lab 7 - This is the end of this lab.
29

More Related Content

What's hot

Advanced Replication
Advanced ReplicationAdvanced Replication
Advanced ReplicationMongoDB
 
Cassandra Day SV 2014: Basic Operations with Apache Cassandra
Cassandra Day SV 2014: Basic Operations with Apache CassandraCassandra Day SV 2014: Basic Operations with Apache Cassandra
Cassandra Day SV 2014: Basic Operations with Apache CassandraDataStax Academy
 
Vmlinux: anatomy of bzimage and how x86 64 processor is booted
Vmlinux: anatomy of bzimage and how x86 64 processor is bootedVmlinux: anatomy of bzimage and how x86 64 processor is booted
Vmlinux: anatomy of bzimage and how x86 64 processor is bootedAdrian Huang
 
Decompressed vmlinux: linux kernel initialization from page table configurati...
Decompressed vmlinux: linux kernel initialization from page table configurati...Decompressed vmlinux: linux kernel initialization from page table configurati...
Decompressed vmlinux: linux kernel initialization from page table configurati...Adrian Huang
 
DTrace talk at Oracle Open World
DTrace talk at Oracle Open WorldDTrace talk at Oracle Open World
DTrace talk at Oracle Open WorldAngelo Rajadurai
 
SSD based storage tuning for databases
SSD based storage tuning for databasesSSD based storage tuning for databases
SSD based storage tuning for databasesAngelo Rajadurai
 
AMS Node Meetup December presentation Phusion Passenger
AMS Node Meetup December presentation Phusion PassengerAMS Node Meetup December presentation Phusion Passenger
AMS Node Meetup December presentation Phusion Passengericemobile
 
How to create a multi tenancy for an interactive data analysis with jupyter h...
How to create a multi tenancy for an interactive data analysis with jupyter h...How to create a multi tenancy for an interactive data analysis with jupyter h...
How to create a multi tenancy for an interactive data analysis with jupyter h...Tiago Simões
 
Process Address Space: The way to create virtual address (page table) of user...
Process Address Space: The way to create virtual address (page table) of user...Process Address Space: The way to create virtual address (page table) of user...
Process Address Space: The way to create virtual address (page table) of user...Adrian Huang
 
Setting up a HADOOP 2.2 cluster on CentOS 6
Setting up a HADOOP 2.2 cluster on CentOS 6Setting up a HADOOP 2.2 cluster on CentOS 6
Setting up a HADOOP 2.2 cluster on CentOS 6Manish Chopra
 
Install nagios
Install nagiosInstall nagios
Install nagioshassandb
 
Install tomcat 5.5 in debian os and deploy war file
Install tomcat 5.5 in debian os and deploy war fileInstall tomcat 5.5 in debian os and deploy war file
Install tomcat 5.5 in debian os and deploy war fileNguyen Cao Hung
 
How to create a secured multi tenancy for clustered ML with JupyterHub
How to create a secured multi tenancy for clustered ML with JupyterHubHow to create a secured multi tenancy for clustered ML with JupyterHub
How to create a secured multi tenancy for clustered ML with JupyterHubTiago Simões
 
How to go the extra mile on monitoring
How to go the extra mile on monitoringHow to go the extra mile on monitoring
How to go the extra mile on monitoringTiago Simões
 
How to configure a hive high availability connection with zeppelin
How to configure a hive high availability connection with zeppelinHow to configure a hive high availability connection with zeppelin
How to configure a hive high availability connection with zeppelinTiago Simões
 

What's hot (18)

Advanced Replication
Advanced ReplicationAdvanced Replication
Advanced Replication
 
Cassandra Day SV 2014: Basic Operations with Apache Cassandra
Cassandra Day SV 2014: Basic Operations with Apache CassandraCassandra Day SV 2014: Basic Operations with Apache Cassandra
Cassandra Day SV 2014: Basic Operations with Apache Cassandra
 
Vmlinux: anatomy of bzimage and how x86 64 processor is booted
Vmlinux: anatomy of bzimage and how x86 64 processor is bootedVmlinux: anatomy of bzimage and how x86 64 processor is booted
Vmlinux: anatomy of bzimage and how x86 64 processor is booted
 
Ceph issue 해결 사례
Ceph issue 해결 사례Ceph issue 해결 사례
Ceph issue 해결 사례
 
Multipath
MultipathMultipath
Multipath
 
Decompressed vmlinux: linux kernel initialization from page table configurati...
Decompressed vmlinux: linux kernel initialization from page table configurati...Decompressed vmlinux: linux kernel initialization from page table configurati...
Decompressed vmlinux: linux kernel initialization from page table configurati...
 
DTrace talk at Oracle Open World
DTrace talk at Oracle Open WorldDTrace talk at Oracle Open World
DTrace talk at Oracle Open World
 
SSD based storage tuning for databases
SSD based storage tuning for databasesSSD based storage tuning for databases
SSD based storage tuning for databases
 
AMS Node Meetup December presentation Phusion Passenger
AMS Node Meetup December presentation Phusion PassengerAMS Node Meetup December presentation Phusion Passenger
AMS Node Meetup December presentation Phusion Passenger
 
How to create a multi tenancy for an interactive data analysis with jupyter h...
How to create a multi tenancy for an interactive data analysis with jupyter h...How to create a multi tenancy for an interactive data analysis with jupyter h...
How to create a multi tenancy for an interactive data analysis with jupyter h...
 
Process Address Space: The way to create virtual address (page table) of user...
Process Address Space: The way to create virtual address (page table) of user...Process Address Space: The way to create virtual address (page table) of user...
Process Address Space: The way to create virtual address (page table) of user...
 
Network Manual
Network ManualNetwork Manual
Network Manual
 
Setting up a HADOOP 2.2 cluster on CentOS 6
Setting up a HADOOP 2.2 cluster on CentOS 6Setting up a HADOOP 2.2 cluster on CentOS 6
Setting up a HADOOP 2.2 cluster on CentOS 6
 
Install nagios
Install nagiosInstall nagios
Install nagios
 
Install tomcat 5.5 in debian os and deploy war file
Install tomcat 5.5 in debian os and deploy war fileInstall tomcat 5.5 in debian os and deploy war file
Install tomcat 5.5 in debian os and deploy war file
 
How to create a secured multi tenancy for clustered ML with JupyterHub
How to create a secured multi tenancy for clustered ML with JupyterHubHow to create a secured multi tenancy for clustered ML with JupyterHub
How to create a secured multi tenancy for clustered ML with JupyterHub
 
How to go the extra mile on monitoring
How to go the extra mile on monitoringHow to go the extra mile on monitoring
How to go the extra mile on monitoring
 
How to configure a hive high availability connection with zeppelin
How to configure a hive high availability connection with zeppelinHow to configure a hive high availability connection with zeppelin
How to configure a hive high availability connection with zeppelin
 

Similar to Ceph_And_OpenStack_Red_Hat_Summit_2015_Boston_20150606

Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)Simon Boulet
 
MINCS - containers in the shell script (Eng. ver.)
MINCS - containers in the shell script (Eng. ver.)MINCS - containers in the shell script (Eng. ver.)
MINCS - containers in the shell script (Eng. ver.)Masami Hiramatsu
 
Check the version with fixes. Link in description
Check the version with fixes. Link in descriptionCheck the version with fixes. Link in description
Check the version with fixes. Link in descriptionPrzemyslaw Koltermann
 
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"OpenStack Korea Community
 
Mirroring the root_disk under solaris SVM
Mirroring the root_disk under solaris SVMMirroring the root_disk under solaris SVM
Mirroring the root_disk under solaris SVMKazimal Abed Mohammed
 
The Secrets of The FullStack Ninja - Part A - Session I
The Secrets of The FullStack Ninja - Part A - Session IThe Secrets of The FullStack Ninja - Part A - Session I
The Secrets of The FullStack Ninja - Part A - Session IOded Sagir
 
Architecting cloud
Architecting cloudArchitecting cloud
Architecting cloudTahsin Hasan
 
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docx
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docxGoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docx
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docxtricantino1973
 
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloud
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloudOpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloud
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloudNetcetera
 
Hostvn ceph in production v1.1 dungtq
Hostvn   ceph in production v1.1 dungtqHostvn   ceph in production v1.1 dungtq
Hostvn ceph in production v1.1 dungtqViet Stack
 
introduction-infra-as-a-code using terraform
introduction-infra-as-a-code using terraformintroduction-infra-as-a-code using terraform
introduction-infra-as-a-code using terraformniyof97
 
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLESQuick-and-Easy Deployment of a Ceph Storage Cluster with SLES
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLESJan Kalcic
 
Capifony. Minsk PHP MeetUp #11
Capifony. Minsk PHP MeetUp #11Capifony. Minsk PHP MeetUp #11
Capifony. Minsk PHP MeetUp #11Yury Pliashkou
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawnGábor Nyers
 
Install nagios
Install nagiosInstall nagios
Install nagioshassandb
 
Install nagios
Install nagiosInstall nagios
Install nagioshassandb
 

Similar to Ceph_And_OpenStack_Red_Hat_Summit_2015_Boston_20150606 (20)

Tinydns and dnscache
Tinydns and dnscacheTinydns and dnscache
Tinydns and dnscache
 
Dev ops
Dev opsDev ops
Dev ops
 
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro)
 
MINCS - containers in the shell script (Eng. ver.)
MINCS - containers in the shell script (Eng. ver.)MINCS - containers in the shell script (Eng. ver.)
MINCS - containers in the shell script (Eng. ver.)
 
Check the version with fixes. Link in description
Check the version with fixes. Link in descriptionCheck the version with fixes. Link in description
Check the version with fixes. Link in description
 
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"
[OpenInfra Days Korea 2018] Day 1 - T4-7: "Ceph 스토리지, PaaS로 서비스 운영하기"
 
Mirroring the root_disk under solaris SVM
Mirroring the root_disk under solaris SVMMirroring the root_disk under solaris SVM
Mirroring the root_disk under solaris SVM
 
The Secrets of The FullStack Ninja - Part A - Session I
The Secrets of The FullStack Ninja - Part A - Session IThe Secrets of The FullStack Ninja - Part A - Session I
The Secrets of The FullStack Ninja - Part A - Session I
 
Architecting cloud
Architecting cloudArchitecting cloud
Architecting cloud
 
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docx
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docxGoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docx
GoldenGate-12c-Advanced-Workshop-Lab-Exercise-1.docx
 
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloud
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloudOpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloud
OpenCloudDay 2014: Deploying trusted developer sandboxes in Amazon's cloud
 
Hostvn ceph in production v1.1 dungtq
Hostvn   ceph in production v1.1 dungtqHostvn   ceph in production v1.1 dungtq
Hostvn ceph in production v1.1 dungtq
 
Hostvn ceph in production v1.1 dungtq
Hostvn   ceph in production v1.1 dungtqHostvn   ceph in production v1.1 dungtq
Hostvn ceph in production v1.1 dungtq
 
introduction-infra-as-a-code using terraform
introduction-infra-as-a-code using terraformintroduction-infra-as-a-code using terraform
introduction-infra-as-a-code using terraform
 
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLESQuick-and-Easy Deployment of a Ceph Storage Cluster with SLES
Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES
 
Docker as an every day work tool
Docker as an every day work toolDocker as an every day work tool
Docker as an every day work tool
 
Capifony. Minsk PHP MeetUp #11
Capifony. Minsk PHP MeetUp #11Capifony. Minsk PHP MeetUp #11
Capifony. Minsk PHP MeetUp #11
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawn
 
Install nagios
Install nagiosInstall nagios
Install nagios
 
Install nagios
Install nagiosInstall nagios
Install nagios
 

Ceph_And_OpenStack_Red_Hat_Summit_2015_Boston_20150606

  • 1.
  • 2. Table of Contents 1. LAB – Conventions. ............................................................................... 3 2. LAB – Setup. ........................................................................................ 3 3. LAB – Creating a Ceph RBD client. ..................................................... 13 4. LAB – Setting up the Ceph Object Gateway (RADOSGW). ................... 17 5. LAB – Integrating Ceph with OpenStack Glance. ................................ 21 6. LAB – Integrating Ceph with OpenStack Cinder. ................................ 24 7. LAB – Integrating the Ceph with OpenStack Keystone. ..................... 28
  • 3. Red Hat Summit 2015 – Ceph and OpenStack 1. LAB – Conventions. In this document, we will be using the following marking conventions: • normal text Lab text and notes (proportional font). • command text Command input (bold highlighted font). • output text Command output (italic highlighted font). • {variable} Some input to be entered based on your proceedings. • […Truncated…] Some output has been truncated. • […] Some output has been truncated. 2. LAB – Setup. In order to be able to go through this LAB material, you will need to make sure you have performed the following tasks and have the following environment: • Download the virtual machine images: • https://objects.dreamhost.com/ceph-training/0.80.6/c7_daisy.ova • https://objects.dreamhost.com/ceph-training/0.80.6/c7_bob.ova • Download the lab helper files for the RADOSGW lab in case you corrupt the pre- loaded ones located on virtual machine daisy and located in $HOME/HelperFiles: • https://objects.dreamhost.com/ceph- training/0.80.6/radosgw.conf.httpd.txt • https://objects.dreamhost.com/ceph-training/0.80.6/radosgw.fcgi.txt • https://objects.dreamhost.com/ceph- training/0.80.6/ceph.conf.radosgw.c7.txt • https://objects.dreamhost.com/ceph-training/0.80.6/s3cfg.txt • https://objects.dreamhost.com/ceph-training/0.80.6/s3curl.pl.txt • Make sure you have at least 4GB RAM (8GB recommended). • Make sure you have at least 30GB disk space available. In order to download files to your virtual machine image you can use the following command line as a base example: $daisy cd $HOME/HelperFiles $daisy wget -O [filename] [file_url] 3
  • 4. Red Hat Summit 2015 – Ceph and OpenStack 2.1 LAB - Deploying Ceph using ceph-deploy. This lab exercise is about deploying a Ceph cluster on the following three nodes: daisy, eric and frank. They’ll all run OSDs (on top of /dev/sdb) and MONs. Note: VM can be accessed with user ceph and password ceph. 2.2 Create the cluster. 2.2.1 Ceph.conf creation. Daisy, frank and eric will act as both monitors and OSDs. On daisy: daisy$ mkdir $HOME/ceph-deploy daisy$ cd $HOME/ceph-deploy daisy$ ceph-deploy new daisy Creating new cluster named ceph Resolving host daisy Monitor daisy at 192.168.122.114 Monitor initial members are ['daisy'] Monitor addrs are ['192.168.122.114'] Creating a random mon key... Writing initial config to ceph.conf... Writing monitor keyring to ceph.conf.. Let’s look at the generated $HOME/ceph-deploy/ceph.conf: daisy$ cat ceph.conf [global] fsid = 66f5ffc0-035b-4c1c-823f-36250e5091b7 mon initial members = daisy mon host = 192.168.122.114 auth supported = cephx filestore xattr use omap = true Before deploying the OSDs, we need to update the configuration file to use some specific parameters: • Set the OSD journal size to 1024MB. • Set the default replication size to 2 for our small test cluster. • Allow dynamic update of the primary OSD affinity. • Allow for object copies to reside on the same host. daisy$ vi $HOME/ceph-deploy/ceph.conf 4
  • 5. Red Hat Summit 2015 – Ceph and OpenStack In the file, add the following lines at the end of the existing file: osd journal size = 1024 osd pool default size = 2 osd pool default min size = 1 mon osd allow primary affinity = 1 osd crush chooseleaf type = 0 This command will also create the Ceph monitor keyring (ceph.mon.keyring) to deploy monitors. 2.2.1.1 Deploying monitors. On daisy: daisy$ ceph-deploy mon create daisy Deploying mon, cluster ceph hosts daisy Deploying mon to daisy After a few seconds, the monitor should be in quorum. If we run: daisy$ sudo ceph -s cluster 90c07d40-8dea-4b06-88a8-fa09c07aaf16 health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; monmap e1: 1 mons at {daisy=192.168.122.114:6789/0}, election epoch 6, quorum 0 daisy osdmap e1: 0 osds: 0 up, 0 in pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail mdsmap e1: 0/0/1 up As you can see the 1 monitor is in quorum, but the cluster is unhealthy since we do not have any OSDs deployed. 5
  • 6. Red Hat Summit 2015 – Ceph and OpenStack 2.2.2 Deploying OSDs. Before deploying OSDs, we need to get the bootstraps keys generated by the monitors. On daisy: daisy$ ceph-deploy gatherkeys daisy [ceph_deploy.gatherkeys][DEBUG ] Checking daisy for /etc/ceph/ceph.client.admin.keyring [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from daisy. [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring [ceph_deploy.gatherkeys][DEBUG ] Checking daisy for /var/lib/ceph/bootstrap- osd/ceph.keyring [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from daisy. [ceph_deploy.gatherkeys][DEBUG ] Checking daisy for /var/lib/ceph/bootstrap- mds/ceph.keyring [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from daisy. We can have a look at which disks are present in a machine and partitioned with: daisy$ ceph-deploy disk list daisy [daisy][INFO ] Running command: ceph-disk list [daisy][INFO ] /dev/sda : [daisy][INFO ] /dev/sda1 other, ext2, mounted on /boot [daisy][INFO ] /dev/sda2 other [daisy][INFO ] /dev/sda5 other, LVM2_member [daisy][INFO ] /dev/sdb other, unknown [daisy][INFO ] /dev/sdc other, unknown [daisy][INFO ] /dev/sdd other, unknown [daisy][INFO ] /dev/sr0 other, unknown [daisy][INFO ] /dev/sr1 other, unknown Make sure your current working directory is $HOME/ceph-deploy. daisy$ cd $HOME/ceph-deploy 6
  • 7. Red Hat Summit 2015 – Ceph and OpenStack Then use ceph-deploy to deploy the OSDs. daisy$ ceph-deploy osd create daisy:sdb daisy:sdc daisy:sdd Preparing cluster ceph disks daisy:/dev/sdb: daisy:/dev/sdc: daisy:/dev/sdd: Deploying osd to daisy Host daisy is now ready for osd use. Preparing host daisy disk /dev/sdb journal None activate True Deploying osd to daisy Host eric is now ready for osd use. Preparing host eric disk /dev/sdc journal None activate True Deploying osd to daisy Host frank is now ready for osd use. Preparing host frank disk /dev/sdd journal None activate True 2.3 Checking the cluster health. daisy$ sudo ceph -s health HEALTH_OK monmap e1: 1 mons at {daisy=192.168.122.114:6789/0}, election epoch 10, quorum 0 daisy osdmap e13: 3 osds: 3 up, 3 in pgmap v59: 192 pgs: 192 active+clean; 0 bytes data, 103 MB used, 21367 MB / 21470 MB avail mdsmap e1: 0/0/1 up At this point, the Ceph cluster is: • In good health. • The MON election epoch is 10. • We have MONs in quorum (0). • The OSD map epoch is 13. • We have 3 OSDs (3 up and 3 in). • They are all UP and IN. 7
  • 8. Red Hat Summit 2015 – Ceph and OpenStack 2.4 Monitoring Cluster Events. There is a simple way to check on the events that take place in the life of the Ceph cluster by using the ceph –w command. This command, as illustrated below, first displays the health of the cluster followed by some lines each time an event occurs in the cluster. daisy$ sudo ceph –w cluster 2e5f14a2-a374-463b-82eb-58227e179591 health HEALTH_WARN 25 pgs peering […Truncated…] mdsmap e1: 0/0/1 up 2014-01-09 08:32:07.201445 mon.0 [WRN] message from mon.2 was stamped 1.179327s in the future, clocks not synchronized 2014-01-09 08:32:38.542240 mon.0 [INF] mon.daisy calling new monitor election 2014-01-09 08:32:38.544043 mon.0 [INF] mon.daisy@0 won leader election with quorum 0,1,2 2014-01-09 08:32:38.548805 mon.0 [WRN] mon.2 192.168.122.116:6789/0 clock skew 1.32637s > max 1s 2014-01-09 08:32:38.556024 mon.0 [INF] pgmap v1926: 520 pgs: 495 active+clean, 25 peering; 80694 KB data, 564 MB used, 82280 MB / 82844 MB avail 2014-01-09 08:32:38.556078 mon.0 [INF] mdsmap e1: 0/0/1 up 2014-01-09 08:32:38.556136 mon.0 [INF] osdmap e319: 9 osds: 9 up, 9 in 2014-01-09 08:32:38.556239 mon.0 [INF] monmap e1: 3 mons at {daisy=192.168.122.114:6789/0,eric=192.168.122.115:6789/0,frank=192.168.122.116: 6789/0} 2014-01-09 08:32:39.861357 mon.1 [INF] mon.eric calling new monitor election 2014-01-09 08:32:38.556420 mon.0 [WRN] mon.1 192.168.122.115:6789/0 clock skew 1.31247s > max 1s 2014-01-09 08:33:13.713197 mon.0 [INF] mon.daisy calling new monitor election 2014-01-09 08:33:13.715200 mon.0 [INF] mon.daisy@0 won leader election with quorum 0,1,2 2014-01-09 08:33:13.717671 mon.0 [WRN] mon.1 192.168.122.115:6789/0 clock skew 1.50833s > max 1s 2014-01-09 08:33:13.725304 mon.0 [INF] pgmap v1926: 520 pgs: 495 active+clean, 25 peering; 80694 KB data, 564 MB used, 82280 MB / 82844 MB avail [output truncated] 8
  • 9. Red Hat Summit 2015 – Ceph and OpenStack 2.5 Ceph basic maintenance operation. Each Ceph node will run a certain number of daemons you can interact with. In order to do so, the following commands and syntax are available. 2.5.1 Starting and Stopping the OSDs. You can either, start, stop or recycle the Ceph OSD daemons on the host you are connected to: • sudo /etc/init.d/ceph stop osd Will stop the OSD daemons. • sudo /etc/init.d/ceph start osd Will start the OSD daemons. • sudo /etc/init.d/ceph restart osd Will recycle the OSD daemons. 2.5.1.1 Using the commands. On daisy: issue perform the following operations: daisy$ ps –ef | grep ceph-osd Can you see the OSD daemons running as processes? …............... (Y/N) How many OSD daemons are running as processes? …............... [1] daisy$ sudo [use your platform STOP command] daisy$ ps –ef | grep ceph-osd Can you see the OSD daemons running as processes? …............... (Y/N) daisy$ sudo ceph -s cluster 2e5f14a2-a374-463b-82eb-58227e179591 […Truncated…] osdmap e352: 3 osds: 2 up, 3 in […Truncated…] How many OSDs are participating in the cluster? …............... [3] How many OSDs are UP in the cluster? …............... [2] How many OSDs are DOWN in the cluster? …............... [1] 9
To find out which OSDs are up or down, you can use the ceph osd tree command.

daisy$ sudo ceph osd tree
# id    weight      type name        up/down    reweight
-1      0.03        root default
-2      0.03          host daisy
0       0.009995        osd.0        down       1
1       0.009995        osd.1        up         1
2       0.009995        osd.2        up         1

daisy$ sudo [use your platform START command]
daisy$ sudo ceph osd tree
# id    weight      type name        up/down    reweight
-1      0.03        root default
-2      0.03          host daisy
0       0.009995        osd.0        up         1
1       0.009995        osd.1        up         1
2       0.009995        osd.2        up         1

To act on a particular OSD daemon, you can input the following command:

daisy$ sudo /etc/init.d/ceph [stop|start|restart] osd.{id}

2.5.1.2 Using the commands.

Let’s try this out.

daisy$ sudo {use_your_platform_STOP_command_for_osd_id_0}
daisy$ sudo ceph osd tree
# id    weight      type name        up/down    reweight
-1      0.03        root default
-2      0.03          host daisy
0       0.009995        osd.0        down       1
1       0.009995        osd.1        up         1
2       0.009995        osd.2        up         1

Can you see the OSD daemon with ID=0 down? …............... (Y/N)

daisy$ ps -ef | grep ceph-osd

How many OSD daemons are running as processes? …............... [0]

daisy$ sudo {use_your_platform_START_command_for_osd_id_0}

Check all OSD daemons are now up and running across the cluster.
daisy$ sudo ceph osd tree
# id    weight      type name        up/down    reweight
-1      0.03        root default
-2      0.03          host daisy
0       0.009995        osd.0        up         1
1       0.009995        osd.1        up         1
2       0.009995        osd.2        up         1

daisy$ sudo ceph -s
  cluster 2e5f14a2-a374-463b-82eb-58227e179591
  health HEALTH_OK
[…Truncated…]
  osdmap e364: 3 osds: 3 up, 3 in
  pgmap v2059: 520 pgs: 520 active+clean; 80694 KB data, 536 MB used, 82308 MB / 82844 MB avail
  mdsmap e1: 0/0/1 up

2.5.2 Starting and Stopping all Ceph daemons.

You can start, stop or recycle all Ceph daemons on the host you are connected to:
• sudo /etc/init.d/ceph stop        Will stop all Ceph daemons.
• sudo /etc/init.d/ceph start       Will start all Ceph daemons.
• sudo /etc/init.d/ceph restart     Will recycle all Ceph daemons.

2.5.3 Checking the installed Ceph version on a host.

daisy$ ceph -v
ceph version 0.80.6 (f93610a4421cb670b08e974c6550ee715ac528ae)

2.6 Using RADOS.

On daisy:

Create a 10MB file:

daisy$ sudo dd if=/dev/zero of=/tmp/test bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied

daisy$ sudo rados -p data put test /tmp/test
daisy$ sudo rados df

Write down the amount of objects used on data: …...............
Write down the amount of bytes used on data: …...............
N.B: In the rados df output, the number of objects for the pool named data is the 4th column of the display and the number of bytes, expressed in kilobytes, is the 3rd column.

pool name       category    KB    objects    clones    degraded    …
[output truncated]
data            -           0     0          0         0
[output truncated]

daisy$ sudo rados -p data put test1 /tmp/test
daisy$ sudo rados -p data put test2 /tmp/test
daisy$ sudo rados df

Write down the amount of objects used on data: …...............
Write down the amount of bytes used on data: …...............

What can you say about the difference between the figures you wrote down first and second?
................................................................................................................................................

You can also use the ceph df command to perform these checks:

daisy$ sudo ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    82844M    82464M    380M        0.46
POOLS:
    NAME        ID    USED    %USED    OBJECTS
    data        0     0       0        0
    metadata    1     0       0        0
    rbd         2     8       0        1
[output truncated]

Now we need to clean up the environment after our tests:

daisy$ sudo rados -p data rm test
daisy$ sudo rados -p data rm test1
daisy$ sudo rados -p data rm test2

Lab 2 - This is the end of this lab.
3. LAB – Creating a Ceph RBD client.

The Ceph cluster that is currently running within your virtual machines is built out of three nodes: daisy, eric and frank. All cluster nodes are running OSDs and MONs, hence we can use this Ceph cluster as a target for RBD access. And of course, you can also upload data into the object store directly.

In our lab environment, Ceph packages are pre-installed on each VM as we cannot assume Internet connectivity on the network we will be using during our trainings. In your own environment, you will need to make sure the Ceph packages are installed on the machine you wish to use as an RBD client (for example, apt-get update && apt-get install ceph-common on Debian or Ubuntu). Make sure you use the official Ceph repositories to do so.

In a production environment, you should never use the RBD kernel module on a host running an OSD daemon.

3.1 RADOS Block Device (RBD).

On daisy, first, you’ll have to create client credentials for the RBD client:

daisy$ sudo ceph auth get-or-create client.rbd.daisy osd 'allow rwx pool=rbd' mon 'allow r' -o /etc/ceph/ceph.client.rbd.daisy.keyring

Create an RBD image in Ceph named test and 128MB large:

daisy$ sudo rbd create test --size 128

Check the RBD image has been successfully created with the following command:

daisy$ sudo rbd info test
rbd image 'test':
    size 128 MB in 32 objects
    order 22 (4096 KB objects)
    block_name_prefix: rb.0.239e.238e1f29
    format: 1

On daisy, make sure that the RBD kernel driver is loaded:

daisy$ sudo modprobe rbd
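You can verify that the module actually loaded with standard Linux tooling (nothing Ceph-specific here); the command should print a line for the rbd module:

daisy$ lsmod | grep rbd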
Map the image on your local server:

daisy$ sudo rbd --id rbd.daisy map test

Get a list of all mapped RBD images like this:

daisy$ rbd --id rbd.daisy showmapped
id    pool    image    snap    device
0     rbd     test     -       /dev/rbd0

Finally, create a file system on the RBD and mount it just like you would do for a regular disk device:

daisy$ sudo mkfs.ext4 /dev/rbd0
daisy$ sudo mkdir /mnt/rbd
daisy$ sudo mount /dev/rbd0 /mnt/rbd

3.2 Storing data in Ceph.

Create a 10MB file:

daisy$ sudo dd if=/dev/zero of=/tmp/test bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied

3.2.1 Use the mounted File System.

daisy$ df

Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............

daisy$ ls /mnt/rbd

Write down the number of files and directories present: …............... …...............

daisy$ sudo cp /tmp/test /mnt/rbd/test1
daisy$ df

Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............
daisy$ ls /mnt/rbd

Write down the number of files and directories present: …............... …...............

daisy$ sudo rados --id rbd.daisy df

Write down the number of objects and bytes used for rbd: …............... …...............

Repeat the 3.2.1 sequence of operations, but replace sudo cp /tmp/test /mnt/rbd/test1 with sudo cp /tmp/test /mnt/rbd/test2 and use the second column of this document to write down a second set of values. Then proceed to section 3.2.2.

3.2.2 Analyzing figures.

What can you observe for the system df commands?
................................................................................................................................................

What can you observe for the ceph df commands?
................................................................................................................................................

3.2.3 Use RADOS.

Upload an object into RADOS:

daisy$ sudo rados --id rbd.daisy -p rbd put test /tmp/test
daisy$ df

Write down the amount of bytes used on /mnt/rbd: …............... …...............
Write down the amount of bytes available on /mnt/rbd: …............... …...............

daisy$ ls /mnt/rbd

Write down the number of files and directories present: …............... …...............

How do you explain that the number of bytes used on /mnt/rbd is not changing?
................................................................................................................................................

daisy$ sudo rados --id rbd.daisy df

Write down the number of objects and bytes used for rbd: …............... …...............
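Before repeating the sequence, it can help to look at the RADOS objects that actually back the rbd pool. The rb.0.* objects belong to the RBD image (rb.0.239e.238e1f29 was the block_name_prefix reported earlier by rbd info, so your prefix will differ), while test is the object you just uploaded. This is only an optional side check using the standard rados command:

daisy$ sudo rados --id rbd.daisy -p rbd ls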
Repeat the 3.2.3 sequence of operations, but replace sudo rados --id rbd.daisy -p rbd put test /tmp/test with sudo rados --id rbd.daisy -p rbd put test1 /tmp/test and use the second column of this document to write down a second set of values.

Is the number of objects and the number of bytes used in the pool changing (Y or N)?
................................................................................................................................................

N.B: It may be necessary to repeat the sequence of commands more than once to see a significant difference.

Request the stats for this object:

daisy$ sudo rados --id rbd.daisy -p rbd stat test
rbd/test mtime 1348960511, size 10485760

3.2.4 Checking cephx in action.

Issue the following RADOS command:

daisy$ sudo rados --id rbd.daisy -p data put test /tmp/test

What message do you obtain? Why do you receive this message?
................................................................................................................................................

3.3 Cleanup.

Unmount and unmap the RBD image:

daisy$ cd $HOME
daisy$ sudo umount /mnt/rbd
daisy$ sudo rbd --id rbd.daisy unmap /dev/rbd0

Remove the data you stored in the default RADOS pool and delete the RBD image:

daisy$ sudo rados -p rbd rm test
daisy$ sudo rados -p rbd rm test1
daisy$ sudo rbd rm test
daisy$ sudo ceph df

Lab 3 - This is the end of this lab.
4. LAB – Setting up the Ceph Object Gateway (RADOSGW).

4.1 Update /etc/ceph/ceph.conf for RADOSGW.

On daisy:

Open /etc/ceph/ceph.conf and add the following entry for the radosgw client:

[client.radosgw.daisy]
host = daisy
rgw socket path = /var/run/ceph/radosgw.daisy.fastcgi.sock
keyring = /etc/ceph/keyring.radosgw.daisy
rgw print continue = false
rgw dns name = daisy
nss db path = /var/ceph/nss

N.B: Use $HOME/HelperFiles/ceph.conf.radosgw.c7.txt as a template to make it easier and avoid typos.

4.2 Create the RADOSGW Ceph client.

Create a keyring for the radosgw.daisy user:

daisy$ sudo ceph auth get-or-create client.radosgw.daisy osd 'allow rwx' mon 'allow rwx' -o /etc/ceph/keyring.radosgw.daisy

4.3 Create the RADOSGW HTTP access point.

Then, create /etc/httpd/conf.d/radosgw.conf by copying the template file located in the $HOME/HelperFiles folder.

N.B: Use $HOME/HelperFiles/radosgw.conf.httpd.txt as a template to make it easier and avoid typos.

4.4 Create the RADOSGW FastCGI wrapper script.

Create the radosgw.fcgi script in /var/www/html and add these lines:

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.daisy

N.B: Use $HOME/HelperFiles/radosgw.fcgi.txt as a template to make it easier and avoid typos.
Then save the file and close it. Make it executable:

daisy$ sudo chmod +x /var/www/html/radosgw.fcgi

Make sure that all folders have appropriate permissions:

daisy$ $HOME/HelperFiles/setperms.sh

Reload systemd, then start the Apache and RADOSGW services:

daisy$ sudo systemctl daemon-reload
daisy$ sudo systemctl start httpd
daisy$ sudo systemctl start ceph-radosgw

If desired, you can start the Apache daemon at boot time by entering this command:

daisy$ sudo chkconfig httpd on

4.5 Create the RADOSGW Region Map.

Create the default region map:

daisy$ sudo radosgw-admin regionmap update

4.6 Create a RADOSGW S3 user.

Add a user to the radosgw:

daisy$ sudo radosgw-admin -n client.radosgw.daisy user create --uid=johndoe --display-name="John Doe" --email=john@example.com --access-key=12345 --secret=67890
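You can confirm that the user was created and inspect its metadata with radosgw-admin; the output should include the access key 12345 supplied above. This is an optional check, not part of the original lab steps:

daisy$ sudo radosgw-admin -n client.radosgw.daisy user info --uid=johndoe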
4.7 Verify S3 access through RADOSGW.

You can now access the radosgw via the S3 API.

4.7.1 RADOSGW with s3cmd.

This lab exercise will let you interact with the RADOS Gateway you just configured using another tool that is available, s3cmd.

On daisy:

If your home directory does not contain the $HOME/.s3cfg file, check for it in the $HOME/HelperFiles folder or ask your instructor for a copy of it.

daisy$ mv $HOME/HelperFiles/s3cfg.txt $HOME/.s3cfg

And check that we can access the S3 “cloud” by listing the existing buckets:

daisy$ s3cmd ls

Create a bucket:

daisy$ s3cmd mb s3://bucket1
daisy$ s3cmd ls

Now, create a test file that we shall upload:

daisy$ sudo dd if=/dev/zero of=/tmp/10MB.bin bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied

Then, upload the file through the RADOS Gateway:

daisy$ s3cmd put --acl-public /tmp/10MB.bin s3://bucket1/10MB.bin

And finally, verify we can access the file in the cloud:

daisy$ wget -O /dev/null http://bucket1.daisy/10MB.bin
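If the public URL check does not work in your environment (it relies on the bucket1.daisy DNS-style name resolving to daisy), you can also pull the object back through s3cmd itself. This is an alternative verification step, not part of the original lab, and the local file name is arbitrary:

daisy$ s3cmd ls s3://bucket1
daisy$ s3cmd get s3://bucket1/10MB.bin /tmp/10MB.check.bin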
4.7.2 RADOSGW with s3curl and the RADOSGW admin API.

A radosgw admin will have special privileges to access users, buckets and usage information through the RadosGW Admin API.

daisy$ sudo radosgw-admin user create --uid=admin --display-name="Admin user" --caps="users=read, write; usage=read, write; buckets=read, write; zone=read, write" --access-key=abcde --secret=qwerty

If your home directory does not contain the $HOME/s3curl.pl file, check for it in the $HOME/HelperFiles folder or ask your instructor for a copy of it.

daisy$ mv $HOME/HelperFiles/s3curl.pl.txt $HOME/s3curl.pl
daisy$ chmod +x $HOME/s3curl.pl

Then create a ~/.s3curl file on daisy with the following:

%awsSecretAccessKeys = (
    admin => {
        id  => 'abcde',
        key => 'qwerty',
    },
);

Change the permissions on the file to:

daisy$ chmod 400 ~/.s3curl

Finally, you will need to modify the s3curl.pl script so that ‘daisy’ is included in @endpoints.

List all the buckets of a user:

daisy$ ./s3curl.pl --id=admin -- 'http://daisy/admin/bucket?uid=johndoe'
["bucket1"]

You can have a full description of the Admin API at this address:
http://ceph.com/docs/master/radosgw/adminops/

Lab 4 - This is the end of this lab.
5. LAB – Integrating Ceph with OpenStack Glance.

Ceph can easily be integrated with Glance, OpenStack’s Image Service. Glance has a native backend to talk to RBD; the following steps enable it.

5.1 Ceph Configuration.

Copy daisy's SSH key to bob's authorized keys:

daisy$ ssh-copy-id bob

Start by adding an images pool to Ceph:

daisy$ sudo ceph osd pool create images 128

Then, add a user to Ceph called client.images:

daisy$ sudo ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring

Copy the keyring to bob:

daisy$ cat /etc/ceph/ceph.client.images.keyring | ssh bob "sudo tee /etc/ceph/ceph.client.images.keyring"

From bob:

bob$ sudo chgrp glance /etc/ceph/ceph.client.images.keyring
bob$ sudo chmod 0640 /etc/ceph/ceph.client.images.keyring

Copy /etc/ceph/ceph.conf to bob:

daisy$ cat /etc/ceph/ceph.conf | ssh bob "sudo tee /etc/ceph/ceph.conf"

On bob, edit /etc/ceph/ceph.conf and add:

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring
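At this point, bob has everything it needs to talk to the cluster as client.images. As an optional sanity check (this step is not in the original lab and assumes the Ceph command-line tools are present on bob, as they are on the pre-built VM), you can list the still-empty images pool from bob:

bob$ sudo rbd --id images -p images ls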
5.2 Glance Configuration.

Adapt /etc/glance/glance-api.conf to make Glance use Ceph from now on.

Locate the line:

default_store = file

And adapt it to read:

default_store = rbd

Search for the RBD Store Options. Uncomment the following line:

#rbd_store_ceph_conf=/etc/ceph/ceph.conf

Adapt the following line:

rbd_store_user = <None>

With:

rbd_store_user = images

Uncomment the following line:

#rbd_store_pool = images

Restart the glance-api service:

bob$ sudo service openstack-glance-api restart

5.3 Verify Integration.

Load the keystone environment and upload a test image:

bob$ source /home/ceph/openstack.env
bob$ glance image-create --name="Cirros 0.3.2" --disk-format=raw --container-format=bare </home/ceph/cirros-0.3.2-x86_64-disk.img
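Still on bob, you can confirm the upload succeeded from Glance's point of view before verifying the Ceph side (standard Glance CLI; the image should be listed with an active status):

bob$ glance image-list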
From daisy, check if the image has been created:

daisy$ sudo rbd -p images ls
c8e400b-77f0-41ff-8ec4-26eaad77957d

daisy$ sudo rbd -p images info $(sudo rbd -p images ls)
    size 255 bytes in 1 objects
    order 23 (8192 KB objects)
    block_name_prefix: rbd_data.12e64b364f03
    format: 2
    features: layering

5.4 Cleanup.

We shall now delete the image we created; deleting the image in Glance will trigger the deletion of the RBD image in Ceph.

On bob:

bob$ glance image-delete {image_unique_id}
bob$ glance image-list

On daisy:

Check if the image has been deleted:

daisy$ sudo rbd -p images ls

Lab 5 - This is the end of this lab.
6. LAB – Integrating Ceph with OpenStack Cinder.

OpenStack’s volume service, Cinder, can access Ceph RBD images directly and use them as backing devices for the volumes it exports. To make this work, only a few configuration changes are required. This document explains what needs to be done.

6.1 Ceph Configuration.

On daisy:

Start by adding a volumes pool to Ceph:

daisy$ sudo ceph osd pool create volumes 128

Then, add a user to Ceph called client.volumes:

daisy$ sudo ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring

Copy the keyring and the key to bob:

daisy$ cat /etc/ceph/ceph.client.volumes.keyring | ssh bob "sudo tee /etc/ceph/ceph.client.volumes.keyring"
daisy$ sudo ceph auth get-key client.volumes | ssh bob tee client.volumes.key

From bob:

bob$ sudo chgrp cinder /etc/ceph/ceph.client.volumes.keyring
bob$ sudo chmod 0640 /etc/ceph/ceph.client.volumes.keyring

On bob, edit /etc/ceph/ceph.conf and add:

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring
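As with the Glance client, you can sanity-check the new identity from bob before touching Cinder. This is optional and not part of the original lab; it assumes the Ceph CLI tools are installed on bob, as provided in the lab VM:

bob$ sudo rbd --id volumes -p volumes ls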
6.2 Cinder Configuration.

On bob:

Generate a UUID that we will need for the Ceph integration with Libvirt (which Cinder uses to connect block devices into VMs):

bob$ uuidgen | tee $HOME/myuuid.txt
{Your Personal UUID Is Displayed}

Then, create a file called ceph.xml with the following contents:

<secret ephemeral="no" private="no">
  <uuid>{Type In Your UUID}</uuid>
  <usage type="ceph">
    <name>client.volumes secret</name>
  </usage>
</secret>

bob$ sudo virsh secret-define --file ceph.xml
Secret {Your UUID Displayed Here} created

bob$ sudo virsh secret-set-value --secret {Type In Your UUID} --base64 $(cat client.volumes.key) && rm client.volumes.key ceph.xml

On bob, open /etc/cinder/cinder.conf and add these lines under the [DEFAULT] section:

volume_driver=cinder.volume.drivers.rbd.RBDDriver
glance_api_version=2

In the /etc/cinder/cinder.conf file, locate the lines below and modify the parameters to match the values of our lab environment:

rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
rbd_user=volumes
rbd_secret_uuid={Type In Your UUID}

bob$ sudo service openstack-cinder-api restart
bob$ sudo service openstack-cinder-volume restart

Additional documentation for configuring Nova compute nodes can be found at:
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova

6.3 Verify Integration.

On bob:

Create a Cinder volume:

bob$ source openstack.env
bob$ cinder create --display_name="test" 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-07-17T04:08:25.217224      |
| display_description |                 None                 |
|     display_name    |                 test                 |
|          id         | 001a6a69-4276-4608-908e-bb991a2a51e0 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

Verify that the Cinder volume has been created:

bob$ cinder list

If the creation of the Cinder volume was a success, its status should be: available
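You can also inspect a single volume in detail with the standard Cinder CLI; {volume_unique_id} below is the id reported by cinder create or cinder list. This is an optional extra check:

bob$ cinder show {volume_unique_id}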
On daisy:

Verify that the RBD image has been created:

daisy$ sudo rbd -p volumes ls
{A Unique Volume ID is Displayed here}

daisy$ sudo rbd -p volumes info $(sudo rbd -p volumes ls)
rbd image 'volume-998c8370-1bd7-4425-b246-b3d405a07f01':
    size 1024 MB in 256 objects
    order 22 (4096 KB objects)
    block_name_prefix: rbd_data.13aa2ae8944a
    format: 2
    features: layering, striping
    stripe unit: 4096 KB
    stripe count: 1

6.4 Cleanup.

We shall now delete the volume we created; deleting the volume in Cinder will trigger the deletion of the RBD image in Ceph.

On bob:

bob$ cinder delete {volume_unique_id}
bob$ cinder list

On daisy:

Check if the RBD image has been deleted:

daisy$ sudo rbd -p volumes ls

If successful, delete the files containing sensitive information:

bob$ sudo rm client.volumes.key
bob$ sudo rm ceph.xml
bob$ sudo rm myuuid.txt

Lab 6 - This is the end of this lab.
7. LAB – Integrating Ceph with OpenStack Keystone.

The Ceph RADOSGW can be integrated with OpenStack Keystone to authenticate users from Keystone rather than creating them within the radosgw.

On daisy, open /etc/ceph/ceph.conf and update the radosgw client entry so that it reads as follows:

[client.radosgw.daisy]
host = daisy
rgw socket path = /var/run/ceph/radosgw.daisy.fastcgi.sock
keyring = /etc/ceph/keyring.radosgw.daisy
rgw log file = /var/log/ceph/radosgw.log
rgw print continue = false
rgw dns name = daisy
nss db path = /var/ceph/nss
rgw keystone url = http://bob:35357
rgw keystone admin token = ADMIN
rgw keystone accepted role = admin

Restart the radosgw:

daisy$ sudo service ceph-radosgw restart

Then from daisy, try to access the radosgw with the admin user from OpenStack:

daisy:~$ swift -v -V 2.0 -A http://bob:5000/v2.0/ -U admin:admin -K admin stat
StorageURL: http://daisy/swift/v1
Auth Token: MIIJTwYJKoZIhvcNAQcCoIIJQDCCCTwCAQExCTAHBgUrDgMCGjCCB6UGCSqGSIb3DQEHAaCCB5YEggeS
eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0x[…Output Truncated…]
MXg+gwaln9iaFG2Kgw2zPUFVArNlHIrBVLUwvnBd2aZ0IzygkdyB01HxgDOyftr2PGdmaZ5h819kik0S
w3r7e7kLiybScUt5lFZa6YKgzaFkhwigp+C32oxFBqoBBRxcyxyF+WA25T1oISRcMvUzutb3CTlA-
oFEQ5aI+JWGVQoKyyIKaDrxUONBsY8QV4=
Account: v1
Containers: 1
Objects: 0
Bytes: 0
Vary: Accept-Encoding
Server: Apache/2.2.22 (Ubuntu)
X-Account-Bytes-Used-Actual: 0
Content-Type: text/plain; charset=utf-8
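To go one step further than stat, you can create a container and upload an object through the Keystone-authenticated gateway. This extra check is not part of the original lab: the container name demo-container is an arbitrary choice, it reuses the /tmp/test file created in an earlier lab (recreate it with dd if it is gone), and doing so will change the Containers/Objects counters reported by subsequent stat calls:

daisy$ swift -V 2.0 -A http://bob:5000/v2.0/ -U admin:admin -K admin upload demo-container /tmp/test
daisy$ swift -V 2.0 -A http://bob:5000/v2.0/ -U admin:admin -K admin list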
Then from bob, try to access the radosgw using a regular OpenStack environment:

bob:~$ source openstack.env
bob:~$ swift -v -V 2.0 -A http://bob:5000/v2.0/ stat
StorageURL: http://daisy/swift/v1
Auth Token: MIIJTwYJKoZIhvcNAQcCoIIJQDCCCTwCAQExCTAHBgUrDgMCGjCCB6UGCSqGSIb3DQEHAaCCB5YEggeS
eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0x[…Output Truncated…]
MXg+gwaln9iaFG2Kgw2zPUFVArNlHIrBVLUwvnBd2aZ0IzygkdyB01HxgDOyftr2PGdmaZ5h819kik0S
w3r7e7kLiybScUt5lFZa6YKgzaFkhwigp+C32oxFBqoBBRxcyxyF+WA25T1oISRcMvUzutb3CTlA-
oFEQ5aI+JWGVQoKyyIKaDrxUONBsY8QV4=
Account: v1
Containers: 1
Objects: 0
Bytes: 0
Vary: Accept-Encoding
Server: Apache/2.2.22 (Ubuntu)
X-Account-Bytes-Used-Actual: 0
Content-Type: text/plain; charset=utf-8

Lab 7 - This is the end of this lab.