The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand at http://www.inktank.com/news-events/webinars/. Enjoy!
2. Agenda
• Inktank and Ceph Introduction
• Ceph Technology
• Getting Started Walk-through
• Resources
• Next steps
3. Inktank and Ceph
Inktank:
• Company that provides professional services and support for Ceph
• Founded in 2011
• Funded by DreamHost
• Mark Shuttleworth invested $1M
• Sage Weil, CTO and creator of Ceph
Ceph:
• Distributed unified object, block and file storage platform
• Created by storage experts
• Open source
• In the Linux Kernel
• Integrated into Cloud Platforms
4. Ceph Technological Foundations
Ceph was built with the following goals:
• Every component must scale
• There can be no single point of failure
• The solution must be software-based, not an appliance
• Should run on readily-available, commodity hardware
• Everything must self-manage wherever possible
• Must be open source
5. Key Differences
• CRUSH data placement algorithm (Object)
  Intelligent storage nodes
• Unified storage platform (Object + Block + File)
  All use cases (cloud, big data, legacy, web app, archival, etc.) satisfied in a single cluster
• Thinly provisioned virtual block device (Block)
  Cloud storage block for VM images
• Distributed scalable metadata servers (CephFS)
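Because CRUSH computes an object's placement rather than looking it up in a central table, any client can calculate where an object lives. Once a cluster is running you can see this for yourself; a quick illustration (the object name here is arbitrary):

ubuntu@ceph-client:~$ ceph osd map rbd my-object
(prints the pool, the placement group, and the set of OSDs chosen to hold that object)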
6. Ceph Use Cases
Object
• Archival and backup storage
• Primary data storage
• S3-like storage
• Web services and platforms
• Application development
Block
• SAN replacement
• Virtual block device, VM images
File
• HPC
• POSIX-compatible applications
8. Ceph
[Architecture diagram: applications (APP), hosts/VMs, and clients sit on top of four access methods, all built on RADOS]
• Ceph Object Library (LIBRADOS): a library allowing applications to directly access Ceph Object Storage
• Ceph Object Gateway (RADOS Gateway): a RESTful gateway for object storage
• Ceph Block (RBD): a reliable and fully-distributed block device
• Ceph Distributed File System (CephFS): a POSIX-compliant distributed file system
Ceph Object Storage (RADOS): a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
9. RADOS Components
Monitors (M):
• Maintain cluster map
• Provide consensus for distributed decision-making
• Must have an odd number
• These do not serve stored objects to clients
RADOS Storage Nodes containing Object Storage Daemons (OSDs):
• One OSD per disk (recommended)
• At least three nodes in a cluster
• Serve stored objects to clients
• Intelligently peer to perform replication tasks
• Support object classes
10. RADOS Cluster Makeup
[Diagram: each RADOS node runs several OSDs; every OSD is backed by a filesystem (btrfs, xfs, or ext4) on its own disk. A RADOS cluster combines many such nodes with a small number of monitors (M).]
11. VOTE
Using the Votes button at the top of the presentation panel, please take 30 seconds to answer the following questions to help us better understand you.
1. Are you exploring Ceph for a current project?
2. Are you looking to implement Ceph within the next 6 months?
3. Do you need help deploying Ceph?
13. Overview
• This tutorial and walk-through is based on VirtualBox, but other hypervisor platforms will work just as well.
• We relaxed security best practices to speed things up, and will omit some of the security setup steps here.
• We will:
1. Create the VirtualBox VMs
2. Prepare the VMs for Creating the Ceph Cluster
3. Install Ceph on all VMs from the Client
4. Configure Ceph on all the server nodes and the client
5. Experiment with Ceph’s Virtual Block Device (RBD)
6. Experiment with the Ceph Distributed Filesystem
7. Unmount, stop Ceph, and shut down the VMs safely
14. Create the VMs
• 1 or more CPU cores
• 512MB or more memory
• Ubuntu 12.04 with latest updates
• VirtualBox Guest Addons
• Three virtual disks (dynamically allocated):
• 28GB OS disk with boot partition
• 8GB disk for Ceph data
• 8GB disk for Ceph data
• Two virtual network interfaces:
• eth0 Host-Only interface for Ceph
• eth1 NAT interface for updates
Consider creating a template based on the above, and then cloning the template to save time creating all four VMs.
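If you would rather script the VM creation than click through the GUI, a minimal VBoxManage sketch for one node might look like the following (the VM name, disk file names, and host-only adapter name are assumptions; disk sizes are in MB):

VBoxManage createvm --name ceph-node1 --ostype Ubuntu_64 --register
VBoxManage modifyvm ceph-node1 --cpus 1 --memory 512 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat
VBoxManage createhd --filename ceph-node1-os.vdi --size 28672
VBoxManage createhd --filename ceph-node1-data1.vdi --size 8192
VBoxManage createhd --filename ceph-node1-data2.vdi --size 8192
VBoxManage storagectl ceph-node1 --name SATA --add sata
VBoxManage storageattach ceph-node1 --storagectl SATA --port 0 --device 0 --type hdd --medium ceph-node1-os.vdi
VBoxManage storageattach ceph-node1 --storagectl SATA --port 1 --device 0 --type hdd --medium ceph-node1-data1.vdi
VBoxManage storageattach ceph-node1 --storagectl SATA --port 2 --device 0 --type hdd --medium ceph-node1-data2.vdi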
15. Adjust Networking in the VM OS
• Edit /etc/network/interfaces
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.56.20
netmask 255.255.255.0
# The secondary NAT interface with outside access
auto eth1
iface eth1 inet dhcp
gateway 10.0.3.2
• Edit /etc/udev/rules.d/70-persistent-net.rules
If the VMs were cloned from a template, the MAC addresses for the virtual NICs should have been regenerated to stay unique. Edit this file to make sure that the right NIC is mapped as eth0 and eth1.
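For reference, the entries in that file look roughly like the following (the MAC addresses are placeholders; match them to the adapters VirtualBox actually assigned):

# Host-Only adapter -> eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:01", KERNEL=="eth*", NAME="eth0"
# NAT adapter -> eth1
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:02", KERNEL=="eth*", NAME="eth1"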
16. Security Shortcuts
To streamline and simplify access for this tutorial, we:
• Configured the user “ubuntu” to SSH between hosts using authorized keys instead of a password (sketched below).
• Added “ubuntu” to /etc/sudoers with full access.
• Configured root on the server nodes to SSH between nodes using authorized keys without a password set.
• Relaxed SSH checking of known hosts to avoid interactive confirmation when accessing a new host.
• Disabled cephx authentication for the Ceph cluster.
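The slides do not show these steps, but a minimal sketch of the key-based SSH setup, run from the client as the ubuntu user, might look like this:

ubuntu@ceph-client:~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # passwordless key
ubuntu@ceph-client:~$ for node in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id ubuntu@$node; done
ubuntu@ceph-client:~$ printf "StrictHostKeyChecking no\n" >> ~/.ssh/config   # relax known-hosts checking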
17. Edit /etc/hosts to resolve names
• Use the /etc/hosts file for simple name resolution for all the VMs on the Host-Only network.
• Create a portable /etc/hosts file on the client:
127.0.0.1 localhost
192.168.56.20 ceph-client
192.168.56.21 ceph-node1
192.168.56.22 ceph-node2
192.168.56.23 ceph-node3
• Copy the file to all the VMs so that names are consistently resolved across all machines (one way to do this is sketched below).
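One way to push the file out (scp cannot write /etc/hosts directly as the ubuntu user, so it is staged through /tmp; a sketch, assuming the SSH setup above):

ubuntu@ceph-client:~$ for node in ceph-node1 ceph-node2 ceph-node3; do
> scp /etc/hosts ubuntu@$node:/tmp/hosts
> ssh $node sudo mv /tmp/hosts /etc/hosts
> done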
18. Install the Ceph Bobtail release
ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
OK
ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-bobtail/ precise main
ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
...
Setting up librados2 (0.56.1-1precise) ...
Setting up librbd1 (0.56.1-1precise) ...
Setting up ceph-common (0.56.1-1precise) ...
Installing new version of config file /etc/bash_completion.d/rbd ...
Setting up ceph (0.56.1-1precise) ...
Setting up ceph-fs-common (0.56.1-1precise) ...
Setting up ceph-fuse (0.56.1-1precise) ...
Setting up ceph-mds (0.56.1-1precise) ...
Setting up libcephfs1 (0.56.1-1precise) ...
...
ldconfig deferred processing now taking place
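The slide shows ceph-node1 only; the same three commands are repeated for ceph-node2, ceph-node3, and the client itself. A compact loop (a sketch, assuming the SSH and sudo shortcuts above) might look like:

ubuntu@ceph-client:~$ for node in ceph-node1 ceph-node2 ceph-node3 ceph-client; do
> wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh $node sudo apt-key add -
> echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh $node sudo tee /etc/apt/sources.list.d/ceph.list
> ssh $node "sudo apt-get update && sudo apt-get -y install ceph"
> done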
20. Complete Ceph Cluster Creation
• Copy the /etc/ceph/ceph.conf file to all nodes (a sketch of a minimal ceph.conf appears at the end of this step)
• Create the Ceph daemon working directories:
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-1
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-3
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-5
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/mon/ceph-b
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/mon/ceph-c
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mds/ceph-a
• Run the mkcephfs command from a server node:
ubuntu@ceph-client:~$ ssh ceph-node1
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64)
...
ubuntu@ceph-node1:~$ sudo -i
root@ceph-node1:~# cd /etc/ceph
root@ceph-node1:/etc/ceph# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
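The ceph.conf used here is not reproduced in the slides; a minimal sketch consistent with this layout (cephx disabled, three monitors, one MDS, six OSDs) might look like the following. Per-OSD device and filesystem options (e.g. devs and osd mkfs type) would also go here so that mkcephfs can prepare the data disks:

[global]
    auth supported = none        ; cephx disabled for this tutorial
[mon.a]
    host = ceph-node1
    mon addr = 192.168.56.21:6789
[mon.b]
    host = ceph-node2
    mon addr = 192.168.56.22:6789
[mon.c]
    host = ceph-node3
    mon addr = 192.168.56.23:6789
[mds.a]
    host = ceph-node1
[osd.0]
    host = ceph-node1
[osd.1]
    host = ceph-node1
[osd.2]
    host = ceph-node2
[osd.3]
    host = ceph-node2
[osd.4]
    host = ceph-node3
[osd.5]
    host = ceph-node3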
21. Start the Ceph Cluster
On a server node, start the Ceph service:
root@ceph-node1:/etc/ceph# service ceph -a start
=== mon.a ===
Starting Ceph mon.a on ceph-node1...
starting mon.a rank 0 at 192.168.56.21:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 11309f36-9955-413c-9463-efae6c293fd6
=== mon.b ===
=== mon.c ===
=== mds.a ===
Starting Ceph mds.a on ceph-node1...
starting mds.a at :/0
=== osd.0 ===
Mounting ext4 on ceph-node1:/var/lib/ceph/osd/ceph-0
Starting Ceph osd.0 on ceph-node1...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
22. Verify Cluster Health
root@ceph-node1:/etc/ceph# ceph status
health HEALTH_OK
monmap e1: 3 mons at {a=192.168.56.21:6789/0,b=192.168.56.22:6789/0,c=192.168.56.23:6789/0}, election epoch 6, quorum 0,1,2 a,b,c
osdmap e17: 6 osds: 6 up, 6 in
pgmap v473: 1344 pgs: 1344 active+clean; 8730 bytes data, 7525 MB used, 39015 MB / 48997 MB avail
mdsmap e9: 1/1/1 up {0=a=up:active}
root@ceph-node1:/etc/ceph# ceph osd tree
# id weight type name up/down reweight
-1 6 root default
-3 6 rack unknownrack
-2 2 host ceph-node1
0 1 osd.0 up 1
1 1 osd.1 up 1
-4 2 host ceph-node2
2 1 osd.2 up 1
3 1 osd.3 up 1
-5 2 host ceph-node3
4 1 osd.4 up 1
5 1 osd.5 up 1
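Beyond a one-shot status check, you can leave a live view running: "ceph -w" prints the same summary and then streams cluster events (OSDs going up or down, placement group state changes) until interrupted:

root@ceph-node1:/etc/ceph# ceph -w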
23. Access Ceph’s Virtual Block Device
ubuntu@ceph-client:~$ rbd ls
rbd: pool rbd doesn't contain rbd images
ubuntu@ceph-client:~$ rbd create myLun --size 4096
ubuntu@ceph-client:~$ rbd ls -l
NAME SIZE PARENT FMT PROT LOCK
myLun 4096M 1
ubuntu@ceph-client:~$ sudo modprobe rbd
ubuntu@ceph-client:~$ sudo rbd map myLun --pool rbd
ubuntu@ceph-client:~$ sudo rbd showmapped
id pool image snap device
0 rbd myLun - /dev/rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd
rbd/ rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd/rbd/myLun
… 1 root root 10 Jan 16 21:15 /dev/rbd/rbd/myLun -> ../../rbd0
ubuntu@ceph-client:~$ ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Jan 16 21:15 /dev/rbd0
24. Format RBD image and use it
ubuntu@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/myLun
mke2fs 1.42 (29-Nov-2011)
...
Writing superblocks and filesystem accounting information: done
ubuntu@ceph-client:~$ sudo mkdir /mnt/myLun
ubuntu@ceph-client:~$ sudo mount /dev/rbd/rbd/myLun /mnt/myLun
ubuntu@ceph-client:~$ df -h | grep myLun
/dev/rbd0 4.0G 190M 3.9G 5% /mnt/myLun
ubuntu@ceph-client:~$ sudo dd if=/dev/zero of=/mnt/myLun/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000431868 s, 1.2 GB/s
ubuntu@ceph-client:~$ ls -lh /mnt/myLun/
total 528K
drwx------ 2 root root 16K Jan 16 21:24 lost+found
-rw-r--r-- 1 root root 512K Jan 16 21:29 testfile
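RBD images are thinly provisioned, so growing one later is cheap. A hedged example of expanding myLun to 8GB and then growing the ext4 filesystem into the new space:

ubuntu@ceph-client:~$ rbd resize myLun --size 8192
ubuntu@ceph-client:~$ sudo resize2fs /dev/rbd0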
25. Access Ceph Distributed Filesystem
~$ sudo mkdir /mnt/myCephFS
~$ sudo mount.ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS
~$ df -h | grep my
192.168.56.21,192.168.56.22,192.168.56.23:/ 48G 11G 38G 22% /mnt/myCephFS
/dev/rbd0 4.0G 190M 3.9G 5% /mnt/myLun
~$ sudo dd if=/dev/zero of=/mnt/myCephFS/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000439191 s, 1.2 GB/s
~$ ls -lh /mnt/myCephFS/
total 512K
-rw-r--r-- 1 root root 512K Jan 16 23:04 testfile
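If the kernel CephFS client is not available, the ceph-fuse package installed earlier provides a userspace alternative; a sketch (monitor address as configured above):

ubuntu@ceph-client:~$ sudo mkdir /mnt/myCephFS-fuse
ubuntu@ceph-client:~$ sudo ceph-fuse -m 192.168.56.21:6789 /mnt/myCephFS-fuse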
26. Unmount, Stop Ceph, and Halt
ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
=== mon.a ===
Stopping Ceph mon.a on ceph-node1...kill 19863...done
=== mon.b ===
=== mon.c ===
=== mds.a ===
=== osd.0 ===
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
* Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
* Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
* Will now halt
^Cubuntu@ceph-client:~$ sudo service halt stop
* Will now halt
27. Review
We:
1. Created the VirtualBox VMs
2. Prepared the VMs for Creating the Ceph Cluster
3. Installed Ceph on all VMs from the Client
4. Configured Ceph on all the server nodes and the client
5. Experimented with Ceph’s Virtual Block Device (RBD)
6. Experimented with the Ceph Distributed Filesystem
7. Unmounted, stopped Ceph, and shut down the VMs safely
• Based on VirtualBox; other hypervisors work too.
• Relaxed security best practices to speed things up, but recommend following them in most circumstances.
29. Leverage great online resources
Documentation on the Ceph web site:
• http://ceph.com/docs/master/
Blogs from Inktank and the Ceph community:
• http://www.inktank.com/news-events/blog/
• http://ceph.com/community/blog/
Developer resources:
• http://ceph.com/resources/development/
• http://ceph.com/resources/mailing-list-irc/
• http://dir.gmane.org/gmane.comp.file-systems.ceph.devel
31. Try it yourself!
• Use the information in this webinar as a starting point
• Consult the Ceph documentation online:
http://ceph.com/docs/master/
http://ceph.com/docs/master/start/
32. Inktank’s Professional Services
Consulting Services:
• Technical Overview
• Infrastructure Assessment
• Proof of Concept
• Implementation Support
• Performance Tuning
Support Subscriptions:
• Pre-Production Support
• Production Support
A full description of our services can be found at the following:
Consulting Services: http://www.inktank.com/consulting-services/
Support Subscriptions: http://www.inktank.com/support-services/
33. Check out our upcoming webinars
1. Introduction to Ceph with OpenStack
January 24, 2013
10:00AM PT, 12:00PM CT, 1:00PM ET
https://www.brighttalk.com/webcast/8847/63177
2. DreamHost Case Study: DreamObjects with Ceph
February 7, 2013
10:00AM PT, 12:00PM CT, 1:00PM ET
https://www.brighttalk.com/webcast/8847/63181
3. Advanced Features of Ceph Distributed Storage
(delivered by Sage Weil, creator of Ceph)
February 12, 2013
10:00AM PT, 12:00PM CT, 1:00PM ET
https://www.brighttalk.com/webcast/8847/63179