Unleash the Power of Ceph
Across the Data Center
TUT18972: FC/iSCSI for Ceph
Ettore Simone
Senior Architect
Alchemy Solutions Lab
ettore.simone@alchemy.solutions
2
Agenda
• Introduction
• The Bridge
• The Architecture
• Use Cases
• How It Works
• Some Benchmarks
• Some Optimizations
• Q&A
• Bonus Tracks
Introduction
4
About Ceph
“Ceph is a distributed object store and file system
designed to provide excellent performance, reliability
and scalability.” (http://ceph.com/)
FUT19336 - SUSE Enterprise Storage Overview and Roadmap
TUT20074 - SUSE Enterprise Storage Design and Performance
5
Ceph timeline
‒ 2004: Project start at UCSC
‒ 2006: Open source
‒ 2010: Mainline Linux kernel
‒ 2011: OpenStack integration
‒ Q2 2012: Launch of Inktank
‒ Q3 2012: Production ready
‒ 2012: CloudStack integration
‒ 2013: Xen integration
‒ Q1 2015: SUSE Storage 1.0
‒ Q4 2015: SUSE Enterprise Storage 2.0
6
Some facts
Common data center storage solutions are built
mainly on top of Fibre Channel (yes, and NAS too).
Source: Wikibon Server SAN Research Project 2014
7
Is the storage mindset changing?
New/Cloud
‒ Micro-services Composed Applications
‒ NoSQL and Distributed Database (lazy commit, replication)
‒ Object and Distributed Storage
SCALE-OUT
Classic
‒ Traditional Application → Relational DB → Traditional Storage
‒ Transactional Process → Commit on DB → Commit on Disk
SCALE-UP
8
Is the storage mindset changing? No.
New/Cloud
‒ Micro-services Composed Applications
‒ NoSQL and Distributed Database (lazy commit, replication)
‒ Object and Distributed Storage
Natural playground of Ceph
Classic
‒ Traditional Application → Relational DB → Traditional Storage
‒ Transactional Process → Commit on DB → Commit on Disk
Where we want to introduce Ceph!
9
Is the new kid on the block so noisy?
Ceph is cool but I cannot rearchitect my storage!
And what about my shiny big disk arrays?
I have already N protocols, why another one?
<Add your own fear here>
10
Our goal
How to achieve a non-disruptive introduction of Ceph
into a traditional storage infrastructure?
(Diagram: a SAN exposes SCSI over FC, a NAS exposes NFS/SMB/iSCSI over Ethernet, and Ceph exposes RBD over Ethernet.)
11
How can Ceph happily coexist in your
data center with the existing neighborhood
(traditional workloads, legacy servers, FC switches, etc.)?
The Bridge
13
FC/iSCSI gateway
iSCSI
‒ Out-of-the-box feature of SES 2.0
‒ TUT16512 - Ceph RBD Devices and iSCSI
Fibre Channel
‒ That is what we will focus on today
14
Back to our goal
How to achieve a non-disruptive introduction of Ceph
into a traditional storage infrastructure?
(Diagram: RBD alongside SAN and NAS.)
15
Linux-IO Target (LIO™)
Is the most common open-source SCSI target in
modern GNU/Linux distros:
(Diagram: the LIO core, in kernel space, bridges fabric modules (FC, FCoE, FireWire, iSCSI, iSER, SRP, loop, vHost) to backstores (FILEIO, IBLOCK, RBD, pSCSI, RAMDISK, TCMU).)
The Architecture
17
Technical Reference for Entry Level
Dedicated nodes connect Ceph to Fibre Channel
18
Hypothesis for High Throughput
All OSD nodes connect Ceph to Fibre Channel
19
Our LAB Architecture
20
Pool and OSD geometry
21
Multi root CRUSH map
22
Multipath I/O (MPIO)
devices {
device {
vendor "(LIO-ORG|SUSE)"
product "*"
path_grouping_policy "multibus"
path_checker "tur"
features "0"
hardware_handler "1 alua"
prio "alua"
failback "immediate"
rr_weight "uniform"
no_path_retry "fail"
rr_min_io 100
}
}
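A minimal sketch of how the stanza above would typically be applied on an initiator host (assuming it has been appended to /etc/multipath.conf; plain multipath-tools commands):
# systemctl enable multipathd
# systemctl start multipathd
# multipath -r
# multipath -ll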
23
Automatically classify the OSD
Classify by NODE;OSD;DEV;SIZE;WEIGHT;SPEED
# ceph-disk-classify
osd01 0 sdb 300G 0.287 15K
osd01 1 sdc 300G 0.287 15K
osd01 2 sdd 200G 0.177 SSD
osd01 3 sde 1.0T 0.971 7.2K
osd01 4 sdf 1.0T 0.971 7.2K
osd02 5 sdb 300G 0.287 15K
osd02 6 sdd 200G 0.177 SSD
osd02 7 sde 1.0T 0.971 7.2K
osd01 8 sdf 1.0T 0.971 7.2K
osd03 9 sdb 300G 0.287 15K
…
24
Avoid standard CRUSH location
Default:
osd crush location = root=default host=`hostname -s`
Using a helper script:
osd crush location hook = /path/to/script
Or entirely manual:
osd crush update on start = false
…
# ceph osd crush [add|set] 39 0.971 root=root-7.2K host=osd08-7.2K
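For reference, a hypothetical location hook along these lines: Ceph calls the hook with --cluster, --id and --type arguments and expects a "root=... host=..." string on stdout. The class lookup below is only illustrative and assumes a local file derived from the ceph-disk-classify output shown earlier.
#!/bin/bash
# Hypothetical CRUSH location hook: print the desired CRUSH position of this OSD.
while [ $# -gt 0 ]; do
  case "$1" in
    --id) OSD_ID="$2"; shift ;;
  esac
  shift
done
# Assumption: /etc/ceph/osd-classes.txt holds "host osd dev size weight speed" lines,
# e.g. generated from ceph-disk-classify.
CLASS=`awk -v id="$OSD_ID" '$2 == id { print $NF }' /etc/ceph/osd-classes.txt`
echo "root=root-${CLASS} host=`hostname -s`-${CLASS}"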
Use Cases
26
Smooth transition
Native migration of SAN LUNs to RBD volumes helps with
migration, conversion and coexistence:
(Diagram: traditional workloads on the SAN and new workloads in the private cloud, bridged by the gateway into Ceph.)
30
Storage replacement
No drama at the end of life/support of traditional storage arrays:
(Diagram: traditional workloads and new workloads in the private cloud, both served by Ceph through the gateway.)
31
D/R and Business Continuity
(Diagram: Ceph and gateways at Site A and Site B.)
How It Works
33
Ceph and Linux-IO
SCSI commands from the fabrics are handled by the LIO core,
configured using targetcli or directly via configfs
(/sys/kernel/config/target), and proxied to the corresponding
block device through the relevant backstore module.
(Diagram: clients, the gateway's user space and kernel space split around /sys/kernel/config/target, and the Ceph cluster.)
34
Enable QLogic in target mode
# modprobe qla2xxx qlini_mode="disabled"
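To make target mode persist across reboots, one common approach (an assumption here, not from the slides) is to pin the module option and rebuild the initrd:
# echo 'options qla2xxx qlini_mode=disabled' > /etc/modprobe.d/qla2xxx.conf
# dracut -f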
35
Identify and enable HBAs
# cat /sys/class/scsi_host/host*/device/fc_host/host*/port_name | \
  sed -e 's/../:&/g' -e 's/:0x://'
# targetcli qla2xxx/ create ${WWPN}
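The same can be done for every local FC port in one pass; a small sketch (using the shorter /sys/class/fc_host path, purely illustrative):
for P in /sys/class/fc_host/host*/port_name; do
  WWPN=`sed -e 's/../:&/g' -e 's/:0x://' $P`
  targetcli qla2xxx/ create ${WWPN}
done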
36
Map RBDs and create backstores
# rbd map -p ${POOL} ${VOL}
# targetcli backstores/rbd create name="${POOL}-${VOL}" dev="${DEV}"
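To get the mapping back automatically at boot, so the backstore device exists before the target configuration is restored, one option (an assumption here, and dependent on how your Ceph version packages rbdmap) is:
# echo "${POOL}/${VOL} id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
# systemctl enable rbdmap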
37
Create LUNs connected to RBDs
# targetcli qla2xxx/${WWPN}/luns create /backstores/rbd/${POOL}-${VOL}
38
“Zoning” to filter access with ACLs
# targetcli qla2xxx/${WWPN}/acls create ${INITIATOR} true
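Putting the previous steps together, a hypothetical end-to-end sequence for one LUN, assuming POOL, VOL, WWPN and INITIATOR are set and the SUSE targetcli with the rbd backstore is installed (targetcli saveconfig persists the setup, typically under /etc/target):
DEV=`rbd map -p ${POOL} ${VOL}`                                     # prints e.g. /dev/rbd0
targetcli backstores/rbd create name="${POOL}-${VOL}" dev="${DEV}"  # expose the RBD as a backstore
targetcli qla2xxx/ create ${WWPN}                                   # FC target on this port
targetcli qla2xxx/${WWPN}/luns create /backstores/rbd/${POOL}-${VOL}
targetcli qla2xxx/${WWPN}/acls create ${INITIATOR} true             # ACL, auto-map LUNs
targetcli saveconfig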
Some Benchmarks
40
First of all...
This solution is NOT a drop-in replacement for a SAN or a NAS
(at the moment, at least!).
The main focus is to identify how to minimize the
overhead from native RBD to FC/iSCSI.
41
Raw performance/estimation on 15K
Physical Disk IOPS → Ceph IOPS:
‒ 4K RND Read: 193 x 24 = 4,632
‒ 4K RND Write: 178 x 24 / 3 (replicas) = 1,424 / 3 (journal) = 475
Physical Disk Throughput → Ceph Throughput:
‒ 512K RND Read: 108 MB/s x 24 = 2,600 MB/s
‒ 512K RND Write: 105 MB/s x 24 / 3 (replicas) = 840 / 2 (journal) = 420 MB/s
NOTE:
‒ 24 OSDs and 3 replicas per pool
‒ No SSD for journal (so ~1/3 IOPS and ~1/2 of bandwidth for writes)
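The same estimate as a quick shell calculation (numbers copied from above; integer arithmetic, so the write figure rounds down instead of up):
# echo $((193 * 24))           # estimated 4K RND read IOPS
4632
# echo $((178 * 24 / 3 / 3))   # 4K RND write IOPS: 3 replicas, then the journal penalty
474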
43
Compared performance on 15K
(Charts: 64K SEQ Read/Write throughput in MB/s and 4K RND Read/Write IOPS, comparing Estimated, RBD, MAP/AIO, MAP/LIO and QEMU/LIO.)
NOTE:
‒ 64K SEQ on the RBD client → 512K RND on the Ceph OSDs
Work in Progress
46
What we are working on
Centralized management with GUI/CLI
‒ Deploy MON/OSD/GW nodes
‒ Manage Nodes/Disk/Pools/Map/LIO
‒ Monitor cluster and node status
Reaction on failures
Using librados/librbd with tcmu for backstore
47
Central Management Console
• Intel Virtual Storage Manager
• Ceph Calamari
• inkScope
48
More integration with existing tools
Extend LRBD to accept multiple fabrics:
‒ iSCSI (native support)
‒ FC
‒ FCoE
Linux-IO:
‒ Use of librados via tcmu
Some Optimizations
50
I/O scheduler matters! (example settings below)
On OSD nodes:
‒ deadline on physical disks (cfq if using ionice for the scrub thread)
‒ noop on RAID disk
‒ read_ahead_kb=2048
On Gateway nodes:
‒ noop on mapped RBD
On Client nodes:
‒ noop or deadline on multipath device
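For example (device names are illustrative; in practice these would usually be set via udev rules or a boot script rather than by hand):
On an OSD node:
# echo deadline > /sys/block/sdb/queue/scheduler
# echo 2048 > /sys/block/sdb/queue/read_ahead_kb
On a gateway node, for a mapped RBD:
# echo noop > /sys/block/rbd0/queue/scheduler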
51
Reduce I/O concurrency
• Reduce OSD scrub priority:
‒ I/O scheduler cfq
‒ osd_disk_thread_ioprio_class = idle
‒ osd_disk_thread_ioprio_priority = 7
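These can be set in the [osd] section of ceph.conf, or injected into running OSDs, e.g.:
# ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'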
52
Design optimizations
• SSD on monitor nodes for LevelDB: decreases CPU and
memory usage and recovery time
• SSD journal decreases I/O latency: 3x IOPS and better
throughput
Q&A
54
lab@alchemy.solutions
Thank you.
Corporate Headquarters
Maxfeldstrasse 5
90409 Nuremberg
Germany
+49 911 740 53 0 (Worldwide)
www.suse.com
Join us on:
www.opensuse.org
55
Bonus Tracks
57
Business Continuity architecture
Low-latency connected sites:
WARNING: to improve availability, a third site hosting a
quorum node is highly encouraged.
58
Disaster Recovery architecture
High latency or disconnected sites:
As in the OpenStack Ceph plug-in for Cinder Backup:
# rbd export-diff pool/image@end --from-snap start - | \
  ssh -C remote rbd import-diff - pool/image
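For completeness, a sketch of the full cycle around that incremental copy (snapshot names are illustrative, and the remote image is assumed to already exist with the same size):
# rbd snap create pool/image@start
# rbd export-diff pool/image@start - | ssh -C remote rbd import-diff - pool/image
# rbd snap create pool/image@end
# rbd export-diff --from-snap start pool/image@end - | ssh -C remote rbd import-diff - pool/image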
59
KVM Gateways
• VT-d physical passthrough of the QLogic HBAs
• RBD Volumes as VirtIO devices
• Linux-IO iblock backstore
60
VT-d PCI passthrough 1/2
Install KVM and tools
Boot with intel_iommu=on
# lspci -D | grep -i QLogic | awk '{ print $1 }'
0000:24:00.0
0000:24:00.1
# readlink /sys/bus/pci/devices/0000:24:00.{0,1}/driver
../../../../bus/pci/drivers/qla2xxx
../../../../bus/pci/drivers/qla2xxx
# modprobe -r qla2xxx
61
VT-d PCI passthrough 2/2
# virsh nodedev-detach pci_0000_24_00_{0,1}
Device pci_0000_24_00_0 detached
Device pci_0000_24_00_1 detached
# virsh edit VM
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x24' slot='0x0' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x24' slot='0x0' function='0x1'/>
</source>
</hostdev>
# virsh start VM
62
KVM hot-add RBD 1/2
# ceph auth get-or-create client.libvirt mon 'allow r'
osd 'allow rwx'
[client.libvirt]
key = AQBN3S9W0Z2gKxAAnua2fIlcSVSZ/c7pqHtTwA==
# cat secret.xml
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.libvirt secret</name>
</usage>
</secret>
# virsh secret-define --file secret.xml
Secret 363aad3c-d13c-440d-bb27-fd58fca6aac2 created
# virsh secret-set-value --secret 363aad3c-d13c-440d-bb27-fd58fca6aac2 \
  --base64 AQBN3S9W0Z2gKxAAnua2fIlcSVSZ/c7pqHtTwA==
63
KVM hot-add RBD 2/2
# cat disk.xml
<disk type='network' device='disk'>
<source protocol='rbd' name='pool/vol'>
<host name='mon01' port='6789'/>
<host name='mon02' port='6789'/>
<host name='mon03' port='6789'/>
</source>
<auth username='libvirt'>
<secret type='ceph' uuid='363aad3c-d13c-440d-bb27-fd58fca6aac2'/>
</auth>
<target dev='vdb' bus='virtio'/>
</disk>
# virsh attach-device --persistent VM disk.xml
Device attached successfully
64
/usr/local/sbin/ceph-disk-classify
# Enumerate OSDs
ceph osd ls | 
while read OSD; do
# Extract IP/HOST from Cluster Map
IP=`ceph osd find $OSD | tr -d '"' | grep 'ip:' | awk -F: '{ print $2 }'`
NODE=`getent hosts $IP | sed -e 's/.* //'`
test -n "$NODE" || NODE=$IP
# Evaluate mount point for osd.<N> (so skip Journals and not used ones)
MOUNT=`ssh -n $NODE ceph-disk list 2>/dev/null | grep "osd.$OSD" | awk '{ print $1 }'`
DEV=`echo $MOUNT | sed -e 's/[0-9]*$//' -e 's|/dev/||'`
# Calculate Disk size and FS size
SIZE=`ssh -n $NODE cat /sys/block/$DEV/size`
SIZE=$[SIZE*512]
DF=`ssh -n $NODE df $MOUNT | grep $MOUNT | awk '{ print $2 }'`
# Weight is the size in TByte
WEIGHT=`printf '%3.3f' $(bc -l<<<$DF/1000000000)`
SPEED=`ssh -n $NODE sginfo -g /dev/$DEV | sed -n -e 's/^Rotational Rate\s*//p'`
test "$SPEED" = '1' && SPEED='SSD'
# Output
echo $NODE $OSD $DEV `numfmt --to=si $SIZE` $WEIGHT $SPEED
done
A Light Hands-On
66
A Vagrant LAB for Ceph and iSCSI
• 3 all-in-one nodes (MON+OSD+iSCSI Target)
• 1 admin Calamari and iSCSI Initiator with MPIO
• 3 disks per OSD node
• 2 replicas
• Placement Groups: 3*3*100/2 = 450 → 512
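The placement-group figure follows the usual rule of thumb, (OSDs x 100) / replicas rounded up to the next power of two; a quick sketch of that rounding:
PG=$((3 * 3 * 100 / 2))                             # 9 OSDs x 100 / 2 replicas = 450
P=1; while [ $P -lt $PG ]; do P=$((P * 2)); done
echo $P                                             # 512, used below for pg_num/pgp_num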
67
Ceph Initial Configuration
Log in to ceph-admin and create the initial ceph.conf
# ceph-deploy install ceph-{admin,1,2,3}
# ceph-deploy new ceph-{1,2,3}
# cat <<-EOD >>ceph.conf
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 512
osd_pool_default_pgp_num = 512
EOD
68
Ceph Deploy
Log in to ceph-admin and create the Ceph cluster
# ceph-deploy mon create-initial
# ceph-deploy osd create ceph-{1,2,3}:sd{b,c,d}
# ceph-deploy admin ceph-{admin,1,2,3}
69
LRBD “auth”
"auth": [
{
"authentication": "none",
"target": "iqn.2015-09.ceph:sn"
}
]
70
LRBD “targets”
"targets": [
{
"hosts": [
{
"host": "ceph-1", "portal": "portal1"
},
{
"host": "ceph-2", "portal": "portal2"
},
{
"host": "ceph-3", "portal": "portal3"
}
],
"target": "iqn.2015-09.ceph:sn"
}
]
71
LRBD “portals”
"portals": [
{
"name": "portal1",
"addresses": [ "10.20.0.101" ]
},
{
"name": "portal2",
"addresses": [ "10.20.0.102" ]
},
{
"name": "portal3",
"addresses": [ "10.20.0.103" ]
}
]
72
LRBD “pools”
"pools": [
{
"pool": "rbd",
"gateways": [
{
"target": "iqn.2015-09.ceph:sn",
"tpg": [
{
"image": "data",
"initiator": "iqn.1996-04.suse:cl"
}
]
}
]
}
]
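The four fragments above ("auth", "targets", "portals", "pools") make up one JSON configuration that lrbd stores in the cluster and replays on each gateway; roughly (exact flags vary by lrbd version):
# lrbd                    # apply the stored configuration to the local LIO target
# systemctl enable lrbd   # re-apply it automatically at boot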
Unpublished Work of SUSE LLC. All Rights Reserved.
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC.
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their
assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated,
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE.
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.
General Disclaimer
This document is not to be construed as a promise by any participating company to develop, deliver, or market a
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and
specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The
development, release, and timing of features or functionality described for SUSE products remains at the sole discretion
of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time,
without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this
presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-
party trademarks are the property of their respective owners.