SUSE® Storage 
Sizing and Performance for Ceph 
Lars Marowsky-Brée 
Distinguished Engineer 
lmb@suse.com
What is Ceph?
3 
From 10,000 Meters 
• Open Source Distributed Storage solution 
• Most popular choice of distributed storage for OpenStack [1] 
• Lots of goodies 
‒ Distributed Object Storage 
‒ Redundancy 
‒ Efficient Scale-Out 
‒ Can be built on commodity hardware 
‒ Lower operational cost 
[1] http://www.openstack.org/blog/2013/11/openstack-user-survey-statistics-november-2013/
4 
From 1,000 Meters 
• Three interfaces rolled into one 
‒ Object Access (like Amazon S3) 
‒ Block Access 
‒ (Distributed File System) 
• Sitting on top of a Storage Cluster 
‒ Self Healing 
‒ Self Managed 
‒ No Bottlenecks
5 
From 1,000 Meters 
Unified Data Handling for 3 Purposes 
• Object Storage (like Amazon S3) 
‒ RESTful Interface 
‒ S3 and SWIFT APIs 
• Block Device 
‒ Block devices, up to 16 EiB 
‒ Thin Provisioning 
‒ Snapshots 
• File System 
‒ POSIX Compliant 
‒ Separate Data and Metadata 
‒ For use e.g. with Hadoop 
Autonomous, Redundant Storage Cluster
6 
Component Names 
• radosgw: Object Storage 
• RBD: Block Device 
• Ceph FS: File System 
• librados: Direct Application Access to RADOS 
• RADOS: the underlying storage cluster
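As a concrete illustration of the librados path above, here is a minimal sketch using Ceph's Python rados bindings; the config path, pool name and object name are placeholder assumptions, and the pool is assumed to already exist.

```python
# Minimal sketch: direct application access to RADOS via the Python
# "rados" bindings. Assumes a reachable cluster configured in
# /etc/ceph/ceph.conf and an existing pool named "mypool" (placeholder).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('mypool')   # I/O context bound to one pool
    try:
        ioctx.write_full('hello-object', b'hello RADOS')  # store an object
        print(ioctx.read('hello-object'))                 # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```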
7 
SUSE Storage architectural benefits 
● Integration with SUSE Cloud and SUSE Linux Enterprise Server 
● Exabyte scalability 
● No bottlenecks or single points of failure 
● Industry-leading functionality 
‒ Remote replication, erasure coding 
‒ Cache tiering 
‒ Unified block, file and object interface 
‒ Thin provisioning, copy on write 
● 100% software based; can use commodity hardware 
● Automated management 
‒ Self-managing, self-healing
How Does Ceph Work?
9 
For a Moment, Zooming to Atom Level 
(Diagram: an OSD (Object Storage Daemon) runs on top of a file system (btrfs, xfs), which sits on a physical disk) 
● OSDs serve storage objects to clients 
● Peer to perform replication and recovery
10 
Put Several of These in One Node 
(Diagram: six OSD / file system / disk stacks within a single storage node)
11 
Mix In a Few Monitor Nodes 
• Monitors are the brain cells of the cluster 
‒ Cluster Membership 
‒ Consensus for Distributed Decision Making 
• Do not serve stored objects to clients
12 
Voilà, a Small RADOS Cluster 
(Diagram: several OSD nodes plus three monitor nodes form a small RADOS cluster)
13 
Different Access Modes 
• radosgw: 
‒ An additional gateway in 
front of your RADOS cluster 
‒ Little impact on throughput, 
but it does affect latency 
• User-space RADOS 
access: 
‒ More feature rich than in-kernel 
rbd.ko module 
‒ Typically provides higher 
performance
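To illustrate the user-space access path, here is a minimal sketch using the librbd Python bindings instead of the in-kernel rbd.ko module; the pool name 'rbd' and the image name are placeholder assumptions, and the pool must already exist.

```python
# Sketch: user-space block access through librbd's Python bindings,
# which bypasses the in-kernel rbd.ko module. Pool and image names
# are placeholders; the pool must already exist.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'testimage', 4 * 1024**3)   # 4 GiB thin-provisioned image

image = rbd.Image(ioctx, 'testimage')
image.write(b'hello block device', 0)   # write at offset 0
print(image.read(0, 18))                # read the bytes back
image.close()

ioctx.close()
cluster.shutdown()
```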
14 
Several Ingredients 
• Basic Idea 
‒ Coarse grained partitioning of storage supports policy based 
mapping (don't put all copies of my data in one rack) 
‒ Topology map and Rules allow clients to “compute” the exact 
location of any storage object 
• Three conceptual components 
‒ Pools 
‒ Placement groups 
‒ CRUSH: deterministic decentralized placement algorithm
15 
Pools 
• A pool is a logical container for storage objects 
• A pool has a set of parameters 
‒ a name 
‒ a numerical ID (internal to RADOS) 
‒ number of replicas OR erasure encoding settings 
‒ number of placement groups 
‒ placement rule set 
‒ owner 
• Pools support certain operations 
‒ create/remove/read/write entire objects 
‒ snapshot of the entire pool
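A hedged sketch of these pool operations through the Python rados bindings; the pool name is borrowed from the next slide's example, and error handling is omitted.

```python
# Sketch of the pool operations listed above, using the Python rados
# bindings. The pool name "swimmingpool" is just an example.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

cluster.create_pool('swimmingpool')          # create a pool
print(cluster.pool_exists('swimmingpool'))   # -> True

ioctx = cluster.open_ioctx('swimmingpool')
ioctx.write_full('rubberduck', b'quack')     # write an entire object
ioctx.create_snap('before-cleanup')          # pool snapshot
ioctx.remove_object('rubberduck')            # remove an object
ioctx.close()

cluster.delete_pool('swimmingpool')          # remove the pool again
cluster.shutdown()
```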
16 
Placement Groups 
• Placement groups help balance data across OSDs 
• Consider a pool named “swimmingpool” 
‒ with a pool ID of 38 and 8192 placement groups (PGs) 
• Consider object “rubberduck” in “swimmingpool” 
‒ hash(“rubberduck”) % 8192 = 0xb0b 
‒ The resulting PG is 38.b0b 
• One PG typically exists on several OSDs 
‒ for replication 
• One OSD typically serves many PGs
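The mapping above can be sketched in a few lines. This is illustrative only: Ceph uses its own rjenkins-based hash internally, so the stand-in CRC32 below will not reproduce the real 0xb0b value.

```python
# Illustration of the object-name -> placement-group mapping described
# above. Ceph uses its own hash (rjenkins) internally; CRC32 is used
# here purely to show the modulo step, so the result will differ from
# Ceph's real 38.b0b.
import zlib

POOL_ID = 38          # "swimmingpool"
PG_NUM = 8192         # placement groups in the pool

def pg_for_object(name: str) -> str:
    bucket = zlib.crc32(name.encode()) % PG_NUM
    return "%d.%x" % (POOL_ID, bucket)

print(pg_for_object("rubberduck"))   # e.g. "38.xxx" (not the real 38.b0b)
```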
17 
CRUSH 
• CRUSH uses a map of all OSDs in your 
cluster 
‒ includes physical topology, like row, rack, host 
‒ includes rules describing which OSDs to 
consider for what type of pool/PG 
• This map is maintained by the monitor 
nodes 
‒ Monitor nodes use standard cluster algorithms 
for consensus building, etc
18 
swimmingpool/rubberduck
19 
CRUSH in Action: Reading 
swimmingpool/rubberduck 
(Diagram: a client reads object 38.b0b from the cluster) 
Reads could be serviced by any of the replicas (parallel reads improve throughput)
20 
CRUSH in Action: Writing 
swimmingpool/rubberduck 
(Diagram: a client writes object 38.b0b into the cluster) 
Writes go to one OSD, which then propagates the changes to other replicas
Software Defined Storage
22 
Legacy Storage Arrays 
• Limits: 
‒ Tightly controlled 
environment 
‒ Limited scalability 
‒ Few options 
‒ Only certain approved drives 
‒ Constrained number of disk 
slots 
‒ Few memory variations 
‒ Only very few networking 
choices 
‒ Typically fixed controller and 
CPU 
• Benefits: 
‒ Reasonably easy to 
understand 
‒ Long-term experience and 
“gut instincts” 
‒ Somewhat deterministic 
behavior and pricing
23 
Software Defined Storage (SDS) 
• Limits: 
‒ ? 
• Benefits: 
‒ Infinite scalability 
‒ Infinite adaptability 
‒ Infinite choices 
‒ Infinite flexibility 
‒ ... right.
24 
Properties of a SDS System 
• Throughput 
• Latency 
• IOPS 
• Availability 
• Reliability 
• Capacity 
• Cost 
• Density
25 
Architecting a SDS system 
• These goals often conflict: 
‒ Availability versus Density 
‒ IOPS versus Density 
‒ Everything versus Cost 
• Many hardware options 
• Software topology offers many configuration choices 
• There is no one size fits all
Setup Choices
27 
Network 
• Choose the fastest network you can afford 
• Switches should be low latency with fully meshed 
backplane 
• Separate public and cluster network 
• Cluster network should typically be twice the public 
bandwidth 
‒ Incoming writes are replicated over the cluster network 
‒ Re-balancing and re-mirroring utilize the cluster network
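A back-of-envelope calculation of why the cluster network needs roughly twice the public bandwidth; the 3-way replication and 10 Gb/s figures are example assumptions, not measurements.

```python
# Back-of-envelope check of the "cluster network ~= 2x public network"
# rule of thumb. Assumes 3-way replication (an example choice) and
# steady-state client writes only.
replicas = 3
client_write_gbps = 10          # aggregate incoming writes on the public network

# Each incoming write is forwarded to (replicas - 1) peers over the
# cluster network; re-balancing and recovery traffic come on top.
cluster_replication_gbps = client_write_gbps * (replicas - 1)

print("Public network load:  %d Gb/s" % client_write_gbps)
print("Cluster network load: %d Gb/s (replication only)" % cluster_replication_gbps)
```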
28 
Networking (Public and Internal) 
• Ethernet (1, 10, 40 GbE) 
‒ Reasonably inexpensive (except for 40 GbE) 
‒ Can easily be bonded for availability 
‒ Use jumbo frames 
• Infiniband 
‒ High bandwidth 
‒ Low latency 
‒ Typically more expensive 
‒ No support for RDMA yet in Ceph, need to use IPoIB
29 
Storage Node 
• CPU 
‒ Number and speed of cores 
• Memory 
• Storage controller 
‒ Bandwidth, performance, cache size 
• SSDs for OSD journal 
‒ SSD to HDD ratio 
• HDDs 
‒ Count, capacity, performance
30 
Adding More Nodes 
• Capacity increases 
• Total throughput 
increases 
• IOPS increase 
• Redundancy increases 
• Latency unchanged 
• Eventually: network 
topology limitations 
• Temporary impact during 
re-balancing
31 
Adding More Disks to a Node 
• Capacity increases 
• Redundancy increases 
• Throughput might 
increase 
• IOPS might increase 
• Internal node bandwidth 
is consumed 
• Higher CPU and memory 
load 
• Cache contention 
• Latency unchanged
32 
OSD File System 
• btrfs 
‒ Typically better write 
throughput performance 
‒ Higher CPU utilization 
‒ Feature rich 
‒ Compression, checksums, copy 
on write 
‒ The choice for the future! 
• XFS 
‒ Good all around choice 
‒ Very mature for data 
partitions 
‒ Typically lower CPU 
utilization 
‒ The choice for today!
33 
Impact of Caches 
• Cache on the client side 
‒ Typically, biggest impact on performance 
‒ Does not help with write performance 
• Server OS cache 
‒ Low impact: reads have already been cached on the client 
‒ Still, helps with readahead 
• Caching controller, battery backed: 
‒ Significant impact for writes
34 
Impact of SSD Journals 
• SSD journals accelerate bursts and random write IO 
• For sustained writes that overflow the journal, 
performance degrades to HDD levels 
• SSDs help very little with read performance 
• SSDs are very costly 
‒ ... and consume storage slots -> lower density 
• A large battery-backed cache on the storage controller 
is highly recommended if not using SSD journals
35 
Hard Disk Parameters 
• Capacity matters 
‒ Often, highest density is not 
most cost effective 
‒ On-disk cache matters less 
• Reliability advantage of 
Enterprise drives typically 
marginal compared to 
cost 
‒ Buy more drives instead 
‒ Consider validation matrices 
for small/medium NAS 
servers as a guide 
• Higher RPM: 
‒ Increases IOPS & throughput 
‒ Increases power consumption 
‒ 15k drives are still quite expensive
36 
Impact of Redundancy Choices 
• Replication: 
‒ n number of exact, full-size 
copies 
‒ Potentially increased read 
performance due to striping 
‒ More copies lower 
throughput, increase latency 
‒ Increased cluster network 
utilization for writes 
‒ Rebuilds can leverage 
multiple sources 
‒ Significant capacity impact 
• Erasure coding: 
‒ Data split into k parts plus m 
redundancy codes 
‒ Better space efficiency 
‒ Higher CPU overhead 
‒ Significant CPU and cluster 
network impact, especially 
during rebuild 
‒ Cannot directly be used with 
block devices (see next 
slide)
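To make the capacity trade-off concrete, here is a small worked example comparing 3-way replication with a 4+2 erasure-coded pool; both parameter choices are illustrative, not recommendations.

```python
# Quick comparison of the capacity impact of replication vs. erasure
# coding, as discussed above. The replica count and k/m values are
# example choices only.
raw_tb = 100                     # raw cluster capacity

replicas = 3                     # 3 full-size copies
usable_replicated = raw_tb / replicas

k, m = 4, 2                      # data split into k parts plus m codes
usable_ec = raw_tb * k / (k + m)

print("3x replication: %.1f TB usable (%.0f%% overhead)"
      % (usable_replicated, (replicas - 1) * 100))
print("EC %d+%d:         %.1f TB usable (%.0f%% overhead)"
      % (k, m, usable_ec, m / k * 100))
```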
37 
Cache Tiering 
• Multi-tier storage architecture: 
‒ Pool acts as a transparent write-back overlay for another 
‒ e.g., SSD 3-way replication over HDDs with erasure coding 
‒ Can flush either on relative or absolute dirty levels, or age 
‒ Adds configuration complexity and requires workload-specific tuning 
‒ Also available: read-only mode (no write acceleration) 
‒ Some downsides (no snapshots), memory consumption for 
HitSet 
• A good way to combine the advantages of replication 
and erasure coding
38 
Number of placement groups 
●Number of hash buckets 
per pool 
● Data is chunked & 
distributed across nodes 
● Typically approx. 100 per 
OSD/TB 
• Too many: 
‒ More peering 
‒ More resources used 
• Too few: 
‒ Large amounts of data per 
group 
‒ More hotspots, less striping 
‒ Slower recovery from failure 
‒ Slower re-balancing
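A sketch of the usual rule-of-thumb calculation implied above. Dividing by the replica count (so that PG copies, not PGs, end up near 100 per OSD) and rounding up to a power of two are common conventions assumed here rather than stated on the slide.

```python
# Rule-of-thumb sizing sketch for pg_num, based on the "approx. 100 PGs
# per OSD" guideline above, spread across the pool's replicas and
# rounded up to a power of two. Inputs are example values.
def suggested_pg_num(num_osds, target_pgs_per_osd=100, pool_size=3):
    raw = num_osds * target_pgs_per_osd / pool_size
    pg_num = 1
    while pg_num < raw:          # round up to the next power of two
        pg_num *= 2
    return pg_num

print(suggested_pg_num(40))      # 40 OSDs, 3 replicas -> 2048
```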
39 
Measuring Ceph performance 
• rados bench 
‒ Measures backend performance of the RADOS store 
‒ Simple options 
• rados load-gen 
‒ Generate configurable load on the cluster 
• fio rbd backend 
‒ Swiss army knife of IO benchmarking on Linux 
‒ Can also compare in-kernel rbd with user-space librados 
• rest-bench 
‒ Measures S3/radosgw performance
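As a hedged example of scripting the first tool, here is a tiny wrapper around the rados bench CLI; the pool name is a placeholder, and the exact report format and cleanup behaviour vary between Ceph releases.

```python
# Tiny harness around "rados bench". The pool name is a placeholder and
# the report format differs between Ceph releases, so parsing is left out.
import subprocess

POOL = 'testpool'    # benchmark against a throw-away pool

def rados_bench(mode, seconds=30, extra=()):
    # mode is one of: write, seq, rand
    cmd = ['rados', 'bench', '-p', POOL, str(seconds), mode, *extra]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

# Keep the benchmark objects around so the read pass has something to read.
print(rados_bench('write', extra=('--no-cleanup',)))
print(rados_bench('seq'))
```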
Conclusion
41 
How to size a Ceph cluster? 
• Understand your workload 
• Make a best guess, based on the desirable properties 
and factors 
• Build a 10% pilot / proof of concept 
‒ Preferably using loaner hardware from a vendor to avoid early 
commitment 
‒ The prospect of selling a few PB of storage and compute nodes 
makes vendors very cooperative 
• Refine this until desired performance is achieved 
• Scale up 
‒ Ceph retains most characteristics at scale or even improves
Thank you. 
42 
Questions and Answers?
Corporate Headquarters 
Maxfeldstrasse 5 
90409 Nuremberg 
Germany 
+49 911 740 53 0 (Worldwide) 
www.suse.com 
Join us on: 
www.opensuse.org 
43
Unpublished Work of SUSE LLC. All Rights Reserved. 
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. 
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of 
their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, 
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. 
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability. 
General Disclaimer 
This document is not to be construed as a promise by any participating company to develop, deliver, or market a 
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making 
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, 
and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The 
development, release, and timing of features or functionality described for SUSE products remains at the sole 
discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at 
any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in 
this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All 
third-party trademarks are the property of their respective owners.
