Quick & Easy Deployment of
a Ceph Storage Cluster with
SUSE Enterprise Storage
Lars Marowsky-Brée
Distinguished Engineer, Architect Storage & HA
lmb@suse.com
2
Agenda
• Introduction
• Setting the stage
• What is Ceph, and how does it work?
• Designing Ceph clusters
• Deploying Ceph
• Q & A
Software-defined storage market
4
2014: $1.5B - 2017: $2.8B - 22.5% CAGR (IDC)
By 2018, open-source storage will gain 20% of the market share, up from
less than 1% in 2013 (Gartner)
5
Enterprise Data Storage: Market Segments
6
Enterprise Data Capacity Utilization
• Tier 0 (Ultra High Performance): 1-3% of enterprise data
• Tier 1 (High-value, OLTP, Revenue Generating): 15-20%
• Tier 2 (Backup/Recovery, Reference Data, Bulk Data): 20-25%
• Tier 3 (Object, Archive, Compliance Archive, Long-term Retention): 50-60%
Source: Horison Information Strategies - Fred Moore
7
Enterprise Data Storage: Use Cases
[Chart: use cases plotted from low to high functionality and from capacity optimized to performance optimized: Object Storage, Archive Storage, Data Backup, Bulk Storage, Compliance Archive, DR Target, Video, Audio, Big Data, Data Analytics, OLTP, CRM, ERP, Email, HPC, VM-Aware]
8
Ceph at SUSE
• 20+ years success with Linux & Enterprise storage
‒ Ceph as technology preview in SUSE Cloud 3
‒ General support with SUSE Cloud 4
‒ Dedicated storage product since February 2015
• Fast growing team, still hiring!
• Focus areas:
‒ Enterprise-readiness
‒ Ease of use
‒ Stabilization and integration
9
SUSE Enterprise Storage: Initial Target Market
[Chart: the same use-case map as the previous slide, with the initial target market for SUSE Enterprise Storage highlighted]
10
SUSE Enterprise Storage Pricing Model
SES Base Configuration – Priority Subscription
● SES and limited use of SUSE Linux Enterprise Server to provide:
‒ 4 SES storage OSD nodes (1-2 sockets)
‒ 3-5 SES MON nodes (customer selects redundancy level)
‒ 1 SES management node
SES Expansion Node – Priority Subscription
What is Ceph?
12
From 10,000 Meters
[1] http://www.openstack.org/blog/2013/11/openstack-user-survey-statistics-november-2013/
• Open Source Distributed Storage solution
• Most popular choice of distributed storage for
OpenStack
[1]
• Lots of goodies
‒ Distributed Object Storage
‒ Redundancy
‒ Efficient Scale-Out
‒ Extensible
‒ Can be built on commodity hardware
‒ Lower operational cost
13
From 1,000 Meters
Unified Data Handling for 3 Purposes:
• Object Storage (like Amazon S3): RESTful interface, S3 and Swift APIs
• Block Device: block devices up to 16 EiB, thin provisioning, snapshots
• File System: POSIX compliant, separate data and metadata, for use e.g. with Hadoop
All provided by an autonomous, redundant storage cluster.
14
Component Names
• radosgw: Object Storage
• RBD: Block Device
• CephFS: File System
• librados: direct application access to RADOS
All of these are layered on top of RADOS.
15
For a Moment, Zooming to Atom Level
[Diagram: the OSD stack: Object Storage Daemon on top of a file system (btrfs, xfs) on top of a physical disk]
● OSDs serve storage objects to clients
● OSDs peer to perform replication and recovery
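For example (a minimal sketch, assuming a deployed cluster and an admin keyring on the node), the standard ceph CLI shows the OSDs and their state:

  # List all OSDs, their position in the CRUSH hierarchy, and their up/down, in/out state
  ceph osd tree
  # Summary count of OSDs that are up and in
  ceph osd stat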
16
Put Several of These in One Node
[Diagram: a single node hosting six OSDs, each with its own file system and physical disk]
17
Mix In a Few Monitor Nodes
• Monitors (M) are the brain cells of the cluster
‒ Cluster Membership
‒ Consensus for Distributed Decision Making
• Do not serve stored objects to clients
18
Voilà, a Small RADOS Cluster
[Diagram: a handful of OSD nodes plus three monitor nodes form a small RADOS cluster]
19
Application Access
[Diagram: applications talk to the cluster either directly via librados over a native socket protocol, or over REST via radosgw, which itself uses librados]
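As a small illustration (pool and file names are placeholders), the rados CLI exercises the librados path directly, while S3/Swift clients would go through radosgw instead:

  # Store, list and retrieve an object in a pool via librados
  rados -p swimmingpool put rubberduck ./duck.jpg
  rados -p swimmingpool ls
  rados -p swimmingpool get rubberduck /tmp/duck.jpg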
20
RADOS Block Device
[Diagram: a Linux host maps an RBD volume through the in-kernel krbd driver / librados and talks directly to the cluster]
21
RADOS Block Device - VMs
[Diagram: a KVM host runs virtual machines whose disks are RBD volumes accessed through qemu-rbd / librados]
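A minimal sketch of both access paths (pool and image names are illustrative; option spelling varies slightly between Ceph releases):

  # Create a 10 GiB RBD image (size is given in MiB)
  rbd create swimmingpool/vm-disk --size 10240
  # Map it on a Linux host via the in-kernel krbd driver; it appears as /dev/rbd*
  rbd map swimmingpool/vm-disk
  # Or hand it to a KVM guest through qemu's built-in rbd driver (librbd/librados)
  qemu-system-x86_64 ... -drive format=raw,file=rbd:swimmingpool/vm-disk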
22
Several Ingredients
• Basic Idea
‒ Coarse-grained partitioning of storage supports policy-based mapping (don't put all copies of my data in one rack)
‒ Topology map and Rules allow clients to “compute” the exact
location of any storage object
‒ CRUSH: deterministic decentralized placement algorithm
• Three conceptual components
‒ Objects / images
‒ Pools
‒ Placement groups
23
Pools
• A pool is a logical container for storage objects
• A pool has a set of parameters
‒ a name
‒ a numerical ID (internal to RADOS)
‒ number of replicas
‒ number of placement groups
‒ placement rule set
‒ owner
• Pools support certain operations
‒ create/remove/read/write entire objects
‒ snapshot of the entire pool
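For example (pool name and PG counts are illustrative), creating and adjusting such a pool looks like this:

  # Create a replicated pool with 8192 placement groups
  ceph osd pool create swimmingpool 8192 8192 replicated
  # Set and inspect the number of replicas
  ceph osd pool set swimmingpool size 3
  ceph osd pool get swimmingpool size
  # Snapshot the entire pool
  rados -p swimmingpool mksnap before-upgrade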
24
Placement Groups
• Placement groups help balance data across OSDs
• Consider a pool named “swimmingpool”
‒ with a pool ID of 38 and 8192 placement groups (PGs)
• Consider object “rubberduck” in “swimmingpool”
‒ hash(“rubberduck”) % 8192 = 0xb0b
‒ The resulting PG is 38.b0b
• One PG typically spans several OSDs
‒ for balancing
‒ for replication
• One OSD typically serves many PGs
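You can ask the cluster to do this calculation for you; for the slide's example it would report PG 38.b0b together with the OSDs currently serving it:

  # Where does object "rubberduck" live in pool "swimmingpool"?
  ceph osd map swimmingpool rubberduck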
25
CRUSH
• CRUSH uses a map of all OSDs in your
cluster
‒ includes physical topology, like row, rack, host
‒ includes rules describing which OSDs to
consider for what type of pool/PG
• This map is maintained by the monitor
nodes
‒ Monitor nodes use standard cluster algorithms
for consensus building, etc
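A quick way to look at what CRUSH is working with (standard commands, shown here as a sketch):

  # Show the CRUSH hierarchy: rows, racks, hosts, OSDs
  ceph osd tree
  # Export and decompile the CRUSH map for inspection or editing
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt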
26
[Diagram: locating object swimmingpool/rubberduck in the cluster with CRUSH]
27
CRUSH in Action: Writing
[Diagram: writing swimmingpool/rubberduck to PG 38.b0b]
Writes go to one OSD, which then propagates the changes to the other replicas.
Self-Healing
29
Self-Healing
[Diagram: PG 38.b0b for swimmingpool/rubberduck; one of its OSDs has failed]
Monitors detect a dead OSD.
30
Self-Healing
[Diagram: a replacement OSD is chosen for PG 38.b0b]
Monitors allocate other OSDs to the PG and update the CRUSH map.
31
Self-Healing
[Diagram: recovery of PG 38.b0b onto the replacement OSD]
Monitors initiate recovery.
32
Self-Healing
[Diagram: the recovered PG 38.b0b serving swimmingpool/rubberduck]
Future writes update the new replica.
33
Redundancy Choices
• Replication:
‒ n number of exact, full-size
copies
‒ Potentially increased read
performance due to striping
‒ More copies lower
throughput, increase latency
‒ Increased cluster network
utilization for writes
‒ Rebuilds can leverage
multiple sources
‒ Significant capacity impact
• Erasure coding:
‒ Data split into k parts plus m
redundancy codes
‒ Better space efficiency
‒ Higher CPU overhead
‒ Significant CPU and cluster
network impact, especially
during rebuild
‒ Cannot directly be used with
block devices (see next
slide)
‒ Plenty of algorithms to
choose from
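As a sketch of the erasure-coding side (profile name, k/m values and PG counts are illustrative; the failure-domain option is spelled differently in newer Ceph releases):

  # 4 data chunks + 2 coding chunks, no two chunks on the same host
  ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
  # Create a pool that uses this profile
  ceph osd pool create ecpool 1024 1024 erasure ec42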
34
Cache Tiering
• Multi-tier storage architecture:
‒ Pool acts as a transparent write-back overlay for another
‒ e.g., SSD 3-way replication over HDDs with erasure coding
‒ Additional configuration complexity and requires workload-
specific tuning
‒ Also available: read-only mode (no write acceleration)
‒ Some downsides (not all features supported), memory
consumption for HitSet
• A good way to combine the advantages of replication
and erasure coding
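A minimal sketch of such a setup (pool names are placeholders; a replicated SSD pool "hotpool" fronts an erasure-coded pool "ecpool"):

  # Attach the cache tier and make it a write-back overlay
  ceph osd tier add ecpool hotpool
  ceph osd tier cache-mode hotpool writeback
  ceph osd tier set-overlay ecpool hotpool
  # The cache tier tracks object usage with a HitSet (the memory cost mentioned above)
  ceph osd pool set hotpool hit_set_type bloom
  ceph osd pool set hotpool hit_set_count 1
  ceph osd pool set hotpool hit_set_period 3600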
Software Defined Storage and
Performance
36
Legacy Storage Arrays
• Limits:
‒ Tightly controlled
environment
‒ Limited scalability
‒ Few options
‒ Only certain approved drives
‒ Constrained number of disk
slots
‒ Few memory variations
‒ Only very few networking
choices
‒ Typically fixed controller and
CPU
• Benefits:
‒ Reasonably easy to
understand
‒ Long-term experience and
“gut instincts”
‒ Somewhat deterministic
behavior and pricing
37
Software Defined Storage (SDS)
• Limits:
‒ ?
‒ (Essentially, all limits are
merely bugs.)
• Benefits:
‒ Infinite scalability
‒ Infinite adaptability
‒ Infinite choices
‒ Infinite flexibility
38
Properties of an SDS System
• Throughput
• Latency
• IOPS
• Single client or aggregate
• Availability
• Reliability
• Safety
• Cost
• Adaptability
• Flexibility
• Capacity
• Density
39
Architecting an SDS system
• These goals often conflict:
‒ Availability versus Density
‒ IOPS versus Density
‒ Latency versus Throughput
‒ “Performance” during rebuild
‒ Everything versus Cost
• Many hardware options
• Software topology offers many configuration choices
• There is no one size fits all
40
Designing a Ceph cluster
• Understand your workload
• Make a best guess, based on the desirable properties
and factors
• Build a 1-10% pilot / proof of concept
‒ Preferably using loaner hardware from a vendor to avoid early
commitment.
• Refine and tune this until desired performance is
achieved
• Scale up from there, re-tuning as you go
41
Measuring Ceph performance
• rados bench
‒ Measures backend performance of the RADOS store
• rados load-gen
‒ Generate configurable load on the cluster
• ceph tell osd.XX bench
‒ Benchmarks an individual OSD daemon directly
• fio rbd backend
‒ Swiss army knife of IO benchmarking on Linux
‒ Can also compare in-kernel rbd with user-space librados
• rest-bench
‒ Measures S3/radosgw performance
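Typical invocations, as a sketch (pool and image names are placeholders; the RBD image used by fio must already exist):

  # Raw RADOS backend: 60 s of writes, then sequential reads of the same objects
  rados bench -p swimmingpool 60 write --no-cleanup
  rados bench -p swimmingpool 60 seq
  # Benchmark a single OSD daemon directly
  ceph tell osd.0 bench
  # fio against an RBD image through user-space librbd
  fio --name=rbdtest --ioengine=rbd --clientname=admin \
      --pool=swimmingpool --rbdname=testimg \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based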
Deploying a Ceph cluster
43
Assumptions
• You have three nodes:
‒ node1, node2, node3
‒ These nodes will be both OSD and MONitor nodes
‒ Each of them has /dev/vdb as a disk to be used for Ceph
• You have one additional node, admin
‒ A node from which you want to run calamari and host the web UI
44
Prepare the nodes
• Set up NTP
• Create a ceph user on each node
‒ Set up sudo for the ceph user
• Enable sshd and allow all nodes to log in to each other
‒ ssh-copy-id
• Install SUSE Linux Enterprise Server 12 and the
SUSE Enterprise Storage 1.0 products on each
• Apply all available online updates
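A sketch of these preparation steps on SUSE Linux Enterprise Server 12 (user name, node names and service names are assumptions; adjust to your environment):

  # As root on every node: create the ceph user with passwordless sudo
  useradd -m ceph && passwd ceph
  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
  chmod 0440 /etc/sudoers.d/ceph
  # Time synchronization and ssh
  systemctl enable ntpd.service && systemctl start ntpd.service
  systemctl enable sshd.service && systemctl start sshd.service
  # As the ceph user: distribute its ssh key to all nodes
  ssh-keygen
  for host in node1 node2 node3 admin; do ssh-copy-id ceph@$host; done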
45
Deploy ceph!
• su - ceph
• ceph-deploy new node1 node2 node3
• ceph-deploy mon create-initial
• ceph-deploy osd create node1:vdb node2:vdb
node3:vdb
• Sorry, we’re done ;-)
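A quick sanity check after those three commands (output details vary by release):

  ceph status     # monitor quorum, OSD count, PG states
  ceph osd tree   # all three OSDs should be up and in
  ceph health     # should eventually report HEALTH_OK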
46
Deploy the calamari web frontend
• On the admin node:
• zypper in calamari-clients
• calamari-ctl initialize
• Open http://admin.node/
• ceph-deploy calamari --master admin --connect node1 node2 node3
47
Calamari Status screenshot
48
Performance overview
In closing
50
Summary: Why Ceph?
• Can be scaled arbitrarily
‒ No central bottleneck “master” servers
• Low operational cost
‒ Automation
‒ Commodity hardware
• Can move data close to application
• Redundancy through data replication
‒ Self-healing
‒ No need for RAID
• APIs and cloud integration for self-service
‒ Software defined storage
Thank you.
51
Questions and Answers?
Corporate Headquarters
Maxfeldstrasse 5
90409 Nuremberg
Germany
+49 911 740 53 0 (Worldwide)
www.suse.com
Join us on:
www.opensuse.org
52
Unpublished Work of SUSE LLC. All Rights Reserved.
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC.
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of
their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated,
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE.
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.
General Disclaimer
This document is not to be construed as a promise by any participating company to develop, deliver, or market a
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document,
and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The
development, release, and timing of features or functionality described for SUSE products remains at the sole
discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at
any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in
this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All
third-party trademarks are the property of their respective owners.
