CEPH PERFORMANCE
Profiling and Reporting
Brent Compton, Director Storage Solution Architectures
Kyle Bader, Sr Storage Architect
Veda Shankar, Sr Storage Architect
FAQ FROM THE COMMUNITY
Questions that continually surface
HOW WELL CAN CEPH PERFORM?
WHICH OF MY WORKLOADS CAN IT HANDLE?
HOW WILL CEPH PERFORM ON MY SERVERS?
HOW WELL CAN CEPH PERFORM?
Finding the right server and network config for the job
[Graphic: perceived range of Ceph perf vs. actual (measured) range of Ceph perf]
INVITATION TO BE PART OF THE ANSWER
Ceph performance leaderboard (ceph-brag) coming to ceph.com
https://github.com/ceph/ceph-brag (email pmcgarry@redhat.com for access)
A LEADERBOARD FOR CEPH PERF RESULTS
Posted throughput results
LEADERBOARD ATTRIBUTION AND DETAILS
Looking for beta submitters prior to general availability on ceph.com
EMERGING LEADERBOARD FOR IOPS
Still under construction
MAPPING CONFIGS TO WORKLOAD IO CATEGORIES
Cluster capacity tiers:
• OpenStack Starter: 64 TB
• S: 256 TB+
• M: 1 PB+
• L: 2 PB+
Workload-optimized configs:
• MySQL Perf Node: IOPS-optimized
• Digital Media Perf Node: throughput-optimized
• Archive Node: cost/capacity-optimized
DIGITAL MEDIA PERF NODES
Range of MBps measured with Ceph on different server configs
Some pertinent measures (toy calculation below):
• MBps
• $/MBps
• MBps/provisioned-TB
• Watts/MBps
• MTTR (self-heal from server failure)
[Chart: 4M Read MBps per Drive and 4M Write MBps per Drive for an HDD sample vs. an SSD sample; y-axis 0-500]
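These derived measures are simple ratios of cluster-level inputs. A toy Python calculation, with every input hypothetical rather than taken from the configs tested here, might look like this:

    # throughput_metrics.py - toy derivation of the measures above (all inputs hypothetical)
    cluster_price_usd = 85_000     # acquisition cost of the cluster (assumption)
    cluster_watts = 3_200          # steady-state power draw in watts (assumption)
    provisioned_tb = 250           # usable capacity in TB (assumption)
    aggregate_read_mbps = 9_500    # aggregate 4M sequential read throughput (assumption)

    print(f"$/MBps:              {cluster_price_usd / aggregate_read_mbps:.2f}")
    print(f"MBps/provisioned-TB: {aggregate_read_mbps / provisioned_tb:.1f}")
    print(f"Watts/MBps:          {cluster_watts / aggregate_read_mbps:.3f}")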
THROUGHPUT PER OSD DEVICE (READ)
Sequential Read Throughput vs IO Block Size
[Chart: MB/sec per OSD device (y-axis 0-100) vs. IO block size (64, 512, 1024, 4096) for:
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
• D51PH-1ULH - 12xOSD+0xSSD, 2x10G (EC3:2)
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (EC2:2)
• T21P-4U/Dual - 35xOSD+0xSSD, 10G+10G (EC2:2)]
THROUGHPUT PER OSD DEVICE (WRITE)
Sequential Write Throughput vs IO Block Size
[Chart: MB/sec per OSD device (y-axis 0-25) vs. IO block size (64, 512, 1024, 4096) for:
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (EC3:2)
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
• T21P-4U/Dual - 35xOSD+0xPCIe, 1x40G (EC2:2)
• T21P-4U/Dual - 35xOSD+0xSSD, 10G+10G (EC2:2)]
SERVER SCALABILITY
Sequential Throughput vs Different Server Sizes
[Chart: MBytes/sec/disk (y-axis 0-100) for 12 Disks/OSDs (D51PH) vs. 35 Disks/OSDs (T21P); series: Rados-4M-seq-read/Disk, Rados-4M-seq-write/Disk]
DATA PROTECTION METHODS
Sequential Throughput vs Different Protection Methods (Replication vs. Erasure Coding)
[Chart: MBytes/sec/disk (y-axis 0-100) for Rados-4M-Seq-Reads/disk and Rados-4M-Seq-Writes/disk on:
• D51PH-1ULH - 12xOSD+0xSSD, 2x10G (EC3:2)
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (EC3:2)
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)]
JOURNALING
Sequential IO Latency vs Different Journal Approaches
[Chart: latency in msec (y-axis 0-4000) for Rados-4M-Seq-Reads and Rados-4M-Seq-Writes on:
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
• T21P-4U/Dual - 35xOSD+0xPCIe, 1x40G (3xRep)]
NETWORK
Sequential Throughput vs Different Network Bandwidth
[Chart: MBytes/sec/disk (y-axis 0-100) for Rados-4M-Seq-Reads/disk and Rados-4M-Seq-Writes/disk on:
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)
• T21P-4U/Dual - 35xOSD+2xPCIe, 10G+10G (3xRep)]
MEDIA TYPE
Sequential Throughput vs. Different OSD Media Types (All-flash vs. Magnetic)
PRICE/PERFORMANCE
Different Configs vs $/MBps (lowest = best)
[Chart: $/MBps, Price/Perf (w) and Price/Perf (r), for:
• D51PH-1ULH - 12xOSD+3xSSD, 2x10G (3xRep)
• T21P-4U/Dual - 35xOSD+2xPCIe, 1x40G (3xRep)]
MYSQL PERF NODES
Range of IOPS measured with Ceph on different server configs
Some pertinent measures (toy calculation below):
• MySQL Sysbench requests/sec
• IOPS (4K, 16K random)
• $/IOP
• IOPS/provisioned-GB
• Watts/IOP
[Chart: 4K Read IOPS per Drive and 4K Write IOPS per Drive for an HDD sample vs. an SSD sample; y-axis 0-60,000]
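The IOPS-oriented measures are the same kind of ratio. A minimal sketch, again with purely hypothetical numbers:

    # iops_metrics.py - toy derivation of $/IOPS and IOPS/provisioned-GB (all inputs hypothetical)
    cluster_price_usd = 120_000        # all-flash cluster cost (assumption)
    provisioned_gb = 50_000            # usable capacity in GB (assumption)
    aggregate_4k_write_iops = 220_000  # aggregate 4K random write IOPS (assumption)

    print(f"$/IOPS:              {cluster_price_usd / aggregate_4k_write_iops:.3f}")
    print(f"IOPS/provisioned-GB: {aggregate_4k_write_iops / provisioned_gb:.1f}")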
SYSBENCH REQUESTS/SEC
AWS provisioned IOPS vs. Ceph all-flash configs
[Chart: Sysbench Read Req/sec, Sysbench Write Req/sec, and Sysbench 70/30 R/W Req/sec (y-axis 0-80,000) for:
• P-IOPS m4.4XL
• Ceph cluster, cl: 16 vCPU/64GB (1 instance, 14% capacity)
• Ceph cluster, cl: 16 vCPU/64GB (10 instances, 87% capacity)]
GETTING DETERMINISTIC IOPS
AWS use of IOPS/GB throttles
[Chart: MySQL IOPS/GB for Sysbench reads and writes (y-axis 0-35) on P-IOPS m4.4XL, P-IOPS r3.2XL, and GP-SSD r3.2XL]
MYSQL INSTANCES AND CLUSTER CAPACITY
Ceph IOPS/GB varying with instance quantity and cluster capacity utilization
[Chart: IOPS/GB (y-axis 0-100), with data labels 26, 87, and 19 for P-IOPS m4.4XL, Ceph cluster (cl: 16 vCPU/64GB, 1 instance, 14% capacity), and Ceph cluster (cl: 16 vCPU/64GB, 10 instances, 87% capacity), respectively]
METHODOLOGY: BASELINING
Collect baseline measures
1. Determine the benchmark measures most representative of the business need
2. Determine the cluster access method (block, object, file)
3. Collect baseline measures:
   1. Look up manufacturer drive specifications (IOPS, MBps, latency)
   2. Single-node IO baseline (max IOPS and MBps to all drives concurrently)
   3. Network baseline (consistent bandwidth across the full route mesh)
   4. RADOS baseline (max sequential throughput per drive)
   5. RBD baseline (max IOPS per drive)
   6. Sysbench baseline (max DB requests/sec per drive)
   7. RGW baseline (max object ops/sec per drive)
4. Calculate drive efficiency at each level up the stack (see the sketch below)
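A minimal sketch of one of these steps, the RADOS sequential-throughput baseline (3.4), plus the drive-efficiency calculation (4). The pool name, drive count, and drive spec are assumptions for illustration, not values from this deck:

    # baseline.py - RADOS write baseline and per-drive efficiency (illustrative sketch)
    import re
    import subprocess

    N_DRIVES = 12              # OSD drives under test (assumption)
    POOL = "cephtest"          # hypothetical benchmark pool
    DRIVE_SPEC_MBPS = 180.0    # manufacturer sequential MBps per drive (assumption)

    def rados_seq_write_mbps(seconds=60, block=4 * 1024 * 1024, threads=16):
        """Run 'rados bench' sequential writes and return aggregate MB/s."""
        out = subprocess.run(
            ["rados", "bench", "-p", POOL, str(seconds), "write",
             "-b", str(block), "-t", str(threads), "--no-cleanup"],
            capture_output=True, text=True, check=True).stdout
        match = re.search(r"Bandwidth \(MB/sec\):\s+([\d.]+)", out)
        return float(match.group(1)) if match else 0.0

    cluster_mbps = rados_seq_write_mbps()
    per_drive = cluster_mbps / N_DRIVES
    print(f"RADOS seq write: {cluster_mbps:.1f} MB/s aggregate, {per_drive:.1f} MB/s per drive")
    print(f"Drive efficiency vs. spec: {100 * per_drive / DRIVE_SPEC_MBPS:.0f}%")

The same division against the single-node and raw-drive baselines shows where efficiency is lost as you move up the stack.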
METHODOLOGY: WATERMARKS
Towards deterministic performance
1. Identify IOPS/GB at 35% and 70% cluster utilization (with corresponding MySQL instance counts)
2. Identify MBps/TB at 35% and 70% cluster utilization
3. Determine target IOPS/GB or MBps/TB at the target cluster utilization
4. (experimental) Set block device IO throttles to cap consumption by any single client (see the sketch below)
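For illustration, the watermark arithmetic might look like the following; the measured IOPS, capacities, and volume size are placeholders, not results from this deck:

    # watermarks.py - illustrative IOPS/GB watermarks and a per-client throttle cap
    measured_iops = {        # aggregate 4K random IOPS observed at each watermark (placeholders)
        0.35: 250_000,       # 35% cluster utilization
        0.70: 180_000,       # 70% cluster utilization
    }
    usable_gb = 100 * 1024   # provisioned capacity of the cluster in GB (placeholder)

    for util, iops in measured_iops.items():
        print(f"{util:.0%} full: {iops / (usable_gb * util):.2f} IOPS per provisioned GB")

    # Steps 3-4: take the higher-utilization watermark as the deliverable rate, then cap
    # each client at its share, e.g. via QEMU/libvirt block IO throttles (experimental).
    target_iops_per_gb = measured_iops[0.70] / (usable_gb * 0.70)
    volume_gb = 500          # one client RBD volume (placeholder)
    print(f"Cap for a {volume_gb} GB volume: {target_iops_per_gb * volume_gb:.0f} IOPS")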
COMMON TOOLS
Towards comparable results
1. CBT – Ceph Benchmarking Tool
INVITATION TO BE PART OF THE ANSWER
Ceph performance leaderboard (ceph-brag) coming to ceph.com
https://github.com/ceph/ceph-brag (email pmcgarry@redhat.com for access)
THANK YOU
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHatNews
RAID CONTROLLER WRITE-BACK (HDD OSDS)
4K Random Write IOPS vs. Different Controllers and Software Configs