Ceph Tech Talk -- Ceph Benchmarking Tool

Ceph Community
Ceph Benchmarking Tool (CBT)
Kyle Bader
May 26, 2016
INTRO TO CBT
WHAT IS IT?
• Benchmarking framework written in Python
• Began as an engineering benchmark tool for upstream development
• Adopted for downstream performance and sizing
• Used by many in the Ceph community:
• Red Hat
• Intel / Samsung / SanDisk
• Quanta QCT / Supermicro / Dell
CBT PERSONALITIES
HEAD
• CBT checkout
• Key-based authentication to all other hosts
• Including itself
• PDSH packages
• Space to store results archives
• YAML testplans
CBT PERSONALITIES
CLIENT
• Generates load against the SUT (system under test)
• Ceph admin keyring readable by the cbt user (see the sketch below)
• Needs loadgen tools installed
• FIO
• COSBench
• Should be a VM for kvmrbdfio
• Can be containerized (good for rbdfio)
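A minimal client-prep sketch, assuming the default admin keyring path and packaged fio; the paths and package manager here are assumptions, not CBT requirements:

# Let the (non-root) cbt user read the admin keyring -- default path assumed
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
# Install a load generator; fio is often built from source for newer ioengines
sudo yum install -y fio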
CBT PERSONALITIES
MON
• Nodes to set up monitors on
OSD
• Nodes to set up OSDs on
CBT BENCHMARKS
• RADOS Bench
• FIO with RBD engine
• FIO on KRBD on EXT4
• FIO on KVM (vdb) on EXT4
• COSBench for S3/Swift against RGW
CBT EXTRAS
• Cluster creation (optional, use_existing: true)
• Cache tier configuration
• Replicated and erasure-coded pools
• Collects monitoring information from every node
• Collectl – cpu/disk/net/etc.
BASIC SETUP
• SSH key on head
• Public key in every host's authorized_keys (including head)
• Ceph packages on all hosts
• PDSH packages on all hosts (for pdcp)
• Collectl installed on all hosts
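A rough setup pass, assuming a cbt.hosts file that lists every host (head included) and a ceph user on each; a sketch of one way to do it, not the only way:

# Generate a key on the head node and push it to every host, head included
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
for host in $(cat cbt.hosts); do
  ssh-copy-id "ceph@${host}"
done
# Confirm pdsh reaches everything and collectl is present
pdsh -w "^cbt.hosts" 'hostname; collectl --version'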
TEST METHODOLOGY
• Test the network beforehand; a bad network easily impairs performance
• All-to-all iperf
• Check network routes, interfaces
• Bonding
• Switches should use 5-tuple hashing for LACP
• Nodes should use LACP xmit_hash_policy=layer3+4
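An all-to-all pass might look like the sketch below, assuming iperf3 and the same cbt.hosts file as above; any pair well under line rate deserves a look at routes, bonding hash policy, or cabling:

# Start an iperf3 server on every host, then test each directed pair
pdsh -w "^cbt.hosts" 'iperf3 -s -D'
for src in $(cat cbt.hosts); do
  for dst in $(cat cbt.hosts); do
    [ "$src" = "$dst" ] && continue
    echo "== ${src} -> ${dst} =="
    ssh "ceph@${src}" "iperf3 -c ${dst} -t 5" | tail -4
  done
done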
TEST METHODOLOGY
• Use multiple iterations for micro benchmarks
• Use client sweeps to establish the point of contention / max throughput
• Client sweeps should always start with a single client
• Should have 4-6 different increments of clients
• E.g. client1, client[1-2], client[1-3], client[1-4]
Testplan Examples
CBT CLUSTER CONFIGURATION
cluster:
  head: "ceph@head"
  clients: ["ceph@client"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]
  osds_per_node: 1
  fs: xfs
  mkfs_opts: -f -i size=2048
  mount_opts: -o inode64,noatime,logbsize=256k
  conf_file: /etc/ceph.conf
  ceph.conf: /etc/ceph/ceph.conf
  iterations: 3
  rebuild_every_test: False
  tmp_dir: "/tmp/cbt"
  pool_profiles:
    replicated:
      pg_size: 4096
      pgp_size: 4096
      replication: 'replicated'
CLIENT SWEEPS
cluster:
  head: "ceph@head"
  clients: ["ceph@client1"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2", "ceph@client3"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2", "ceph@client3", "ceph@client4"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]
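Each stanza above lives in its own testplan file. One hypothetical way to stamp them out from a template (a template.yaml with a @CLIENTS@ placeholder is an assumption of this sketch):

# Generate sweep-1-clients.yaml .. sweep-4-clients.yaml
for n in 1 2 3 4; do
  clients=$(printf '"ceph@client%d", ' $(seq 1 "$n"))
  sed "s/@CLIENTS@/${clients%, }/" template.yaml > "sweep-${n}-clients.yaml"
done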
RADOS BENCH
• Spawns RADOS bench processes on each client
• Establishes raw RADOS throughput
• Works against replicated or EC pools
benchmarks:
  radosbench:
    op_size: [ 4194304, 524288, 4096 ]
    write_only: False
    time: 300
    concurrent_ops: [ 128 ]
    concurrent_procs: 1
    use_existing: True
    pool_profile: replicated
    osd_ra: [ 256 ]
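Under the hood each client runs the stock rados bench tool; per op_size the invocation is roughly the sketch below (the pool name is an assumption derived from the pool profile):

# 300s of 4 MiB writes, 128 ops in flight; keep objects so reads can follow
rados -p cbt-replicated bench 300 write -t 128 -b 4194304 --no-cleanup
# write_only: False also triggers the read phases
rados -p cbt-replicated bench 300 seq -t 128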
FIO WITH RBD IO ENGINE
• Spawns FIO processes on each client
• Uses the RBD ioengine
• Establishes raw librbd performance
• No VM / container setup required
benchmarks:
  librbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    procs_per_volume: [ 1 ]
    volumes_per_client: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
    use_existing_volumes: true
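Per client, the generated fio job is roughly equivalent to the one-liner below; the image name is hypothetical and the exact flags differ across CBT and fio versions:

/home/ceph-admin/fio/fio --ioengine=rbd --clientname=admin --pool=rbd \
  --rbdname=cbt-librbdfio-0 --rw=randwrite --bs=4096 --iodepth=16 \
  --runtime=900 --time_based --name=librbdfio \
  --write_bw_log=librbdfio --log_avg_msec=100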
FIO WITH KRBD ON EXT4
• Maps a KRBD volume to each client
• Creates an EXT4 filesystem on the KRBD device
• Mounts the filesystem
• Spawns an FIO process per client
• Uses the AIO ioengine on the filesystem
• Client can be a container or bare metal
• Establishes KRBD performance potential
benchmarks:
  rbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    concurrent_procs: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
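The per-client steps CBT automates look roughly like this sketch; the pool, image name, and device node are assumptions:

rbd create rbd/cbt-0 --size 65536              # 64 GiB volume
sudo rbd map rbd/cbt-0                         # typically appears as /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/cbt && sudo mount /dev/rbd0 /mnt/cbt
/home/ceph-admin/fio/fio --ioengine=libaio --direct=1 --rw=randwrite \
  --bs=4096 --iodepth=16 --runtime=900 --time_based \
  --size=16G --directory=/mnt/cbt --name=rbdfio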
FIO WITH KVM (VDB) ON EXT4
• Create KVM instances outside CBT
• KVM instances listed as clients
• Creates an EXT4 filesystem on /dev/vdb
• Mounts the filesystem
• Spawns an FIO process per client
• Uses the AIO ioengine
• Establishes RBD performance through the QEMU IO subsystem
benchmarks:
  kvmrbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    concurrent_procs: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
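Inside each guest the flow mirrors the KRBD case, just against the paravirtual disk; a sketch assuming QEMU/libvirt has already attached the RBD image as /dev/vdb:

sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/cbt && sudo mount /dev/vdb /mnt/cbt
/home/ceph-admin/fio/fio --ioengine=libaio --direct=1 --rw=randread \
  --bs=16384 --iodepth=16 --runtime=900 --time_based \
  --size=16G --directory=/mnt/cbt --name=kvmrbdfio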
COSBENCH
• Install COSBench on head/clients outside CBT
• Install / configure RGW outside CBT
• Translates CBT YAML to COSBench XML
• Runs COSBench
benchmarks:
  cosbench:
    cosbench_dir: /root/0.4.1.0
    cosbench_xml_dir: /home/ceph-admin/plugin/cbt/conf/cosbench/
    controller: client01
    auth:
      config: username=cosbench:operator;password=intel2012;url=…
    obj_size: [128KB]
    template: [default]
    mode: [write]
    ratio: [100]
    …
Example at cbt/docs/cosbench.README
Running CBT
# Loop through each test plan (one YAML per client count in the sweep)
for clients in $(seq 1 6); do
  cbt/cbt --archive=/tmp/${clients}-clients-results path/to/test.yaml
done
ANALYZING DATA
• No robust tools for analysis
• Nested archive directory based on YAML options
• Archive/000000/Librbdfio/osd_ra-00000128…
• Usually awk/grep/cut-fu to CSV
• Plot charts with gnuplot, Excel, R
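A hypothetical extraction pass; output filenames and bandwidth strings vary by benchmark module and fio version, so treat it as a pattern rather than a recipe:

# Walk each sweep archive, grab the first aggregate bandwidth figure
# from each output file, and emit CSV for gnuplot/Excel/R
for dir in /tmp/*-clients-results; do
  find "$dir" -name 'output*' | while read -r f; do
    bw=$(grep -oE 'bw=[^,]+' "$f" | head -1)
    echo "${dir##*/},${f#$dir/},${bw}" >> results.csv
  done
done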
THANK YOU!