Ceph Performance:
Projects Leading up to Jewel
Mark Nelson
Ceph Community Performance Lead
mnelson@redhat.com
4/12/2016
OVERVIEW
What's been going on with Ceph performance since Hammer?
Answer: A lot!
● Memory Allocator Testing
● Bluestore Development
● RADOS Gateway Bucket Index Overhead
● Cache Tiering Probabilistic Promotion Throttling
First let's look at how we are testing all this stuff...
CBT
CBT is an open source tool for creating Ceph clusters and running benchmarks
against them.
● Automatically builds clusters and runs through a variety of tests.
● Can launch various monitoring and profiling tools such as collectl.
● YAML based configuration file for cluster and test configuration.
● Open Source: https://github.com/ceph/cbt
MEMORY ALLOCATOR TESTING
We sat down at the 2015 Ceph Hackathon and built a CBT configuration to
replicate the memory allocator results on SSD-based clusters pioneered by SanDisk
and Intel.
Memory Allocator   Version         Notes
TCMalloc           2.1 (default)   Thread cache cannot be changed due to a bug.
TCMalloc           2.4             Default 32MB thread cache
TCMalloc           2.4             64MB thread cache
TCMalloc           2.4             128MB thread cache
jemalloc           3.6.0           Default jemalloc configuration
Example CBT Test
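A minimal sketch of what such a CBT YAML can look like (hostnames, pool settings, and several key values are assumptions based on CBT's example configs, not the exact file used at the hackathon):

cluster:
  user: ceph
  head: "head01"
  clients: ["client01"]
  osds: ["osd01", "osd02"]
  mons:
    head01:
      a: "192.168.10.1:6789"
  osds_per_node: 4
  fs: xfs
  iterations: 1
  pool_profiles:
    rbd:
      pg_size: 2048
      pgp_size: 2048
      replication: 3
benchmarks:
  librbdfio:
    time: 300
    vol_size: 16384
    mode: ['randwrite']
    op_size: [4096]
    concurrent_procs: [1]
    iodepth: [32]
    pool_profile: 'rbd'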
MEMORY ALLOCATOR TESTING
The cluster is rebuilt the exact same way for every memory allocator tested. Tests
are run across many different IO sizes and IO Types. The most impressive change
was in 4K Random Writes. Over 4X faster with jemalloc!
[Chart: 4KB random writes — IOPS over time (0-350 seconds) for TCMalloc 2.1 (32MB TC), TCMalloc 2.4 (32MB, 64MB, and 128MB TC), and jemalloc. Annotations: TCMalloc 2.4 (128MB TC) performance degrading over time; 4.1x faster writes with jemalloc vs TCMalloc 2.1.]
We need to examine RSS memory
usage during recovery to see what
happens in a memory intensive
scenario. CBT can perform
recovery tests during benchmarks
with a small configuration change:
cluster:
recovery_test:
osds: [ 1, 2, 3, 4,
5, 6, 7, 8,
9,10,11,12,
13,14,15,16]
WHAT NOW?
Does the Memory Allocator Affect Memory Usage?
Test procedure:
● Start the test.
● Wait 60 seconds.
● Mark OSDs on Node 1 down/out.
● Wait until the cluster heals.
● Mark OSDs on Node 1 up/in.
● Wait until the cluster heals.
● Wait 60 seconds.
● End the test.
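The down/out and up/in steps above are what CBT automates; done by hand they map to ordinary ceph CLI calls. A minimal sketch (OSD ids 0-3 stand in for whatever node 1 actually hosts):

# Take node 1's OSDs out of data placement to trigger recovery
for id in 0 1 2 3; do ceph osd out $id; done
# watch 'ceph -s'    # wait for the cluster to heal
# Bring them back in to trigger backfill
for id in 0 1 2 3; do ceph osd in $id; done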
MEMORY ALLOCATOR TESTING
Memory Usage during Recovery with Concurrent 4KB Random Writes
[Chart: Node1 OSD RSS memory usage (MB) over time during recovery for TCMalloc 2.4 (32MB TC), TCMalloc 2.4 (128MB TC), and jemalloc 3.6.0. Annotations: much higher jemalloc RSS memory usage, with the highest peak after OSDs are marked up/in following the earlier down/out; jemalloc completes recovery faster than TCMalloc.]
MEMORY ALLOCATOR TESTING
General Conclusions
● Ceph is very hard on memory allocators. Opportunities for tuning.
● Huge performance gains and latency drops possible!
● Small IO on fast SSDs is CPU limited in these tests.
● Jemalloc provides higher performance but uses more memory.
● Memory allocator tuning primarily necessary for SimpleMessenger.
AsyncMessenger not affected.
● We decided to keep TCMalloc as the default memory allocator in Jewel but
increased the amount of thread cache to 128MB.
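As a footnote to that decision, the thread cache size is normally set through gperftools' environment variable rather than a ceph.conf option; a hedged sketch of how a deployment might apply the 128MB value (the sysconfig path reflects common packaging, not anything mandated by this deck):

# /etc/sysconfig/ceph (or /etc/default/ceph on Debian-based systems)
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728   # 128MB thread cache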
FILESTORE DEFICIENCIES
Ceph already has Filestore. Why add a new OSD backend?
● 2X journal write penalty needs to go away!
● Filestore stores metadata in XATTRS. On XFS, any XATTR larger than 254B
causes all XATTRS to be moved out of the inode.
● Filestore's PG directory hierarchy grows with the number of objects. This can be
mitigated by favoring dentry cache with vfs_cache_pressure, but...
● OSDs regularly call syncfs to persist buffered writes. Syncfs does an O(n) search
of the entire in-kernel inode cache and slows down as more inodes are cached!
● Pick your poison. Crank up vfs_cache_pressure to avoid syncfs penalties or turn
it down to avoid extra dentry seeks caused by deep PG directory hierarchies?
● There must be a better way...
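For reference, the vfs_cache_pressure trade-off above is just a kernel sysctl; a minimal sketch of the two poisons:

# Keep dentries/inodes cached (fewer metadata seeks, larger inode cache for syncfs to walk)
sysctl vm.vfs_cache_pressure=1
# Reclaim dentries/inodes aggressively (cheaper syncfs, more on-disk metadata lookups)
sysctl vm.vfs_cache_pressure=200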
BLUESTORE
How is BlueStore different?
[Diagram: BlueStore architecture — ObjectStore → BlueStore; data goes through a pluggable Allocator directly to a BlockDevice, while metadata goes to RocksDB, which runs on BlueRocksEnv and the small BlueFS file system over its own BlockDevice(s).]
● BlueStore = Block + NewStore
● consume raw block device(s)
● key/value database (RocksDB) for metadata
● data written directly to block device
● pluggable block Allocator
● We must share the block device with RocksDB
● implement our own rocksdb::Env
● implement tiny “file system” BlueFS
● make BlueStore and BlueFS share the raw block device
BLUESTORE
BlueStore Advantages
● Large writes go directly to the block device and small writes to the RocksDB WAL.
● No more crazy PG directory hierarchy!
● metadata stored in RocksDB instead of FS XATTRS / LevelDB
● Less SSD wear due to journal writes, and SSDs used for more than journaling.
● Map BlueFS/RocksDB “directories” to different block devices
● db.wal/ – on NVRAM, NVMe, SSD
● db/ – level0 and hot SSTs on SSD
● db.slow/ – cold SSTs on HDD
Not production ready yet, but will be in Jewel as an experimental feature!
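A hedged sketch of what enabling it in Jewel looks like (option names reflect the experimental BlueStore tunables as I understand them; device paths are illustrative, and the release notes are authoritative):

# ceph.conf
[osd]
osd objectstore = bluestore
enable experimental unrecoverable data corrupting features = bluestore rocksdb
# optional: put the RocksDB WAL and DB on faster devices
bluestore block wal path = /dev/nvme0n1p1
bluestore block db path = /dev/nvme0n1p2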
BLUESTORE HDD PERFORMANCE
[Chart: 10.1.0 BlueStore HDD performance vs Filestore — percent performance increase, averaged over many IO sizes (4K, 8K, 16K, 32K ... 4096K), for sequential and random read, 50% mixed, and write workloads. Annotations: BlueStore average sequential reads a little worse...; BlueStore much faster!]
How does BlueStore Perform?
HDD Performance looks good overall, though sequential reads are important to
watch closely since BlueStore relies on client-side readahead.
BLUESTORE NVMe PERFORMANCE
NVMe results however are decidedly mixed.
What these averages don't show is dramatic performance variation at different IO
sizes. Let's take a look at what's happening.
[Chart: 10.1.0 BlueStore NVMe performance vs Filestore — percent performance difference (-20% to +40%), averaged over many IO sizes (4K, 8K, 16K, 32K ... 4096K), for sequential and random read, 50% mixed, and write workloads. Annotations: BlueStore average sequential reads still a little worse, but mixed sequential workloads are better.]
BLUESTORE NVMe PERFORMANCE
NVMe results are decidedly mixed, but why?
Performance generally good at small and large IO sizes but is slower than filestore
at middle IO sizes. BlueStore is experimental still, stay tuned while we tune!
[Chart: 10.1.0 BlueStore NVMe performance vs Filestore at individual IO sizes (-100% to +250%) for sequential read, sequential write, random read, random write, sequential 50% mixed, and random 50% mixed workloads.]
RGW WRITE PERFORMANCE
A common question:
Why is there a difference in performance between RGW writes and pure RADOS
writes?
There are several factors that play a part:
● S3/Swift protocol likely higher overhead than native RADOS.
● Writes translated through a gateway result in extra latency and potentially
additional bottlenecks.
● Most importantly, RGW maintains bucket indices that have to be updated every
time there is a write while RADOS does not maintain indices.
RGW BUCKET INDEX OVERHEAD
How are RGW Bucket Indices Stored?
● Standard RADOS Objects with the same rules as other objects
● Can be sharded across multiple RADOS objects to improve parallelism
● Data stored in OMAP (i.e., XATTRS on the underlying object file when using filestore)
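The shard count mentioned above is typically set before buckets are created; a hedged example (the client section name and shard count are illustrative, and the option can also be set zone-wide):

# ceph.conf — shard new bucket indices across 16 RADOS objects
[client.rgw.gateway1]
rgw override bucket index max shards = 16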
What Happens during an RGW Write?
● Prepare Step: First stage of the two-stage bucket index update, done in preparation for the write.
● Actual Write
● Commit Step: Asynchronous second stage of the bucket index update, recording that the write completed.
Note: Every time an object is accessed on filestore backed OSDs, multiple
metadata seeks may be required depending on the kernel dentry/inode cache,
the total number of objects, and external memory pressure.
RGW BUCKET INDEX OVERHEAD
A real example from a customer deployment:
Use GDB as a “poor man's profiler” to see what RGW threads are doing during a
heavy 256K object write workload:
gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p <process>
Results:
● 200 threads doing IoCtx::operate
● 169 librados::ObjectWriteOperation
● 126 RGWRados::Bucket::UpdateIndex::prepare(RGWModifyOp) ()
● 31 librados::ObjectReadOperation
● 31 RGWRados::raw_obj_stat(…) () ← read head metadata
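The raw backtraces can be summarized with a short shell pipeline; a sketch of one way to count the hottest frames (the pidof call and the frame-#1 filter are assumptions, not the exact commands used in the deployment):

gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p $(pidof radosgw) \
  | grep '^#1 ' | sed 's/.* in //' | sort | uniq -c | sort -rn | head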
IMPROVING RGW WRITES
How to make RGW writes faster?
Bluestore gives us a lot
● No more journal penalty for large writes (helps everything, including RGW!)
● Much better allocator behavior and allocator metadata stored in RocksDB
● Bucket index updates in RocksDB instead of XATTRS (should be faster!)
● No need for separate SSD pool for Bucket Indices?
What about filestore?
● Put journals for rgw.buckets pool on SSD to avoid journal penalty
● Put rgw.buckets.index pools on SSD backed OSDs
● More OSDs for rgw.buckets.index means more PGs, higher memory usage, and
potentially lower distribution quality.
● Are there alternatives?
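For the SSD-backed index pool option above, the placement is expressed through CRUSH; a hedged sketch assuming an SSD-only CRUSH rule (id 1) already exists and Jewel-era pool naming (pre-Luminous releases use the crush_ruleset pool setting):

ceph osd pool set default.rgw.buckets.index crush_ruleset 1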
RGW BUCKET INDEX OVERHEAD
Are there alternatives? Potentially yes!
Ben England from Red Hat's Performance Team is testing RGW with LVM Cache.
Initial (100% cached) performance looks promising. Will it scale though?
[Chart: RGW performance with LVM Cache OSDs vs native disk (puts/second) — 1M 64K objects, 16 index shards, 128MB TCMalloc thread cache.]
RGW BUCKET INDEX OVERHEAD
What if you don't need bucket indices?
The customer we tested with didn't need Ceph to keep track of bucket contents, so
for Jewel we introduce the concept of blind buckets that do not maintain bucket
indices. For this customer the overall performance improvement was nearly 60%.
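How a bucket becomes blind is a zone placement property rather than a per-request flag; a heavily hedged sketch of the idea (the index_type field and its indexless value are my recollection and should be checked against the Jewel RGW docs):

radosgw-admin zone get > zone.json
# in zone.json, under placement_pools -> val, set: "index_type": 1
radosgw-admin zone set < zone.json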
[Chart: RGW 256K object write performance — IOPS (thousands) for blind buckets vs normal buckets.]
CACHE TIERING
The original cache tiering implementation in Firefly was very slow when objects
had to be promoted into the cache tier.
[Charts: 4K Zipf 1.2 reads, 256GB volume, 128GB cache tier — client read throughput (MB/s) for base-only vs cache tier on Firefly and Hammer, and cache tier write throughput (MB/s) generated during client reads on Firefly and Hammer.]
CACHE TIERING
There have been many improvements since then...
● Memory Allocator tuning helps SSD tier in general
● Read proxy support added in Hammer
● Write proxy support added in Infernalis
● Recency fixed: https://github.com/ceph/ceph/pull/6702
● Other misc improvements.
● Is it enough?
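The recency fix above surfaces as ordinary cache-pool settings; a brief hedged example (the pool name is illustrative):

# require two recent hits before promoting on read, one on write
ceph osd pool set hot-pool min_read_recency_for_promote 2
ceph osd pool set hot-pool min_write_recency_for_promote 1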
CACHE TIERING
Is it enough?
Zipf 1.2 distribution reads perform very similarly between base-only and
tiered configurations, but random reads are still slow at small IO sizes.
[Charts: NVMe cache-tiered vs non-tiered percent improvement by IO size — RBD random read and RBD Zipf 1.2 read, non-tiered vs tiered (recency 2).]
CACHE TIERING
Limit promotions even more with object and throughput throttling.
Performance improves dramatically and in this case even beats the base tier when
promotions are throttled very aggressively.
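The object and throughput throttles are OSD-level settings; a hedged sketch of the knobs behind the promotion-throttle curves summarized below (the values are illustrative, not the ones used in these runs):

# ceph.conf — cap how fast OSDs may promote objects into the cache tier
[osd]
osd tier promote max objects sec = 10
osd tier promote max bytes sec = 5242880    # ~5 MB/s per OSD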
[Chart: RBD random read probabilistic promotion improvement, NVMe cache-tiered vs non-tiered — percent improvement at IO sizes from 4K to 4096K for non-tiered, tiered (recency 2), tiered (recency 1) with very high, high, medium, low, and very low promotion throttles, and tiered (recency 2, no promotion).]
CACHE TIERING
Conclusions
● Performance very dependent on promotion and eviction rates.
● Limiting promotion can improve performance, but are we making the
cache tier less adaptive to changing hot/cold distributions?
● Will need a lot more testing and user feedback to see if our default
promotion throttles make sense!
THANK YOU
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHatNews