Ceph Performance and Optimization - Ceph Day Frankfurt

Sebastien Han, eNovance

  1. 1. Ceph performance CephDays Frankfurt 2014
  2. 2. Whoami 💥 Sébastien Han 💥 French Cloud Engineer working for eNovance 💥 Daily job focused on Ceph and OpenStack 💥 Blogger Personal blog: http://www.sebastien-han.fr/blog/ Company blog: http://techs.enovance.com/ Last Cephdays presentation
  3. 3. How does Ceph perform? 42* *The Hitchhiker's Guide to the Galaxy
  4. 4. The Good Ceph IO pattern
  5. 5. CRUSH: deterministic object placement As soon as a client writes into Ceph, the placement is computed on the client side: the client itself determines which OSDs the object belongs to
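     This placement can be inspected from any client; a hedged example where the pool and object names are made up and the output is only indicative of the format:
         $ ceph osd map rbd my-object
         osdmap e42 pool 'rbd' (2) object 'my-object' -> pg 2.98a1b3c4 (2.4) -> up [3,7,12] acting [3,7,12]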
  6. 6. Aggregation: cluster level As soon as you write into Ceph, objects are spread evenly across the entire cluster, taking machines and disks into account.
  7. 7. Aggregation: OSD level As soon as an IO goes into an OSD, no matter what the original pattern was, it becomes sequential.
  8. 8. The Bad Ceph IO pattern
  9. 9. Journaling As soon as an IO goes into an OSD, it gets written twice.
  10. 10. Journal and OSD data on the same disk Journal penalty on the disk: since we write twice, storing the journal on the same disk as the OSD data results in the following:
         Device             wMB/s
         sdb1 (journal)     50.11
         sdb2 (osd_data)    40.25
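     The usual mitigation is to move the journal to a separate, faster device. A minimal ceph.conf sketch, assuming /dev/sdc1 is a dedicated SSD partition (device name and size are placeholders):
         [osd]
             osd journal size = 10240        # journal size in MB
         [osd.0]
             osd journal = /dev/sdc1         # dedicated journal partition for this OSD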
  11. 11. Filesystem fragmentation • Objects are stored as files on the OSD filesystem • Several IO patterns with different block sizes increase filesystem fragmentation • Possible root cause: image sparseness • A one-year-old cluster can end up with the following (see the allocsize option for XFS and the sketch below): $ sudo xfs_db -c frag -r /dev/sdd actual 196334, ideal 122582, fragmentation factor 37.56%
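     To limit fragmentation from sparse images, a larger XFS allocation size is commonly suggested; a hedged ceph.conf sketch (the 4M value is only an example, test with your workload), followed by an online defragmentation pass with xfs_fsr on a typical OSD mount point:
         [osd]
             osd mount options xfs = "rw,noatime,inode64,allocsize=4M"

         $ sudo xfs_fsr -v /var/lib/ceph/osd/ceph-0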
  12. 12. No parallelized reads • Ceph always serves the read request from the primary OSD • Room for an N× speed-up, where N is the replica count • Blueprint from Sage for the Giant release
  13. 13. Scrubbing impact • Consistency check of objects at the PG level • Compares replica versions against each other (an fsck for objects) • Light scrubbing (daily) checks the object size and attributes • Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity • Corruption exists, even with ECC memory: an unrecoverable bit error rate of 1 in 10^15 (enterprise disks) means roughly one error per ~113 TiB read • No pain, no gain
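     If scrubbing hurts client IO, its concurrency and schedule can be tuned; a hedged ceph.conf sketch of the relevant knobs (the values are only examples, not recommendations):
         [osd]
             osd max scrubs = 1                 # concurrent scrub operations per OSD
             osd scrub load threshold = 0.5     # skip scrubbing when the host is loaded
             osd scrub min interval = 86400     # light scrub at most once a day (seconds)
             osd deep scrub interval = 604800   # deep scrub once a week (seconds)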
  14. 14. The Ugly Ceph IO pattern
  15. 15. IOs to the OSD disk One IO into Ceph leads to 2 writes, well… the second write is the worst!
  16. 16. The problem • Several objects map to the same physical disk • Sequential streams get mixed together • Result: the disk seeks like hell
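     One rough way to observe this on an OSD disk (device name is a placeholder): watch w/s, await and %util in iostat; many small writes with high await on a nearly saturated spindle usually means it is seeking between mixed streams.
         $ iostat -xd 1 /dev/sdb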
  17. 17. Even worse with erasure coding? This is just an assumption! • Since erasure coding splits objects into chunks (and chunks of chunks), this phenomenon could be amplified
  18. 18. CLUSTER How to build it?
  19. 19. How to start? Things that you must consider: • Use case • IO profile: bandwidth? IOPS? Mixed? • How many IOPS or how much bandwidth per client do I want to deliver? • Do I use Ceph standalone or combined with another software solution? • Amount of data (usable, not raw) • Replica count • Do I have a data growth plan? • Leftover • How much data am I willing to lose if a node fails? (%) • Am I ready to be annoyed by the scrubbing process? (A quick sizing sketch follows below.)
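     A quick sizing sketch with made-up numbers: 12 nodes × 12 disks × 4 TB = 576 TB raw; with a replica count of 3 and 20% headroom kept free for node failures and rebalancing, that leaves roughly 576 / 3 × 0.8 ≈ 153 TB usable.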
  20. 20. Things that you must not do • Don't put RAID underneath your OSDs • Ceph already manages the replication • A degraded RAID hurts performance • It reduces the usable space of the cluster • Don't build high-density nodes with a tiny cluster • Failure considerations: more data to re-balance • Potential full cluster • Don't run Ceph on your hypervisors (unless you're broke) • Well, maybe…
  21. 21. Firefly: Interesting things going on
  22. 22. Object store multi-backend • ObjectStore is born • Aims to support several backends: • LevelDB (default) • RocksDB • Fusion-io NVMKV • Seagate Kinetic • Yours!
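     A hedged configuration sketch for trying the experimental key/value backend in Firefly (the exact option value is an assumption and should be checked against your release; not for production data):
         [osd]
             osd objectstore = keyvaluestore-dev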
  23. 23. Why is it so good? • No more journal! Yay! • Object backends have built-in atomic functions
  24. 24. Firefly LevelDB backend • Relatively new • Needs to be tested with your workload first • Tends to be more efficient with small objects
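     rados bench is a simple starting point for testing it with your workload; an example run against a hypothetical test pool using small 4 KB objects (write first, keep the objects, then read them back sequentially):
         $ rados bench -p testpool 60 write -b 4096 --no-cleanup
         $ rados bench -p testpool 60 seq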
  25. 25. Many thanks! Questions? Contact: sebastien@enovance.com Twitter: @sebastien_han IRC: leseb
