This document summarizes performance testing of OpenStack with Cinder volumes backed by Ceph storage. The tests measured how performance scales as instance counts increase on 4-node and 8-node Ceph clusters. Key findings include:
- Large file sequential write performance peaked at a single instance per server, because even one write stream is striped across all OSDs and can saturate the cluster. Sequential read performance peaked at 32 instances per server.
- Large file random I/O performance scaled linearly with increasing instances up to the maximum tested (512 instances).
- Small file operations showed good scaling up to 32 instances per server for creates and reads, but lower performance for renames and deletes.
- Performance tuning, such as applying tuned profiles, increasing block device readahead (see the sketch after this list), and adjusting the Ceph journal configuration, improved both
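As a minimal sketch of the kind of readahead tuning mentioned above, the following Python snippet adjusts the kernel's per-device readahead via sysfs. The device name (`vdb`) and the 4096 KB value are assumptions for illustration only, not settings from the original tests; the actual tuned profiles and Ceph journal changes used in the study are not shown here.

```python
#!/usr/bin/env python3
"""Illustrative only: set block-device readahead via /sys/block/<dev>/queue/read_ahead_kb.

Requires root. Device name and readahead size are assumed example values.
"""

from pathlib import Path
import sys


def set_readahead_kb(device: str, kb: int) -> None:
    """Write the requested readahead size (in KB) for a block device."""
    knob = Path(f"/sys/block/{device}/queue/read_ahead_kb")
    if not knob.exists():
        raise FileNotFoundError(f"no such block device knob: {knob}")
    knob.write_text(f"{kb}\n")


def get_readahead_kb(device: str) -> int:
    """Read back the current readahead setting for verification."""
    knob = Path(f"/sys/block/{device}/queue/read_ahead_kb")
    return int(knob.read_text().strip())


if __name__ == "__main__":
    # Example (assumed device and value): raise readahead on vdb to 4 MB.
    dev = sys.argv[1] if len(sys.argv) > 1 else "vdb"
    kb = int(sys.argv[2]) if len(sys.argv) > 2 else 4096
    set_readahead_kb(dev, kb)
    print(f"{dev}: read_ahead_kb = {get_readahead_kb(dev)}")
```

Note that changes made this way do not persist across reboots; a udev rule or a tuned profile is the usual way to make such settings permanent.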