Your 1st Ceph cluster
Rules of thumb
and magic numbers
Agenda
• Ceph overview
• Hardware planning
• Lessons learned
Who are we?
• Udo Seidel
• Greg Elkinbard
• Dmitriy Novakovskiy
Ceph overview
What is Ceph?
• Ceph is a free & open distributed storage
platform that provides unified object, block,
and file storage.
Object Storage
• Native RADOS objects:
– hash-based placement
– synchronous replication
– radosgw provides S3- and Swift-compatible
REST access to RGW objects (each striped
over multiple RADOS objects).
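To make "hash-based placement" concrete, here is a toy Python sketch (all names hypothetical; real Ceph uses the topology-aware CRUSH algorithm, not this): every client hashes an object name to a placement group (PG) and maps the PG to a replica set, so no central directory is consulted.

    import hashlib

    def place_object(name, pg_count, osds, replicas=3):
        """Toy stand-in for hash-based placement (real Ceph uses CRUSH)."""
        # Hash the object name to a placement group (PG)...
        pg = int(hashlib.md5(name.encode()).hexdigest(), 16) % pg_count
        # ...then map the PG deterministically to `replicas` distinct OSDs.
        start = pg % len(osds)
        return [osds[(start + i) % len(osds)] for i in range(replicas)]

    # Any client computes the same mapping independently -- no lookup service.
    print(place_object("volume-1234/chunk-07", pg_count=4096, osds=list(range(20))))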
Block storage
• RBD block devices are thinly provisioned
over RADOS objects and can be accessed
by QEMU via the librbd library.
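As an illustration, a minimal sketch using the python-rados and python-rbd bindings (a reachable cluster, the default config path, the pool name "rbd" and the image name are all assumptions, not from the deck):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")  # pool name (assumed)
    try:
        # Create a 10 GiB image; thin provisioning means no space is used yet.
        rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024**3)
        with rbd.Image(ioctx, "vm-disk-01") as image:
            image.write(b"hello", 0)  # only touched extents allocate RADOS objects
    finally:
        ioctx.close()
        cluster.shutdown()

QEMU takes the same librbd path directly, without mounting anything on the host.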
File storage
• CephFS metadata servers (MDS) provide
a POSIX-compliant file system backed by
RADOS.
– …still experimental?
How does Ceph fit with OpenStack?
• RBD drivers for OpenStack tell libvirt to
configure the QEMU interface to librbd
• Ceph benefits:
– Multi-node striping and redundancy for block
storage (Cinder volumes, Nova ephemeral
drives, Glance images)
– Copy-on-write cloning of images to volumes
– Unified pool of storage nodes for all types of
data (objects, block devices, files)
– Live migration of Ceph-backed VMs
Hardware planning
Questions
• How much net storage, in what tiers?
• How many IOPS?
– Aggregated
– Per VM (average)
– Per VM (peak)
• What to optimize for?
– Cost
– Performance
Rules of thumb and magic numbers
#1
• Ceph-OSD sizing:
– Disks
• 8-10 SAS HDDs per 1 x 10G NIC
• ~12 SATA HDDs per 1 x 10G NIC
• 1 x SSD for write journal per 4-6 OSD drives
– RAM:
• 1 GB of RAM per 1 TB of OSD storage space
– CPU:
• 0.5 CPU cores / 1 GHz of a core per OSD disk (1-2 CPU cores for SSD
drives)
• Ceph-mon sizing: see sizing example #1 (and the sketch below)
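A rough planning aid applying the rules of thumb above per OSD node (our sketch, not an official Ceph formula; the per-NIC disk count is a midpoint of the ranges above, and the 20 x 1.8 TB node shape matches sizing example #2 below):

    import math

    def osd_node_requirements(hdd_count, hdd_tb, hdd_type="sas"):
        """Apply the deck's rules of thumb to one OSD node."""
        per_nic = {"sas": 9, "sata": 12}[hdd_type]  # 8-10 SAS / ~12 SATA per 10G NIC
        return {
            "nics_10g": math.ceil(hdd_count / per_nic),
            "journal_ssds": math.ceil(hdd_count / 5),   # 1 SSD per 4-6 OSD drives
            "ram_gb": math.ceil(hdd_count * hdd_tb),    # 1 GB RAM per TB of OSD space
            "cpu_cores": math.ceil(hdd_count * 0.5),    # 0.5 core per OSD disk
        }

    print(osd_node_requirements(20, 1.8, "sas"))
    # {'nics_10g': 3, 'journal_ssds': 4, 'ram_gb': 36, 'cpu_cores': 10}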
Rules of thumb and magic numbers
#2
• For a typical “unpredictable” workload we usually
assume:
– a 70/30 read/write IOPS split
– a ~4-8 KB random I/O pattern
• To estimate Ceph IOPS efficiency we usually
take*:
– 4-8K random read – 0.88
– 4-8K random write – 0.64
* Based on benchmark data and semi-empirical evidence
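One way to fold the two factors into a single planning number (our blending for a mixed workload; the sizing examples below rate reads and writes separately) is to weight them by the assumed 70/30 split:

    READ_EFF, WRITE_EFF = 0.88, 0.64  # 4-8K random read / write efficiency

    def required_raw_iops(client_iops, read_frac=0.70):
        """Raw spindle IOPS needed to deliver client_iops of a 70/30 mix."""
        blended_eff = read_frac * READ_EFF + (1 - read_frac) * WRITE_EFF  # 0.808
        return client_iops / blended_eff

    print(round(required_raw_iops(50_000)))  # ~61,881 raw IOPS for sizing task #1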
Sizing task #1
• 1 petabyte usable (net) storage
• 500 VMs
• 100 IOPS per VM
• 50,000 IOPS aggregated (500 × 100)
Sizing example #1 – Ceph-mon
• Ceph monitors
– Sample spec:
• HP DL360 Gen9
• CPU: 2620v3 (6 cores)
• RAM: 64 GB
• HDD: 1 x 1 TB SATA
• NIC: 1 x 1 Gig NIC, 1 x 10 Gig
– How many?
• 1 x ceph-mon node per 15-20 OSD nodes
Sizing example #2 – Ceph-OSD
• Lower density, performance optimized
– Sample spec:
• HP DL380 Gen9
• CPU: 2620v3 (6 cores)
• RAM: 64 GB
• HDD (OS drives): 2 x 500 GB SATA
• HDD (OSD drives): 20 x 1.8 TB SAS (10k RPM, 2.5 inch – HGST C10K1800)
• SSD (write journals): 4 x 128 GB Intel S3700
• NIC: 1 x 1 Gig NIC, 4 x 10 Gig (2 bonds – 1 for ceph-public, 1 for ceph-replication)
– 1000 / (20 × 1.8 × 0.85) × 3 ≈ 98 servers to serve 1 petabyte net (see the sketch below)
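The slide's formula in runnable form (0.85 discounts filesystem overhead and headroom, 3 is the replica count; rounding to the nearest integer reproduces both OSD examples):

    def osd_nodes_needed(net_pb, disks_per_node, disk_tb, fs_overhead=0.85, replicas=3):
        """Servers needed for net_pb petabytes usable, per the slide's formula."""
        usable_tb_per_node = disks_per_node * disk_tb * fs_overhead
        return round(net_pb * 1000 / usable_tb_per_node * replicas)

    print(osd_nodes_needed(1, 20, 1.8))  # 98  (example #2: 20 x 1.8 TB SAS)
    print(osd_nodes_needed(1, 28, 4.0))  # 32  (example #3: 28 x 4 TB SATA)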
Expected performance level
• Conservative drive rating: 250 IOPS
• Cluster read rating:
– 250 × 20 × 98 × 0.88 = 431,200 IOPS
– Approx. 800 IOPS per VM of capacity is available
• Cluster write rating:
– 250 × 20 × 98 × 0.64 = 313,600 IOPS
– Approx. 600 IOPS per VM of capacity is available
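The same arithmetic as a small helper, checked against both OSD node shapes (500 VMs assumed, per sizing task #1):

    def cluster_iops(drive_iops, drives_per_node, nodes, vms=500):
        """Cluster read/write ratings and per-VM headroom, per the deck's method."""
        raw = drive_iops * drives_per_node * nodes
        read, write = raw * 0.88, raw * 0.64  # 4-8K random efficiency factors
        return read, write, read / vms, write / vms

    print(cluster_iops(250, 20, 98))  # 431,200 / 313,600 -> ~800 / ~600 per VM
    print(cluster_iops(150, 28, 32))  # 118,272 /  86,016 -> ~200 / ~150 per VM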
Sizing example #3 – Ceph-OSD
• Higher density, cost optimized
– Sample spec:
• SuperMicro 6048R
• CPU: 2 x 2630v3 (16 cores)
• RAM: 192 GB
• HDD (OS drives): 2 x 500 GB SATA
• HDD (OSD drives): 28 x 4 TB SATA (7.2k RPM, 3.5 inch – HGST 7K4000)
• SSD (write journals): 6 x 128 GB Intel S3700
• NIC: 1 x 1 Gig NIC, 4 x 10 Gig (2 bonds – 1 for ceph-public, 1 for ceph-replication)
– 1000 / (28 × 4 × 0.85) × 3 ≈ 32 servers to serve 1 petabyte net
Expected performance level
• Conservative drive rating: 150 IOPS
• Cluster read rating:
– 150 × 32 × 28 × 0.88 = 118,272 IOPS
– Approx. 200 IOPS per VM of capacity is available
• Cluster write rating:
– 150 × 32 × 28 × 0.64 = 86,016 IOPS
– Approx. 150 IOPS per VM of capacity is available
What about SSDs?
• In Firefly (0.80.x) an OSD is limited to ~5k
IOPS, so full-SSD Ceph cluster performance
disappoints
• In Giant (0.87) ~30k IOPS per OSD should be
reachable, but this has yet to be verified
Lessons learned
How we deploy (with Fuel)
Select storage options ⇒ Assign roles to nodes ⇒ Allocate disks.
What we now know #1
• RBD is not suitable for IOPS-heavy workloads:
– Realistic expectations:
• 100 IOPS per VM (OSDs on HDDs)
• 10K IOPS per VM (OSDs on SSDs, pre-Giant Ceph)
• 10-40 ms latency is expected and accepted
• high CPU load on OSD nodes at high IOPS (bumping up CPU requirements)
• Object storage for tenants and applications has limitations
– Swift API gaps in radosgw:
• bulk operations
• custom account metadata
• static website hosting
• expiring objects
• object versioning
What we now know #2
• NIC bonding
– Use Linux bonds, not OVS bonds; OVS underperforms and eats CPU
• Ceph and OVS
– DON’T. Bridges for Ceph networks should be connected directly to the
NIC bond, not via the OVS integration bridge
• Nova evacuate doesn’t work with Ceph RBD-backed VMs
– Fixed in Kilo
• Ceph network co-location (with management, public, etc.)
– DON’T. Use dedicated NICs/NIC pairs for Ceph-public and Ceph-internal
(replication)
• Tested ceph.conf
– Enable RBD cache and other tuning:
https://bugs.launchpad.net/fuel/+bug/1374969
Questions?
Thank you!