3. Dell is a certified reseller of Inktank Services, Support and Training.
• Need to access and buy Inktank Services & Support?
• Inktank 1-year subscription packages
– Inktank Pre-Production Subscription
– Gold (24x7) Subscription
• Inktank Professional Services
– Ceph Pro Services Starter Pack
– Additional service-day options
• Ceph Training from Inktank
– Inktank Ceph100 Fundamentals Training
– Inktank Ceph110 Operations and Tuning Training
– Inktank Ceph120 Ceph and OpenStack Training
8. Barclamps! Automated and Simple Installation
[Diagram: OpenStack architecture with numbered Barclamps (#1 to #9) automating the installation of each component: Dashboard (UI), Keystone, Glance, Nova (Controller, Scheduler, Compute Nodes), Cinder (Controller, Block Device via SAN/NAS/DAS), Quantum, Swift (Proxy and Store Nodes, min. 3 nodes), RabbitMQ, and Database, each exposing its API.]
11. Object Storage Daemons (OSD)
• Allocate sufficient CPU cycles and memory per OSD
– 2GB of memory and 1GHz of AMD or Xeon CPU cycles per OSD
– Hyper-Threading can be used on Xeon Sandy Bridge and up
• Use SSDs as dedicated journal devices to improve random-write latency
– Some workloads benefit from separate journal devices on SSDs
– Rule of thumb: 6 OSDs per journal SSD (see the sizing sketch below)
• No RAID controller
– Just JBOD
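
A quick way to sanity-check a node design against these rules of thumb is a short Python sketch (the per-OSD figures are the guidelines above; the example node is the R720XD configuration from the hardware slide later in the deck, with the E5-2603 v2's 4 cores at 1.8GHz):

# Sanity-check an OSD node design against the per-OSD rules of thumb.
OSD_RAM_GB = 2             # ~2GB of memory per OSD daemon
OSD_CPU_GHZ = 1.0          # ~1GHz of Xeon/AMD cycles per OSD daemon
OSDS_PER_JOURNAL_SSD = 6   # rule of thumb: 6 OSDs sharing one SSD journal

def check_osd_node(data_disks, ram_gb, cores, core_ghz, journal_ssds):
    osds = data_disks                            # one OSD per disk (JBOD)
    need_ram = osds * OSD_RAM_GB
    need_ghz = osds * OSD_CPU_GHZ
    need_ssds = -(-osds // OSDS_PER_JOURNAL_SSD) # ceiling division
    print(f"{osds} OSDs: {need_ram}GB RAM needed (have {ram_gb}), "
          f"{need_ghz:.0f}GHz needed (have {cores * core_ghz:.1f}), "
          f"{need_ssds} journal SSDs needed (have {journal_ssds})")

check_osd_node(data_disks=12, ram_gb=32, cores=4, core_ghz=1.8,
               journal_ssds=2)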
12. Ceph Cluster Monitors
• Best practice is to deploy the monitor role on dedicated hardware
– Not resource-intensive, but critical
– Using separate hardware ensures no contention for resources
• Make sure monitor processes are never starved for resources
– If running a monitor process on shared hardware, fence off resources
• Deploy an odd number of monitors (3 or 5)
– An odd number of monitors is needed for quorum voting (see the quorum sketch below)
– Clusters of fewer than 200 nodes work well with 3 monitors
– Larger clusters may benefit from 5
– The main reason to go to 7 is to have redundancy across fault zones
• Add redundancy to monitor nodes as appropriate
– Make sure the monitor nodes are distributed across fault zones
– Consider refactoring fault zones if more than 7 monitors are needed
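
Why odd numbers help: the monitors use majority (Paxos) voting, so a quorum needs a strict majority, and an even count adds no extra failure tolerance. A minimal Python sketch of the arithmetic:

# Quorum arithmetic for Ceph monitors (majority voting).
def failures_tolerated(monitors):
    quorum = monitors // 2 + 1    # strict majority required
    return monitors - quorum      # monitors that can fail

for n in (3, 4, 5, 7):
    print(f"{n} monitors: quorum of {n // 2 + 1}, "
          f"tolerates {failures_tolerated(n)} failure(s)")
# 3 -> 1, 4 -> still 1 (why even counts add nothing), 5 -> 2, 7 -> 3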
13. Potential Dell Server Hardware Choices
• Rackable Storage Node
– Dell PowerEdge R720XD or R515
– Intel Xeon E5-2603 v2 or AMD C32 platform
– 32GB RAM
– 2x 400GB SSD drives (OS and optionally journals)
– 12x 4TB SATA drives (see the capacity sketch below)
– 2x 10GbE, 1x 1GbE, IPMI
• Bladed Storage Node
– Dell PowerEdge C8000XD disk sled and PowerEdge C8220 compute sled
– 2x Xeon E5-2603 v2 CPUs, 32GB RAM
– 2x 400GB SSD drives (OS and optionally journals)
– 12x 4TB NL-SAS drives
– 2x 10GbE, 1x 1GbE, IPMI
• Monitor Node
– Dell PowerEdge R415
– 2x 1TB SATA
– 1x 10GbE
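
For rough capacity planning, the raw and usable space of such a storage node can be estimated in a couple of lines (a sketch: the 3x replication factor is Ceph's common default for replicated pools, not a figure from this slide):

# Rough capacity estimate for one 12-disk storage node.
data_disks, disk_tb = 12, 4
replication = 3                    # assumed 3x replicated pools
raw_tb = data_disks * disk_tb      # 48TB raw per node
usable_tb = raw_tb / replication   # ~16TB usable per node
print(f"raw: {raw_tb}TB, usable at {replication}x replication: "
      f"{usable_tb:.0f}TB per node")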
14. Configure Networking within the Rack
• Each Pod (e.g., row of racks) contains two Spine switches
• Each Leaf switch is redundantly uplinked to each Spine switch
• Spine switches are redundantly linked to each other with 2x 40GbE
• Each Spine switch uplinks to other pods with 3x 40GbE (see the oversubscription sketch below)
[Diagram: three racks, each with its nodes connected over 10GbE links to a High-Speed Top-of-Rack (Leaf) switch; each Leaf switch is uplinked over 40GbE to both High-Speed End-of-Row (Spine) switches, which also link to other rows (pods).]
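
With these rules, the uplink oversubscription of a rack can be checked quickly (a Python sketch; the node count per rack and one 40GbE uplink per leaf-spine pair are assumptions, while the 2x 10GbE per node comes from the hardware slide):

# Leaf uplink oversubscription for one rack (assumed figures).
nodes_per_rack = 16              # assumption, depends on rack design
node_gbps = 2 * 10               # 2x 10GbE per node
uplink_gbps = 2 * 40             # one 40GbE uplink to each of 2 spines
downlink = nodes_per_rack * node_gbps
print(f"downlink {downlink}Gb/s vs uplink {uplink_gbps}Gb/s "
      f"-> {downlink / uplink_gbps:.1f}:1 oversubscription")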
15. Networking Overview
• Plan for low latency and high bandwidth
• Use 10GbE switches within the rack
• Use 40GbE uplinks between racks
• One option: Dell Force10 S4810 switches with port aggregation at the rack (leaf) level, and Force10 S6000 switches with 40GbE at the aggregation (spine) level (see the port-count sketch below)
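
As a port-count sanity check (a sketch; the S4810's 48x 10GbE plus 4x 40GbE port layout is from Dell's public specifications, and the node count per rack is an assumption):

# Does one Force10 S4810 (48x 10GbE + 4x 40GbE) serve a rack?
s4810_10g, s4810_40g = 48, 4
nodes_per_rack = 16              # assumption
ports_10g = nodes_per_rack * 2   # 2x 10GbE per node
ports_40g = 2                    # one uplink to each spine switch
fits = ports_10g <= s4810_10g and ports_40g <= s4810_40g
print(f"10GbE ports: {ports_10g}/{s4810_10g}, "
      f"40GbE uplinks: {ports_40g}/{s4810_40g} -> "
      f"{'fits' if fits else 'does not fit'}")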