Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Paul Brook and Michael Holzerland, Dell

1. Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
   Michael Holzerland: michael_holzerland@dell.com
   Paul Brook: Paul_brook@dell.com, Twitter @paulbrookatdell
2. Agenda
   Introduction
   • Inktank & Dell
   • Dell Crowbar
   • Automation, scale
   Best Practice with Ceph
   • Cluster best practice
   • Networking
   • Whitepapers
   Crowbar Demo
3. Dell is a certified reseller of Inktank Services, Support and Training
   • Need to access and buy Inktank Services & Support?
   • Inktank 1-year subscription packages
     – Inktank Pre-Production subscription
     – Gold (24x7) subscription
   • Inktank Professional Services
     – Ceph Pro Services Starter Pack
     – Additional days of services options
   • Ceph training from Inktank
     – Inktank Ceph100 Fundamentals Training
     – Inktank Ceph110 Operations and Tuning Training
     – Inktank Ceph120 Ceph and OpenStack Training
4. Dell OpenStack Cloud Solution
   [Diagram: hardware, software, and operations layers, with "Crowbar", CloudOps software, services & consulting, and a reference architecture]
5. Components Involved
   http://docs.openstack.org/trunk/openstack-compute/admin/content/conceptual-architecture.html
6. Data Center Solutions: Crowbar
7. Complementary products: Dell "Crowbar" Ops Management
   [Stack diagram: physical resources; core components & operating systems; cloud infrastructure; APIs, user access, & ecosystem partners]
8. Barclamps! Automated and simple installation
   [Architecture diagram: OpenStack services deployed as numbered barclamps (#1 to #9) — Nova controller and compute nodes, Quantum, Cinder with block devices (SAN/NAS/DAS), Swift proxy and store nodes (min. 3 nodes), Glance, Keystone, Dashboard UI, scheduler, database, RabbitMQ, and their APIs]
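The Cinder barclamp wires OpenStack block storage to Ceph's RBD layer. As a minimal sketch of that path, the snippet below creates a block image in a Ceph pool from Python using the python-rados and python-rbd bindings, roughly the operation a Cinder volume create triggers on the Ceph side. The pool name "volumes", the image name, and the conffile path are illustrative assumptions, not values from the slides.

    import rados
    import rbd

    # Connect to the cluster described by ceph.conf (path assumed)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')   # pool backing Cinder (assumed name)
        try:
            # Create a 10 GiB RBD image, analogous to a Cinder volume
            rbd.RBD().create(ioctx, 'demo-volume', 10 * 1024 ** 3)
            print('created RBD image "demo-volume" in pool "volumes"')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()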
9. Crowbar Landing Page
   • http://crowbar.github.io/
10. Best Practices
11. Object Storage Daemons (OSDs)
   • Allocate sufficient CPU cycles and memory per OSD
     – 2 GB memory and 1 GHz of AMD or Xeon CPU cycles per OSD
     – Hyper-Threading can be used on Xeon Sandy Bridge and up
   • Use SSDs as dedicated journal devices to improve random latency
     – Some workloads benefit from separate journal devices on SSDs
     – Rule of thumb: 6 OSDs per journal SSD
   • No RAID controller
     – Just JBOD
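A minimal sketch of those rules of thumb as a sizing helper; the size_osd_node helper, the 4 GB OS overhead, and the 12-drive example node are illustrative assumptions, not figures from the slides.

    import math

    # Rules of thumb from the slide: 2 GB RAM and ~1 GHz of CPU per OSD,
    # and roughly 6 OSDs sharing one SSD journal device.
    RAM_GB_PER_OSD = 2
    GHZ_PER_OSD = 1.0
    OSDS_PER_JOURNAL_SSD = 6

    def size_osd_node(num_osds, os_ram_gb=4):
        """Rough resource budget for a node running num_osds OSDs.
        The 4 GB OS overhead is an assumption, not from the slides."""
        return {
            'ram_gb': num_osds * RAM_GB_PER_OSD + os_ram_gb,
            'cpu_ghz': num_osds * GHZ_PER_OSD,
            'journal_ssds': math.ceil(num_osds / OSDS_PER_JOURNAL_SSD),
        }

    # Example: a 12-drive node running one OSD per data drive
    print(size_osd_node(12))
    # -> {'ram_gb': 28, 'cpu_ghz': 12.0, 'journal_ssds': 2}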
12. Ceph Cluster Monitors
   • Best practice is to deploy the monitor role on dedicated hardware
     – Not resource intensive, but critical
     – Using separate hardware ensures no contention for resources
   • Make sure monitor processes are never starved for resources
     – If running the monitor process on shared hardware, fence off resources
   • Deploy an odd number of monitors (3 or 5)
     – An odd number of monitors is needed for quorum voting
     – Clusters under 200 nodes work well with 3 monitors
     – Larger clusters may benefit from 5
     – The main reason to go to 7 is to have redundancy in fault zones
   • Add redundancy to monitor nodes as appropriate
     – Make sure the monitor nodes are distributed across fault zones
     – Consider refactoring fault zones if needing more than 7 monitors
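A small sketch encoding the monitor-count guidance above; the fault-zone threshold used to decide when 7 monitors are warranted is an assumed value for illustration.

    def recommended_monitor_count(cluster_nodes, fault_zones=1):
        """Suggest a Ceph monitor count following the slide's guidance:
        always an odd number, 3 for clusters under ~200 nodes, 5 for
        larger ones, 7 only when extra fault-zone redundancy is needed."""
        if fault_zones > 5:        # assumed trigger for stepping up to 7
            return 7
        if cluster_nodes < 200:
            return 3
        return 5

    for nodes in (50, 200, 500):
        print(nodes, 'nodes ->', recommended_monitor_count(nodes), 'monitors')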
13. Potential Dell Server Hardware Choices
   • Rackable storage node
     – Dell PowerEdge R720XD or R515
     – Intel Xeon E5-2603 v2 or AMD C32 platform
     – 32 GB RAM
     – 2x 400 GB SSD drives (OS and optionally journals)
     – 12x 4 TB SATA drives
     – 2x 10GbE, 1x 1GbE, IPMI
   • Bladed storage node
     – Dell PowerEdge C8000XD (disk) and PowerEdge C8220 (CPU)
     – 2x Xeon E5-2603 v2 CPUs, 32 GB RAM
     – 2x 400 GB SSD drives (OS and optionally journals)
     – 12x 4 TB NL-SAS drives
     – 2x 10GbE, 1x 1GbE, IPMI
   • Monitor node
     – Dell PowerEdge R415
     – 2x 1 TB SATA
     – 1x 10GbE
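As a quick worked check, the 12-drive rackable node above fits the per-OSD rules of thumb from the OSD slide; this is only a sketch, and the 4 GB OS overhead is an assumed figure.

    # 12 data drives -> 12 OSDs, checked against 32 GB RAM and 2 journal SSDs
    osds = 12                      # one OSD per 4 TB SATA drive
    ram_needed_gb = osds * 2 + 4   # 2 GB per OSD plus assumed OS overhead
    ssds_needed = -(-osds // 6)    # ceiling division: ~6 OSDs per journal SSD

    assert ram_needed_gb <= 32, 'node needs more RAM'
    assert ssds_needed <= 2, 'node needs more journal SSDs'
    print(f'{osds} OSDs: {ram_needed_gb} GB RAM, {ssds_needed} journal SSDs -> fits')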
14. Configure Networking within the Rack
   • Each pod (e.g., a row of racks) contains two spine switches
   • Each leaf switch is redundantly uplinked to each spine switch
   • Spine switches are redundantly linked to each other with 2x 40GbE
   • Each spine switch has three uplinks to other pods with 3x 40GbE
   [Diagram: three high-speed top-of-rack (leaf) switches, each serving the nodes in its rack, uplinked to two high-speed end-of-row (spine) switches; 10GbE links within the rack, 40GbE links between leaf and spine and to other rows (pods)]
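A rough sketch of the bandwidth math implied by that layout, assuming each storage node uses its 2x 10GbE into the leaf switch and each leaf has one 40GbE uplink to each of the two spines; the 16 nodes per rack is an assumed figure for illustration.

    # Leaf-switch oversubscription estimate for the leaf/spine layout above
    nodes_per_rack = 16                          # assumed, not from the slides
    downlink_gbps = nodes_per_rack * 2 * 10      # node-facing bandwidth per leaf
    uplink_gbps = 2 * 40                         # one 40GbE uplink to each spine

    oversubscription = downlink_gbps / uplink_gbps
    print(f'leaf downlink {downlink_gbps} Gb/s, uplink {uplink_gbps} Gb/s, '
          f'oversubscription {oversubscription:.1f}:1')
    # -> leaf downlink 320 Gb/s, uplink 80 Gb/s, oversubscription 4.0:1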
15. Networking Overview
   • Plan for low latency and high bandwidth
   • Use 10GbE switches within the rack
   • Use 40GbE uplinks between racks
   • One option: Dell Force10 S4810 switches with port aggregation, plus Force10 S6000 switches at the 40GbE aggregation level
16. Whitepapers!
17. Questions? - Demo -