Dell and CEPH 
Steve Smith: 
Steve_l_smith@dell.com 
@SteveSAtDell 
Paul Brook 
Paul_brook@dell.com 
Twitter @PaulBrookAtDell 
Ceph Day London 
October 22nd 2014
Agenda 
• Why we are here – we sell Ceph support 
• You need hardware to sit this on – here are some ideas 
• Some best practices shared with Ceph colleagues this year 
• A concept (Research Data) – we would like your input 
Dell Corporation
Dell is a certified reseller of Red Hat-Inktank 
Services, Support and Training. 
• Need to access and buy Red Hat Services & Support? 
15+ Years of Red Hat and Dell 
• Red Hat 1-year /3-year subscription packages 
– Inktank Pre-Production subscription 
– Gold (24*7) Subscription 
• Red Hat Professional Services 
– Ceph Pro services Starter Pack 
– Additional days services options 
– Ceph Training from Red Hat 
Or… you can download Ceph for free 
Dell Corporation 
Components Involved 
http://docs.openstack.org/training-guides/content/module001-ch004-openstack-architecture.html 
Dell Corporation
Dell OpenStack Cloud Solution 
Dell Corporation
Best Practices 
(well… some) 
With acknowledgement and thanks to Kyle and Mark at Inktank 
Dell Corporation
Planning your Ceph Implementation 
• Business Requirements 
– Budget considerations, organisational commitment 
– Replacing Enterprise SAN/NAS for cost saving 
– xaaS use cases for massive-scale, cost-effective storage 
– Avoid lock-in – use open source and industry standards 
– Steady-state vs. Spike data usage 
• Sizing requirements (see the sketch below) 
– What is the initial storage capacity? 
– What is the expected growth rate? 
• Workload requirements 
– Does the workload need high performance, or is it more capacity-focused? 
– What are IOPS/Throughput requirements? 
– What applications will be running on the Ceph cluster? 
– What type of data will be stored? 
Dell Corporation
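To make the sizing questions above concrete, here is a minimal capacity-model sketch. It is illustrative only: the 500 TB target, 30% yearly growth, 3 replicas and 70% fill target are assumptions, not Dell or Inktank guidance.

```python
# Illustrative Ceph capacity-planning sketch (all inputs are assumptions).
def raw_capacity_tb(usable_tb, replicas=3, yearly_growth=0.30, years=3, fill_target=0.70):
    """Raw TB needed to hold `usable_tb` after `years` of growth,
    with `replicas` copies, while staying below `fill_target` utilisation."""
    future_usable = usable_tb * (1 + yearly_growth) ** years
    return future_usable * replicas / fill_target

raw = raw_capacity_tb(500)            # 500 TB usable today (assumed)
per_node_tb = 12 * 3                  # 12 x 3 TB data disks per node, as on the later slides
print(f"raw capacity needed : {raw:,.0f} TB")
print(f"storage nodes needed: {raw / per_node_tb:.0f}")
```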
Architectural considerations – redundancy and replication 
• Trade-off between cost and reliability (use-case dependent) 
• How many node failures can be tolerated? 
• In a multi-rack scenario, should a whole rack failure be 
tolerated? 
• Is there a need for multi-site data replication? 
• Erasure coding – more usable capacity from the same raw disks, at the cost of more CPU load (see the sketch below) 
• Plan for redundancy of the monitor nodes – distribute across 
fault zones 
• 3 copies = 8 nines availability, less than 1 second downtime per 
year 
• Many, many things affect performance – in Ceph, above Ceph and below Ceph. 
Dell Corporation
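To illustrate the erasure-coding bullet, here is a hedged sketch that creates an EC profile and pool with the standard ceph CLI. The profile name, k/m values, failure domain and PG count are assumptions, not recommendations: with k=8, m=3 the raw overhead is (8+3)/8 ≈ 1.4x instead of 3x for replication, but encoding costs extra CPU.

```python
# Hedged sketch: create an erasure-coded pool via the ceph CLI (assumed values).
import subprocess

def ceph(*args):
    print("+ ceph " + " ".join(args))
    subprocess.run(["ceph", *args], check=True)

# 8 data chunks + 3 coding chunks: tolerates 3 failures,
# raw overhead = (8 + 3) / 8 = 1.375x vs 3x for 3-way replication.
ceph("osd", "erasure-code-profile", "set", "ec-8-3",
     "k=8", "m=3", "ruleset-failure-domain=rack")

# PG count (1024) is illustrative only; size it for your OSD count.
ceph("osd", "pool", "create", "ecpool", "1024", "1024", "erasure", "ec-8-3")
```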
Understanding Your Workload 
Dell Corporation
CEPH Architecture Refresh 
Dell Corporation
Understanding Ceph (1) 
Dell Corporation
Understanding Ceph (2) 
Dell Corporation
Understanding The Storage Server 
Dell Corporation
Multi-Site Issues 
• Within a Ceph cluster, RADOS enforces strong consistency 
• The writer waits for the ACK, which is sent only after the primary copy, the replicated copies and the journals have all been written 
• On a WAN this might extend latencies unacceptably. 
• Alternatives 
• For S3/Swift systems, use federated gateways between Ceph clusters; replication between them is eventually consistent 
• For remote backup, use RBD with sync agents and incremental snapshots (see the sketch below) 
Dell Corporation
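A minimal sketch of the incremental-snapshot approach, using the standard `rbd snap create` / `rbd export-diff` / `rbd import-diff` commands. Pool, image and remote host names are placeholders, and error handling is omitted.

```python
# Hedged sketch: incremental RBD replication to a remote cluster.
import subprocess
from datetime import datetime, timezone

POOL, IMAGE = "rbd", "vm-disk-001"     # placeholder pool/image names
REMOTE = "backup-site"                 # placeholder ssh host at the remote cluster

def replicate(prev_snap=None):
    snap = datetime.now(timezone.utc).strftime("snap-%Y%m%d-%H%M%S")
    subprocess.run(["rbd", "snap", "create", f"{POOL}/{IMAGE}@{snap}"], check=True)

    export = ["rbd", "export-diff", f"{POOL}/{IMAGE}@{snap}", "-"]
    if prev_snap:                                  # ship only the delta since last run
        export[2:2] = ["--from-snap", prev_snap]

    exporter = subprocess.Popen(export, stdout=subprocess.PIPE)
    importer = subprocess.Popen(["ssh", REMOTE, "rbd", "import-diff", "-",
                                 f"{POOL}/{IMAGE}"], stdin=exporter.stdout)
    exporter.stdout.close()
    importer.communicate()
    exporter.wait()
    return snap        # pass this back in as prev_snap on the next run
```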
Recommended Storage Server Configurations 
Ceph and Inktank recommendations are a bit out of date. 
• CPU – roughly 1 core-GHz per OSD (see the sketch below) 
– so a 2 x 8-core Intel Haswell 2.0GHz server could support 32 OSDs 
– less for AMD 
• Memory – 2GB per OSD 
– Must be ECC 
• Disk Controller – SAS or SATA without expander for data and journal disks; RAID 1 for operating system disks 
• Data Disks – Size doesn’t matter! Rebuilds happen across 
hundreds of placement groups. 
– 12 disks seems a good number 
• Journal Disks – SSDs – write optimised 
Dell Corporation
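The CPU and memory rules of thumb above reduce to simple arithmetic; this sketch just re-derives the 32-OSD example quoted on the slide.

```python
# Rule-of-thumb check: OSDs supportable per server (per the guidance above).
def max_osds(cores, clock_ghz, ram_gb, ghz_per_osd=1.0, gb_per_osd=2.0):
    by_cpu = int(cores * clock_ghz / ghz_per_osd)
    by_ram = int(ram_gb / gb_per_osd)
    return min(by_cpu, by_ram)

# 2 x 8-core Haswell at 2.0 GHz with 64 GB RAM (the "sensible" amount on a later slide)
print(max_osds(cores=16, clock_ghz=2.0, ram_gb=64))   # -> 32
```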
Intel Processors 
Dell Corporation
Memory Considerations 
(Diagram: DIMM population across memory channels C0–C7 on two sockets) 
• Always populate all channels – groups of 8 
• Anything less loses significant memory bandwidth 
• Speed drops with 3DPC (sometimes 2DPC) 
• Use Dual Rank RDIMMs for maximum performance and expandability 
• Important to pin process and data to the same NUMA node (see the sketch below) 
• But let OS processes float 
• Or try Hyperthreading 
• Sensible memory is now 64GB (8 x 8GB RDIMMs) 
Dell Corporation
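A minimal sketch of the NUMA-pinning point: read one node's CPU list from sysfs and restrict a process (for example an OSD) to those CPUs. This covers CPU affinity only; memory binding still needs numactl or libnuma, and the PID in the example is hypothetical.

```python
# Hedged sketch: pin a process to the CPUs of one NUMA node (Linux only).
import os

def node_cpus(node=0):
    """Parse a NUMA node's CPU list from sysfs, e.g. '0-7,16-23'."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

def pin_to_node(pid, node=0):
    os.sched_setaffinity(pid, node_cpus(node))   # CPU placement only

# pin_to_node(12345, node=0)   # 12345 = hypothetical OSD daemon PID
```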
DreamObjects Hardware Specs 
• Load Balancer x2 
• RADOS Gateway x4 
• Storage Node x90 – Dell PowerEdge R515 
– 6-core AMD CPU, 32GB RAM 
– 2x 300GB SAS drives (OS) 
– 12x 3TB SATA drives 
– 2x 10GbE, 1x 1GbE, IPMI 
• Management Node x3 – Dell PowerEdge R415 
– 2x 1TB SATA 
– 1x 10GbE 
Dell Corporation
Ceph Gateway Server 
• Gateway does CRC32 and MD5 checksumming 
– Now included in Intel AVX2 on Haswell 
• 64GB memory (minimum sensible) 
• 2 separate 10GbE NICs, 1 for client comms, 1 for store/retrieve 
• Make sure you have enough file handles – the default (100) is far too low; start at 4096 (see the sketch below) 
• Load balancing with multiple gateways 
Dell Corporation
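To illustrate the file-handle point, here is a small sketch that checks and raises the per-process open-file limit from Python. In practice this is usually done with ulimit or /etc/security/limits.conf; 4096 is simply the starting value suggested above.

```python
# Hedged sketch: make sure the gateway process can open enough files.
import resource

def ensure_nofile(minimum=4096):
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"current nofile limits: soft={soft}, hard={hard}")
    if soft < minimum:
        # The soft limit can be raised up to the hard limit; raising the
        # hard limit itself needs root (or limits.conf).
        resource.setrlimit(resource.RLIMIT_NOFILE, (min(minimum, hard), hard))

ensure_nofile(4096)
```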
Ceph Cluster Monitors 
• Best practice to deploy monitor role on dedicated hardware 
– Not resource intensive but critical – Stewards of the cluster 
– Using separate hardware ensures no contention for resources 
• Make sure monitor processes are never starved for resources 
– If running monitor process on shared hardware, fence off resources 
• Deploy an odd number of monitors (3 or 5) – see the sketch below 
– Need to have an odd number of monitors for quorum voting 
– Clusters < 200 nodes work well with 3 monitors 
– Larger clusters may benefit from 5 
– Main reason to go to 7 is to have redundancy in fault zones 
• Add redundancy to monitor nodes as appropriate 
– Make sure the monitor nodes are distributed across fault zones 
– Consider refactoring fault zones if needing more than 7 monitors 
– Build in redundant power, cooling, disk 
Dell Corporation
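To make the monitor guidance concrete, here is a minimal sketch that emits the monitor-related lines of a ceph.conf for three monitors in separate fault zones. Hostnames and addresses are placeholders.

```python
# Hedged sketch: ceph.conf fragment for 3 monitors spread across racks.
mons = {                            # placeholder hostnames/addresses, one per fault zone
    "mon-rack1": "10.0.1.10",
    "mon-rack2": "10.0.2.10",
    "mon-rack3": "10.0.3.10",
}
assert len(mons) % 2 == 1, "use an odd number of monitors for quorum voting"

fragment = (
    "[global]\n"
    f"mon_initial_members = {', '.join(mons)}\n"
    f"mon_host = {', '.join(mons.values())}\n"
)
print(fragment)
```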
Networking Overview 
• Plan for low latency and high bandwidth 
• Use 10GbE switches within the rack 
• Use 40GbE uplinks between racks in the datacentre 
• Use more bandwidth at the backend compared to the front end 
• Enable jumbo frames (see the sketch below) 
• Replication is done by the storage not the client 
• Client writes to primary and journal 
• Primary writes to replicas through back end network 
• Backend also does recovery and rebalancing 
Dell Corporation
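A minimal sketch of the front-end/back-end split described above: the ceph.conf fragment separates client traffic from replication and recovery traffic, and the MTU command enables jumbo frames on the cluster-facing NIC. The subnets and interface name are placeholders and must match the switch configuration.

```python
# Hedged sketch: separate public and cluster networks, enable jumbo frames.
import subprocess

CEPH_CONF_FRAGMENT = """\
[global]
public network  = 192.168.10.0/24
cluster network = 192.168.20.0/24
"""
print(CEPH_CONF_FRAGMENT)

# "p2p1" is a placeholder for the back-end (cluster) interface.
subprocess.run(["ip", "link", "set", "dev", "p2p1", "mtu", "9000"], check=True)
```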
Potential Dell Server Hardware Choices 
• Rackable Storage Node 
– Dell PowerEdge R720xd or the new 13G R730/R730xd 
• Bladed Storage Node 
– Dell PowerEdge C8000XD Disk 
and PowerEdge C8220 CPU 
– 2x Xeon E5-2687 CPU, 128GB RAM 
– 2x 400GB SSD drives 
(OS and optionally Journals) 
– 12x 3TB NL-SAS drives 
– 2x 10GbE, 1x 1GbE, IPMI 
• Monitor Node 
– Dell PowerEdge R415 
– 2x 1TB SATA 
– 1x 10GbE 
Dell Corporation 
Mixed Use Deployments 
• For simplicity, dedicate hardware to specific role 
– That may not always be practical (e.g., small clusters) 
– If needed, can combine multiple functions on same hardware 
• Multiple Ceph Roles (e.g., OSD+RGW, OSD+MDS, Mon+RGW) 
– Balance IO-intensive with CPU/memory intensive roles 
– If both roles are relatively light (e.g., Mon and RGW) can 
combine 
• Multiple Applications (e.g., OSD+Compute, Mon+Horizon) 
– In OpenStack environment, may need to mix components 
– Follow same logic of balancing IO-intensive with CPU intensive 
Dell Corporation
Super-size CEPH 
• Lots of Disk space 
• CEPH Rules apply 
• Great for cold dark storage 
• Surprisingly popular with 
Customers 
• 3PB raw in a rack! (see the arithmetic below) 
R730/R730XD or R720/R720XD 
PowerVault JBOD 
Dell Corporation
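The 3PB-per-rack figure is easy to sanity-check; the numbers below are assumptions for illustration only, not a validated Dell configuration.

```python
# Illustrative arithmetic: raw capacity of a dense server + JBOD rack.
drives_per_jbod = 60      # assumed dense PowerVault-class JBOD
drive_tb = 6              # assumed large NL-SAS drives
jbods_per_rack = 9        # assumed, leaving rack space for servers and switches

raw_tb = drives_per_jbod * drive_tb * jbods_per_rack
print(f"~{raw_tb / 1000:.1f} PB raw per rack")      # ~3.2 PB with these assumptions
```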
Other Design Guidelines 
• Use simple components; don't buy more than you need 
– Save money on RAID, redundant NICs and power supplies, and buy more disks 
• Keep networks as flat as possible (East-West) 
– VLANs don't scale 
– Use software-defined networking for multi-tenancy in the cloud 
• Design the fault zones carefully for NoSPoF (see the sketch below) 
– Rack 
– Row 
– Datacentre 
Dell Corporation
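To make the fault-zone bullet concrete, here is a hedged sketch that builds rack-level CRUSH buckets and a rule placing each replica in a different rack. Bucket and host names are placeholders, and the commands are standard ceph CLI shown only as an illustration.

```python
# Hedged sketch: rack-level failure domains in the CRUSH map.
import subprocess

def ceph(*args):
    print("+ ceph " + " ".join(args))
    subprocess.run(["ceph", *args], check=True)

for rack in ("rack1", "rack2", "rack3"):               # placeholder rack names
    ceph("osd", "crush", "add-bucket", rack, "rack")
    ceph("osd", "crush", "move", rack, "root=default")

# Place each storage host under its rack (placeholder host name shown):
ceph("osd", "crush", "move", "osd-node-01", "rack=rack1")

# Rule that keeps each replica in a different rack:
ceph("osd", "crush", "rule", "create-simple", "replicated-rack", "default", "rack")
```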
Research Data: 
Beta Slides 
Dell Corporation
Concept: Get started? 
Keep, search, collaborate, publish: 
• Research data & publications 
• Digital – pre-publication (any format?) 
• Digital – other (any format?) 
Dell Corporation
Concept: Get started? 
Keep, search, collaborate, publish: 
• Research data & publications 
• Digital – pre-publication (any format?) 
• Digital – other (any format?) 
Open questions: 
• How to tag metadata? 
• How to search? 
• Data security? 
• File types to store? 
• How long to store? 
• How to collaborate? 
Dell Corporation
Holding a tin cup below a Niagara Falls of data! 
Data keeps on coming… and coming… and coming… 
Has anyone else had this problem and already solved it? 
Open source gives the best protection and longevity. “Web 2.0/social has already solved the scale-storage problem.” 
Dell Corporation
Solve problems one at a time 
• OpenStack layer (access) 
• Ceph storage 
• Identity management 
• Governance, policy & control 
• Publish: existing publishing routes 
Dell Corporation
Solve problems one at a time 
• OpenStack layer (access) 
• Ceph storage (start here) 
• Identity management 
• Governance, policy & control 
• Publish: existing publishing routes 
Dell Corporation


Editor's Notes

  1. Welcome to a short overview of Ceph storage in Dell OpenStack-Powered Cloud Solutions. Ceph is a transformational storage technology available as free open-source software. It is a universal storage solution that provides block, file, and object storage from a scalable cluster built on top of standard utility server hardware. Dell has partnered with Inktank, the Ceph experts, to bring a validated Ceph storage solution to Dell cloud customers.
  2. Suggested notes (Paul): We sell Red Hat/Inktank support, training and services. If you want or need it, we can help you get it.
  3. Not even the least bit complicated. But if we are positioning this outside the Ceph community, what is the best way? Cloud-scale, low-cost, flexible storage.
  4. “Executive Pitch”