Best Practices for Ceph-Powered Implementations of Storage-as-a-Service
Kamesh Pemmaraju, Sr. Product Mgr, Dell
Ceph Developer Day, New York City, Oct 2014
Outline 
• Planning your Ceph implementation 
• Ceph Use Cases 
• Choosing targets for Ceph deployments 
• Reference Architecture Considerations 
• Dell Reference Configurations 
• Customer Case Study
Planning your Ceph Implementation 
• Business Requirements 
– Budget considerations, organizational commitment 
– Avoiding lock-in – use open source and industry standards 
– Enterprise IT use cases 
– Cloud applications/XaaS use cases for massive-scale, cost-effective storage 
• Sizing requirements (a rough sizing sketch follows this slide) 
– What is the initial storage capacity? 
– Is data usage steady-state or spiky? 
– What is the expected growth rate? 
• Workload requirements 
– Does the workload need high performance, or is it more capacity-focused? 
– What are IOPS/Throughput requirements? 
– What type of data will be stored? 
– Ephemeral vs. persistent data, Object, Block, File? 
• Ceph is like a Swiss Army knife – it can be tuned for a wide variety of use cases. Let 
us look at some of them.
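As a rough aid to the sizing questions above: raw capacity is roughly usable capacity times the replication factor, divided by a target fill ratio. A minimal sketch, where every figure is an illustrative assumption rather than a recommendation from this deck:

    # Hypothetical sizing sketch -- all numbers below are example assumptions
    usable_tb = 100      # usable capacity the workloads need today
    growth = 1.5         # expected growth over the planning horizon
    replicas = 3         # 3x replication; erasure coding would lower this to ~1.2-1.5x
    max_fill = 0.70      # keep headroom for rebalancing and failed-disk recovery

    raw_tb = usable_tb * growth * replicas / max_fill
    print("Plan for roughly %d TB of raw disk" % raw_tb)   # ~642 TB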
Ceph is like a Swiss Army Knife – it can fit a wide variety of target use cases 
[Quadrant chart: axes run Capacity vs. Performance and Traditional IT vs. Cloud Applications. Traditional IT targets: Virtualization and Private Cloud (traditional SAN/NAS), High Performance (traditional SAN), NAS & Object Content Store (traditional NAS). Cloud Applications targets: XaaS Compute Cloud (open source block), XaaS Content Store (open source NAS/object). "Ceph Target" callouts mark where Ceph fits.]
USE CASE: OPENSTACK 
[Diagram slides: OpenStack consuming Ceph for volumes, ephemeral disks, and copy-on-write snapshots.]
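For reference, a minimal sketch of how these integration points are usually wired up; the pool names (volumes, images, vms) and user names are common conventions, not values from this deck, and the full procedure is in the upstream "Block Devices and OpenStack" guide:

    # cinder.conf -- persistent volumes on RBD
    [DEFAULT]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

    # glance-api.conf -- images stored in RBD enable copy-on-write clones
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance

    # nova.conf -- ephemeral disks backed by RBD
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms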
USE CASE: CLOUD STORAGE 
[Diagram slide: clients accessing the cluster over the S3 and Swift APIs via the RADOS Gateway.]
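A minimal sketch of exercising this path through the RADOS Gateway; the user, endpoint, and bucket names are made-up examples:

    # Create a RADOS Gateway user for S3/Swift access
    radosgw-admin user create --uid=demo --display-name="Demo User"

    # Point any S3 client at the gateway endpoint, e.g. with s3cmd
    s3cmd --host=rgw.example.com --access_key=<key> --secret_key=<secret> mb s3://backups
    s3cmd --host=rgw.example.com --access_key=<key> --secret_key=<secret> put archive.tar.gz s3://backups/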
USE CASE: WEBSCALE APPLICATIONS 
[Diagram slide: applications talking to the cluster directly over the native (librados) protocol.]
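The "native protocol" here is librados; a minimal sketch of an application reading and writing objects directly with the Python binding (the pool name app-data is an assumed example and must already exist):

    import rados  # python-rados, shipped with Ceph

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    ioctx = cluster.open_ioctx('app-data')            # open an I/O context on the pool
    ioctx.write_full('hello-object', b'hello from librados')
    print(ioctx.read('hello-object'))                 # read the object back

    ioctx.close()
    cluster.shutdown()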
USE CASE: PERFORMANCE BLOCK 
[Diagram slides: block clients issuing reads and writes to the Ceph storage cluster, with the later slides splitting the write path and read path.]
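For the performance-block case, the basic RBD workflow looks like the sketch below; the pool, image names, and sizes are examples:

    rbd create --pool volumes --size 102400 db-vol01   # 100 GB image (size in MB)
    rbd map volumes/db-vol01                           # expose it as a /dev/rbd* block device
    rbd snap create volumes/db-vol01@before-upgrade    # instant copy-on-write snapshot
    rbd info volumes/db-vol01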
USE CASE: ARCHIVE / COLD STORAGE 
[Diagram slide: archival data parked in the Ceph storage cluster.]
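Firefly-era erasure coding fits this archive/cold tier; a hedged sketch of creating an erasure-coded pool, where the profile values (k=8, m=3) and PG counts are illustrative rather than sizing guidance:

    ceph osd erasure-code-profile set archive_profile k=8 m=3 ruleset-failure-domain=host
    ceph osd pool create archive 1024 1024 erasure archive_profile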
USE CASE: DATABASES 
[Diagram slide: database servers accessing the cluster over the native protocol.]
USE CASE: HADOOP 
[Diagram slide: Hadoop nodes accessing the cluster over the native protocol.]
Architectural considerations – Redundancy and replication 
• Trade-off between cost and reliability (use-case dependent) 
• Use CRUSH rules to map out your failure domains and performance pools (a CRUSH sketch follows this slide) 
• Failure domains 
– Disk (OSD and OS) 
– SSD journals 
– Node 
– Rack 
– Site (replication at the RADOS level, Block replication, consider latencies) 
• Storage pools 
– SSD pool for higher performance 
– Capacity pool 
• Plan for failure domains of the monitor nodes 
• Consider failure replacement scenarios, lowered redundancies, and performance 
impacts
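As noted above, failure domains and performance pools come together in the CRUSH map; a minimal sketch, assuming the cluster has been organized under separate ssd and hdd roots (the bucket names, rule numbers, and PG counts are examples):

    # Decompiled CRUSH map excerpt -- one rule per media type
    rule ssd_pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd                       # root containing SSD-backed OSDs
        step chooseleaf firstn 0 type host  # replicate across hosts
        step emit
    }

    rule hdd_pool {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take hdd
        step chooseleaf firstn 0 type rack  # wider failure domain for the capacity tier
        step emit
    }

    # Attach a pool to the SSD rule
    ceph osd pool create fast 512 512 replicated
    ceph osd pool set fast crush_ruleset 1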
Server Considerations 
• Storage Node: 
– One OSD per HDD, 1–2 GB RAM and roughly 1 GHz of CPU core per OSD 
– SSDs for journaling and for SSD cache pools (tiering) in Firefly (a journal-layout sketch follows this slide) 
– Erasure coding will increase usable capacity at the expense of additional compute 
load 
– SAS JBOD expanders for extra capacity (beware of extra latency, oversubscribed 
SAS lanes, large footprint for a failure zone) 
• Monitor nodes (MON): odd number for quorum; services 
can be hosted on the storage nodes for smaller 
deployments, but larger installations will need dedicated 
nodes 
• Dedicated RADOS Gateway nodes for large object store 
deployments and for federated gateways for multi-site
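A hedged sketch of how the HDD-plus-SSD-journal layout above is typically laid down with ceph-deploy; the host and device names are examples, with one SSD partition per OSD journal:

    # ceph.conf
    [osd]
    osd journal size = 10240            # 10 GB journal partitions, for example

    # ceph-deploy osd create {host}:{data-disk}:{journal-device}
    ceph-deploy osd create stor01:sdb:/dev/sdk1 stor01:sdc:/dev/sdk2
    ceph-deploy osd create stor01:sdd:/dev/sdl1 stor01:sde:/dev/sdl2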
Networking Considerations 
• Dedicated or Shared network 
– Be sure to involve the networking and security teams early when designing your 
networking options 
– Network redundancy considerations 
– Dedicated client (public) and OSD (cluster) networks (a config sketch follows this slide) 
– VLANs vs. dedicated switches 
– 1 GbE vs. 10 GbE vs. 40 GbE! 
• Networking design 
– Spine and Leaf 
– Multi-rack 
– Core fabric connectivity 
– WAN connectivity and latency issues for multi-site deployments
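The dedicated client and OSD networks above map to two ceph.conf settings; a minimal sketch with example subnets:

    [global]
    public network  = 192.168.10.0/24   # client and monitor traffic
    cluster network = 192.168.20.0/24   # OSD replication and recovery traffic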
Ceph additions coming to the Dell Red Hat 
OpenStack solution 
Pilot configuration – Components 
• Dell PowerEdge R620/R720/R720XD Servers 
• Dell Networking S4810/S55 Switches, 10GbE 
• Red Hat Enterprise Linux OpenStack Platform 
• Dell ProSupport 
• Dell Professional Services 
• Available with or without High Availability (HA) 
Specs at a glance 
• Node 1: Red Hat OpenStack Manager 
• Node 2: OpenStack Controller (2 additional controllers 
for HA) 
• Nodes 3-8: OpenStack Nova Compute 
• Nodes 9–11: Ceph, 12 x 3 TB raw storage 
• Network Switches: Dell Networking S4810/S55 
• Supports ~ 170-228 virtual machines 
Benefits 
• Rapid on-ramp to OpenStack cloud 
• Scale up, modular compute and storage blocks 
• Single point of contact for solution support 
• Enterprise-grade OpenStack software package 
Storage bundles
Example Ceph Dell Server Configurations 

Performance (20 TB): 
• R720XD 
• 24 GB DRAM 
• 10 x 4 TB HDD (data drives) 
• 2 x 300 GB SSD (journal) 

Capacity (44 TB / 105 TB*): 
• R720XD 
• 64 GB DRAM 
• 10 x 4 TB HDD (data drives) 
• 2 x 300 GB SSD (journal) 
• MD1200 with 12 x 4 TB HDD (data drives) 

Extra Capacity (144 TB / 240 TB*): 
• R720XD 
• 128 GB DRAM 
• 12 x 4 TB HDD (data drives) 
• MD3060e (JBOD) with 60 x 4 TB HDD (data drives) 

* The larger figure assumes erasure coding (see Editor's Notes).
What Are We Doing To Enable? 
• Dell, Red Hat, and Inktank have partnered to bring a complete 
enterprise-grade storage solution for RHEL-OSP + Ceph 
• The joint solution provides: 
– Co-engineered and validated Reference Architecture 
– Pre-configured storage bundles optimized for performance or 
storage 
– Storage enhancements to existing OpenStack Bundles 
– Certification against RHEL-OSP 
– Professional Services, Support, and Training 
› Collaborative Support for Dell hardware customers 
› Deployment services & tools
UAB Case Study
Overcoming a data deluge 
A US university specializing in cancer and genomic research 
• 900 researchers 
• Data sets outgrowing available compute and storage resources 
• Research data scattered everywhere 
• Transferring datasets took forever and clogged 
shared networks 
• Distributed data management reduced 
productivity and put data at risk 
• Needed centralized repository for compliance 
Research Computing System (Originally) 
A collection of grids, proto-clouds, tons of virtualization and DevOps 
[Diagram: HPC clusters and HPC storage on DDR/QDR InfiniBand and 1 Gb Ethernet behind the University Research Network, with interactive services and research data scattered across laptops, thumb drives, and local servers.]
Solution: a scale-out storage cloud 
Based on OpenStack and Ceph 
• Housed and managed centrally, accessible 
across campus network 
− File system + cluster, can grow as big as you want 
− Provisions from a massive common pool 
− 400+ TBs at less than 41¢/GB; scalable to 5PB 
• Researchers gain 
− Work with larger, more diverse data sets 
− Save workflows for new devices & analysis 
− Qualify for grants due to new levels of protection 
• Demonstrating utility with applications 
− Research storage 
− CrashPlan (cloud backup) on the POC 
− GitLab hosting on the POC 
“We’ve made it possible for users to 
satisfy their own storage needs with 
the Dell private cloud, so that their 
research is not hampered by IT.” 
David L. Shealy, PhD 
Faculty Director, Research Computing 
Chairman, Dept. of Physics 
Research Computing System (Today) 
Centralized storage cloud based on OpenStack and Ceph 
[Diagram: a cloud services layer of Ceph nodes plus a proof-of-concept OpenStack node on 10 Gb Ethernet – a virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph – alongside the existing HPC clusters and HPC storage on DDR/QDR InfiniBand, all behind the University Research Network.]
Building a research cloud 
Project goals extend well beyond data management 
• Designed to support emerging 
data-intensive scientific computing paradigm 
− 12 x 16-core compute nodes 
− 1 TB RAM, 420 TBs storage 
− 36 TBs storage attached to each compute node 
• Individually customized test/development/ 
production environments 
− Direct user control over all aspects of the 
application environment 
− Rapid setup and teardown 
• Growing set of cloud-based tools & services 
− Easily integrate shareware, open source, and 
commercial software 
“We envision the OpenStack-based 
cloud to act as the gateway to our 
HPC resources, not only as the 
purveyor of services we provide, but 
also enabling users to build their own 
cloud-based services.” 
John-Paul Robinson, System Architect 
Research Computing System (Next Gen) 
A cloud-based computing environment with high-speed access to dedicated and dynamic compute resources 
[Diagram: a larger cloud services layer of OpenStack and Ceph nodes on 10 Gb Ethernet – a virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph – alongside the HPC clusters and HPC storage on DDR/QDR InfiniBand, all behind the University Research Network.]
THANK YOU!
Contact Information 
Reach Kamesh for additional information: 
Kamesh_Pemmaraju@Dell.com 
@kpemmaraju 
http://www.cloudel.com
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of Storage as-a-Service

More Related Content

What's hot

Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red_Hat_Storage
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed_Hat_Storage
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed_Hat_Storage
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red_Hat_Storage
 
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed_Hat_Storage
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Community
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red_Hat_Storage
 
CBlocks - Posix compliant files systems for HDFS
CBlocks - Posix compliant files systems for HDFSCBlocks - Posix compliant files systems for HDFS
CBlocks - Posix compliant files systems for HDFSDataWorks Summit
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed_Hat_Storage
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red_Hat_Storage
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red_Hat_Storage
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed_Hat_Storage
 
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...Red_Hat_Storage
 
How the Internet of Things are Turning the Internet Upside Down
How the Internet of Things are Turning the Internet Upside DownHow the Internet of Things are Turning the Internet Upside Down
How the Internet of Things are Turning the Internet Upside DownDataWorks Summit
 
Ceph Day Melbourne - Walk Through a Software Defined Everything PoC
Ceph Day Melbourne - Walk Through a Software Defined Everything PoCCeph Day Melbourne - Walk Through a Software Defined Everything PoC
Ceph Day Melbourne - Walk Through a Software Defined Everything PoCCeph Community
 
Ceph: Low Fail Go Scale
Ceph: Low Fail Go Scale Ceph: Low Fail Go Scale
Ceph: Low Fail Go Scale Ceph Community
 
Red Hat Storage Day Seattle: Persistent Storage for Containerized Applications
Red Hat Storage Day Seattle: Persistent Storage for Containerized ApplicationsRed Hat Storage Day Seattle: Persistent Storage for Containerized Applications
Red Hat Storage Day Seattle: Persistent Storage for Containerized ApplicationsRed_Hat_Storage
 
Red Hat Storage Day Atlanta - Why Software Defined Storage Matters
Red Hat Storage Day Atlanta - Why Software Defined Storage MattersRed Hat Storage Day Atlanta - Why Software Defined Storage Matters
Red Hat Storage Day Atlanta - Why Software Defined Storage MattersRed_Hat_Storage
 
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...Red_Hat_Storage
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed_Hat_Storage
 

What's hot (20)

Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for Containers
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference Architectures
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
 
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community Update
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
 
CBlocks - Posix compliant files systems for HDFS
CBlocks - Posix compliant files systems for HDFSCBlocks - Posix compliant files systems for HDFS
CBlocks - Posix compliant files systems for HDFS
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph Storage
 
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...
Red Hat Storage Day Seattle: Stretching A Gluster Cluster for Resilient Messa...
 
How the Internet of Things are Turning the Internet Upside Down
How the Internet of Things are Turning the Internet Upside DownHow the Internet of Things are Turning the Internet Upside Down
How the Internet of Things are Turning the Internet Upside Down
 
Ceph Day Melbourne - Walk Through a Software Defined Everything PoC
Ceph Day Melbourne - Walk Through a Software Defined Everything PoCCeph Day Melbourne - Walk Through a Software Defined Everything PoC
Ceph Day Melbourne - Walk Through a Software Defined Everything PoC
 
Ceph: Low Fail Go Scale
Ceph: Low Fail Go Scale Ceph: Low Fail Go Scale
Ceph: Low Fail Go Scale
 
Red Hat Storage Day Seattle: Persistent Storage for Containerized Applications
Red Hat Storage Day Seattle: Persistent Storage for Containerized ApplicationsRed Hat Storage Day Seattle: Persistent Storage for Containerized Applications
Red Hat Storage Day Seattle: Persistent Storage for Containerized Applications
 
Red Hat Storage Day Atlanta - Why Software Defined Storage Matters
Red Hat Storage Day Atlanta - Why Software Defined Storage MattersRed Hat Storage Day Atlanta - Why Software Defined Storage Matters
Red Hat Storage Day Atlanta - Why Software Defined Storage Matters
 
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...
Red Hat Storage Day Atlanta - Red Hat Gluster Storage vs. Traditional Storage...
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and Future
 

Viewers also liked

Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFS
Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFSCeph Day New York 2014: Distributed OLAP queries in seconds using CephFS
Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFSCeph Community
 
Ceph Day LA: Ceph Ecosystem Update
Ceph Day LA: Ceph Ecosystem Update Ceph Day LA: Ceph Ecosystem Update
Ceph Day LA: Ceph Ecosystem Update Ceph Community
 
Ceph Day Berlin: Measuring and predicting performance of Ceph clusters
Ceph Day Berlin: Measuring and predicting performance of Ceph clustersCeph Day Berlin: Measuring and predicting performance of Ceph clusters
Ceph Day Berlin: Measuring and predicting performance of Ceph clustersCeph Community
 
Ceph Day Beijing: Big Data Analytics on Ceph Object Store
Ceph Day Beijing: Big Data Analytics on Ceph Object Store Ceph Day Beijing: Big Data Analytics on Ceph Object Store
Ceph Day Beijing: Big Data Analytics on Ceph Object Store Ceph Community
 
Ceph Day New York 2014: Ceph, a physical perspective
Ceph Day New York 2014: Ceph, a physical perspective Ceph Day New York 2014: Ceph, a physical perspective
Ceph Day New York 2014: Ceph, a physical perspective Ceph Community
 
Ceph Day Beijing: Optimizations on Ceph Cache Tiering
Ceph Day Beijing: Optimizations on Ceph Cache Tiering Ceph Day Beijing: Optimizations on Ceph Cache Tiering
Ceph Day Beijing: Optimizations on Ceph Cache Tiering Ceph Community
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureCeph Community
 
Ceph Day NYC: Developing With Librados
Ceph Day NYC: Developing With LibradosCeph Day NYC: Developing With Librados
Ceph Day NYC: Developing With LibradosCeph Community
 
Reference Architecture: Architecting Ceph Storage Solutions
Reference Architecture: Architecting Ceph Storage Solutions Reference Architecture: Architecting Ceph Storage Solutions
Reference Architecture: Architecting Ceph Storage Solutions Ceph Community
 
Transforming the Ceph Integration Tests with OpenStack
Transforming the Ceph Integration Tests with OpenStack Transforming the Ceph Integration Tests with OpenStack
Transforming the Ceph Integration Tests with OpenStack Ceph Community
 
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Ceph Community
 
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster Ceph Community
 
Ceph Day Berlin: Scaling an Academic Cloud
Ceph Day Berlin: Scaling an Academic CloudCeph Day Berlin: Scaling an Academic Cloud
Ceph Day Berlin: Scaling an Academic CloudCeph Community
 
Ceph Day Berlin: Ceph and iSCSI in a high availability setup
Ceph Day Berlin: Ceph and iSCSI in a high availability setupCeph Day Berlin: Ceph and iSCSI in a high availability setup
Ceph Day Berlin: Ceph and iSCSI in a high availability setupCeph Community
 
Ceph Day KL - Ceph Tiering with High Performance Archiecture
Ceph Day KL - Ceph Tiering with High Performance ArchiectureCeph Day KL - Ceph Tiering with High Performance Archiecture
Ceph Day KL - Ceph Tiering with High Performance ArchiectureCeph Community
 
Ceph Day Beijing: Containers and Ceph
Ceph Day Beijing: Containers and Ceph Ceph Day Beijing: Containers and Ceph
Ceph Day Beijing: Containers and Ceph Ceph Community
 
London Ceph Day: Erasure Coding: Purpose and Progress
London Ceph Day: Erasure Coding: Purpose and Progress London Ceph Day: Erasure Coding: Purpose and Progress
London Ceph Day: Erasure Coding: Purpose and Progress Ceph Community
 
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client Ceph Community
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Community
 
London Ceph Day: Ceph Performance and Optimization
London Ceph Day: Ceph Performance and Optimization London Ceph Day: Ceph Performance and Optimization
London Ceph Day: Ceph Performance and Optimization Ceph Community
 

Viewers also liked (20)

Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFS
Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFSCeph Day New York 2014: Distributed OLAP queries in seconds using CephFS
Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFS
 
Ceph Day LA: Ceph Ecosystem Update
Ceph Day LA: Ceph Ecosystem Update Ceph Day LA: Ceph Ecosystem Update
Ceph Day LA: Ceph Ecosystem Update
 
Ceph Day Berlin: Measuring and predicting performance of Ceph clusters
Ceph Day Berlin: Measuring and predicting performance of Ceph clustersCeph Day Berlin: Measuring and predicting performance of Ceph clusters
Ceph Day Berlin: Measuring and predicting performance of Ceph clusters
 
Ceph Day Beijing: Big Data Analytics on Ceph Object Store
Ceph Day Beijing: Big Data Analytics on Ceph Object Store Ceph Day Beijing: Big Data Analytics on Ceph Object Store
Ceph Day Beijing: Big Data Analytics on Ceph Object Store
 
Ceph Day New York 2014: Ceph, a physical perspective
Ceph Day New York 2014: Ceph, a physical perspective Ceph Day New York 2014: Ceph, a physical perspective
Ceph Day New York 2014: Ceph, a physical perspective
 
Ceph Day Beijing: Optimizations on Ceph Cache Tiering
Ceph Day Beijing: Optimizations on Ceph Cache Tiering Ceph Day Beijing: Optimizations on Ceph Cache Tiering
Ceph Day Beijing: Optimizations on Ceph Cache Tiering
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
Ceph Day NYC: Developing With Librados
Ceph Day NYC: Developing With LibradosCeph Day NYC: Developing With Librados
Ceph Day NYC: Developing With Librados
 
Reference Architecture: Architecting Ceph Storage Solutions
Reference Architecture: Architecting Ceph Storage Solutions Reference Architecture: Architecting Ceph Storage Solutions
Reference Architecture: Architecting Ceph Storage Solutions
 
Transforming the Ceph Integration Tests with OpenStack
Transforming the Ceph Integration Tests with OpenStack Transforming the Ceph Integration Tests with OpenStack
Transforming the Ceph Integration Tests with OpenStack
 
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
 
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster
Ceph Day Shanghai - CeTune - Benchmarking and tuning your Ceph cluster
 
Ceph Day Berlin: Scaling an Academic Cloud
Ceph Day Berlin: Scaling an Academic CloudCeph Day Berlin: Scaling an Academic Cloud
Ceph Day Berlin: Scaling an Academic Cloud
 
Ceph Day Berlin: Ceph and iSCSI in a high availability setup
Ceph Day Berlin: Ceph and iSCSI in a high availability setupCeph Day Berlin: Ceph and iSCSI in a high availability setup
Ceph Day Berlin: Ceph and iSCSI in a high availability setup
 
Ceph Day KL - Ceph Tiering with High Performance Archiecture
Ceph Day KL - Ceph Tiering with High Performance ArchiectureCeph Day KL - Ceph Tiering with High Performance Archiecture
Ceph Day KL - Ceph Tiering with High Performance Archiecture
 
Ceph Day Beijing: Containers and Ceph
Ceph Day Beijing: Containers and Ceph Ceph Day Beijing: Containers and Ceph
Ceph Day Beijing: Containers and Ceph
 
London Ceph Day: Erasure Coding: Purpose and Progress
London Ceph Day: Erasure Coding: Purpose and Progress London Ceph Day: Erasure Coding: Purpose and Progress
London Ceph Day: Erasure Coding: Purpose and Progress
 
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
 
London Ceph Day: Ceph Performance and Optimization
London Ceph Day: Ceph Performance and Optimization London Ceph Day: Ceph Performance and Optimization
London Ceph Day: Ceph Performance and Optimization
 

Similar to Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of Storage as-a-Service

Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?Red_Hat_Storage
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on DockerDataWorks Summit
 
Big data talk barcelona - jsr - jc
Big data talk   barcelona - jsr - jcBig data talk   barcelona - jsr - jc
Big data talk barcelona - jsr - jcJames Saint-Rossy
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFSUSE Italy
 
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarWicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarKamesh Pemmaraju
 
OpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for TomorrowOpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for TomorrowEd Balduf
 
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Community
 
A3 transforming data_management_in_the_cloud
A3 transforming data_management_in_the_cloudA3 transforming data_management_in_the_cloud
A3 transforming data_management_in_the_cloudDr. Wilfred Lin (Ph.D.)
 
Revolutionary Storage for Modern Databases, Applications and Infrastrcture
Revolutionary Storage for Modern Databases, Applications and InfrastrctureRevolutionary Storage for Modern Databases, Applications and Infrastrcture
Revolutionary Storage for Modern Databases, Applications and Infrastrcturesabnees
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Ontico
 
Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2hdhappy001
 
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld
 
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃Etu Solution
 
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...OPNFV
 
HPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journeyHPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journeyPeter Clapham
 
Optimizing Dell PowerEdge Configurations for Hadoop
Optimizing Dell PowerEdge Configurations for HadoopOptimizing Dell PowerEdge Configurations for Hadoop
Optimizing Dell PowerEdge Configurations for HadoopMike Pittaro
 
Whd master deck_final
Whd master deck_final Whd master deck_final
Whd master deck_final Juergen Domnik
 
How the Development Bank of Singapore solves on-prem compute capacity challen...
How the Development Bank of Singapore solves on-prem compute capacity challen...How the Development Bank of Singapore solves on-prem compute capacity challen...
How the Development Bank of Singapore solves on-prem compute capacity challen...Alluxio, Inc.
 
HDFS- What is New and Future
HDFS- What is New and FutureHDFS- What is New and Future
HDFS- What is New and FutureDataWorks Summit
 

Similar to Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of Storage as-a-Service (20)

Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on Docker
 
Big data talk barcelona - jsr - jc
Big data talk   barcelona - jsr - jcBig data talk   barcelona - jsr - jc
Big data talk barcelona - jsr - jc
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
 
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarWicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
 
OpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for TomorrowOpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for Tomorrow
 
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
 
A3 transforming data_management_in_the_cloud
A3 transforming data_management_in_the_cloudA3 transforming data_management_in_the_cloud
A3 transforming data_management_in_the_cloud
 
Revolutionary Storage for Modern Databases, Applications and Infrastrcture
Revolutionary Storage for Modern Databases, Applications and InfrastrctureRevolutionary Storage for Modern Databases, Applications and Infrastrcture
Revolutionary Storage for Modern Databases, Applications and Infrastrcture
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...
 
Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2
 
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
 
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃
Track B-3 解構大數據架構 - 大數據系統的伺服器與網路資源規劃
 
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha...
 
HPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journeyHPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journey
 
The state of SQL-on-Hadoop in the Cloud
The state of SQL-on-Hadoop in the CloudThe state of SQL-on-Hadoop in the Cloud
The state of SQL-on-Hadoop in the Cloud
 
Optimizing Dell PowerEdge Configurations for Hadoop
Optimizing Dell PowerEdge Configurations for HadoopOptimizing Dell PowerEdge Configurations for Hadoop
Optimizing Dell PowerEdge Configurations for Hadoop
 
Whd master deck_final
Whd master deck_final Whd master deck_final
Whd master deck_final
 
How the Development Bank of Singapore solves on-prem compute capacity challen...
How the Development Bank of Singapore solves on-prem compute capacity challen...How the Development Bank of Singapore solves on-prem compute capacity challen...
How the Development Bank of Singapore solves on-prem compute capacity challen...
 
HDFS- What is New and Future
HDFS- What is New and FutureHDFS- What is New and Future
HDFS- What is New and Future
 

Recently uploaded

What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...Technogeeks
 
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...Cizo Technology Services
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
A healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdfA healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdfMarharyta Nedzelska
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Rob Geurden
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作qr0udbr0
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalLionel Briand
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Natan Silnitsky
 
Salesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZSalesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZABSYZ Inc
 
Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf31events.com
 
Xen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfXen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfStefano Stabellini
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
UI5ers live - Custom Controls wrapping 3rd-party libs.pptx
UI5ers live - Custom Controls wrapping 3rd-party libs.pptxUI5ers live - Custom Controls wrapping 3rd-party libs.pptx
UI5ers live - Custom Controls wrapping 3rd-party libs.pptxAndreas Kunz
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringHironori Washizaki
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsAhmed Mohamed
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtimeandrehoraa
 

Recently uploaded (20)

Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 
What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...
 
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...
Global Identity Enrolment and Verification Pro Solution - Cizo Technology Ser...
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
 
A healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdfA healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdf
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive Goal
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a series
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
 
Salesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZSalesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZ
 
Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf
 
Xen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfXen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdf
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New Features
 
UI5ers live - Custom Controls wrapping 3rd-party libs.pptx
UI5ers live - Custom Controls wrapping 3rd-party libs.pptxUI5ers live - Custom Controls wrapping 3rd-party libs.pptx
UI5ers live - Custom Controls wrapping 3rd-party libs.pptx
 
Advantages of Odoo ERP 17 for Your Business
Advantages of Odoo ERP 17 for Your BusinessAdvantages of Odoo ERP 17 for Your Business
Advantages of Odoo ERP 17 for Your Business
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEE
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their Engineering
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML Diagrams
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtime
 

Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of Storage as-a-Service

  • 1. Best Practices for Ceph- Powered Implementations of Storage as-a-Service Kamesh Pemmaraju, Sr. Product Mgr, Dell Ceph Developer Day, New York City, Oct 2014
  • 2. Outline • Planning your Ceph implementation • Ceph Use Cases • Choosing targets for Ceph deployments • Reference Architecture Considerations • Dell Reference Configurations • Customer Case Study
  • 3. Planning your Ceph Implementation • Business Requirements – Budget considerations, organizational commitment – Avoiding lock-in – use open source and industry standards – Enterprise IT use cases – Cloud applications/XaaS use cases for massive-scale, cost-effective storage • Sizing requirements – What is the initial storage capacity? – Is it steady-state data usage vs. Spike data usage – What is the expected growth rate? • Workload requirements – Does the workload need high performance or it is more capacity focused? – What are IOPS/Throughput requirements? – What type of data will be stored? – Ephemeral vs. persistent data, Object, Block, File? • Ceph is like a Swiss Army knife – it can tuned a wide variety of use cases. Let us look at some of them
  • 4. Ceph is like a Swiss Army Knife – it can fit in a wide variety of target use cases Virtualization and Private Ceph Cloud Target (traditional SAN/NAS) High Performance (traditional SAN) NAS & Object Content Store (traditional NAS) Capacity Performance Traditional IT Cloud Applications XaaS Compute Cloud Open Source Block XaaS Content Store Open Source NAS/Object Ceph Target
  • 5. USE CASE: OPENSTACK Copyright © 2013 by Inktank | Private and Confidential 5
  • 6. USE CASE: OPENSTACK Copyright © 2013 by Inktank | Private and Confidential 6 Volumes Ephemeral Copy-on-Write Snapshots
  • 7. USE CASE: OPENSTACK Copyright © 2013 by Inktank | Private and Confidential 7
  • 8. USE CASE: CLOUD STORAGE Copyright © 2013 by Inktank | Private and Confidential 8 S3/Swift S3/Swift S3/Swift S3/Swift
  • 9. USE CASE: WEBSCALE APPLICATIONS Copyright © 2013 by Inktank | Private and Confidential 9 Native Protocol Native Protocol Native Protocol Native Protocol
  • 10. USE CASE: PERFORMANCE BLOCK Copyright © 2013 by Inktank | Private and Confidential 10 CEPH STORAGE CLUSTER
  • 11. USE CASE: PERFORMANCE BLOCK Read/Write Read/Write Copyright © 2013 by Inktank | Private and Confidential 11 CEPH STORAGE CLUSTER
  • 12. USE CASE: PERFORMANCE BLOCK Write Write Read Read Copyright © 2013 by Inktank | Private and Confidential 12 CEPH STORAGE CLUSTER
  • 13. USE CASE: ARCHIVE / COLD STORAGE Copyright © 2013 by Inktank | Private and Confidential 13 CEPH STORAGE CLUSTER
  • 14. USE CASE: DATABASES Copyright © 2013 by Inktank | Private and Confidential 14 Native Protocol Native Protocol Native Protocol Native Protocol
  • 15. USE CASE: HADOOP Copyright © 2013 by Inktank | Private and Confidential 15 Native Protocol Native Protocol Native Protocol Native Protocol
  • 16. Architectural considerations – Redundancy and replication considerations • Tradeoff between Cost vs. Reliability (use-case dependent) • Use the Crush configs to map out your failures domains and performance pools • Failure domains – Disk (OSD and OS) – SSD journals – Node – Rack – Site (replication at the RADOS level, Block replication, consider latencies) • Storage pools – SSD pool for higher performance – Capacity pool • Plan for failure domains of the monitor nodes • Consider failure replacement scenarios, lowered redundancies, and performance impacts
  • 17. Server Considerations • Storage Node: – one OSD per HDD, 1 – 2 GB ram, and 1 Gz/core/OSD, – SSD’s for journaling and for using SSD pooling (tiering) in Firefly – Erasure coding will increase useable capacity at the expense of additional compute load – SAS JBOD expanders for extra capacity (beware of extra latency, oversubscribed SAS lanes, large footprint for a failure zone) • Monitor nodes (MON): odd number for quorum, services can be hosted on the storage node for smaller deployments, but will need dedicated nodes larger installations • Dedicated RADOS Gateway nodes for large object store deployments and for federated gateways for multi-site
  • 18. Networking Considerations • Dedicated or Shared network – Be sure to involve the networking and security teams early when designing your networking options – Network redundancy considerations – Dedicated client and OSD networks – VLAN’s vs. Dedicated switches – 1 Gbs vs 10 Gbs vs 40 Gbs! • Networking design – Spine and Leaf – Multi-rack – Core fabric connectivity – WAN connectivity and latency issues for multi-site deployments
  • 19. Ceph additions coming to the Dell Red Hat OpenStack solution Pilot configuration Components • Dell PowerEdge R620/R720/R720XD Servers • Dell Networking S4810/S55 Switches, 10GB • Red Hat Enterprise Linux OpenStack Platform • Dell ProSupport • Dell Professional Services • Avail. w/wo High Availability Specs at a glance • Node 1: Red Hat Openstack Manager • Node 2: OpenStack Controller (2 additional controllers for HA) • Nodes 3-8: OpenStack Nova Compute • Nodes: 9-11: Ceph 12x3 TB raw storage • Network Switches: Dell Networking S4810/S55 • Supports ~ 170-228 virtual machines Benefits • Rapid on-ramp to OpenStack cloud • Scale up, modular compute and storage blocks • Single point of contact for solution support • Enterprise-grade OpenStack software package Storage bundles
  • 20. Example Ceph Dell Server Configurations Type Size Components Performance 20 TB • R720XD • 24 GB DRAM • 10 X 4 TB HDD (data drives) • 2 X 300 GB SSD (journal) Capacity 44TB / 105 TB* • R720XD • 64 GB DRAM • 10 X 4 TB HDD (data drives) • 2 X 300 GB SSH (journal) • MD1200 • 12 X 4 TB HHD (data drives) Extra Capacity 144 TB / 240 TB* • R720XD • 128 GB DRAM • 12 X 4 TB HDD (data drives) • MD3060e (JBOD) • 60 X 4 TB HHD (data drives)
  • 21. What Are We Doing To Enable? • Dell & Red Hat & Inktank have partnered to bring a complete Enterprise-grade storage solution for RHEL-OSP + Ceph • The joint solution provides: – Co-engineered and validated Reference Architecture – Pre-configured storage bundles optimized for performance or storage – Storage enhancements to existing OpenStack Bundles – Certification against RHEL-OSP – Professional Services, Support, and Training › Collaborative Support for Dell hardware customers › Deployment services & tools
  • 23. Overcoming a data deluge US university that specializes in Cancer and Genomic research • 900 researchers • Data sets challenging resources • Research data scattered everywhere • Transferring datasets took forever and clogged shared networks • Distributed data management reduced productivity and put data at risk • Needed centralized repository for compliance Dell - Confidential
  • 24. Research Computing System (Originally) A collection of grids, proto-clouds, tons of virtualization and DevOps HPC Cluster HPC Cluster HPC Storage DDR Infiniband QDR Infiniband 1Gb Ethernet University Research Network Interactive Services Thumb drives Local servers Laptops Laptops Thumb drives Local servers Dell - Confidential
  • 25. Solution: a scale-out storage cloud Based on OpenStack and Ceph • Housed and managed centrally, accessible across campus network − File system + cluster, can grow as big as you want − Provisions from a massive common pool − 400+ TBs at less than 41¢/GB; scalable to 5PB • Researchers gain − Work with larger, more diverse data sets − Save workflows for new devices & analysis − Qualify for grants due to new levels of protection • Demonstrating utility with applications − Research storage − Crashplan (cloud back up) on POC − Gitlab hosting on POC “We’ve made it possible for users to satisfy their own storage needs with the Dell private cloud, so that their research is not hampered by IT.” David L. Shealy, PhD Faculty Director, Research Computing Chairman, Dept. of Physics Dell - Confidential
• 26. Research Computing System (Today)
Centralized storage cloud based on OpenStack and Ceph
[Diagram: five Ceph nodes and a POC OpenStack node forming a cloud services layer on 10 Gb Ethernet, connected to the University Research Network alongside the existing HPC clusters and HPC storage on DDR/QDR InfiniBand]
Virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph
• 27. Building a research cloud
Project goals extend well beyond data management
• Designed to support the emerging data-intensive scientific computing paradigm
– 12 x 16-core compute nodes
– 1 TB RAM, 420 TB storage
– 36 TB of storage attached to each compute node
• Individually customized test/development/production environments
– Direct user control over all aspects of the application environment
– Rapid setup and teardown
• Growing set of cloud-based tools and services
– Easily integrate shareware, open source, and commercial software
"We envision the OpenStack-based cloud to act as the gateway to our HPC resources, not only as the purveyor of services we provide, but also enabling users to build their own cloud-based services."
John-Paul Robinson, System Architect
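A quick consistency check of the aggregate figures against the per-node numbers. The per-node values come from the slide and the editor's notes; the explanation of the small gap between 432 TB and the quoted ~420 TB is an assumption.

```python
# Aggregate figures for the data-intensive fabric: 12 nodes with 16 cores and
# 36 TB of local disk each (per the slide above and the editor's notes).
nodes, cores_per_node, local_tb_per_node = 12, 16, 36

print(nodes * cores_per_node)     # 192 cores, matching the editor's notes
print(nodes * local_tb_per_node)  # 432 TB of raw local disk; the deck quotes
                                  # ~420 TB, presumably after OS/overhead space
```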
• 28. Research Computing System (Next Gen)
A cloud-based computing environment with high-speed access to dedicated and dynamic compute resources
[Diagram: multiple OpenStack nodes and Ceph nodes forming a cloud services layer on 10 Gb Ethernet, connected to the University Research Network alongside the HPC clusters and HPC storage on DDR/QDR InfiniBand]
Virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph
• 30. Contact Information
Reach Kamesh for additional information:
Kamesh_Pemmaraju@Dell.com
@kpemmaraju
http://www.cloudel.com

Editor's Notes

  1. R720XD configurations use 4 TB drives, 2 x 300 GB OS drives, 2 x 10 GbE NICs, iDRAC 7 Enterprise, LSI 9207-[8i, 8e] HBAs, and 2 x E5-2650 2 GHz processors. (*) The larger capacity figure applies where erasure coding is in use. To get the same redundancy as 2x replication, erasure coding uses a space overhead factor of about 1.2. Erasure coding is a feature of the Ceph Firefly release, which is in its final phase of development. Additional performance could be gained by adding either Intel's CAS or Dell Fluid Cache for DAS caching software packages. Doing so would impose additional memory and processing overhead, and more work in the deployment/installation bucket (because we would have to install and configure it).
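As a worked example of where a 1.2 factor can come from: an erasure-coding profile with k data chunks and m coding chunks stores (k + m) / k bytes of raw disk per byte of data. The specific k and m values below are assumptions for illustration; the deck does not name a profile.

```python
# Worked example of an erasure-coding space-overhead factor of 1.2.
# k = 10 data chunks, m = 2 coding chunks is an assumed illustrative profile.
k, m = 10, 2
space_overhead = (k + m) / k   # 1.2: each TB of data consumes 1.2 TB of raw disk
print(space_overhead)
```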
  2. https://dev.uabgrid.uab.edu/wiki/OpenStackPlusCeph
The research computing system (RCS) is built on a collection of distinct hardware systems designed to provide specific services to applications. The RCS hardware includes dedicated compute fabrics that support high performance computing (HPC) applications where hundreds of compute cores can work together on a single application. These clusters of commodity compute hardware make it possible to do data analysis and modelling work in hours, work that would have taken months using a single computer. The clusters are connected with dedicated high-bandwidth, low-latency networks so that applications can efficiently coordinate their actions across many computers and access a shared high-speed storage system for working efficiently with terabytes of data.
Our newest hardware fabric, acquired in 2012Q4, is designed to support emerging data-intensive scientific computing and virtualization paradigms. This hardware is very similar to the commodity computers used by our traditional HPC fabrics; however, in addition to having many compute cores and lots of RAM, each individual computer contains 36 TB of built-in disk storage. Taken together, this newest hardware fabric adds 192 cores, 1 TB RAM, and 420 TB of storage to the RCS. The built-in disk storage is designed to support applications running local to each computer. The data-intensive computing paradigm exchanges the external storage networks of traditional HPC clusters for the native, very high-speed system buses that provide access to the local hard disks in each computer. Large datasets are distributed across these computers, and applications are then assigned to run on the specific computer that stores the portion of the dataset they have been assigned to analyze.
The hardware requirements for data-intensive computing closely resemble the requirements for virtualization, and can benefit tremendously from the configuration flexibility that a virtualization fabric offers. In order to enhance flexibility and further improve support for scaling research applications, we are engineering our latest hardware cluster to act as a virtualized storage and compute fabric. This enables support for a wide variety of storage and compute use cases, most prominently ample storage capacity for reliably housing large research data collections and flexible application development and deployment capabilities that allow direct user control over all aspects of the application environment. In short, we are tooling this hardware to build a cloud computing environment.
We are building this cloud using OpenStack for compute virtualization and Ceph for storage virtualization. Crowbar will provision the raw hardware fabric. This approach is very similar to the model we have been following with our traditional ROCKS-based HPC cluster environment. The new approach enhances our ability to automatically provision hardware and further improves the economics of large-scale computing. We are implementing this environment with Dell and Inktank. These vendors, and the upstream open source projects on which this platform is built, embrace the DevOps model for systems development. This will support further engineering collaboration with our vendors, enabling the UAB research community to continually enhance our fabric as needed and feed those enhancements upstream for inclusion in future support releases. This solution rounds out the feature set of the RCS core and will provide a general framework to scale future growth.
  3. User base: 900+ researchers across campus. KVM-based; 2 Nova nodes, 4 primary storage nodes, 4 replication nodes, 2 control nodes; 12 x R720XD systems.