QCT Ceph
Solution – Design
Consideration and
Reference
Architecture
Gary Lee
AVP, QCT
• Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
2
AGENDA
3
Industry Trend
and
Customer Needs
4
• Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
5
Industry Trend
• Scalable Size
• Variable Type
• Longevity Time
• Distributed Location
• Versatile Workload
6
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Customer Needs
7
Ceph
Architecture
Ceph Storage Cluster
8
Cluster Network linking identical storage nodes, each running Ceph on Linux with its own CPU, memory, SSD, HDD and NIC.
Object, Block and File access on a single platform:
• Unified Storage
• Scale-out Cluster
• Open Source Software
• Open Commodity Hardware
9
End-to-end Data Path:
App/Service → Ceph Client (RBD for block I/O, RADOSGW for object I/O, CephFS for file I/O) → Public Network → OSD → file system I/O → disk I/O, with replication traffic carried on the RADOS/Cluster Network.
10
Ceph Software Architecture
Clients and Ceph Monitors sit on the Public Network (e.g. 10GbE or 40GbE); Nx Ceph OSD Nodes (RCT or RCC) connect to both the Public Network and the Cluster Network (e.g. 10GbE or 40GbE).
Ceph Hardware Architecture
12
Technology
13
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5” SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
D51PH-1ULH
14
• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x or 2x 35x SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHLH Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server
T21P-4U
15
• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
SD1Q-1ULH
• Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ≈ (0.3–0.5)x core + 2 GB RAM (see the sizing sketch below)
16
CPU/Memory
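As a minimal sketch of this rule of thumb (0.3–0.5 CPU core and roughly 2 GB RAM per OSD); the drive counts are illustrative, not QCT-validated figures:

```python
# Rough per-node resource estimate: each OSD needs ~0.3-0.5 CPU core and ~2 GB RAM.
def osd_node_requirements(osd_count, cores_per_osd=0.5, ram_gb_per_osd=2):
    """Return (cores, ram_gb) needed for a node hosting osd_count OSDs."""
    return osd_count * cores_per_osd, osd_count * ram_gb_per_osd

cores, ram = osd_node_requirements(12)   # e.g. a D51PH-1ULH-style node with 12 HDD OSDs
print(f"12 OSDs -> ~{cores:.0f} cores, ~{ram:.0f} GB RAM")   # ~6 cores, ~24 GB

cores, ram = osd_node_requirements(35)   # e.g. a T21P-4U dual-node sled, 35 OSDs per node
print(f"35 OSDs -> ~{cores:.1f} cores, ~{ram:.0f} GB RAM")   # ~17.5 cores, ~70 GB
```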
• SSD:
– Journal
– Tier
– File System Cache
– Client Cache
• Journal ratios (HDDs per journal device; see the sketch below)
– HDD : SSD (SATA/SAS) ≈ 4–5 : 1
– HDD : NVMe ≈ 12–18 : 1
17
SSD/NVMe
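A small sketch of that journal guidance, assuming the ratios above are read as HDD OSDs per journal device:

```python
import math

# Rule of thumb from this slide: one SATA/SAS SSD journal per ~4-5 HDD OSDs,
# one NVMe journal per ~12-18 HDD OSDs.
def journals_needed(hdd_count, hdds_per_journal):
    return math.ceil(hdd_count / hdds_per_journal)

print(journals_needed(12, 4))    # 3 -> matches the 3x SSD in a 12-HDD RCT-200 node
print(journals_needed(35, 18))   # 2 -> matches the 2x PCIe SSD in a 35-HDD RCT-400 node
```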
• 2x NVMe ~40Gb
• 4x NVMe ~100Gb
• 2x SATA SSD ~10Gb
• 1x SAS SSD ~10Gb
• (20~25)x HDD ~10Gb
• ~100x HDD ~40Gb (see the bandwidth sketch below)
18
NIC
10G/40G -> 25G/100G
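An illustrative back-of-the-envelope check of these ratios, treating them simply as sustained Gb/s per device class; real numbers depend on drive models and workload:

```python
# Approximate Gb/s each device class can keep busy, per the ratios above.
GBPS_PER_DEVICE = {
    "hdd": 10 / 22,       # (20-25)x HDD ~ 10Gb -> ~0.45 Gb/s each
    "sata_ssd": 10 / 2,   # 2x SATA SSD ~ 10Gb
    "nvme": 40 / 2,       # 2x NVMe ~ 40Gb
}

def node_bandwidth_gbps(drives):
    """drives: e.g. {'hdd': 35} -> Gb/s the node's data drives can sustain."""
    return sum(count * GBPS_PER_DEVICE[kind] for kind, count in drives.items())

print(node_bandwidth_gbps({"hdd": 12}))   # ~5.5 Gb/s -> a 10GbE link suffices
print(node_bandwidth_gbps({"hdd": 35}))   # ~16 Gb/s  -> beyond 10GbE, fits a 40GbE port
```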
• CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocates computation across different silicon areas
19
NIC
I/O Offloading
• Object Replication
– 1 Primary + 2 Replicas (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better data efficiency: k/(k+m) vs. 1/(1+replicas) (see the worked comparison below)
20
Erasure Coding vs. Replication
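The arithmetic behind those two efficiency formulas, as a quick comparison (usable fraction of raw capacity only; it says nothing about durability or rebuild cost):

```python
def ec_efficiency(k, m):
    """Usable fraction of raw capacity with a k+m erasure code."""
    return k / (k + m)

def replica_efficiency(extra_replicas):
    """Usable fraction with 1 primary + N additional replicas."""
    return 1 / (1 + extra_replicas)

print(f"EC 4+2        : {ec_efficiency(4, 2):.0%}")      # 67%
print(f"EC 8+3        : {ec_efficiency(8, 3):.0%}")      # 73%
print(f"3x replication: {replica_efficiency(2):.0%}")    # 33%
```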
Size / Workload: Small, Medium, Large
• Throughput – transfer bandwidth, sequential R/W
• Capacity – cost/capacity, scalability
• IOPS – IOPS per 4k block, random R/W
• Hyper-converged? – desktop virtualization, latency, random R/W
• Hadoop?
21
Workload and Configuration
22
Red Hat Ceph
• Intel ISA-L
• Intel SPDK
• Intel CAS
• Mellanox Accelio Library
23
Vendor-specific Value-added Software
24
Ceph Reference
Architecture and
QCT Solution
• Trade-off among Technologies
• Scalable in Architecture
• Optimized for Workload
• Affordable as Expected
Design Principle
1. Needs for scale-out storage
2. Target workload
3. Access method
4. Storage capacity
5. Data protection methods
6. Fault domain risk tolerance
26
Design Considerations
27
Storage Workload
Spectrum from IOPS-bound to MB/sec-bound: Transaction (OLTP, DB), Data Warehouse (OLAP), Big Data / Scientific (HPC), Block Transfer, Audio/Video (Streaming).
SMALL (500TB*) / MEDIUM (>1PB*) / LARGE (>2PB*)    * usable storage capacity
• Throughput optimized
– Small: QxStor RCT-200 – 16x D51PH-1ULH (16U); per node 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE; 3x replica
– Medium: QxStor RCT-400 – 6x T21P-4U/Dual (24U); 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE; 3x replica
– Large: QxStor RCT-400 – 11x T21P-4U/Dual (44U); 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE; 3x replica
• Cost/Capacity optimized: QxStor RCC-400 – Nx T21P-4U/Dual; 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE; erasure coding 4:2
• IOPS optimized: future direction (Small, Medium); NA (Large)
QCT QxStor Red Hat Ceph Storage Edition Portfolio
Workload-driven Integrated Software/Hardware Solution
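A rough cross-check of how those configurations land in the small/medium/large usable-capacity tiers, assuming usable = nodes x HDDs x drive size x protection efficiency and ignoring filesystem and near-full overheads:

```python
def usable_tb(nodes, hdds_per_node, hdd_tb, efficiency):
    return nodes * hdds_per_node * hdd_tb * efficiency

# RCT-200 small: 16 nodes x 12x 8TB HDDs, 3x replica
print(usable_tb(16, 12, 8, 1/3))     # 512 TB  -> the ~500TB tier
# RCT-400 medium: 6 dual-node chassis (12 nodes) x 35x 8TB HDDs, 3x replica
print(usable_tb(12, 35, 8, 1/3))     # 1120 TB -> >1PB
# RCC-400: same drive count with 4:2 erasure coding (4/6 efficiency)
print(usable_tb(12, 35, 8, 4/6))     # 2240 TB
```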
USE CASE
• RCT-200 (Throughput-Optimized): densest 1U Ceph building block; best reliability with a smaller failure domain; block or object storage; 3x replication; video, audio and image repositories, and streaming media
• RCT-400 (Throughput-Optimized): scales in large 2x 280TB increments; obtains best throughput and density at once; block or object storage; 3x replication; video, audio and image repositories, and streaming media
• RCC-400 (Cost/Capacity-Optimized): highest density, 560TB raw capacity per chassis, with greatest price/performance; typically object storage; erasure coding common for maximizing usable capacity; object archive
QCT QxStor Red Hat Ceph Storage Edition
Co-engineered with the Red Hat Storage team to provide an optimized Ceph solution
30
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
31
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
32
QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and
pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation
(up-streaming)
33
Test Result
Test topology: Client 1–10 (S2B) and Ceph 1–5 (S2PH) nodes; 10Gb links to the Public Network, plus a separate 10Gb Cluster Network between the Ceph nodes.
General Configuration
• 5 Ceph nodes (S2PH) with each 2 x 10Gb link.
• 10 Client nodes (S2B) with each 2 x 10Gb link.
• Public network: balanced bandwidth between Client nodes and Ceph nodes.
• Cluster network: offloads replication traffic from the public network to improve performance.
Option 1 (w/o SSD)
a. 12 OSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 128 GB
Option 2 (w/ SSD)
a. 12 OSD / 3 SSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 12 (OSD) x 2GB = 24 GB
Testing Configuration (Throughput-Optimized)
Test topology: Client 1–8 (S2S) and Ceph 1–2 (S2P) nodes on the Public Network; 10Gb links on the clients and 40Gb links on the Ceph nodes.
General Configuration
• 2 Ceph nodes (S2P) with each 2 x 10Gb link.
• 8 Client nodes (S2S) with each 2 x 10Gb link.
• Public network: balanced bandwidth between Client nodes and Ceph nodes.
• Cluster network: offloads replication traffic from the public network to improve performance.
Option 1 (w/o SSD)
a. 35 OSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Option 2 (w/ SSD)
a. 35 OSD / 2 PCI-SSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Testing Configuration (Capacity-Optimized)
Level          | Component | Test Suite
Raw I/O        | Disk      | FIO
Network I/O    | Network   | iperf
Object API I/O | librados  | radosbench
Object I/O     | RGW       | COSBench
Block I/O      | RBD       | librbdfio
36
CBT (Ceph Benchmarking Tool)
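For orientation, a minimal sketch of what the raw-disk, network, and object-API levels of such a test plan look like when driven from Python; the pool name, device path, hostname, and runtimes are placeholders, and in practice CBT drives these tools from its own YAML configuration:

```python
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Raw device I/O baseline with fio (destructive on the target device!).
run(["fio", "--name=seqwrite", "--filename=/dev/sdb", "--rw=write",
     "--bs=4M", "--direct=1", "--runtime=60", "--time_based"])

# Network baseline with iperf between a client and an OSD node.
run(["iperf", "-c", "ceph-osd-1", "-t", "30"])

# Object-API throughput against a test pool with rados bench.
run(["rados", "bench", "-p", "cbt-test", "60", "write", "--no-cleanup"])
run(["rados", "bench", "-p", "cbt-test", "60", "seq"])
```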
37
Linear Scale Out
38
Linear Scale Up
39
Price, in terms of Performance
40
Price, in terms of Capacity
41
Protection Scheme
42
Cluster Network
43
QCT/Red Hat
Ceph
Whitepaper
44
http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture
QCT/Red Hat Ceph Solution Brief
https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf
http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230
QCT/Red Hat Ceph Reference Architecture
• The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center provides a free hands-on experience. You'll be able to explore the features and simplicity of the product in real time.
• Concepts:
Ceph feature and functional test
• Lab Exercises:
Ceph Basics
Ceph Management - Calamari/CLI
Ceph Object/Block Access
46
QCT Offer TryCeph (Test Drive) Later
47
Remote access
to QCT cloud solution centers
• Easy to test. Anytime and anywhere.
• No facilities or logistics needed
• Configurations
• RCT-200 and newest QCT solutions
QCT Offer TryCeph (Test Drive) Later
• Ceph is Open Architecture
• QCT, Red Hat and Intel collaborate to provide
– Workload-driven,
– Pre-integrated,
– Comprehensively tested and
– Well-optimized solution
• Red Hat – Open Software/Support Pioneer
Intel – Open Silicon/Technology Innovator
QCT – Open System/Solution Provider
• Together We Provide the Best
48
CONCLUSION
www.QuantaQCT.com
Thank you!
www.QCT.io
QCT CONFIDENTIAL
Looking for
innovative cloud solution?
Come to QCT,
who else?
Editor's Notes

  1. Here are 3 SKUs based on small/medium/large scale. For larger scale, we suggest customers adopt RCT-400 or RCC-400. QCT is planning a SKU optimized for IOPS-intensive workloads; we'll launch it in 2016 H2.