Reference Architectures: Architecting Ceph Storage Solutions

Brent Compton, Director Storage Solutions, Red Hat
Kyle Bader, Senior Solution Architect, Red Hat
RefArch Building Blocks
•  Defined workloads
•  OS/Virt platform (bare metal, OpenStack virt, container virt, other virt)
•  Network
•  Ceph
•  Servers and media (HDD, SSD, PCIe)
RefArch Flavors
•  How-to integration guides (Ceph+OS/Virt, or Ceph+OS/Virt+Workloads)
http://www.dell.com/learn/us/en/04/shared-content~data-sheets~en/documents~dell-red-hat-cloud-solutions.pdf

•  Performance and sizing guides (Network+Server+Ceph+[OS/Virt])
http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers
https://www.redhat.com/en/resources/cisco-ucs-c3160-rack-server-red-hat-ceph-storage
https://www.scalableinformatics.com/assets/documents/Unison-Ceph-Performance.pdf
Design Considerations
1.  Qualify need for scale-out storage
2.  Design for target workload IO profile(s)
3.  Choose storage access method(s)
4.  Identify capacity
5.  Determine fault-domain risk tolerance
6.  Select data protection method
•  Target server and network hardware architecture (performance and sizing)
1. Qualify Need for Scale-out
•  Elastic provisioning across a storage server cluster
•  Data HA across 'islands' of scale-up storage servers
•  Standardized servers and networking
•  Performance and capacity scaled independently
•  Incremental v. forklift upgrades

Designed for Agility
PAST: SCALE UP → FUTURE: SCALE OUT
2. Design for Workload IO
•  Performance v. 'cheap-and-deep'?
•  Performance: throughput v. IOPS intensive?
•  Sequential v. random?
•  Small block v. large block?
•  Read v. write mix?
•  Latency: absolute v. consistent targets?
•  Sync v. async?
Generalized Workload IO Categories
•  IOPS Optimized
•  Throughput Optimized
•  Cost-Capacity Optimized
Performance-Optimized
•  Highest performance (MB/sec or IOPS)
•  CapEx: Lowest $/performance-unit
•  OpEx: Highest performance/BTU
•  OpEx: Highest performance/watt
•  Meets minimum server-fault-domain recommendation (1 server <= 10% of cluster)
Cost/Capacity-Optimized
•  CapEx: Lowest $/TB
•  OpEx: Lowest BTU/TB
•  OpEx: Lowest watt/TB
•  OpEx: Highest TB/rack-unit
•  Meets minimum server-fault-domain recommendation (1 server <= 15% of cluster)
3. Storage Access Methods
•  Distributed file*, object, and block** access, all served from a single software-defined storage cluster
*CephFS not yet supported by RHCS
**RBD supported with replicated data protection
4. Identify Capacity
•  OpenStack Starter: 100TB
•  S: 500TB
•  M: 1PB
•  L: 2PB+
5. Fault-Domain Risk Tolerance
•  What % of cluster capacity do you want on a single node?
When a server fails:
•  More workload performance impairment during backfill/recovery with fewer nodes in the cluster (each node has a greater % of its compute/IO utilization devoted to recovery).
•  A larger % of the cluster's reserve storage capacity is utilized during backfill/recovery with fewer nodes in the cluster (you must reserve a larger % of capacity for recovery with fewer nodes).
•  Guidelines:
–  Minimum supported (RHCS): 3 OSD nodes per Ceph cluster.
–  Minimum recommended (performance cluster): 10 OSD nodes per cluster (1 node represents <10% of total cluster capacity).
–  Minimum recommended (cost/capacity cluster): 7 OSD nodes per cluster (1 node represents <15% of total cluster capacity).
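The arithmetic behind these thresholds is simple division; as a sketch (my illustration, not from the deck), assuming uniform node capacities:

```python
# Fault-domain check: what fraction of cluster capacity sits on one node,
# and does it meet the deck's 10%/15% recommendations?
def node_share(num_nodes: int) -> float:
    """Fraction of total cluster capacity on a single (uniform) node."""
    return 1.0 / num_nodes

def meets_guideline(num_nodes: int, performance_cluster: bool) -> bool:
    limit = 0.10 if performance_cluster else 0.15  # 10% perf, 15% capacity
    return node_share(num_nodes) <= limit

for n in (3, 7, 10):
    print(f"{n} nodes: {node_share(n):.0%}/node, "
          f"perf={meets_guideline(n, True)}, "
          f"capacity={meets_guideline(n, False)}")
# 3 nodes: 33%/node, perf=False, capacity=False  (supported minimum only)
# 7 nodes: 14%/node, perf=False, capacity=True
# 10 nodes: 10%/node, perf=True, capacity=True
```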
  
6. Data Protection Schemes
•  Replication
•  Erasure Coding (analogous to network RAID)

One of the biggest choices affecting TCO in the entire solution!
Data Protection Schemes
•  Replication
–  3x rep over JBOD = 33% usable:raw capacity ratio
•  Erasure Coding (analogous to network RAID)
–  8+3 over JBOD = 73% usable:raw
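These ratios follow directly from the scheme parameters; a quick worked check (a sketch assuming uniform disks, ignoring the reserve capacity and OSD full ratios that reduce usable space further in practice):

```python
# usable:raw ratios for the two schemes above.
def replication_ratio(copies: int) -> float:
    return 1.0 / copies

def erasure_ratio(k: int, m: int) -> float:
    return k / (k + m)

print(f"3x replication: {replication_ratio(3):.0%} usable:raw")  # 33%
print(f"EC 8+3:         {erasure_ratio(8, 3):.0%} usable:raw")   # 73%

# The TCO angle: raw capacity needed to deliver 500 usable TB.
print(500 / replication_ratio(3))           # 1500.0 TB raw
print(round(500 / erasure_ratio(8, 3), 1))  # 687.5 TB raw
```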
  
Data Protection Schemes
•  Replication
–  Ceph block storage default: 3x rep over JBOD disks.
–  Gluster file storage default: 2x rep over RAID6 bricks.
•  Erasure Coding (analogous to network RAID)
–  Data is encoded into k data chunks plus m parity chunks, spread onto different disks (frequently on different servers). The cluster can tolerate m disk failures without data loss; 8+3 is a popular choice.
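To make the m-failure tolerance concrete, a conceptual sketch (my illustration; real erasure coding uses Reed-Solomon-style encoding, this only checks the counting argument):

```python
# With k=8 data chunks and m=3 parity chunks on 11 disks, any k
# surviving chunks suffice to rebuild, so every combination of m
# simultaneous disk failures is survivable.
from itertools import combinations

k, m = 8, 3
chunks = set(range(k + m))
print(all(len(chunks - set(lost)) >= k
          for lost in combinations(chunks, m)))  # True
```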
  
Target Cluster Hardware
•  Capacity tiers: OSP Starter (100TB), S (500TB), M (1PB), L (2PB)
•  Workload categories: IOPS Optimized, Throughput Optimized, Cost-Capacity Optimized
RefArch Examples
•  Following are extracts from the recently published Ceph on Supermicro RefArch
•  Based on lab benchmarking results from many different configurations
Sequential Throughput (R), per server
[Chart: sequential read throughput (MB/sec) per server vs. object size (KB), 3x replication, librados, across the tested drive/journal configurations.]
Sequential Throughput (R), per OSD/HDD
[Chart: sequential read throughput (MB/sec) per OSD/HDD vs. object size (KB), 3x replication, librados, across the tested drive/journal configurations.]
Sequential Throughput (W), per OSD/HDD, 3xRep
[Chart: sequential write throughput (MB/sec) per OSD/HDD vs. object size (KB) with 3x replication, divided by 3 to show physical write throughput per HDD.]

Sequential Throughput (W), per OSD/HDD, EC
[Chart: sequential write throughput (MB/sec) per OSD/HDD vs. object size (KB) with erasure coding, adjusted by the EC overhead factor.]
Throughput-Optimized (R)
[Chart: price per unit of read throughput for the candidate server configurations (lowest price = best).]

Throughput-Optimized (W)
[Chart: price per unit of write throughput for the candidate server configurations (lowest price = best).]
Capacity-Optimized
[Chart: price per TB for the candidate server configurations (lowest price = best).]
Ceph Optimized Configs
[Table: recommended server configurations per capacity tier for IOPS-optimized, throughput-optimized, and cost/capacity-optimized clusters: node count, NIC layout, drive count and size (e.g., 12x 6TB HDD), and SSD/PCIe journal complement.]
  
Add'l Subsystem Guidelines
•  Server chassis size
•  CPU
•  Memory
•  Disk
•  SSD write journals (Ceph only)
•  Network
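The deck defers the actual per-subsystem numbers to the linked RefArchs; purely as an illustration of how these subsystems are balanced, here is a sketch built on commonly cited community rules of thumb. Every constant below is an assumption, not a figure from this deck.

```python
# Rough per-node sizing sketch. ALL constants are community rules of
# thumb assumed for illustration; use the linked RefArchs for
# validated numbers.
import math

GHZ_PER_HDD_OSD = 1.0      # ~1 CPU-GHz per HDD-backed OSD (assumption)
RAM_GB_PER_OSD = 2.0       # ~2 GB RAM per OSD daemon (assumption)
HDDS_PER_SSD_JOURNAL = 5   # ~4-5 OSDs sharing one SSD journal (assumption)

def size_node(hdd_osds: int) -> dict:
    """Back-of-envelope CPU/RAM/journal needs for one OSD node."""
    return {
        'cpu_ghz': hdd_osds * GHZ_PER_HDD_OSD,
        'ram_gb': hdd_osds * RAM_GB_PER_OSD,
        'ssd_journals': math.ceil(hdd_osds / HDDS_PER_SSD_JOURNAL),
    }

print(size_node(12))
# {'cpu_ghz': 12.0, 'ram_gb': 24.0, 'ssd_journals': 3}
```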
  
RefArchs & Whitepapers
•  See Ceph on Supermicro Portfolio RefArch
http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers
•  See Ceph on Cisco UCS C3160 Whitepaper
http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/whitepaper-C11-735004.html
•  See Ceph on Scalable Informatics Whitepaper
https://www.scalableinformatics.com/assets/documents/Unison-Ceph-Performance.pdf
THANK YOU
