RED HAT CEPH STORAGE ACCELERATION UTILIZING FLASH TECHNOLOGY
Applications and Ecosystem Solutions Development
Rick Stehno
Ceph Day SJC 2017
• Utilize flash caching features to accelerate critical data. Caching methods can be
write-back for writes, write-through for disk/cache transparency, read cache, etc.
• Utilize storage tiering capabilities. Performance-critical data resides on flash
storage; colder data resides on HDD
• Utilize all-flash storage to accelerate performance when all application data is
performance critical, or when the application does not provide the features or
capabilities to cache or migrate the data
Three ways to accelerate application performance with flash
Flash Acceleration for Applications
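The deck does not name a specific host-side caching product; as one illustrative option, a flash write-back cache can be layered in front of an HDD volume with LVM's dm-cache (lvmcache). Device names and sizes below are placeholders, not the tested hardware.

# Illustrative only: host-side flash cache in front of an HDD logical volume (lvmcache/dm-cache)
# /dev/sdb (HDD) and /dev/nvme0n1 (flash) are example devices
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg_data /dev/sdb /dev/nvme0n1
lvcreate -n lv_data -L 1T vg_data /dev/sdb                    # data LV on the HDD
lvcreate --type cache-pool -n lv_cachepool -L 100G vg_data /dev/nvme0n1
lvconvert --type cache --cachepool vg_data/lv_cachepool \
          --cachemode writeback vg_data/lv_data               # writethrough is the safer alternative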
Configurations:
• All flash storage - Performance
• Highest performance per node
• Less maximum capacity per node
• Hybrid HDD and flash storage - Balanced
• Balances performance, capacity and cost
• Suitable when the application and workload allow:
• Performance-critical data on flash
• Utilize host software caching or tiering on flash
• All HDD storage - Capacity
• Maximum capacity per node, lowest cost
• Lower performance per node
Ceph Software Defined Storage (SDS) Acceleration
–Higher performance in half the rack space
–28% less power and cooling
–Higher MTBF inherent with reduced component count
–Reduced OSD recovery time per Ceph node
–Lower TCO
Why a 1U server with 10 NVMe SSDs may be a better choice vs. a 2U server with 24 SATA SSDs
Storage - NVMe vs SATA SSD
• 4.5x increase for 128k sequential reads
• 3.5x increase for 128k sequential writes
• 3.7x increase for 4k random reads
• 1.4x increase for 4k random 70/30 RR/RW
• Equal performance for 4k random writes
Why a 1U server with 10 NVMe SSDs may be a better choice vs. a 2U server with 24 SATA SSDs
All Flash Storage - NVMe vs SATA SSD cont’d
FIO Benchmarks
(1x represents 24 SATA SSD baseline)
Why a 1U server with 10 NVMe SSDs may be a better choice vs. a 2U server with 24 SATA SSDs
All Flash Storage - NVMe vs SATA SSD cont’d
Increasing the load to extend the NVMe advantage beyond the 128-thread SATA SSD test:
• 5.8x increase for Random Writes at 512 threads
• 3.1x increase for 70/30 RR/RW at 512 threads
• 4.2x increase for Random Reads at 790 threads
• 8.2x increase for Sequential Reads at 1264 threads
10 NVMe SSDs support higher workloads and more users
Ceph RBD NVMe Performance Gains over SATA SSD
128k FIO RBD IOEngine Benchmark

Workload            128 Threads   Higher Load
Random Writes       3.0x          5.8x (512 threads)
70/30 RR/RW         1.4x          3.1x (512 threads)
Random Reads        1.0x          4.2x (790 threads)
Sequential Reads    1.3x          8.2x (1264 threads)
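For reference, a minimal FIO invocation of the kind these RBD results describe might look like the sketch below; the pool name, image name, thread count and runtime are illustrative, not the actual job files behind these numbers.

# Example only: 128k random writes against an existing RBD image via the fio rbd ioengine
rbd create fio_test --size 102400 --pool rbd           # 100 GB test image (size in MB)
fio --name=rbd-128k-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=fio_test --rw=randwrite --bs=128k --iodepth=32 --numjobs=4 \
    --direct=1 --time_based --runtime=300 --group_reporting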
Price per MB/s = (retail cost of the SSDs) / (MB/s achieved in each test)

SSD                  Total SSD Price   $/MB/s, 128k Random Writes,   $/MB/s, 128k Random Writes,
                                       128 threads                   512 threads
24 - SATA SSD 960G   $7,896            $15.00                        n/a
10 - NVMe 2TB        $10,990           $7.00                         $3.00

These prices do not include the additional savings in electrical/cooling costs and datacenter
floor space that come from reducing the number of SATA SSDs.
Note: 128k random write FIO RBD benchmark: the SATA SSDs averaged 85% busy; the NVMe SSDs
averaged 80% busy with 512 threads. The 512-thread column reflects FIO RBD maximum-thread
random write performance for NVMe.
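Working the formula backwards, these figures imply roughly $7,896 / $15.00 ≈ 525 MB/s aggregate for the 24 SATA SSDs at 128 threads, $10,990 / $7.00 ≈ 1,570 MB/s for the 10 NVMe SSDs at 128 threads, and $10,990 / $3.00 ≈ 3,660 MB/s at 512 threads (approximate throughputs implied by the table, not separately reported results).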
Ceph Storage Costs
Seagate SATA SSD vs. Seagate NVMe SSD
MySQL
• MySQL is the most popular and the most widely used open-source database in the world
• MySQL is feature rich in the areas of performance, scalability and reliability
• Database users demand high OLTP performance - Small random reads/writes
Ceph
• Most popular Software Defined Storage system
• Scalable
• Reliable
Does it make sense to implement Ceph in a MySQL database environment?
Ceph was not designed to provide high performance for OLTP environments
OLTP entails small random reads/writes
MySQL Setup:
Release 5.7
45,000,000 rows
6GB Buffer
4G logfiles
RAID 0 over 18 HDD
Ceph Setup:
3 Nodes each containing:
Jewel Using Filestore
4 NVMe SSDs
1 Pool over 12 NVMe SSDs
Replica 2
40G private and public network
For all tests, all MySQL files were on the local server except the database file,
which was moved to the Ceph cluster.
MySQL - Comparing Local HDD to Ceph Cluster
[Chart: benchmark results by number of threads]
MySQL - Comparing Local NVMe SSD to Ceph Cluster
MySQL Setup:
Release 5.7
45,000,000 rows
6GB Buffer
4G logfiles
RAID 0 over 4 NVMe SSDs
Ceph Setup:
3 Nodes each containing:
Jewel Using Filestore
4 NVMe SSDs
1 Pool over 12 NVMe SSDs
Replica 1
40G private and public network
For all tests, all MySQL files were on the local server except the database file,
which was moved to the Ceph cluster.
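The slides do not show exactly how the database file was placed on the Ceph cluster; one straightforward way with a kernel-mapped RBD image is sketched below (image name, size and paths are illustrative).

# Illustrative only: put the MySQL datafile on a kernel-mapped RBD image
rbd create mysql-data --size 524288 --pool rbd         # 512 GB image (size given in MB)
rbd map mysql-data                                     # maps to e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /var/lib/mysql-ceph && chown mysql:mysql /var/lib/mysql-ceph
mount -o noatime /dev/rbd0 /var/lib/mysql-ceph
# With innodb_file_per_table enabled, MySQL 5.7 can place an example table's .ibd file there:
#   CREATE TABLE t1 (...) ENGINE=InnoDB DATA DIRECTORY='/var/lib/mysql-ceph';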
All SSD
Case-1: 2 SSDs, 1 OSD per SSD
Case-2: 2 SSDs, 4 OSDs per SSD
Case-3: 2 SSDs plus 1 PCIe flash card, 4 OSDs per SSD, 8 OSD journals on the PCIe flash
[Chart: FIO Random Write - 200 Threads - 128k Data; KB/s and IOPS for the "2 ssd, 2 osd",
"2 ssd, 8 osd" and "2 ssd, 8 osd + journal" configurations]
Seagate SSD and Seagate PCIe Storage
Ceph All Flash Storage Acceleration
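The deck does not show the provisioning commands; under FileStore (Jewel), a Case-3 style layout could be built roughly as follows, with device names and partition sizes as placeholders.

# Illustrative only (FileStore/Jewel): 4 OSDs per SSD, journals on a PCIe flash card
sgdisk -n 0:0:+400G /dev/sdb             # repeat until the SSD holds 4 data partitions
sgdisk -n 0:0:+10G  /dev/nvme0n1         # repeat until the PCIe card holds 8 journal partitions
ceph-disk prepare --fs-type xfs /dev/sdb1 /dev/nvme0n1p1    # data partition, journal partition
ceph-disk activate /dev/sdb1             # repeat for the remaining data/journal partition pairs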
Ceph All Flash Storage Acceleration
4K and 1M FIO RBD Benchmarks
3 node Ceph cluster
100G public and private networks
4 Seagate NVMe SSDs per node, 12 Seagate NVMe SSDs per cluster
Benchmark 1: 1 OSD per NVMe SSD
Benchmark 2: 4 OSDs per NVMe SSD
Totals:
Workload          12 NVMe/12 OSD MB/s   12 NVMe/48 OSD MB/s   12 NVMe/12 OSD IO/s   12 NVMe/48 OSD IO/s
4k Random Write   174                   291                   43885                 72634
4k Random Read    580                   2537                  188801                634739
1M Random Write   1335                  2706                  1300                  2636
1M Random Read    19434                 39743                 19609                 39734
• Use RAW device or create 1st partition on 1M boundary (sector 2048 for 512B
sectors, sector 256 for 4k sectors)
• Ceph-deploy uses the optimal alignment when creating an OSD
• Use blk-mq/scsi-mq if kernel supports it
• rq_affinity = 1 for NVMe, rq_affinity = 2 for non-NVMe
• rotational = 0
• blockdev --setra 256 (for 4k sectors, 4096 for 512B sectors)
Linux tuning is still a requirement to get optimum performance out of an SSD
Linux Flash Storage Tuning
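Expressed as commands, the settings above could be applied as in the sketch below; device names are examples only.

# Example devices: /dev/nvme0n1 (NVMe) and /dev/sdb (SAS/SATA SSD)
echo 1 > /sys/block/nvme0n1/queue/rq_affinity        # rq_affinity = 1 for NVMe
echo 2 > /sys/block/sdb/queue/rq_affinity            # rq_affinity = 2 for non-NVMe
echo 0 > /sys/block/sdb/queue/rotational             # mark the SSD as non-rotational
blockdev --setra 256 /dev/sdb                        # per the slide: 256 for 4k sectors, 4096 for 512B sectors
# If not using the raw device, start the first partition on a 1M boundary:
parted -s /dev/sdb mklabel gpt mkpart osd-data 1MiB 100%
# blk-mq/scsi-mq on kernels that support it but do not default to it is typically
# enabled with the boot parameter scsi_mod.use_blk_mq=1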
• If using an older kernel that doesn’t support BLK-MQ, use:
• “deadline” IO-Scheduler with supporting variables:
• fifo_batch
• front_merges
• writes_starved
• XFS Mount options:
• nobarrier,discard,noatime,attr2,inode64,noquota
• MySQL – when using flash, configure both innodb_io_capacity and
innodb_lru_scan_depth
• Modify Linux read ahead on mapped RBD image on client
• echo 1024 > /sys/class/block/rbd0/queue/read_ahead_kb
Linux tuning is still a requirement to get optimum performance out of an SSD
Linux Flash Storage Tuning cont’d
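A consolidated sketch of these settings follows; the scheduler tunable values and InnoDB numbers are placeholders showing where the knobs live, not the values used for the published results.

# Deadline scheduler and its tunables (example device sdb) on kernels without blk-mq
echo deadline > /sys/block/sdb/queue/scheduler
echo 16 > /sys/block/sdb/queue/iosched/fifo_batch
echo 1  > /sys/block/sdb/queue/iosched/front_merges
echo 2  > /sys/block/sdb/queue/iosched/writes_starved
# XFS mount options from the slide (OSD mount point shown as an example)
mount -o nobarrier,discard,noatime,attr2,inode64,noquota /dev/sdb1 /var/lib/ceph/osd/ceph-0
# my.cnf placeholders; tune for the flash device actually in use
#   innodb_io_capacity    = 10000
#   innodb_lru_scan_depth = 2048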
Flash Storage Device Configuration
Ceph tuning options can make a difference:
• RBD Cache
• If using a smaller number of SSDs/NVMe SSDs, try creating multiple OSDs per SSD/NVMe SSD.
We have seen good performance increases using 4 OSDs per SSD/NVMe SSD.
128k Random Writes        12 NVMe MB/s   30 NVMe MB/s
  RBD cache disabled      365            1107
  RBD cache enabled       432            1067
  Gain/Loss               +18%           0%

128k Sequential Reads     12 NVMe MB/s
  RBD cache disabled      9691
  RBD cache enabled       6359
  Gain/Loss               -34%

1M Random Reads           12 NVMe MB/s
  RBD cache disabled      38899
  RBD cache enabled       42677
  Gain/Loss               +10%

128k Random Reads         12 NVMe MB/s
  RBD cache disabled      8915
  RBD cache enabled       5669
  Gain/Loss               -37%
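For context, the librbd cache toggled in the tables above is controlled from the client's ceph.conf; the sizes below are illustrative values, not the settings used for these runs.

# Illustrative client-side RBD cache settings (ceph.conf on the client)
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd cache = true
rbd cache size = 67108864                    # 64 MB
rbd cache max dirty = 50331648               # 48 MB
rbd cache writethrough until flush = true
EOF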
Flash Storage Device Configuration
If the NVMe SSD or SAS/SATA SSD device can be configured to use a 4k sector size,
this could increase performance for certain applications like databases.
For my FIO tests with the RBD engine and for all of my MySQL tests, I saw up to a 3x
improvement (depending on the test) when using 4k sector sizes compared to using
512 byte sectors.
Precondition all SSDs before running benchmarks. We have seen over a 3x gain in
performance after preconditioning.
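As a sketch of both recommendations (switching an NVMe namespace to a 4k sector size and preconditioning before benchmarking), assuming a drive whose supported LBA formats include a 4096-byte option:

# Destroys data: reformat the namespace with a 4k LBA format (index varies per drive model)
nvme id-ns /dev/nvme0n1                      # list supported LBA formats and their data sizes
nvme format /dev/nvme0n1 --lbaf=1            # pick the index whose LBA data size is 4096
# Precondition: fill the device sequentially (often done more than once) before measuring
fio --name=precondition --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=128k --iodepth=32 --size=100% --loops=2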
Storage devices used for all of the above benchmarks/tests:
• Seagate Nytro XF1440 NVMe SSD
• Seagate Nytro XF1230 SATA SSD
• Seagate 1200.2 SAS SSD
• Seagate XP6500 PCIe Flash Accelerator Card
Seagate's Broadest PCIe, SAS and SATA Portfolio
Thank You!
Questions?
Learn how Seagate accelerates storage
with one of the broadest SSD and Flash
portfolios in the market