Ceph on All-Flash Storage –
Breaking Performance Barriers
Axel Rosenberg
Sr. Technical Marketing Manager
April 28, 2015
Forward-Looking Statements
During our meeting today we will make forward-looking statements.
Any statement that refers to expectations, projections or other characterizations of future events or
circumstances is a forward-looking statement, including those relating to market growth, industry
trends, future products, product performance and product capabilities. This presentation also
contains forward-looking statements attributed to third parties, which reflect their projections as of the
date of issuance.
Actual results may differ materially from those expressed in these forward-looking statements due
to a number of risks and uncertainties, including the factors detailed under the caption “Risk Factors”
and elsewhere in the documents we file from time to time with the SEC, including our annual and
quarterly reports.
We undertake no obligation to update these forward-looking statements, which speak only as
of the date hereof or as of the date of issuance by a third party, as the case may be.
Designed for Big Data Workloads @ PB Scale
CONTENT REPOSITORIES
 Mixed media container, active archiving, backup, locality of data
 Large containers with application SLAs
BIG DATA ANALYTICS
 Internet of Things, Sensor Analytics
 Time-to-Value and Time-to-Insight
 Hadoop
 NoSQL
 Cassandra
 MongoDB
MEDIA SERVICES
 High read intensive access from billions of edge devices
 Hi-Def video driving even greater demand for capacity and performance
 Surveillance systems, analytics
InfiniFlash System
• Ultra-dense All-Flash Appliance
- 512TB in 3U
- Best in class $/IOPS/TB
• Scale-out software for massive capacity
- Unified Content: Block, Object
- Flash optimized software with
programmable interfaces (SDK)
• Enterprise-Class storage features
- snapshots, replication, thin
provisioning
IF500
InfiniFlash OS (Ceph)
Ideal for large-scale object storage use cases
12G IOPS Performance Numbers
Innovating Performance @Massive Scale
InfiniFlash OS
Ceph Transformed for Flash Performance and
Contributed Back to Community
• 10x Improvement for Block Reads,
2x Improvement for Object Reads
Major Improvements to Enhance Parallelism
• Removed single Dispatch queue bottlenecks for
OSD and Client (librados) layers
• Shard thread pool implementation
• Major lock reordering
• Improved lock granularity – Reader / Writer locks
• Granular locks at Object level
• Optimized OpTracking path in OSD eliminating
redundant locks
Messenger Performance Enhancements
• Message signing
• Socket Read aheads
• Resolved severe lock contentions
Backend Optimizations – XFS and Flash
• Reduced CPU usage by ~2 cores with improved
file path resolution from object ID
• CPU and Lock optimized fast path for reads
• Disabled throttling for Flash
• Index Manager caching and Shared
FdCache in filestore
Results!
Test Configuration – Single InfiniFlash System
Performance config: InfiniFlash with 2 storage controllers; 2-node cluster (32 drives shared to each OSD node)
OSD Node: 2 servers (Dell R720), 2x E5-2680 8C 2.8GHz 25M cache, 4x 16GB RDIMM dual rank x4 (64GB), 1x Mellanox X3 dual 40GbE, 1x LSI 9207 HBA card
RBD Client: 4 servers (Dell R620), 2x E5-2680 10C 2.8GHz 25M cache, 2x 16GB RDIMM dual rank x4 (32GB), 1x Mellanox X3 dual 40GbE
Storage: InfiniFlash with 512TB and 2 OSD servers
InfiniFlash: 1 InfiniFlash connected to 64 x 1YX2 Icechips in A2 topology; total storage 64 x 8TB = 512TB (effective 430TB)
InfiniFlash firmware: FFU 1.0.0.31.1
Network Details
40G switch: NA
OS Details
OS: Ubuntu 14.04 LTS 64-bit, kernel 3.13.0-32
LSI card/driver: SAS2308 (9207), mpt2sas
Mellanox 40GbE NIC: MT27500 [ConnectX-3], mlx4_en 2.2-1 (Feb 2014)
Cluster Configuration
Ceph version: sndk-ifos-1.0.0.04 (0.86.rc.eap2)
Replication (default): 2 [host] – host-level replication
Pools, PGs and RBDs: 4 pools; 2048 PGs per pool; 2 RBDs from each pool
RBD size: 2TB
Number of monitors: 1
Number of OSD nodes: 2
OSDs per node: 32 (total OSDs = 32 x 2 = 64)
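The deck does not name the benchmark tool used for these sweeps. As an illustration only, one point of the grid charted on the following slides (8K blocks, 75% reads, queue depth 16) could be driven with fio's RBD engine roughly as below; the pool and image names are hypothetical, not taken from the configuration above.

```ini
; Illustrative fio job for one point of the sweep: 8K random IO, 75% reads, QD 16.
; Pool/image names are assumptions; authentication is assumed to be client.admin.
[global]
ioengine=rbd
clientname=admin
pool=rbdpool
rbdname=rbd0
direct=1
time_based=1
runtime=300
group_reporting=1

[8k-randrw-75read-qd16]
rw=randrw
rwmixread=75
bs=8k
iodepth=16
```

Sweeping block size, read mix and queue depth then just means varying bs, rwmixread and iodepth across jobs.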
Performance Improvement: Stock Ceph vs IF OS
8K Random Blocks
Read performance improves 3x to 12x depending on the block size.
[Charts: IOPS and average latency (ms), Stock Ceph (Giant) vs IFOS 1.0, at queue depths 1/4/16 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x Total 4 Clients
• 1 InfiniFlash node with 512TB
Performance Improvement: Stock Ceph vs IF OS
64K Random Blocks
[Charts: IOPS and average latency (ms), Stock Ceph vs IFOS 1.0, at queue depths 1/4/16 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x Total 4 Clients
• 1 InfiniFlash node with 512TB
Performance Improvement: Stock Ceph vs IF OS
256K Random Blocks
For 256K blocks, maximum throughput reaches 5.8 GB/s; bare-metal InfiniFlash performance is 7.5 GB/s.
[Charts: IOPS and average latency (ms), Stock Ceph vs IFOS 1.0, at queue depths 1/4/16 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x Total 4 Clients
• 1 InfiniFlash node with 512TB
Test Configuration – 3 InfiniFlash Systems (128TB each)
Scaling with Performance
8K Random Blocks
Performance scales linearly with additional InfiniFlash nodes.
[Charts: IOPS and average latency (ms) at queue depths 1/8/64 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x 5 Clients
• 3 InfiniFlash nodes with 128TB each
Scaling with Performance
64K Random Blocks
Performance scales linearly with additional InfiniFlash nodes.
[Charts: IOPS and average latency (ms) at queue depths from 1 to 256 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x 5 Clients
• 3 InfiniFlash nodes with 128TB each
Scaling with Performance
256K Random Blocks
Performance scales linearly with additional InfiniFlash nodes.
[Charts: IOPS and average latency (ms) at queue depths from 1 to 256 (top row) and 0/25/50/75/100% read IOs (bottom row)]
• 2 RBD/Client x 5 Clients
• 3 InfiniFlash nodes with 128TB each
Open Source with SanDisk Advantage
InfiniFlash OS – Enterprise Level Hardened Ceph
 Innovation and speed of Open Source with
the trustworthiness of Enterprise grade and
Web-Scale testing, hardware optimization
 Performance optimization for flash and
hardware tuning
 Hardened and tested for Hyperscale
deployments and workloads
 Enterprise class support and services
from SanDisk
 Risk mitigation through long term support
and a reliable long term roadmap
 Continual contribution back to the community
Enterprise Level Hardening
 9,000 hours of cumulative IO tests
 1,100+ unique test cases
 1,000 hours of Cluster Rebalancing tests
 1,000 hours of IO on iSCSI
Testing at Hyperscale
 Over 100 server node clusters
 Over 4PB of Flash Storage
Failure Testing
 2,000 Cycle Node Reboot
 1,000 times Node Abrupt Power Cycle
 1,000 times Storage Failure
 1,000 times Network Failure
 IO for 250 hours at a stretch
IFOS on InfiniFlash
[Diagram: a compute farm of client applications (LUNs over SCSI targets, RBDs / RGW) issuing read and write IO to a storage farm of SAS-connected InfiniFlash enclosures (HSEB A / HSEB B pairs) running the OSDs]
 Disaggregated Architecture
 Compute & Storage
Disaggregation leads to Optimal
Resource utilization
 Independent Scaling of Compute
and Storage
 Optimized for Performance
 Software & Hardware
Configurations tuned for
performance
 Reduced Costs
 Reduce the replica count with
higher reliability of Flash
 Choice of Full Replicas or Erasure
Coded Storage pool on Flash
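As a concrete illustration of the last bullet, in stock Ceph the replica-versus-erasure-code choice is made per pool. The pool names, PG counts and the 4+2 profile below are illustrative, not taken from this deck.

```sh
# Replicated pool on flash with the replica count reduced to 2 (names/values illustrative).
ceph osd pool create ifos-block 2048 2048 replicated
ceph osd pool set ifos-block size 2

# Erasure-coded pool using a hypothetical 4+2 profile.
ceph osd erasure-code-profile set flash-k4m2 k=4 m=2
ceph osd pool create ifos-object 2048 2048 erasure flash-k4m2
```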
Flash + HDD with Data Tier-ing
Flash Performance with TCO of HDD
 InfiniFlash OS performs automatic data
placement and data movement between tiers,
transparent to Applications
 User defined Policies for data placement on
tiers
 Can be used with Erasure coding to further
reduce the TCO
Benefits
 Flash based performance with HDD like TCO
 Lower performance requirements on HDD tier
enables use of denser and cheaper SMR drives
 Denser and lower power compared to HDD only
solution
 InfiniFlash for High Activity data and SMR drives
for Low activity data
 60+ HDD per Server
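The deck does not say which mechanism InfiniFlash OS uses for tier management. In stock Ceph of this era, one way to put a flash tier in front of an HDD/SMR pool is cache tiering; a rough sketch with hypothetical pool names (the policy values are placeholders, not recommendations):

```sh
# 'hot-flash' is assumed to be backed by InfiniFlash OSDs, 'cold-hdd' by HDD/SMR OSDs.
ceph osd tier add cold-hdd hot-flash
ceph osd tier cache-mode hot-flash writeback
ceph osd tier set-overlay cold-hdd hot-flash

# User-defined policy knobs controlling promotion, flush and eviction behavior.
ceph osd pool set hot-flash hit_set_type bloom
ceph osd pool set hot-flash cache_target_dirty_ratio 0.4
ceph osd pool set hot-flash cache_target_full_ratio 0.8
```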
Flash Primary + HDD Replicas
Flash Performance with TCO of HDD
Primary replica on
InfiniFlash
HDD based data node
for 2nd local replica
HDD based data node
for 3rd DR replica
 Higher Affinity of the Primary Replica ensures much
of the compute is on InfiniFlash Data
 2nd and 3rd replicas on HDDs are primarily for data
protection
 High throughput of InfiniFlash provides data
protection, movement for all replicas without
impacting application IO
 Eliminates cascade data propagation requirement
for HDD replicas
 Flash-based accelerated Object performance for
Replica 1 allows for denser and cheaper SMR HDDs
for Replica 2 and 3
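One way to approximate "primary on flash, copies on HDD" in stock Ceph is primary affinity: keep flash-backed OSDs at full affinity and push HDD-backed OSDs toward zero so they are rarely chosen as primary. The OSD IDs below are illustrative; on Firefly/Hammer-era clusters the monitors also need 'mon osd allow primary affinity = true'.

```sh
# Flash-backed OSD stays the preferred primary; HDD-backed OSDs hold replicas only.
ceph osd primary-affinity osd.0  1.0   # InfiniFlash-backed OSD (illustrative ID)
ceph osd primary-affinity osd.64 0.0   # HDD-backed OSD, local 2nd replica
ceph osd primary-affinity osd.65 0.0   # HDD-backed OSD, DR 3rd replica
```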
TCO Example - Object Storage
Scale-out Flash Benefits at the TCO of HDD
Note that operational/maintenance costs and performance benefits are not accounted for in these models.
@Scale Operational Costs demand Flash
• Weekly failure rate for a 100PB deployment
15-35 HDD vs. 1 InfiniFlash Card
• HDD cannot handle simultaneous egress/ingress
• Long rebuild times, multiple failures
• Rebalancing of PBs of data results in service disruption
• Flash provides guaranteed & consistent SLA
• Flash capacity utilization >> HDD due to reliability & ops
[Chart: 3-year TCO comparison (TCA plus 3-year opex) and total racks for 96PB of object storage – Traditional ObjStore on HDD vs. InfiniFlash ObjectStore with 3 full replicas on flash vs. InfiniFlash with erasure coding (all flash) vs. InfiniFlash with flash primary and HDD copies]
Flash Card Performance**
 Read Throughput > 400MB/s
 Read IOPS > 20K IOPS
 Random Read/Write
@4K- 90/10 > 15K IOPS
Flash Card Integration
 Alerts and monitoring
 Latching integrated
and monitored
 Integrated air temperature
sampling
InfiniFlash System
Capacity 512TB* raw
 All-Flash 3U Storage System
 64 x 8TB Flash Cards with Pfail
 8 SAS ports total
Operational Efficiency and Resilience
 Hot Swappable components, Easy
FRU
 Low power 450W(avg), 750W(active)
 MTBF 1.5+ million hours
Scalable Performance**
 780K IOPS
 7GB/s Throughput
 Upgrade to 12GB/s in Q315
* 1TB = 1,000,000,000,000 bytes. Actual user capacity less.
** Based on internal testing of InfiniFlash 100. Test report available.
InfiniFlash™ System
The First All-Flash Storage System Built for High Performance Ceph
Thank You! @BigDataFlash
#bigdataflash
©2015 SanDisk Corporation. All rights reserved. SanDisk is a trademark of SanDisk Corporation, registered in the United States and other countries. InfiniFlash is a trademark of SanDisk Enterprise IP
LLC. All other product and company names are used for identification purposes and may be trademarks of their respective holder(s).
Messenger layer
 Removed Dispatcher and introduced a “fast path” mechanism for
read/write requests
• Same mechanism is now present on client side (librados) as well
 Fine grained locking in message transmit path
 Introduced an efficient buffering mechanism for improved
throughput
 Configuration options to disable message signing, CRC checks, etc. (see the sketch below)
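A sketch of the kind of ceph.conf options the last bullet refers to. Exact option names vary by release (the Giant/Hammer-era ms_nocrc was later split into ms_crc_data and ms_crc_header), so treat these as illustrative rather than a recommended configuration.

```ini
[global]
; Skip cryptographic signing of each message (cephx still authenticates the session).
cephx sign messages = false
; Skip per-message CRC on the wire (Giant/Hammer-era option name).
ms nocrc = true
```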
OSD Request Processing
 Running with the Memstore backend revealed a bottleneck in the OSD
thread pool code
 OSD worker thread pool mutex heavily contended
 Implemented a sharded worker thread pool. Requests sharded
based on their pg (placement group) identifier
 Configuration options to set number of shards and number of
worker threads per shard
 Optimized OpTracking path (Sharded Queue and removed
redundant locks)
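The sharded op worker pool described above is tunable from ceph.conf; the values below are illustrative, not the settings used in these tests.

```ini
[osd]
; Number of shards the op queue is split into; requests hash to a shard by PG id.
osd op num shards = 10
; Worker threads servicing each shard.
osd op num threads per shard = 2
```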
FileStore improvements
 Eliminated backend storage from the picture by using a small workload
(FileStore served data from the page cache)
 Severe lock contention in LRU FD (file descriptor) cache. Implemented a
sharded version of LRU cache
 CollectionIndex (per-PG) object was being created upon every IO request.
Implemented a cache for it, since PG info doesn’t change often
 Optimized “Object-name to XFS file name” mapping function
 Removed redundant snapshot related checks in parent read processing
path
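A minimal sketch of the sharding idea behind the FD-cache fix, not the actual Ceph code: replace one LRU guarded by one mutex with N independent LRUs, each with its own lock, and pick the shard by hashing the key (e.g., the object name), so threads working on different objects rarely contend.

```cpp
#include <functional>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative sharded LRU cache: contention drops because each shard has its own mutex.
template <typename K, typename V>
class ShardedLRU {
  struct Shard {
    std::mutex lock;
    std::list<std::pair<K, V>> lru;  // front = most recently used
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> index;
  };
  std::vector<Shard> shards_;
  size_t per_shard_cap_;

  Shard& shard_for(const K& key) {
    return shards_[std::hash<K>{}(key) % shards_.size()];  // requires num_shards > 0
  }

 public:
  ShardedLRU(size_t num_shards, size_t per_shard_cap)
      : shards_(num_shards), per_shard_cap_(per_shard_cap) {}

  void put(const K& key, V value) {
    Shard& s = shard_for(key);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.index.find(key);
    if (it != s.index.end()) s.lru.erase(it->second);     // drop stale entry
    s.lru.emplace_front(key, std::move(value));
    s.index[key] = s.lru.begin();
    if (s.lru.size() > per_shard_cap_) {                  // evict least recently used
      s.index.erase(s.lru.back().first);
      s.lru.pop_back();
    }
  }

  bool get(const K& key, V& out) {
    Shard& s = shard_for(key);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.index.find(key);
    if (it == s.index.end()) return false;
    s.lru.splice(s.lru.begin(), s.lru, it->second);       // move hit to the front
    out = it->second->second;
    return true;
  }
};

// Usage sketch: ShardedLRU<std::string, int> fd_cache(16, 1024);
```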
Inconsistent Performance Observation
 Large performance variations on different pools across multiple
clients
 First client after cluster restart gets maximum performance
irrespective of the pool
 Continued degraded performance from clients starting later
 Issue also observed on read I/O with unpopulated RBD images –
Ruled out FS issues
 Performance counters show up to 3x increase in latency through
the I/O path with no particular bottleneck
Issue with TCmalloc
 Perf top shows rapid increase in time spent in TCmalloc functions
14.75% libtcmalloc.so.4.1.2 [.] tcmalloc::CentralFreeList::FetchFromSpans()
7.46% libtcmalloc.so.4.1.2 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int)
 I/O from a different client causes new threads in the sharded thread pool to process
I/O
 This causes memory movement between thread caches, increasing alloc/free
latency
 JEmalloc and Glibc malloc do not exhibit this behavior
 JEmalloc build option added to Ceph Hammer
 Setting the TCmalloc tunable 'TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES' to a
larger value (64M) alleviates the issue (see the sketch below)
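The tunable is an environment variable that tcmalloc reads at process start, so it has to be set in the OSD daemon's environment before the OSDs are (re)started. A sketch; the restart command depends on the distro and init system.

```sh
# 64 MB total thread-cache budget for tcmalloc; the value is in bytes (64 * 1024 * 1024).
export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=67108864
# Restart the OSDs so the new environment takes effect (Upstart job name on Ubuntu 14.04).
restart ceph-osd-all
```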
Client Optimizations
 Ceph, by default, turns Nagle’s algorithm OFF
 RBD kernel driver ignored TCP_NODELAY setting
 Large latency variations at lower queue depths
 Changes to RBD driver submitted upstream
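For context, disabling Nagle's algorithm on a TCP socket is a one-line setsockopt; the kernel RBD fix amounted to actually honouring this option instead of ignoring it. A generic userspace sketch, not the Ceph or kernel code itself:

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cstdio>

// Disable Nagle's algorithm on an already-connected TCP socket so small
// requests are sent immediately instead of being coalesced.
static int disable_nagle(int fd) {
  int one = 1;
  if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
    perror("setsockopt(TCP_NODELAY)");
    return -1;
  }
  return 0;
}
```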
Editor's Notes
  1. Video continues to drive the need for storage, and Point-Of-View cameras like GoPro are producing compelling high resolution videos on our performance cards. People using smartphones to make high resolution videos choose our performance mobile cards also, driving the need for higher capacities. There is a growing customer base for us around the world, with one billion additional people joining the Global Middle Class between 2013 and 2020. These people will use smart mobile devices as their first choice to spend discretionary income on, and will expand their storage using removable cards and USB drives. We are not standing still, but creating new product categories to allow people to expand and share their most cherished memories.
  2. Performance: shorter jobs by 4x per study (flash enablement). Share compute with other infrastructure (a win for any company with seasonality). Flexible and elastic storage platform to handle MapReduce load spikes.
  3. X2 is 2 bits per cell, X3 is 3 bits per cell