Unleash Ceph over Flash Storage Potential with
Mellanox High-Performance Interconnect
Ceph Day Berlin – Apr 28th, 2015
Oren Duer, Director of Storage Software, Software R&D
- Mellanox Confidential -
Leading Supplier of End-to-End Interconnect Solutions
[Diagram: server/compute and switch/gateway tiers with Virtual Protocol Interconnect (56Gb InfiniBand & FCoIB, 10/40/56GbE & FCoE) connecting to storage front- and back-ends]
Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, host/fabric software, metro/WAN, cables/modules
© 2015 Mellanox Technologies
How Customers Deploy Ceph with Mellanox Interconnect
 Building scalable, high-performance storage solutions
• Cluster network @ 40Gb Ethernet
• Clients @ 10Gb/40Gb Ethernet
 High performance at low cost
• Allows more capacity per OSD
• Lower cost/TB
 Flash deployment options
• All HDD (no flash)
• Flash for OSD journals
• 100% flash in OSDs
Faster cluster network improves price/capacity and price/performance
Ceph Deployment Using 10GbE and 40GbE
 Cluster (private) network @ 40/56GbE
• Smooth HA, unblocked heartbeats, efficient data balancing
 Throughput clients @ 40/56GbE
• Guarantees line rate for high ingress/egress clients
 IOPS clients @ 10GbE or 40/56GbE
• 100K+ IOPS/client @ 4K blocks
[Diagram: client nodes on a 10/40GbE public network; Ceph nodes (monitors, OSDs, MDS) and the admin node on a 40GbE cluster network]
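The public/cluster split shown here maps directly onto ceph.conf; a minimal sketch, with hypothetical subnets standing in for the 10/40GbE public and 40GbE cluster fabrics:

```ini
[global]
; client-facing traffic (10/40GbE in this deployment)
public network  = 10.10.0.0/24
; OSD replication, heartbeat, and recovery traffic (40/56GbE)
cluster network = 10.20.0.0/24
```

With both options set, OSDs bind replication traffic to the cluster subnet so client I/O on the public network is not blocked by rebalancing.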
Throughput testing results based on fio benchmark: 8MB blocks, 20GB file, 128 parallel jobs, RBD kernel driver with Linux kernel 3.13.3, RHEL 6.3, Ceph 0.72.2
IOPS testing results based on fio benchmark: 4KB blocks, 20GB file, 128 parallel jobs, RBD kernel driver with Linux kernel 3.13.3, RHEL 6.3, Ceph 0.72.2
20x higher throughput, 4x higher IOPS with 40Gb Ethernet clients!
(http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf)
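The cited fio parameters translate into a job file along these lines (a sketch; the device path and job name are assumptions, and the RBD image must already be mapped via the kernel driver):

```ini
; throughput test: 8MB blocks, 20GB per job, 128 parallel jobs
[global]
ioengine=libaio
direct=1
size=20g
numjobs=128
group_reporting

[throughput-8m]
bs=8m
rw=read
filename=/dev/rbd0

; for the IOPS test, change bs=8m to bs=4k and rw=read to rw=randread
```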
OK, But How Do We Further Improve IOPS? We Use RDMA!
[Diagram: without RDMA, application data is copied through user, kernel, and NIC buffers on both hosts via TCP/IP; with RDMA, application buffers transfer directly between HCAs over InfiniBand or Ethernet, bypassing the OS kernel]
Ceph Throughput Using 40Gb and 56Gb Ethernet
[Chart: one OSD, one client, 8 threads; throughput in MB/s for 64KB and 256KB random reads, comparing 40Gb TCP (MTU=1500), 56Gb TCP (MTU=4500), and 56Gb RDMA (MTU=4500)]
Optimizing Ceph for Flash
By SanDisk & Mellanox
 Highlights compared to stock Ceph
• Read performance up to 8x better
• Write performance up to 2x better with tuning
 Optimizations
• All-flash storage for OSDs
• Enhanced parallelism and lock optimization
• Optimization for reads from flash
• Improvements to Ceph messenger
 Test configuration
• SanDisk InfiniFlash storage with IFOS 1.0 EAP3
• Up to 4 RBDs
• 2 Ceph OSD nodes, connected to InfiniFlash
• 40GbE NICs from Mellanox
8K Random I/O: 2 RBDs/Client with File System
[Chart: IOPS and latency (ms) for 2 LUNs/client across 4 clients; queue depths 1-32 at read percentages 0/25/50/75/100, comparing IFOS 1.0 vs. stock Ceph]
Performance: 64K Random I/O, 2 RBDs/Client with File System
[Chart: IOPS and latency (ms) for 2 LUNs/client across 4 clients; queue depths 1-32 at read percentages 0/25/50/75/100, comparing IFOS 1.0 vs. stock Ceph]
Adding RDMA to Ceph: XioMessenger
I/O offload frees up CPU for application processing
[Chart: without RDMA, ~53% CPU efficiency and ~47% CPU overhead/idle; with RDMA and offload, ~88% CPU efficiency and ~12% CPU overhead/idle]
Adding RDMA to Ceph
 RDMA beta in Hammer
• Mellanox, Red Hat, CohortFS, and community collaboration
• Full RDMA expected in Infernalis
 Messaging layer
• New RDMA messenger layer called XioMessenger
• New class hierarchy allowing multiple transports (the simple one is TCP)
• Async design, reduced locks, reduced number of threads
• Introduced non-sharable messages
 On top of Accelio
• Accelio is an RDMA abstraction layer
• Integrated into all Ceph user-space components: daemons and clients
• Covers both the "public network" and the "cluster network"
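In the Hammer-era XioMessenger work, the Accelio transport was selected through the messenger-type option; a sketch (the option name follows the xio development branch and may differ in later releases):

```ini
[global]
; use the Accelio/RDMA XioMessenger instead of the default TCP SimpleMessenger
ms_type = xio
```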
Accelio: High-Performance Reliable Messaging and RPC Library
 Open source!
• https://github.com/accelio/accelio/ and www.accelio.org
 Faster RDMA integration into applications
 Asynchronous
 Maximizes message and CPU parallelism
• Enables >10GB/s from a single node
• Enables <10usec latency under load
 In Giant and Hammer
• http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger
Ceph 4KB Read IOPS: 40Gb TCP vs. 40Gb RDMA
[Chart: thousands of IOPS for "2 OSDs, 4 clients", "4 OSDs, 4 clients", and "8 OSDs, 4 clients", comparing 40Gb TCP vs. 40Gb RDMA; bars annotated with CPU cores consumed in the OSD and client nodes (34-38 cores in the OSD, 24-32 cores in the client)]
Ceph RDMA Performance Summary - Work in Progress
 Normalized to per-core
 BW is @ 256K IO size, IOPS is @ 4K IO size
READ: IOPS up to 250% better, BW up to 50% better
WRITE: IOPS up to 20% better, BW up to 7% better
What's Next?
 XIO-Messenger to GA
• XIO-Messenger can do much more as a transport!
• Infernalis?
 Ceph bottlenecks
• Collaborate to resolve: performance work group
 Erasure coding
• Erasure coding is really needed to reduce redundancy capacity overhead
• Erasure coding is complicated math for the CPU, demanding high-end storage nodes
• The new ConnectX-4 can offload erasure coding
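To make the CPU-cost point concrete, here is a minimal, hypothetical sketch of parity-based erasure coding in Python: a single XOR parity chunk rather than the Reed-Solomon codes Ceph actually uses via jerasure, but it shows the per-byte arithmetic that loads storage-node CPUs and that an offload engine would take over.

```python
def encode(chunks):
    """Compute one XOR parity chunk over k equal-sized data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b          # one XOR per byte per chunk: O(k * chunk_size)
    return bytes(parity)

def recover(surviving, parity):
    """Rebuild a single missing data chunk from the parity and the survivors."""
    missing = bytearray(parity)
    for chunk in surviving:
        for i, b in enumerate(chunk):
            missing[i] ^= b
    return bytes(missing)

data = [b"abcd", b"efgh", b"ijkl"]   # k = 3 data chunks (hypothetical layout)
p = encode(data)                     # m = 1 parity chunk
assert recover([data[0], data[2]], p) == data[1]   # lose chunk 1, rebuild it
```

Real EC profiles (e.g. k=4, m=2) need Galois-field multiplies, not just XOR, which is why hardware offload matters.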
Deployment Examples
Ceph-Powered Solutions
Ceph for Large-Scale Storage - Fujitsu Eternus CD10000
 Hyperscale storage
• 4 to 224 nodes
• Up to 56PB raw capacity
 Runs Ceph with enhancements
• 3 different storage nodes
• Object, block, and file storage
 Mellanox InfiniBand cluster network
• 40Gb InfiniBand cluster network
• 10Gb Ethernet front-end network
Media & Entertainment Storage - StorageFoundry Nautilus
 Turnkey object storage
• Built on Ceph
• Pre-configured for rapid deployment
• Mellanox 10/40GbE networking
 High-capacity configuration
• 6-8TB helium-filled drives
• Up to 2PB in 18U
 High-performance configuration
• Single-client read: 2.2GB/s
• SSD caching + hard drives
• Supports Ethernet, IB, FC, FCoE front-end ports
 More information: www.storagefoundry.net
SanDisk InfiniFlash
 Flash storage system
• Announced March 3, 2015
• InfiniFlash OS uses Ceph
• 512TB (raw) in one 3U enclosure
• Tested with 40GbE networking
 High throughput
• Up to 7GB/s
• Up to 1M IOPS with two nodes
 More information:
• http://bigdataflash.sandisk.com/infiniflash
More Ceph Solutions
 Cloud - OnyxCCS ElectraStack
• Turnkey IaaS
• Multi-tenant computing system
• 5x faster node/data restoration
• https://www.onyxccs.com/products/8-series
 ISS Storage Supercore
• Healthcare solution
• 82,000 IOPS on 512B reads
• 74,000 IOPS on 4KB reads
• 1.1GB/s on 256KB reads
• http://www.iss-integration.com/supercore.html
 Flextronics CloudLabs
• OpenStack on CloudX design
• 2 SSD + 20 HDD per node
• Mix of 1Gb/40GbE network
• http://www.flextronics.com/
 Scalable Informatics Unison
• High-availability cluster
• 60 HDD in 4U
• Tier 1 performance at archive cost
• https://scalableinformatics.com/unison.ht
Summary
 Ceph scalability and performance benefit from high-performance networks
 Ceph is being optimized for flash storage
 End-to-end 40/56Gb/s transport accelerates Ceph today
• 100Gb/s testing has begun!
• Available in various Ceph solutions and appliances
 RDMA is next to optimize flash performance (beta in Hammer)
Thank You
Setup
 Two 28-core E5-2697V3 @ 2.6GHz (Haswell) servers
• 64GB of memory
• Hyperthreading enabled
• Mellanox ConnectX-3 EN 40Gb/s, firmware 2.33.5000
• Mellanox SX1012 EN 40Gb/s switch
• MLNX_OFED_LINUX-2.4-1.0.0
• Accelio version 1.3 (master branch tag v1.3-rc3)
• Ceph upstream branch hammer
• Ubuntu 14.04 LTS stock kernel
• Default MTU = 1500
 1st server runs as a single-node Ceph cluster
• One monitor
• One OSD (using XFS on ramdisk /dev/ram0)
 2nd server runs as Ceph fio_rbd clients
 BW is measured at 256K IOs
 IOPS is measured at 4K IOs
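The fio_rbd client load described above can be reproduced with fio's rbd ioengine, which drives librbd directly without mapping a kernel device; a sketch with hypothetical pool and image names:

```ini
; 4K random-read IOPS job through librbd (pool/image names are placeholders)
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
direct=1

[iops-4k]
bs=4k
rw=randread
iodepth=32

; for the bandwidth measurement, use bs=256k and rw=read
```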
"Impact of front-end architecture on development cost", Viktor Turskyi
Fwdays
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
CatarinaPereira64715
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 

Recently uploaded (20)

Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 

Ceph Day Berlin: Deploying Flash Storage for Ceph without Compromising Performance

  • 1. Unleash Ceph over Flash Storage Potential with Mellanox High-Performance Interconnect Ceph Day Berlin – Apr 28th, 2015 Oren Duer, Director of Storage Software, Software R&D
  • 2. Leading Supplier of End-to-End Interconnect Solutions. [Diagram: Virtual Protocol Interconnect across server/compute (56G InfiniBand), switch/gateway (10/40/56GbE & FCoE), and storage front/back-end (56G IB & FCoIB, 10/40/56GbE).] Comprehensive End-to-End InfiniBand and Ethernet Portfolio: ICs, Adapter Cards, Switches/Gateways, Host/Fabric Software, Metro/WAN, Cables/Modules. © 2015 Mellanox Technologies - Mellanox Confidential - 2
  • 3. How Customers Deploy Ceph with Mellanox Interconnect. Building scalable, performing storage solutions: cluster network @ 40Gb Ethernet; clients @ 10Gb/40Gb Ethernet. High performance at low cost: allows more capacity per OSD; lower cost/TB. Flash deployment options: all HDD (no flash); flash for OSD journals; 100% flash in OSDs. Faster Cluster Network Improves Price/Capacity and Price/Performance. © 2015 Mellanox Technologies - Mellanox Confidential - 6
  • 4. Ceph Deployment Using 10GbE and 40GbE. Cluster (private) network @ 40/56GbE: smooth HA, unblocked heartbeats, efficient data balancing. Throughput clients @ 40/56GbE: guarantees line rate for high ingress/egress clients. IOPS clients @ 10GbE or 40/56GbE: 100K+ IOPS/client @ 4KB blocks. [Diagram: client nodes @ 10GbE/40GbE on the public network; Ceph nodes (monitors, OSDs, MDS) and admin node on a 40GbE cluster network.] Throughput testing results based on fio benchmark, 8MB block, 20GB file, 128 parallel jobs, RBD kernel driver with Linux kernel 3.13.3, RHEL 6.3, Ceph 0.72.2. IOPS testing results based on fio benchmark, 4KB block, 20GB file, 128 parallel jobs, RBD kernel driver with Linux kernel 3.13.3, RHEL 6.3, Ceph 0.72.2. 20x higher throughput, 4x higher IOPS with 40Gb Ethernet clients! (http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf) © 2015 Mellanox Technologies - Mellanox Confidential - 7
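The throughput test described in the footnote can be approximated with a fio job file along these lines. This is a hypothetical sketch, not the deck's actual job file: the device path `/dev/rbd0` and the `libaio` engine are assumptions (the slides state only "RBD Kernel Driver", 8MB blocks, 20GB files, and 128 parallel jobs).

```ini
; Sketch of a fio job approximating the slide's throughput test.
; Assumes an RBD image already mapped via the kernel driver as /dev/rbd0.
[global]
ioengine=libaio      ; assumed async I/O engine
direct=1             ; bypass the page cache
bs=8m                ; 8MB block size, per the footnote
size=20g             ; 20GB per job, per the footnote
numjobs=128          ; 128 parallel jobs, per the footnote
group_reporting=1    ; aggregate results across jobs

[throughput-read]
filename=/dev/rbd0   ; hypothetical device name for the mapped image
rw=read
```

Run with `fio throughput.fio`; the reported aggregate bandwidth corresponds to the MB/s figures cited in the deck's charts.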
  • 5. OK, But How Do We Further Improve IOPS? We Use RDMA! [Diagram: with TCP/IP, data is copied between application buffers and OS kernel buffers before the NIC transmits it between racks; with RDMA, the HCA performs DMA directly between application buffers over InfiniBand or Ethernet, bypassing the kernel.] © 2015 Mellanox Technologies - Mellanox Confidential - 8
  • 6. Ceph Throughput Using 40Gb and 56Gb Ethernet. [Chart: one OSD, one client, 8 threads; throughput in MB/s for 64KB and 256KB random reads, comparing 40Gb TCP (MTU=1500), 56Gb TCP (MTU=4500), and 56Gb RDMA (MTU=4500).] © 2015 Mellanox Technologies - Mellanox Confidential - 9
  • 7. © 2015 Mellanox Technologies - Mellanox Confidential - 10 Optimizing Ceph for Flash By SanDisk & Mellanox
  • 8. Ceph Flash Optimization. Highlights compared to stock Ceph: read performance up to 8x better; write performance up to 2x better with tuning. Optimizations: all-flash storage for OSDs; enhanced parallelism and lock optimization; optimization for reads from flash; improvements to the Ceph messenger. SanDisk InfiniFlash test configuration: InfiniFlash storage with IFOS 1.0 EAP3; up to 4 RBDs; 2 Ceph OSD nodes connected to InfiniFlash; 40GbE NICs from Mellanox. © 2015 Mellanox Technologies - Mellanox Confidential - 11
  • 9. 8K Random - 2 RBD/Client with File System. [Charts: IOPS and latency (ms), 2 LUNs/client (4 clients total), across queue depths 1-32 at read percentages 0/25/50/75/100, IFOS 1.0 vs. stock Ceph.] © 2015 Mellanox Technologies - Mellanox Confidential - 12
  • 10. Performance: 64K Random - 2 RBD/Client with File System. [Charts: IOPS and latency (ms), 2 LUNs/client (4 clients total), across queue depths 1-32 at read percentages 0/25/50/75/100, IFOS 1.0 vs. stock Ceph.] © 2015 Mellanox Technologies - Mellanox Confidential - 13
  • 11. © 2015 Mellanox Technologies - Mellanox Confidential - 14 Adding RDMA To Ceph XioMessenger
  • 12. I/O Offload Frees Up CPU for Application Processing. Without RDMA: ~53% CPU efficiency, ~47% CPU overhead/idle. With RDMA and offload: ~88% CPU efficiency, ~12% CPU overhead/idle. © 2015 Mellanox Technologies - Mellanox Confidential - 15
  • 13. Adding RDMA to Ceph. RDMA beta in Hammer: Mellanox, Red Hat, CohortFS, and community collaboration; full RDMA expected in Infernalis. Messaging layer: new RDMA messenger layer called XioMessenger; new class hierarchy allowing multiple transports (the simple one is TCP); async design, reduced locks, reduced number of threads. Buffer management: introduced non-sharable messages. On top of Accelio: Accelio is an RDMA abstraction layer; integrated into all Ceph user-space components: daemons ("public network" and "cluster network") and clients. © 2015 Mellanox Technologies - Mellanox Confidential - 16
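Selecting XioMessenger was a ceph.conf setting in the experimental branches. The fragment below is a sketch only: the `ms_type = xio` option name comes from the Giant-era Accelio RDMA Messenger blueprint, the option names changed during development, and XioMessenger never reached GA, so treat this as illustrative rather than as the shipped configuration.

```ini
# Hypothetical ceph.conf fragment for the experimental XioMessenger
# (option name per the Giant Accelio RDMA Messenger blueprint;
# exact names varied across development branches)
[global]
ms_type = xio    ; select the Accelio-based RDMA messenger instead of the
                 ; default TCP SimpleMessenger, for daemons and clients alike
```

Because the messenger is negotiated per connection endpoint, every daemon and client in the cluster had to run an xio-capable build for this setting to work.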
  • 14. Accelio, High-Performance Reliable Messaging and RPC Library. Open source: https://github.com/accelio/accelio/ && www.accelio.org. Faster RDMA integration into applications. Asynchronous. Maximizes message and CPU parallelism: enables >10GB/s from a single node; enables <10usec latency under load. In Giant and Hammer: http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger © 2015 Mellanox Technologies - Mellanox Confidential - 17
  • 15. Ceph 4KB Read IOPS: 40Gb TCP vs. 40Gb RDMA. [Chart: thousands of IOPS for 2, 4, and 8 OSDs with 4 clients, 40Gb TCP vs. 40Gb RDMA, each bar annotated with cores used in OSD and client (e.g. 38 cores in OSD / 30 cores in client).] © 2015 Mellanox Technologies - Mellanox Confidential - 18
  • 16. Ceph RDMA Performance Summary - Work in Progress. Normalized to per-core; BW is @ 256K IO size, IOPS is @ 4K IO size. Read: IOPS up to 250% better, BW up to 50% better. Write: IOPS up to 20% better, BW up to 7% better. © 2015 Mellanox Technologies - Mellanox Confidential - 19
  • 17. What’s Next? XIO-Messenger to GA: XIO-Messenger can do much more as a transport. Ceph bottlenecks: collaborate to resolve; performance work group. Erasure coding: erasure coding is really needed to reduce redundancy capacity overhead; it is complicated math for the CPU, demanding high-end storage nodes; the new ConnectX-4 can offload erasure coding; Infernalis? © 2015 Mellanox Technologies - Mellanox Confidential - 20
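The capacity argument behind erasure coding can be made concrete with a little arithmetic. The sketch below (illustrative only; the profile choice of k=4, m=2 is an assumption, not from the deck) compares the usable fraction of raw capacity under 3x replication and under a 4+2 erasure-code profile, both of which tolerate two simultaneous failures:

```python
# Usable-capacity fraction: the share of raw capacity that stores user data.

def usable_fraction_replication(copies: int) -> float:
    # Each object is stored 'copies' times in full.
    return 1.0 / copies

def usable_fraction_ec(k: int, m: int) -> float:
    # Each object is split into k data chunks plus m coding chunks.
    return k / (k + m)

# 3x replication vs. a hypothetical 4+2 EC profile: both survive 2 failures.
rep = usable_fraction_replication(3)   # 1/3 of raw capacity is usable
ec = usable_fraction_ec(4, 2)          # 2/3 of raw capacity is usable
print(f"3x replication: {rep:.0%} usable; EC 4+2: {ec:.0%} usable")
```

Doubling the usable fraction at the same fault tolerance is why the slide calls erasure coding "really needed", and the heavier per-write math is what motivates the ConnectX-4 offload.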
  • 18. © 2015 Mellanox Technologies - Mellanox Confidential - 21 Deployment Examples Ceph-Powered Solutions
  • 19. Ceph for Large Scale Storage - Fujitsu Eternus CD10000. Hyperscale storage: 4 to 224 nodes; up to 56 PB raw capacity. Runs Ceph with enhancements: 3 different storage nodes; object, block, and file storage. Mellanox InfiniBand cluster network: 40Gb InfiniBand cluster network; 10Gb Ethernet front-end network. © 2015 Mellanox Technologies - Mellanox Confidential - 22
  • 20. Media & Entertainment Storage - StorageFoundry Nautilus. Turnkey object storage: built on Ceph; pre-configured for rapid deployment; Mellanox 10/40GbE networking. High-capacity configuration: 6-8TB helium-filled drives; up to 2PB in 18U. High-performance configuration: single-client read of 2.2 GB/s; SSD caching + hard drives; supports Ethernet, IB, FC, FCoE front-end ports. More information: www.storagefoundry.net © 2015 Mellanox Technologies - Mellanox Confidential - 23
  • 21. SanDisk InfiniFlash. Flash storage system: announced March 3, 2015; InfiniFlash OS uses Ceph; 512 TB (raw) in one 3U enclosure; tested with 40GbE networking. High throughput: up to 7GB/s; up to 1M IOPS with two nodes. More information: http://bigdataflash.sandisk.com/infiniflash © 2015 Mellanox Technologies - Mellanox Confidential - 24
  • 22. More Ceph Solutions. Cloud - OnyxCCS ElectraStack: turnkey IaaS; multi-tenant computing system; 5x faster node/data restoration; https://www.onyxccs.com/products/8-series. ISS Storage Supercore: healthcare solution; 82,000 IOPS on 512B reads; 74,000 IOPS on 4KB reads; 1.1GB/s on 256KB reads; http://www.iss-integration.com/supercore.html. Flextronics CloudLabs: OpenStack on CloudX design; 2 SSD + 20 HDD per node; mix of 1Gb/40GbE network; http://www.flextronics.com/. Scalable Informatics Unison: high availability cluster; 60 HDD in 4U; Tier 1 performance at archive cost; https://scalableinformatics.com/unison.html © 2015 Mellanox Technologies - Mellanox Confidential - 25
  • 23. Summary. Ceph scalability and performance benefit from high-performance networks. Ceph is being optimized for flash storage. End-to-end 40/56 Gb/s transport accelerates Ceph today: 100Gb/s testing has begun; available in various Ceph solutions and appliances. RDMA is next to optimize flash performance, in beta in Hammer. © 2015 Mellanox Technologies - Mellanox Confidential - 26
  • 25. Setup. Two 28-core E5-2697 v3 @ 2.6GHz (Haswell) servers: 64GB of memory; hyperthreading enabled; Mellanox ConnectX-3 EN 40Gb/s, fw-2.33.5000; Mellanox SX1012 EN 40Gb/s switch; MLNX_OFED_LINUX-2.4-1.0.0; Accelio version 1.3 (master branch tag v1.3-rc3); Ceph upstream branch hammer; Ubuntu 14.04 LTS stock kernel; default MTU = 1500. 1st server runs as a single-node Ceph cluster: one monitor and one OSD (using XFS on ramdisk /dev/ram0). 2nd server runs the Ceph fio_rbd clients. BW is measured at 256K IOs; IOPS is measured at 4K IOs. © 2015 Mellanox Technologies - Mellanox Confidential - 28
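The "fio_rbd clients" on the second server refer to fio's userspace `rbd` ioengine, which talks to the cluster through librbd rather than a kernel-mapped device. A job file for the two measurement points might look like the following sketch; the pool name, image name, and client name are placeholders, not taken from the deck.

```ini
; Hypothetical fio job for the userspace rbd engine on the client server.
; pool/image/client names are assumptions.
[global]
ioengine=rbd         ; librbd-based engine, no kernel mapping needed
clientname=admin     ; assumed cephx user
pool=rbd             ; assumed pool name
rbdname=testimg      ; assumed image name
direct=1
group_reporting=1

[bw-256k]
bs=256k              ; bandwidth measured at 256K IOs, per the slide
rw=randread

[iops-4k]
stonewall            ; run after the bandwidth job completes
bs=4k                ; IOPS measured at 4K IOs, per the slide
rw=randread
```

Running one job at a time (via `stonewall`) keeps the two measurements from contending with each other, matching the deck's separate BW and IOPS figures.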