Red Hat Gluster Storage performance
Manoj Pillai and Ben England
Performance Engineering
June 25, 2015
New or improved features (in last year)
Erasure Coding
Snapshots
NFS-Ganesha
RDMA
SSD support
Erasure Coding
“distributed software RAID”
● Alternative to RAID controllers or 3-way replication
● Cuts storage cost/TB, but computationally expensive
● Better Sequential Write performance for some workloads
● Roughly the same sequential read performance (depends on the number of mountpoints)
● In RHGS 3.1, avoid erasure coding for pure small-file or pure random-I/O workloads
● Example use cases are archival, video capture
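For illustration, an EC 4+2 dispersed volume like the one in the next slide could be created with the gluster CLI roughly as follows (a minimal sketch; hostnames and brick paths are hypothetical):

# 6 bricks per stripe set: 4 data fragments + 2 redundancy fragments
gluster volume create ecvol disperse 6 redundancy 2 \
    server{1..6}:/bricks/brick1/ecvol
gluster volume start ecvol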
Disperse translator spreads EC stripes in file across hosts
Example: EC4+2
Diagram: each stripe of a file (Stripe 1 … Stripe N) is encoded into fragments E[1] … E[6], one fragment per brick on each of Server 1 … Server 6
EC large-file perf summary (chart; 2-replica and 3-replica write limits shown for reference)
A tale of two mountpoints (per server)
Chart: stacked CPU utilization by CPU core (12 cores), 1 mountpoint per server vs. 4 mountpoints per server; a hot glusterfs thread limits a single mountpoint's throughput
A tale of two mountpoints
why the difference in CPU utilization?
SSDs as bricks
● Multi-thread-epoll = multiple threads working in each client mountpoint and server brick
● Can be helpful for SSDs or any other high-IOPS workload
● glusterfs-3.7 on a single 2-socket Sandy Bridge server with 1 SAS SSD (SanDisk Lightning)
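Multi-thread-epoll is controlled through the event-threads volume options; a minimal tuning sketch (volume name and thread counts are only examples):

# more epoll worker threads on the client mountpoint and on each brick process
gluster volume set ssdvol client.event-threads 4
gluster volume set ssdvol server.event-threads 4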
RDMA enhancements
● Gluster has had RDMA in some form for a long time
● Gluster-3.6 added librdmacm support – broadens supported hardware
● As of Gluster 3.7, memory pre-registration reduces latency
JBOD Support
● RHGS has traditionally used H/W RAID for brick storage, with replica-2 protection.
● RHGS 3.0.4 has JBOD+replica-3 support.
● H/W RAID problems:
– Proprietary interfaces for managing h/w RAID
– Performance impact with many concurrent streams
● JBOD+replica-3 shortcomings:
– Each file on one disk, low throughput for serial workloads
– Large number of bricks in the volume; problematic for some workloads
● JBOD+replica-3 expands the set of workloads that RHGS can handle well
– Best for highly-concurrent, large-file read workloads
● For large-file workloads, JBOD+replica-3 outperforms RAID-6+replica-2 at higher thread counts
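A JBOD-style replica-3 volume uses one brick per disk; a sketch with hypothetical hostnames and paths:

# 3-way replication across servers, each brick backed by a whole disk (no H/W RAID)
gluster volume create jbodvol replica 3 \
    server1:/bricks/d1/brick server2:/bricks/d1/brick server3:/bricks/d1/brick \
    server1:/bricks/d2/brick server2:/bricks/d2/brick server3:/bricks/d2/brick
gluster volume start jbodvol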
NFS-Ganesha
● NFSv3 access to RHGS volumes has so far been provided by the gluster native NFS server
● NFS-Ganesha integration with FSAL-gluster expands supported access protocols
– NFSv3 – has been in Technology Preview
– NFSv4, NFSv4.1, pNFS
● Access path uses libgfapi, avoids FUSE
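On the client side the newer protocol versions are selected at mount time; a sketch, assuming the NFS-Ganesha export's pseudo path matches the volume name (server and volume names are hypothetical):

# NFSv3, as with the gluster native NFS server
mount -t nfs -o vers=3 ganesha-server:/myvol /mnt/myvol
# NFSv4.1 through NFS-Ganesha
mount -t nfs -o vers=4.1 ganesha-server:/myvol /mnt/myvol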
Ganesha-Gluster vs Ganesha-VFS vs Kernel-NFS
Snapshots
● Based on device-mapper thin-provisioned snapshots
– Simplified space management for snapshots
– Allow large number of snapshots without performance degradation
● Required change from traditional LV to thin LV for RHGS brick storage
– Performance impact? Typically 10-15% for large file sequential read as a result of
fragmentation
● Snapshot performance impact
– Mainly due to writes to “shared” blocks, copy-on-write triggered on first write to a
region after snapshot
– Independent of number of snapshots in existence
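A rough sketch of the thin-LV brick layout and a snapshot command (volume group, pool, and volume names are hypothetical; sizes are only examples):

# thin pool and thin LV for one brick, instead of a traditional LV
lvcreate -L 1T --thinpool brickpool rhgs_vg
lvcreate -V 1T --thin rhgs_vg/brickpool -n brick1
mkfs.xfs /dev/rhgs_vg/brick1
# snapshot of a gluster volume built on thin-LV bricks
gluster snapshot create snap1 myvol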
Improved rebalancing
● Rebalancing lets you add/remove hardware from an online Gluster volume
● Important for scalability, redeployment of hardware resources
● Existing algorithm had shortcomings
– Did not work well for small files
– Was not parallel enough
– No throttle
● New algorithm solves these problems
– Executes in parallel on all bricks
– Gives you control over number of concurrent I/O requests/brick
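Rebalance is started per volume after adding bricks; the throttle is exposed as a volume option (the option name and values below are from the glusterfs-3.7 timeframe and should be treated as an assumption; volume and brick names are hypothetical):

gluster volume add-brick myvol server7:/bricks/d1/brick
gluster volume rebalance myvol start
gluster volume rebalance myvol status
# throttle concurrent file migrations per brick: lazy | normal | aggressive
gluster volume set myvol cluster.rebal-throttle aggressive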
Best practices for sizing, install, and administration
Configurations to avoid with Gluster (today)
● Super-large RAID volumes (e.g. RAID60)
– example: RAID60 with 2 striped 12-disk RAID6 components
– a single glusterfsd process serving a large number of disks
– recommendation: use separate RAID LUNs instead
● JBOD configuration with very large server count
– Gluster directories are still spread across every brick
– with JBOD, that means every disk!
– 64 servers x 36 disks/server = ~2300 bricks
– recommendation: use RAID6 bricks of 12 disks each
– even then, 64 x 3 = 192 bricks, still not ideal for anything but large files
Test methodology
● How well does RHGS work for your use-case?
● Some benchmarking tools:
– Use tools with a distributed mode, so multiple clients can put load on servers
– Iozone (large-file sequential workloads), smallfile benchmark, fio (better than iozone for random I/O testing)
● Beyond micro-benchmarking
– SPECsfs2014 provides approximation to some real-life workloads
– Being used internally
– Requires license
● SPECsfs2014 provides mixed-workload generation in different flavors
– VDA (video data acquisition), VDI (virtual desktop infrastructure), SWBUILD (software build)
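As an example of micro-benchmarking random I/O, a simple fio run against a glusterfs mountpoint might look like this (mountpoint, sizes, and job counts are hypothetical; run it from several clients to really load the servers):

# 4 KiB random reads, 4 jobs, queue depth 32, direct I/O, 60-second run
fio --name=randread --directory=/mnt/glustervol --size=1g \
    --rw=randread --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting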
Application filesystem usage patterns to avoid with Gluster
● Single-threaded application – one-file-at-a-time processing
– uses only a small fraction (1 DHT subvolume) of the Gluster hardware
● Tiny files – cheap on local filesystems, expensive on distributed filesystems
● Small directories
– creation/deletion/read/rename/metadata-change cost x brick count!
– a large file:directory ratio is not bad as of glusterfs-3.7
● Using repeated directory scanning to synchronize processes on different clients
– Gluster 3.6 (RHS 3.0.4) does not yet invalidate metadata cache on clients
Initial Data ingest
● Problem: applications often have previous data, must load Gluster volume
● Typical methods are excruciatingly slow (see lower right!)
– Example: single mountpoint, rsync -ravu
● Solutions (a sketch follows this list):
– for large files on glusterfs, use the largest transfer size
– copy multiple subdirectories in parallel
– use multiple mountpoints per client
– use multiple clients
– use the mount option "gid-timeout=5"
– for glusterfs, increase client.event-threads to 8
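A minimal sketch of the parallel-copy approach (server, volume, and paths are hypothetical):

# mount with gid-timeout and raise client.event-threads before the copy
mount -t glusterfs -o gid-timeout=5 server1:/myvol /mnt/myvol
gluster volume set myvol client.event-threads 8
# copy top-level subdirectories in parallel, 8 rsync processes at a time
cd /data/source
ls -d */ | xargs -P 8 -I{} rsync -a {} /mnt/myvol/{}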
SSDs as bricks
● Avoid use of storage controller WB cache
● separate volume for SSD
● Check “top -H”, look for hot glusterfsd threads on server with SSDs
● Gluster tuning for SSDs: server.event-threads > 2
● SAS SSD:
– Sequential I/O: relatively low sequential write transfer rate
– Random I/O: avoids seek overhead, good IOPS
– Scaling: more SAS slots => greater TB/host, high aggregate IOPS
● PCI SSD:
– Sequential I/O: much higher transfer rate thanks to a shorter data path
– Random I/O: lowest latency yields the highest IOPS
– Scaling: more expensive, aggregate IOPS limited by PCI slots
High-speed networking > 10 Gbps
● RDMA isn't needed for a 10-Gbps network; it pays off at >= 40 Gbps
● IPoIB – InfiniBand alternative to RDMA (settings sketched after this list)
– jumbo frames (MTU=65520) – all switches must support it
– use “connected mode”
– TCP will get you to about ½ – ¾ of 40-Gbps line speed
● 10-GbE bonding – see the gluster.org how-to
– default bonding mode 0 – don't use it
– best modes are 2 (balance-xor), 4 (802.3ad), 6 (balance-alb)
● FUSE (glusterfs mountpoints) –
– No 40-Gbps line speed from one mountpoint
– Servers don't run FUSE => best with multiple clients/server
● NFS+SMB servers use libgfapi, no FUSE overhead
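A sketch of the IPoIB and bonding settings above (interface names are hypothetical; make them persistent in ifcfg files rather than relying on these one-off commands):

# IPoIB: connected mode plus the large MTU
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
# 10-GbE bonding, mode 4 (802.3ad/LACP), in /etc/sysconfig/network-scripts/ifcfg-bond0:
#   BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"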
Networking – Putting it all together
Features coming soon
To a Gluster volume near you
(i.e. glusterfs-3.7 and later)
Lookup-unhashed fix
Bitrot detection – in glusterfs-3.7 = RHS 3.1
● Provides greater durability for Gluster data (JBOD)
● Protects against silent loss of data
● Requires signature on replica recording original checksum
● Requires periodic scan to verify data still matches checksum
● Need more data on cost of the scan
● TBS – DIAGRAMS, ANY DATA?
A tale of two mountpoints - sequential write performance
And the result... drum roll....
Balancing storage and networking performance
● Based on workload
– Transactional or small-file workloads
● don't need > 10 Gbps
● Need lots of IOPS (e.g. SSD)
– Large-file sequential workloads (e.g. video capture)
● Don't need so many IOPS
● Need network bandwidth
– When in doubt, add more networking, cost < storage
Cache tiering
● Goal: performance of SSD with cost/TB of spinning rust
● Savings from Erasure Coding can pay for SSD!
● Definition: a Gluster tiered volume consists of two subvolumes:
– "hot" tier sub-volume: low capacity, high performance
– "cold" tier sub-volume: high capacity, low performance
– promotion policy: migrates data from the cold tier to the hot tier
– demotion policy: migrates data from the hot tier to the cold tier
– new files are written to the hot tier initially, unless the hot tier is full
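In the glusterfs-3.7 timeframe the hot tier is attached to an existing volume with attach-tier; a sketch only, since the exact CLI may differ by release (volume and brick names are hypothetical):

# attach a replicated pair of SSD bricks as the hot tier of an existing volume
gluster volume attach-tier coldvol replica 2 \
    server1:/bricks/ssd1/hot server2:/bricks/ssd1/hot
# later, to drain and remove the hot tier
gluster volume detach-tier coldvol start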
Perf enhancements
(unless otherwise stated: UNDER CONSIDERATION, NOT IMPLEMENTED)
● Lookup-unhashed=auto in Glusterfs 3.7 today, in RHGS 3.1 soon
– Eliminates LOOKUP per brick during file creation, etc.
● JBOD support – Glusterfs 4.0 – DHT V2 intended to eliminate spread of
directories across all bricks
● Sharding – spread file across more bricks (like Ceph, HDFS)
● Erasure Coding – Intel instruction support, symmetric encoding, bigger chunk
size
● Parallel utilities – examples are parallel-untar.py and parallel-rm-rf.py
● Better client-side caching – cache invalidation starting in glusterfs-3.7
● YOU CAN HELP DECIDE! Express interest and opinion on this