Red Hat Gluster Storage performance
Manoj Pillai and Ben England
Performance Engineering
June 25, 2015
New or improved features (in last year)
Erasure Coding
Snapshots
NFS-Ganesha
RDMA
SSD support
Erasure Coding
“distributed software RAID”
● Alternative to RAID controllers or 3-way replication
● Cuts storage cost/TB, but computationally expensive
● Better Sequential Write performance for some workloads
● Roughly same sequential Read performance (depends on mountpoints)
● In RHGS 3.1 avoid Erasure Coding for pure-small-file or pure random I/O workloads
● Example use cases are archival, video capture
Disperse translator spreads EC stripes in file across hosts
Example: EC4+2
[diagram: each stripe 1..N of a file is encoded into fragments E[1]..E[6]; fragment i of every stripe lands on Brick i, one brick per server across Servers 1-6]
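For reference, a dispersed volume with this 4+2 layout can be created from the Gluster CLI roughly as follows (a minimal sketch; hostnames, brick paths and the volume name are placeholders, not from the slides):

    # create a 4+2 erasure-coded (dispersed) volume across 6 servers
    gluster volume create ecvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1/ecvol
    gluster volume start ecvol
    gluster volume info ecvol    # reports Type: Disperse, 1 x (4 + 2) = 6 bricks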
EC large-file perf summary
[chart: EC large-file throughput, with reference lines for the 2-replica and 3-replica write limits]
A tale of two mountpoints (per server)
[charts: stacked CPU utilization by core (12 cores); with 1 mountpoint per server a single hot glusterfs thread limits throughput, with 4 mountpoints per server the load spreads across cores]
A tale of two mountpoints
why the difference in CPU utilization?
SSDs as bricks
● Multi-thread-epoll = multiple threads working in each client mountpoint and server brick
● Can be helpful for SSDs or any other high-IOPS workload
● glusterfs-3.7 on a single 2-socket Sandy Bridge server using 1 SAS SSD (SanDisk Lightning)
RDMA enhancements
● Gluster has had RDMA in some form for a long time
● Gluster-3.6 added librdmacm support – broadens supported hardware
● As of Gluster 3.7, memory pre-registration reduces latency
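As a rough sketch of how the RDMA transport is selected (volume and host names are placeholders; requires RDMA-capable NICs and the librdmacm stack):

    # create a volume that carries both TCP and RDMA transports
    gluster volume create rdmavol transport tcp,rdma \
        server1:/bricks/brick1/rdmavol server2:/bricks/brick1/rdmavol
    gluster volume start rdmavol
    # on a client, mount over the RDMA transport
    mount -t glusterfs -o transport=rdma server1:/rdmavol /mnt/rdmavol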
JBOD Support
● RHGS has traditionally used H/W RAID for brick storage, with replica-2 protection.
● RHGS 3.0.4 adds JBOD+replica-3 support (see the volume-create sketch after this list)
● H/W RAID problems:
– Proprietary interfaces for managing h/w RAID
– Performance impact with many concurrent streams
● JBOD+replica-3 shortcomings:
– Each file on one disk, low throughput for serial workloads
– Large number of bricks in the volume; problematic for some workloads
● JBOD+replica-3 expands the set of workloads that RHGS can handle well
– Best for highly-concurrent, large-file read workloads
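A minimal sketch of a JBOD-style volume, assuming one brick per disk and placeholder host/brick names (each consecutive group of 3 bricks forms one replica set):

    gluster volume create jbodvol replica 3 \
        server1:/bricks/disk1/jbodvol server2:/bricks/disk1/jbodvol server3:/bricks/disk1/jbodvol \
        server1:/bricks/disk2/jbodvol server2:/bricks/disk2/jbodvol server3:/bricks/disk2/jbodvol
    gluster volume start jbodvol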
● JBOD+replica-3 outperforms RAID-6+replica-2 at higher thread counts
● For large-file workloads
NFS-Ganesha
● Until now, NFSv3 access to RHGS volumes has been provided by the Gluster native NFS server
● NFS-Ganesha integration with FSAL-gluster expands supported access protocols
– NFSv3 – has been in Technology Preview
– NFSv4, NFSv4.1, pNFS
● Access path uses libgfapi, avoids FUSE
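A minimal sketch of exporting a Gluster volume through NFS-Ganesha's Gluster FSAL (volume and host names are placeholders; exact option names vary by Ganesha version):

    cat >> /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
        Export_Id = 1;
        Path = "/myvol";
        Pseudo = "/myvol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;         # access via libgfapi, no FUSE in the path
            Hostname = "server1";
            Volume = "myvol";
        }
    }
    EOF
    systemctl restart nfs-ganesha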
Ganesha-Gluster vs Ganesha-VFS vs Kernel-NFS
Snapshots
● Based on device-mapper thin-provisioned snapshots
– Simplified space management for snapshots
– Allow large number of snapshots without performance degradation
● Required change from traditional LV to thin LV for RHGS brick storage
– Performance impact? Typically 10-15% for large-file sequential read, as a result of fragmentation
● Snapshot performance impact
– Mainly due to writes to “shared” blocks: copy-on-write is triggered on the first write to a region after a snapshot
– Independent of number of snapshots in existence
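A minimal sketch of the thin-LV brick layout and a volume snapshot (VG, sizes and names are placeholders):

    # thin pool plus a thin LV to hold one brick
    lvcreate -L 1T --thinpool brickpool rhgs_vg
    lvcreate -V 1T --thin -n brick1 rhgs_vg/brickpool
    mkfs.xfs -i size=512 /dev/rhgs_vg/brick1
    # ... brick mounted and used in volume "myvol" ...
    gluster snapshot create snap1 myvol
    gluster snapshot list myvol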
Improved rebalancing
● Rebalancing lets you add/remove hardware from an online Gluster volume
● Important for scalability, redeployment of hardware resources
● Existing algorithm had shortcomings
– Did not work well for small files
– Was not parallel enough
– No throttle
● New algorithm solves these problems
– Executes in parallel on all bricks
– Gives you control over number of concurrent I/O requests/brick
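A minimal sketch of expanding a volume and rebalancing it (names are placeholders):

    gluster volume add-brick myvol server7:/bricks/brick1/myvol server8:/bricks/brick1/myvol
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status    # runs in parallel on all bricks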
Best practices for sizing, install, administration
Configurations to avoid with Gluster (today)
● Super-large RAID volumes (e.g. RAID60)
– example: RAID60 with 2 striped RAID6 12-disk components
– a single glusterfsd process serving a large number of disks
– recommend separate RAID LUNs instead
● JBOD configuration with very large server count
– Gluster directories are still spread across every brick
– with JBOD, that means every disk!
– 64 servers x 36 disks/server = ~2300 bricks
– recommendation: use RAID6 bricks of 12 disks each
– even then, 64 x 3 = 192 bricks, still not ideal for anything but large files
Test methodology
● How well does RHGS work for your use-case?
● Some benchmarking tools:
– Use tools with a distributed mode, so multiple clients can put load on servers
– Iozone (large-file sequential workloads), smallfile benchmark, fio (better than iozone for random I/O testing)
● Beyond micro-benchmarking
– SPECsfs2014 provides approximation to some real-life workloads
– Being used internally
– Requires license
● SPECsfs2014 provides mixed-workload generation in different flavors
– VDA (video data acquisition), VDI (virtual desktop infrastructure), SWBUILD (software build)
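Illustrative invocations of the micro-benchmarks above (hosts, paths and sizes are placeholders; consult each tool's documentation for exact options):

    # iozone in distributed (cluster) mode, large-file sequential write then read
    iozone -+m clients.ioz -t 16 -s 4g -r 1024k -i 0 -i 1 -c -e
    # smallfile benchmark driving several clients against a glusterfs mountpoint
    python smallfile_cli.py --top /mnt/myvol/smf --host-set c1,c2,c3,c4 \
        --threads 8 --files 20000 --file-size 64 --operation create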
Application filesystem usage patterns to avoid with Gluster
● Single-threaded application – one-file-at-a-time processing
– uses only a small fraction (1 DHT subvolume) of the Gluster hardware
● Tiny files – cheap on local filesystems, expensive on distributed filesystems
● Small directories
– creation/deletion/read/rename/metadata-change cost x brick count!
– a large file:directory ratio is not bad as of glusterfs-3.7
● Using repeated directory scanning to synchronize processes on different clients
– Gluster 3.6 (RHS 3.0.4) does not yet invalidate the metadata cache on clients
Initial Data ingest
● Problem: applications often have pre-existing data that must be loaded into the Gluster volume
● Typical methods are excruciatingly slow
– Example: single mountpoint, rsync -ravu
● Solutions (see the sketch below):
– for large files on glusterfs, use the largest transfer size
– copy multiple subdirectories in parallel
– multiple mountpoints per client
– multiple clients
– mount option "gid-timeout=5"
– for glusterfs, increase client.event-threads to 8
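A minimal sketch combining several of the solutions above (hosts, paths and the volume name are placeholders):

    # FUSE mount with the gid-timeout option, plus more client event threads
    mount -t glusterfs -o gid-timeout=5 server1:/myvol /mnt/myvol
    gluster volume set myvol client.event-threads 8
    # copy top-level subdirectories in parallel instead of one serial rsync
    cd /data/source
    for d in */ ; do
        rsync -ra "$d" /mnt/myvol/"$d" &
    done
    wait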
SSDs as bricks
● Avoid use of storage controller WB cache
● Use a separate Gluster volume for SSD bricks
● Check “top -H”, look for hot glusterfsd threads on server with SSDs
● Gluster tuning for SSDs: server.event-threads > 2
● SAS SSD:
– Sequential I/O: relatively low sequential write transfer rate
– Random I/O: avoids seek overhead, good IOPS
– Scaling: more SAS slots => greater TB/host, high aggregate IOPS
● PCI:
– Sequential I/O: much higher transfer rate since shorter data path
– Random I/O: lowest latency yields highest IOPS
– Scaling: more expensive, aggregate IOPS limited by PCI slots
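A minimal sketch of the SSD-related tuning and the thread check mentioned above (volume name is a placeholder):

    gluster volume set myvol server.event-threads 4
    gluster volume set myvol client.event-threads 4
    # look for individual glusterfsd threads pinned near 100% CPU
    top -H -p $(pgrep -d, glusterfsd)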
High-speed networking > 10 Gbps
● RDMA isn't needed for a 10-Gbps network; it pays off at >= 40 Gbps
● IPoIB – the InfiniBand alternative to RDMA
– Jumbo Frames (MTU=65520) – all switches must support it
– “connected mode”
– TCP will get you to about ½ – ¾ of 40-Gbps line speed
● 10-GbE bonding – see the gluster.org how-to (and the sketch below)
– default bonding mode 0 – don't use it
– best modes are 2 (balance-xor), 4 (802.3ad), 6 (balance-alb)
● FUSE (glusterfs mountpoints)
– No 40-Gbps line speed from one mountpoint
– Servers don't run FUSE => best with multiple clients per server
● NFS+SMB servers use libgfapi, no FUSE overhead
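A rough sketch of the bonding and IPoIB settings above, assuming RHEL-style tooling (interface names are placeholders; the switch must be configured to match the bonding mode):

    # 10-GbE bond in 802.3ad (LACP) mode via NetworkManager
    nmcli con add type bond ifname bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli con add type bond-slave ifname ens1f0 master bond0
    nmcli con add type bond-slave ifname ens1f1 master bond0
    # IPoIB connected mode with a large MTU (InfiniBand without RDMA)
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520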
Networking – Putting it all together
Features coming soon
To a Gluster volume near you
(i.e. glusterfs-3.7 and later)
Lookup-unhashed fix
Bitrot detection – in glusterfs-3.7 = RHS 3.1
● Provides greater durability for Gluster data (JBOD)
● Protects against silent loss of data
● Requires signature on replica recording original checksum
● Requires periodic scan to verify data still matches checksum
● Need more data on cost of the scan
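A minimal sketch of enabling bitrot detection on a volume (name is a placeholder; scrub option values per the glusterfs-3.7 CLI):

    gluster volume bitrot myvol enable
    gluster volume bitrot myvol scrub-throttle lazy       # limit scan impact
    gluster volume bitrot myvol scrub-frequency weekly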
A tale of two mountpoints - sequential write performance
And the result... drum roll....
Balancing storage and networking performance
● Based on workload
– Transactional or small-file workloads
● don't need > 10 Gbps
● Need lots of IOPS (e.g. SSD)
– Large-file sequential workloads (e.g. video capture)
● Don't need so many IOPS
● Need network bandwidth
– When in doubt, add more networking; it costs less than storage
Cache tiering
● Goal: performance of SSD with cost/TB of spinning rust
● Savings from Erasure Coding can pay for SSD!
● Definition: a Gluster tiered volume consists of two sub-volumes:
– “hot” tier sub-volume: low capacity, high performance
– “cold” tier sub-volume: high capacity, low performance
– promotion policy: migrates data from the cold tier to the hot tier
– demotion policy: migrates data from the hot tier to the cold tier
– new files are written to the hot tier initially, unless the hot tier is full
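A rough sketch of attaching an SSD-backed hot tier, using the glusterfs-3.7-era CLI (names are placeholders; the tiering command syntax changed in later releases):

    gluster volume attach-tier myvol replica 2 \
        ssdserver1:/bricks/ssd1/myvol ssdserver2:/bricks/ssd1/myvol
    # later, to demote data back and remove the hot tier:
    gluster volume detach-tier myvol start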
Perf enhancements
Unless otherwise stated, UNDER CONSIDERATION, NOT IMPLEMENTED
● Lookup-unhashed=auto – in glusterfs-3.7 today, in RHGS 3.1 soon (see the sketch after this list)
– Eliminates LOOKUP per brick during file creation, etc.
● JBOD support – Glusterfs 4.0 – DHT V2 intended to eliminate spread of directories across all bricks
● Sharding – spread file across more bricks (like Ceph, HDFS)
● Erasure Coding – Intel instruction support, symmetric encoding, bigger chunk size
● Parallel utilities – examples are parallel-untar.py and parallel-rm-rf.py
● Better client-side caching – cache invalidation starting in glusterfs-3.7
● YOU CAN HELP DECIDE! Express interest and opinion on this
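For the first item above, the behavior is exposed as a volume option; a minimal sketch (volume name is a placeholder; verify option names with "gluster volume set help"):

    gluster volume set myvol cluster.lookup-unhashed auto
    # related newer knob, where available:
    gluster volume set myvol cluster.lookup-optimize on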
