Comparison of FOSS
distributed filesystems
Marian “HackMan” Marinov, SiteGround
Who am I?
• Chief System Architect at SiteGround
• Sysadmin since '96
• Teaching Network Security & Linux System
Administration at Sofia University
• Organizing the biggest FOSS conference in Bulgaria
- OpenFest
• We are a hosting provider
• We needed a shared filesystem between VMs and
containers
• Directories of different sizes with relatively small
files
• Sometimes millions of files in a single dir
Why were we considering shared FS?
• We started with DRBD + OCFS2
• I did a small test of cLVM + ATAoE
• We tried DRBD + OCFS2 for MySQL clusters
• We then switched to GlusterFS
• Later moved to CephFS
• and finally settled on good old NFS :)
but with the storage on Ceph RBD
The story of our storage endeavors
• CephFS
• GlusterFS
• MooseFS
• OrangeFS
• BeeGFS - no stats
I haven't played with Lustre or AFS
And because of our history with OCFS2 and GFS2, I skipped them
Which filesystems were tested
Test cluster
• CPU i7-3770 @ 3.40GHz / 16G DDR3 1333MHz
• SSD SAMSUNG PM863
• 10Gbps Intel x520
• 40Gbps QLogic QDR InfiniBand
• IBM Blade 10Gbps switch
• Mellanox InfiniScale-IV switch
What did I test?
• Setup
• Fault tolerance
• Resource requirements (client & server)
• Capabilities (Redundancy, RDMA, caching)
• FS performance (creation, deletion, random
read, random write)
Complexity of the setup
• CephFS - requires a lot of knowledge and time
• GlusterFS - relatively easy to install and set up
• OrangeFS - extremely easy to do a basic setup
• MooseFS - extremely easy to do a basic setup
• DRBD + NFS - very complex setup if you want HA
• BeeGFS -
CephFS
• Fault tolerant by design...
– until all MDSes start crashing
● a single directory may crash all MDSes
● MDS behind on trimming
● Client failing to respond to cache pressure
● Ceph has redundancy but lacks RDMA support
CephFS
• Uses a lot of memory on the MDS nodes
• Not suitable to run on the same machines as the
compute nodes
• A small number of nodes (3-5) is a no-go
CephFS tuning
• Placement Groups (PGs)
• mds_log_max_expiring &
mds_log_max_segments fix the problem with
trimming
• When you have a lot of inodes, increasing
mds_cache_size works but increases the
memory usage
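For illustration only, these options go into the [mds] section of /etc/ceph/ceph.conf; the values below are examples, not recommendations, and have to be sized against the memory you can spare on the MDS nodes:
[mds]
mds_cache_size = 500000       # example value; raises the inode cache (and MDS memory use)
mds_log_max_segments = 60     # example value; lets the journal grow before trimming kicks in
mds_log_max_expiring = 40     # example value; expires more journal segments in parallel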
CephFS fixes
• Client XXX failing to respond to cache pressure
● This issue is caused by the current working set size of the
inodes. The fix is to set mds_cache_size in
/etc/ceph/ceph.conf accordingly.
ceph daemon mds.$(hostname) perf dump | grep inode
"inode_max": 100000, - max cache size value
"inodes": 109488, - currently used
CephFS fixes
• Client XXX failing to respond to cache pressure
● Another “FIX” for the same issue is to log in to the current active
MDS and run:
● This way the currently active MDS will fail over to another server
and the inode usage will drop
/etc/init.d/ceph restart mds
GlusterFS
• Fault tolerance
– in case of network hiccups, sometimes the mount may
become inaccessible and require a manual remount
– in case one storage node dies, you have to manually
remount if you don't have a local copy of the data
• Capabilities - local caching, RDMA and data
redundancy are supported. Offers different
ways of doing data redundancy and sharding
GlusterFS
• High CPU usage for heavily used file systems
– it required a copy of the data on all nodes
– the FUSE driver has a limit on the number of
small operations that it can perform
• Unfortunately, this limit was very easy for our
customers to reach
GlusterFS Tuning
● use distributed and replicated volumes
instead of only replicated ones (full command sketched below)
– gluster volume create VOL replica 2 stripe 2 ....
● set up the performance parameters
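As an illustration (server names and brick paths are placeholders, not from the slides), a distributed-replicated volume over four bricks can be created like this; with four bricks and replica 2, Gluster distributes files across two replica pairs:
gluster volume create VOL replica 2 \
  server1:/bricks/vol1 server2:/bricks/vol1 \
  server3:/bricks/vol1 server4:/bricks/vol1
gluster volume start VOL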
GlusterFS Tuning
volume set VOLNAME performance.cache-refresh-timeout 3
volume set VOLNAME performance.io-thread-count 8
volume set VOLNAME performance.cache-size 256MB
volume set VOLNAME performance.cache-max-file-size 300KB
volume set VOLNAME performance.cache-min-file-size 0B
volume set VOLNAME performance.readdir-ahead on
volume set VOLNAME performance.write-behind-window-size 100KB
GlusterFS Tuning
volume set VOLNAME features.lock-heal on
volume set VOLNAME cluster.self-heal-daemon enable
volume set VOLNAME cluster.metadata-self-heal on
volume set VOLNAME cluster.consistent-metadata on
volume set VOLNAME cluster.stripe-block-size 100KB
volume set VOLNAME nfs.disable on
FUSE tuning
FUSE GlusterFS
entry_timeout entry-timeout
negative_timeout negative-timeout
attr_timeout attribute-timeout
mount everything with “-o intr” :)
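A hypothetical mount command combining these options (the values, server name and mount point are illustrative; check your GlusterFS client version for the exact options it accepts):
mount -t glusterfs \
  -o entry-timeout=0.5,negative-timeout=1,attribute-timeout=0.5,intr \
  server1:/VOL /mnt/vol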
MooseFS
• Reliability with multiple masters
• Multiple metadata servers
• Multiple chunkservers
• Flexible replication, configured per directory (example below)
• A lot of stats and a web interface
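For example, the per-directory replication goal can be inspected and changed with the MooseFS goal tools (the path below is a placeholder):
mfsgetgoal /mnt/mfs/clients        # show how many copies are kept for this dir
mfssetgoal -r 3 /mnt/mfs/clients   # keep 3 copies of everything under it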
BeeGFS
• Metadata and Storage nodes are replicated by
design
• FUSE based
OrangeFS
• No native redundancy; it uses corosync/pacemaker
for HA
• Adding new storage servers requires stopping
the whole cluster
• It was very easy to break it
Ceph RBD + NFS
• the main tuning goes into NFS, using
cachefilesd
• it is very important to have a cache for both
accessed and missing files
• enable FS-Cache with the "-o fsc" mount option
(sketched below)
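A minimal sketch of enabling client-side caching; the cache directory, thresholds, export and mount point are placeholders, not our production values:
# /etc/cachefilesd.conf
dir /var/cache/fscache
brun 10%
bcull 7%
bstop 3%

systemctl enable --now cachefilesd
mount -t nfs -o vers=4,fsc nfs-server:/export /mnt/nfs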
Ceph RBD + NFS
• verify that your mounts have caching enabled:
# cat /proc/fs/nfsfs/volumes
NV SERVER PORT DEV FSID FSC
v4 aaaaaaaa 801 0:35 17454aa0fddaa6a5:96d7706699eb981b yes
v4 aaaaaaaa 801 0:374 7d581aa468faac13:92e653953087a8a4 yes
Sysctl tuning
fs.nfs.idmap_cache_timeout = 600
fs.nfs.nfs_congestion_kb = 127360
fs.nfs.nfs_mountpoint_timeout = 500
fs.nfs.nlm_grace_period = 10
fs.nfs.nlm_timeout = 10
Sysctl tuning
# Tune the network card polling
net.core.netdev_budget=900
net.core.netdev_budget_usecs=1000
net.core.netdev_max_backlog=300000
Sysctl tuning
# Increase network stack memory
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.rmem_default=16777216
net.core.wmem_default=16777216
net.core.optmem_max=16777216
Sysctl tuning
# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
Sysctl tuning
# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_low_latency = 1
# scalable or bbr
net.ipv4.tcp_congestion_control = scalable
Sysctl tuning
# Increase system IP port range to allow for more
concurrent connections
net.ipv4.ip_local_port_range = 1024 65000
# OS tuning
vm.swappiness = 0
# Increase system file descriptor limit
fs.file-max = 65535
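To make such settings persistent, they can go into a sysctl drop-in file and be applied without a reboot (the file name is only an example):
cat > /etc/sysctl.d/90-storage-tuning.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
vm.swappiness = 0
EOF
sysctl --system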
Stats
• For small files, network BW is not a problem
• However, network latency is a killer :(
Stats
Creation of 10000 empty files:
Local SSD 7.030s
MooseFS 12.451s
NFS + Ceph RBD 16.947s
OrangeFS 40.574s
Gluster distributed 1m48.904s
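The exact benchmark commands are not shown in the slides; a file-creation test of this shape can be reproduced with something along these lines (mount point is a placeholder):
cd /mnt/target-fs
time bash -c 'for i in $(seq 1 10000); do : > file_$i; done'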
Stats
MTU Congestion result
MooseFS 10G
1500 Scalable 10.411s
9000 Scalable 10.475s
9000 BBR 10.574s
1500 BBR 10.710s
GlusterFS 10G
1500 BBR 48.143s
1500 Scalable 48.292s
9000 BBR 48.747s
9000 Scalable 48.865s
Stats
MTU Congestion result
MooseFS IPoIB
1500 BBR 9.484s
1500 Scalable 9.675s
GlusterFS IPoIB
9000 BBR 40.598s
1500 BBR 40.784s
1500 Scalable 41.448s
9000 Scalable 41.803s
GlusterFS RDMA
1500 Scalable 31.731s
1500 BBR 31.768s
Stats
MTU Congestion result
MooseFS 10G
1500 Scalable 3m46.501s
1500 BBR 3m47.066s
9000 Scalable 3m48.129s
9000 BBR 3m58.068s
GlusterFS 10G
1500 BBR 7m56.144s
1500 Scalable 7m57.663s
9000 BBR 7m56.607s
9000 Scalable 7m53.828s
Creation of 10000 random size files:
Stats
MTU Congestion result
MooseFS IPoIB
1500 BBR 3m48.254s
1500 Scalable 3m49.467s
GlusterFS RDMA
1500 Scalable 8m52.168s
Creation of 10000 random size files:
Thank you!
Marian HackMan Marinov
mm@siteground.com
Thank you!
Questions?
Marian HackMan Marinov
mm@siteground.com