SOFTWARE DEVELOPER
Storage Setup for
LINSTOR/DRBD/CloudStack
Rene Peinthor
About me
● Professional software developer since 2005
(IntersoftEDV, Altova, LINBIT)
● Since 2014 at LINBIT
● Apache CloudStack Committer since April 2024
● Living 40 km south of Vienna
● Hobbies: Hiking, Climbing, Paragliding
Agenda
1. Linstor Architecture
2. Test results from LVM/ZFS benchmarks
3. Pro/Cons of different storage layouts
4. Theoretical guide on storage pool setup for Linstor
Linstor Architecture
Benchmark Warning
Do your own testing!
LVM vs. ZFS
● LVM (Logical Volume Manager)
○ Volume manager
○ RAID support
○ Snapshots (on thin)
○ Deduplication (VDO)
● ZFS (Zettabyte File System)
○ Volume manager + Filesystem
○ RAID support
○ Data integrity (checksums)
○ Snapshots
○ Compression
○ Deduplication
○ (Encryption)
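A minimal setup sketch for the two stacks, one or the other on a given pair of disks (device, VG and pool names are placeholders; the compression setting is only an example):

  # LVM: thin pool on a volume group (thin enables snapshots; VDO would add dedup)
  pvcreate /dev/nvme0n1 /dev/nvme1n1
  vgcreate vg_nvme /dev/nvme0n1 /dev/nvme1n1
  lvcreate --type thin-pool -l 90%FREE -n thinpool vg_nvme

  # ZFS: the pool is volume manager + filesystem, checksums are built in
  zpool create -o ashift=12 tank /dev/nvme0n1 /dev/nvme1n1
  zfs set compression=zstd-fast tank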
Test Setup
Hardware configuration
● Supermicro X11DPi-N(T)
● 2 x Intel Xeon Silver 4112 CPU@2.6 GHz
● 96 GB RAM
● 2 x Samsung SM963 480 GB
○ Seq Read 1200 MB/s
○ Seq Write 900 MB/s
○ Random Write 23k IOPS
Test configuration
● AlmaLinux 9.5
● Block tests
● nvme format before each run
● FIO (sample invocation below)
○ IODepth: 32
○ Runtime: 5 min
○ Numjobs: 12
○ Tests:
■ 4K random read
■ 4K random write
■ 128K random read
■ 128K random write
■ 8K random read
■ 8K random write
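The FIO settings above roughly correspond to an invocation like the following (the device path is a placeholder; the exact job files used for these runs are not part of the slides):

  # wipe the namespace before each run, as noted above
  nvme format /dev/nvme0n1

  # one of the six jobs: 4K random write against the raw block device
  fio --name=4k-randwrite --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=12 --runtime=300 --time_based \
      --group_reporting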
Results: Throughput (MiB/s)
Configuration 4K READ 4K WRITE 128K READ 128K WRITE 8K READ 8K WRITE
Baremetal 2116 928 3391 892 2699 903
LVM-Thin 943 21 3097 341 1846 41
LVM-T 2 stripes 963 340 3141 785 1879 331
ZFS-T 2.1.6 862 148 2349 752 1707 304
ZFS-T r0 2.1.6 866 192 2551 1189 1736 413
ZFS-T 2.3.1 1333 161 4886 954 2433 292
ZFS-T r0 2.3.1 1199 214 5121 1626 2568 416
ZFS-T zstd-fast 1223 160 4668 954 2388 292
ZFS-T r0 zstd-fast 1285 208 5063 1480 2403 393
Results: IOPS (thousands)
Configuration 4K READ 4K WRITE 128K READ 128K WRITE 8K READ 8K WRITE
Baremetal 542 238.0 27.1 7.1 345 116.0
LVM-Thin 241 5.4 24.8 2.7 236 5.2
LVM-T 2 stripes 247 87.1 25.1 6.3 241 42.3
ZFS-T 2.1.6 221 37.9 18.8 6.0 219 38.9
ZFS-T r0 2.1.6 222 49.2 20.4 9.5 222 52.9
ZFS-T 2.3.1 341 41.2 39.1 7.6 311 37.4
ZFS-T r0 2.3.1 307 54.9 41.0 13.0 329 53.2
ZFS-T zstd-fast 313 41.0 37.3 7.6 306 37.4
ZFS-T r0 zstd-fast 329 53.2 40.5 11.8 308 50.3
Storage Configurations to Consider
● LVM-Thin JBOD (sequential allocation)
● LVM-Thin stripe
● LVM-Thin raid10
● ZFS(Thin) stripe
● ZFS raid10
● ZFS raidZ
Non-redundant storage configurations
(LVM/ZFS)-Thin-Stripe Full
+
Smaller failure domains
Improved performance
Smaller DRBD resyncs
-
Smaller max volume sizes
(LVM/ZFS)-Thin-Stripe Grouped
+
Largest possible volumes
Best (stripe) performance
-
Largest failure domain
Larger DRBD resyncs
LVM-Thin (sequential allocation)
+
Largest possible volumes
-
Largest failure domain
Single disk performance
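For the LVM-Thin variants, the difference lies mostly in how the thin pool's data LV is allocated; a sketch with placeholder device and VG names (a ZFS stripe would simply list the member disks in zpool create):

  pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
  vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # striped variant: data LV striped across all four PVs
  lvcreate --type thin-pool -l 90%FREE --stripes 4 --stripesize 64k -n thinpool vg_data

  # sequential (JBOD) variant instead: no --stripes, extents fill one disk after another
  # lvcreate --type thin-pool -l 90%FREE -n thinpool vg_data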
Redundant storage configurations (raidZ, raid10)
ZFS-Thin raidZ
+
Uninterrupted storage operation on disk failure
Extendable
Less “wasted” disk space (vs. mirroring)
-
Lower performance (parity writes)
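A possible raidZ-backed LINSTOR pool (pool, dataset and node names are placeholders):

  # raidz1 over three disks: one disk of parity, survives a single disk failure
  zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd
  zfs create tank/linstor

  # register the dataset as a thin-provisioned ZFS storage pool in LINSTOR
  linstor storage-pool create zfsthin node1 pool_raidz tank/linstor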
(LVM/ZFS)-Thin raid10
+
Uninterrupted storage operation on disk failure
-
“Wasted” disk space (mirroring overhead)
Slightly lower performance
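The raid10 equivalents, either as a ZFS stripe of mirrors or as an LVM raid10 LV converted into a thin pool (device, VG and pool names are placeholders):

  # ZFS: stripe of mirror pairs (raid10)
  zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

  # LVM: raid10 LV across four PVs, then turn it into a thin pool
  lvcreate --type raid10 --stripes 2 --mirrors 1 -l 90%FREE -n thindata vg_data
  lvconvert --type thin-pool vg_data/thindata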
(LVM/ZFS)-Thin Stripe Group in Linstor - Sample configuration
Node: 18 × 4 TB disks
72 TB total (6 × 12 TB groups)
● Create 6 LVM-thin/ZFS pools with 3 disks each (sketched below)
● Each 12 TB group will become a Linstor Thin storage-pool
● Resource/Volume distribution is handled by Linstor
● Maximum volume size ~12 TB
● 12 TB worst case DRBD resync size
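A sketch of this layout with LVM-thin, assuming 18 data disks /dev/sdb../dev/sds and a LINSTOR satellite named node1 (all names are placeholders):

  disks=(/dev/sd{b..s})                      # 18 x 4 TB data disks
  for i in $(seq 0 5); do
      grp=("${disks[@]:$((i*3)):3}")         # 3 disks per group, ~12 TB each
      pvcreate "${grp[@]}"
      vgcreate vg_grp$i "${grp[@]}"
      lvcreate --type thin-pool -l 90%FREE --stripes 3 -n thinpool vg_grp$i
      # each group becomes its own LINSTOR thin storage pool
      linstor storage-pool create lvmthin node1 pool_grp$i vg_grp$i/thinpool
  done

  # placement across the six pools is then left to LINSTOR, e.g. via a resource group
  linstor resource-group create rg_vms --place-count 2
  linstor volume-group create rg_vms
  linstor resource-group spawn-resources rg_vms vm1-disk 100G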
Questions?
Thank you
www.linbit.com
