2. Hybrid Storage Test Results
I. Flashcache vs SAS disks
1. Performance metrics compared:
1) IOPS
2) Latency
3) Throughput
2. Detailed test results:
Flashcache architecture
8K random read
8K random write
8K random read/write (20% write)
1 MB sequential file read/write
II. Performance comparison of disk types
Test tools:
Orion
fio
Monitoring tools:
iostat
flashstat (from the Flashcache utils)
top
fio
5. Flashcache 8k-randread
fio test:
[root@localhost tmp]# fio fio_test.bak
...
Jobs: 7 (f=7): [__r_rrrrrr] [42.0% done] [42144K/0K /s] [5144 /0 iops] [eta 01m:23s]
read : io=4374.6MB, bw=74589KB/s, iops=9323 , runt= 60056msec
lat (usec): min=33 , max=1082.5K, avg=14400.11, stdev=26051.65
...
Disk stats (read/write):
fioa: ios=161232/50997, merge=3378/5063, ticks=47874/42760, in_queue=90618, util=60.38%
sda: ios=240676/13398, merge=139200/188, ticks=5955492/15416, in_queue=5970920, util=100.00%
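As a quick consistency check on the fio summary above (an illustrative calculation, not part of the original run): with a fixed 8 KiB block size, the reported IOPS is just bandwidth divided by block size.

```python
# Sanity check of the fio summary line: iops = bandwidth / block size.
bw_kb_per_s = 74589   # bw=74589KB/s from the fio output above
block_kb = 8          # blocksize=8k in the job file

iops = bw_kb_per_s // block_kb
print(iops)  # 9323, matching fio's reported iops=9323
```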
# iostat -mx 2
....
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 1280.00 9.50 2527.00 754.00 16.10 5.96 13.77 118.47 36.21 0.30 100.05
sda5 1280.00 9.50 2527.00 754.00 16.10 5.96 13.77 118.47 36.21 0.30 100.05
fioa 61.00 52.00 2513.00 911.50 20.11 5.01 15.02 1.54 0.45 0.21 70.95
dm-0 0.00 0.00 3889.50 0.00 30.26 0.00 15.93 105.41 27.15 0.26 100.05
# ./utils/flashstat
Time read/s write/s diskr/s diskw/s ssdr/s ssdw/s uread/s uwrit/s metaw/s clean/s repl/s wrepl/s hit% whit% dwhit%
06-04 13:37:39 11144 0 8182 1910 4868 2426 7335 0 1569 1909 346 0 26|28 0|19 0|3
06-04 13:37:40 7148 0 5102 994 3040 1371 4563 0 844 993 362 0 28|28 0|19 0|3
06-04 13:37:41 4242 0 3019 957 2182 865 2951 0 786 958 41 0 28|28 0|19 0|3
06-04 13:37:42 5170 0 3779 870 2261 962 3551 0 743 871 173 0 26|28 0|19 0|3
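The hit% column can be roughly reconstructed from the read rates in the first sample. This is a back-of-the-envelope check, not from the original report, and it assumes hit% is approximately (reads served from cache) / (total reads):

```python
# Rough reconstruction of flashstat's hit% for the first sample above.
# Assumption: cache hits = total reads - reads that fell through to disk.
reads_per_s = 11144       # read/s
disk_reads_per_s = 8182   # diskr/s

hit_pct = 100 * (reads_per_s - disk_reads_per_s) / reads_per_s
print(int(hit_pct))  # 26, matching the first value in the 26|28 hit% column
```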
6. SAS disk 8k-randread
[root@localhost tmp]# fio fio_test.bak
...
Jobs: 10 (f=10): [rrrrrrrrrr] [100.0% done] [30709K/0K /s] [3748 /0 iops] [eta 00m:00s]
read : io=1751.6MB, bw=29858KB/s, iops=3732 , runt= 60069msec
lat (msec): min=3 , max=891 , avg=42.83, stdev=38.60
...
Disk stats (read/write):
sda: ios=223796/87, merge=4/162, ticks=7243566/69, in_queue=7248467, util=100.00%
# iostat -mx 2
....
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 2.50 3605.50 24.50 28.16 0.21 16.01 121.56 33.41 0.28 100.10
sda2 0.00 0.00 3605.50 0.00 28.16 0.00 16.00 121.56 33.64 0.28 100.10
The fio configuration file is as follows:
[root@localhost tmp]# vi fio_test.bak
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
blocksize=8k
#bssplit=8k/40:16k/60
rw=randread
#rwmixwrite=20
runtime=60
direct=1
size=1g
ioengine=libaio
directory=/data2/
iodepth=16
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
numjobs=10
group_reporting
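The config's concurrency settings tie the IOPS and latency figures together via Little's Law (in-flight I/Os = IOPS x average latency). An illustrative check using the SAS-disk numbers above (iops=3732, avg latency 42.83 ms):

```python
# Little's Law: outstanding I/Os = throughput x latency.
# numjobs=10 jobs, each with iodepth=16, keep ~160 I/Os in flight.
numjobs, iodepth = 10, 16
iops = 3732            # from the SAS 8k-randread fio run
avg_lat_s = 42.83e-3   # average latency from the same run

in_flight = iops * avg_lat_s
print(round(in_flight), numjobs * iodepth)  # ~160 vs 160: the queue is saturated
```

In other words, the SAS disk's latency is dominated by queueing at full load, which is why latency drops so sharply once Flashcache absorbs part of the read traffic.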
8. Flashcache 8k-randrw (20%w)
8K random read/write (20% write)
The Orion test results are shown in the figure:
SAS disk monitoring:
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 1120.00 288.00 8.75 2.25 16.00 29.96 21.09 0.71 100.05
sda2 0.00 0.00 1120.00 288.00 8.75 2.25 16.00 29.96 21.09 0.71 100.05
Hybrid storage (Flashcache) test monitoring:
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 3.00 2507.50 1.00 19.59 0.02 16.01 27.89 11.12 0.40 100.05
sda5 0.00 0.00 2507.50 0.00 19.59 0.00 16.00 27.89 11.13 0.40 100.05
fioa 0.00 1.00 1141.00 4361.00 8.91 30.50 14.67 1.79 0.33 0.15 82.90
dm-0 0.00 0.00 3649.50 938.50 28.51 7.33 16.00 29.73 6.48 0.22 100.05
# ./utils/flashstat
Time read/s write/s diskr/s diskw/s ssdr/s ssdw/s uread/s uwrit/s metaw/s clean/s repl/s wrepl/s hit% whit% dwhit%
06-04 09:51:28 3586 886 2436 0 1149 4187 0 0 863 0 2436 616 32|29 30|19 2|3
06-04 09:51:29 3699 901 2538 0 1160 4330 0 0 889 0 2538 635 31|29 29|19 1|3
06-04 09:51:30 3722 929 2556 0 1165 4383 0 0 898 0 2556 657 31|29 29|19 3|3
Orion test command:
#./orion -run advanced -testname disk1 -num_disks 6 -write 20 -size_small 8 -size_large 8 -duration 10 -type rand
9. Flashcache 8k-randrw (20%w)
The results above came from orion, which runs a single process and did not push the disks to their limit; below, fio is used for a multi-threaded concurrent test.
SAS disk test results:
[root@localhost tmp]# fio fio_test.bak
...
Jobs: 1 (f=1): [________m_] [10.2% done] [13615K/3373K /s] [1662 /411 iops] [eta 08m:59s]
read : io=947664KB, bw=15735KB/s, iops=1966 , runt= 60225msec
lat (msec): min=1 , max=1614 , avg=73.64, stdev=80.75
write: io=239600KB, bw=3978.5KB/s, iops=497 , runt= 60225msec
lat (msec): min=1 , max=756 , avg=29.58, stdev=23.41
...
Disk stats (read/write):
sda: ios=118458/30027, merge=1/157, ticks=6879977/24496, in_queue=6904842, util=100.00%
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 2052.00 491.50 16.03 3.84 16.00 115.67 45.82 0.39 100.05
sda2 0.00 0.00 2052.00 491.50 16.03 3.84 16.00 115.67 45.82 0.39 100.05
The fio configuration file is as follows:
[root@localhost tmp]# vi fio_test.bak
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
blocksize=8k
#bssplit=8k/40:16k/60
rw=randrw
rwmixwrite=20
runtime=60
direct=1
size=1g
ioengine=libaio
directory=/data2/
iodepth=16
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
numjobs=10
group_reporting
10. Flashcache 8k-randrw (20%w)
Flashcache hybrid storage test:
[root@localhost tmp]# fio fio_test.bak
Jobs: 10 (f=10): [mmmmmmmmmm] [100.0% done] [23114K/5831K /s] [2821 /711 iops] [eta 00m:00s]
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=1936.8MB, bw=32968KB/s, iops=4120 , runt= 60158msec
lat (usec): min=194 , max=1258.4K, avg=35234.09, stdev=44695.80
write: io=496224KB, bw=8248.7KB/s, iops=1031 , runt= 60158msec
lat (usec): min=201 , max=553885 , avg=14041.82, stdev=11965.61
Disk stats (read/write):
fioa: ios=118923/53295, merge=285/22, ticks=38827/20509, in_queue=59320, util=32.94%
sda: ios=137829/35566, merge=114958/26989, ticks=6584524/27631, in_queue=6612130, util=100.00%
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 1968.00 457.50 2305.50 624.00 16.69 4.22 14.62 110.19 37.64 0.34 100.10
sda5 1968.00 457.50 2305.50 624.00 16.69 4.22 14.62 110.19 37.64 0.34 100.10
fioa 7.00 0.00 2150.00 849.00 16.85 5.34 15.15 0.79 0.26 0.10 29.60
dm-0 0.00 0.00 4299.50 1058.50 33.59 8.26 16.00 103.71 19.34 0.19 100.30
# ./utils/flashstat
time read/s write/s diskr/s diskw/s ssdr/s ssdw/s uread/s uwrit/s metaw/s clean/s repl/s wrepl/s hit% whit% dwhit%
06-04 15:54:30 6126 1524 4033 981 2092 832 4033 981 289 0 0 0 34|22 35|1 16|0
06-04 15:54:31 6447 1561 4311 1038 2136 787 4311 1038 264 0 0 0 33|22 33|1 16|0
06-04 15:54:32 5932 1558 4009 1065 1922 750 4009 1065 257 0 0 0 32|22 31|1 15|0
11. Flashcache 8k-randrw(20%w)
The fio configuration file is as follows:
[root@localhost tmp]# vi fio_test.bak
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
blocksize=8k
#bssplit=8k/40:16k/60
rw=randrw
rwmixwrite=20
runtime=60
direct=1
size=1g
ioengine=libaio
directory=/ssdsas/
iodepth=16
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
numjobs=10
group_reporting
Summary:
Mixed read/write:
Flashcache: read IOPS 4120, write IOPS 1031, read latency 35 ms, write latency 14 ms
SAS disk: read IOPS 1966, write IOPS 497, read latency 73.6 ms, write latency 29.5 ms
Close to the orion results: again roughly a 2-3x improvement.
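To make the "2-3x" claim concrete, the ratios can be computed directly from the figures above (an illustrative calculation, not in the original deck):

```python
# Flashcache vs SAS speedup for the 8k randrw (20% write) fio run.
fc_read_iops, fc_write_iops = 4120, 1031
sas_read_iops, sas_write_iops = 1966, 497

print(round(fc_read_iops / sas_read_iops, 1))    # 2.1x on reads
print(round(fc_write_iops / sas_write_iops, 1))  # 2.1x on writes
```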
12. Flashcache 8k-randwrite
The orion results for 8K random writes are shown in the figure.
The SAS disk's IOPS is fairly stable at close to 800, and its latency grows very large as the load increases; flashcache's IOPS sits between 2000 and 3000, with latency under 10 ms.
Note: when data is first loaded into flashcache, IOPS is very high, around 24000; once things stabilize, IOPS settles at around 2000.
From the orion tests, both IOPS and latency improve by 2-3x.
Below, the same workload is run with fio.
Flashcache test:
[root@localhost tmp]# fio fio_test.bak
...
Jobs: 3 (f=3): [_www______] [92.3% done] [0K/87228K /s] [0 /10.7K iops] [eta 00m:05s]
write: io=10079MB, bw=171374KB/s, iops=21421 , runt= 60226msec
lat (usec): min=94 , max=5478.1K, avg=5554.25, stdev=88163.96
...
Disk stats (read/write):
fioa: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
sda: ios=0/118861, merge=0/849912, ticks=0/9324836, in_queue=9333573, util=98.77%
13. Flashcache 8k-randwrite
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 16700.50 0.00 1522.50 0.00 69.77 93.85 161.72 104.36 0.66 100.10
sda5 0.00 16698.50 0.00 1521.00 0.00 69.75 93.92 161.60 104.39 0.66 100.10
dm-0 0.00 0.00 0.00 18224.50 0.00 71.19 8.00 2261.76 122.05 0.05 100.20
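Note that iostat reports avgrq-sz in 512-byte sectors, so sda's heavily merged writes (wrqm/s around 16700) end up far larger than the 8 KiB requests fio issues. An illustrative conversion:

```python
# iostat's avgrq-sz is in 512-byte sectors; convert sda's value to KiB.
avgrq_sectors = 93.85   # avgrq-sz for sda in the iostat output above
sector_bytes = 512

req_kb = avgrq_sectors * sector_bytes / 1024
print(round(req_kb, 1))  # ~46.9 KiB, i.e. roughly six 8 KiB writes merged per request
```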
SAS disk test:
[root@localhost tmp]# fio fio_test.bak
iometer: (g=0): rw=randrw, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
Jobs: 3 (f=3): [w_____w_w_] [12.9% done] [0K/130K /s] [0 /15 iops] [eta 06m:59s]
write: io=327872KB, bw=5304.8KB/s, iops=663 , runt= 61807msec
lat (usec): min=181 , max=26410K, avg=236365.89, stdev=1546807.76
...
Disk stats (read/write):
sda: ios=0/57137, merge=0/52418, ticks=0/9499136, in_queue=9500038, util=100.00%
# iostat -mx 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 769.00 0.00 823.00 0.00 6.22 15.48 153.99 185.95 1.22 100.05
sda2 0.00 625.50 0.00 659.00 0.00 5.06 15.72 109.91 169.22 1.52 100.05
Summary:
Flashcache: write IOPS 21421, latency 5.5 ms, svctm 0.05
SAS disk: write IOPS 663, latency 236 ms, svctm 1.22
The SAS disk's IOPS looks somewhat lower partly because of request merging, but overall the test data shows random writes gain a large advantage: a 20-30x or greater improvement.
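The headline ratio follows directly from the two write-IOPS figures (an illustrative calculation based on the numbers above):

```python
# Flashcache vs SAS speedup for the 8k random-write fio run.
fc_write_iops = 21421
sas_write_iops = 663

print(round(fc_write_iops / sas_write_iops))  # ~32x
```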
15. Fusion-io vs SSD vs SAS
8K random read IOPS:
8K random read latency:
The three disk types were tested with orion, mainly with the following parameters:
./orion -run advanced -testname disk1 -num_disks 3 -size_small 8 -size_large 8 -duration 10 -type rand
./orion -run advanced -testname disk1 -num_disks 3 -write 50 -size_small 8 -size_large 8 -duration 10 -type rand
./orion -run advanced -testname disk1 -num_disks 3 -size_small 1024 -size_large 1024 -duration 10 -type seq
17. Fusion-io vs SSD vs SAS
1 MB sequential read IOPS:
1 MB sequential read throughput:
1 MB sequential read latency:
From the results above, Fusion-io is extremely strong on small-file reads and writes, about 100x faster than SAS disks; for large files its reads and writes hold no advantage over ordinary SAS disks, and it is even slower at first.
The SSD is actually worse than SAS for sequential large-file reads and writes, but its small-file performance is still quite good, about 10x faster than SAS.
ext3 and ext4 performance differs little in these tests; ext4 gives roughly a 5% improvement.
18. Discussion
Choosing between storage types and filesystems:
Storage options: SAS, hybrid storage (Flashcache), SSD, Fusion-io
Filesystem options: xfs, ext4, ext3
Price/GB: SAS 6 RMB/GB, hybrid 35 RMB/GB, SSD 45 RMB/GB, Fusion-io 160 RMB/GB
Reference prices:
300 GB SAS: ~2000 RMB
100 GB SSD: ~4500 RMB
320 GB Fusion-io: ~50000 RMB