Hybrid Storage Test Results


flashcache hybrid disk test results

Published in: Technology



Transcript

  • 1. Hybrid Storage Testing, DBA Team, 2012/06/04
  • 2. Hybrid Storage Test Results
    I. Flashcache vs. SAS disks
    1. Performance metrics compared:
       1) IOPS
       2) Latency
       3) Throughput
    2. Detailed test results:
       Flashcache architecture
       8k random read
       8k random write
       8k random read/write (20% write)
       1 MB sequential read/write
    II. Performance comparison of the different disk types
    Test tools: Orion, fio
    Monitoring tools: iostat, flashcache top, fio
  • 3. Hybrid Storage Test Results: Flashcache architecture
    Flashcache caches hot data. It is a kernel-module acceleration technique for mainstream 64-bit
    Linux, effectively adding a cache layer on top of ordinary hard disks.
    Setup under test: Fusion-io card + 6 SAS disks in RAID 5.
    Project page: https://github.com/facebook/flashcache/
  • 4. Flashcache 8k-randread
    Orion results, IOPS metric: the 8k random-read orion results (chart at right) show that hybrid
    storage improves both IOPS and latency by 2-3x. iostat shows svctm around 0.28 ms for the hybrid
    setup versus about 0.58 ms for the SAS disks, i.e. disk service time improved by more than 2x,
    while system load stayed below 0.3. This was only a single-process stress test, so the disks were
    not yet at their limit. With fio running a multithreaded stress test, the hybrid setup averages
    about 9323 IOPS at 14 ms latency, versus about 3732 IOPS at 42 ms latency for the SAS disks.
    The flashcache hit rate was only about 30%; with a higher hit rate the results would be even better.
    Orion command:
    #./orion -run advanced -testname disk1 -num_disks 6 -size_small 8 -size_large 8 -duration 10 -type rand
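The 2-3x claim can be sanity-checked from the fio averages quoted above; a quick back-of-the-envelope sketch, using only numbers reported in this deck:

```python
# Speedup check using the fio averages quoted in this slide.
flashcache = {"iops": 9323, "latency_ms": 14}
sas = {"iops": 3732, "latency_ms": 42}

iops_speedup = flashcache["iops"] / sas["iops"]                  # ~2.5x
latency_speedup = sas["latency_ms"] / flashcache["latency_ms"]   # ~3.0x

print(f"IOPS speedup: {iops_speedup:.1f}x")          # 2.5x
print(f"Latency improvement: {latency_speedup:.1f}x")  # 3.0x
```

Both ratios fall inside the 2-3x range claimed on this slide.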
  • 5. Flashcache 8k-randread, fio test
    [root@localhost tmp]# fio fio_test.bak
    ...
    Jobs: 7 (f=7): [__r_rrrrrr] [42.0% done] [42144K/0K /s] [5144 /0 iops] [eta 01m:23s]
      read : io=4374.6MB, bw=74589KB/s, iops=9323 , runt= 60056msec
      lat (usec): min=33 , max=1082.5K, avg=14400.11, stdev=26051.65
    ...
    Disk stats (read/write):
      fioa: ios=161232/50997, merge=3378/5063, ticks=47874/42760, in_queue=90618, util=60.38%
      sda: ios=240676/13398, merge=139200/188, ticks=5955492/15416, in_queue=5970920, util=100.00%
    # iostat -mx 2
    ...
    Device: rrqm/s  wrqm/s r/s     w/s    rMB/s wMB/s avgrq-sz avgqu-sz await  svctm %util
    sda     1280.00 9.50   2527.00 754.00 16.10 5.96  13.77    118.47   36.21  0.30  100.05
    sda5    1280.00 9.50   2527.00 754.00 16.10 5.96  13.77    118.47   36.21  0.30  100.05
    fioa    61.00   52.00  2513.00 911.50 20.11 5.01  15.02    1.54     0.45   0.21  70.95
    dm-0    0.00    0.00   3889.50 0.00   30.26 0.00  15.93    105.41   27.15  0.26  100.05
    # ./utils/flashstat
    Time           read/s write/s diskr/s diskw/s ssdr/s ssdw/s uread/s uwrit/s metaw/s clean/s repl/s wrepl/s hit%  whit% dwhit%
    06-04 13:37:39 11144  0       8182    1910    4868   2426   7335    0       1569    1909    346    0       26|28 0|19  0|3
    06-04 13:37:40 7148   0       5102    994     3040   1371   4563    0       844     993     362    0       28|28 0|19  0|3
    06-04 13:37:41 4242   0       3019    957     2182   865    2951    0       786     958     41     0       28|28 0|19  0|3
    06-04 13:37:42 5170   0       3779    870     2261   962    3551    0       743     871     173    0       26|28 0|19  0|3
  • 6. SAS disks, 8k-randread
    [root@localhost tmp]# fio fio_test.bak
    ...
    Jobs: 10 (f=10): [rrrrrrrrrr] [100.0% done] [30709K/0K /s] [3748 /0 iops] [eta 00m:00s]
      read : io=1751.6MB, bw=29858KB/s, iops=3732 , runt= 60069msec
      lat (msec): min=3 , max=891 , avg=42.83, stdev=38.60
    ...
    Disk stats (read/write):
      sda: ios=223796/87, merge=4/162, ticks=7243566/69, in_queue=7248467, util=100.00%
    # iostat -mx 2
    ...
    Device: rrqm/s wrqm/s r/s     w/s   rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda     0.00   2.50   3605.50 24.50 28.16 0.21  16.01    121.56   33.41 0.28  100.10
    sda2    0.00   0.00   3605.50 0.00  28.16 0.00  16.00    121.56   33.64 0.28  100.10
    The fio configuration file:
    [root@localhost tmp]# vi fio_test.bak
    [global]
    description=Emulation of Intel IOmeter File Server Access Pattern
    [iometer]
    blocksize=8k
    #bssplit=8k/40:16k/60
    rw=randread
    #rwmixwrite=20
    runtime=60
    direct=1
    size=1g
    ioengine=libaio
    directory=/data2/
    iodepth=16
    iodepth_batch=8
    iodepth_low=8
    iodepth_batch_complete=8
    numjobs=10
    group_reporting
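The iostat columns above carry the key per-device metrics (await, svctm, %util). A small hypothetical helper, not part of the original test setup, that extracts them from a device line laid out as in this deck:

```python
# Hypothetical helper: pull await, svctm and %util out of an
# `iostat -mx` device line (column layout as printed in this deck).
COLUMNS = ["device", "rrqm/s", "wrqm/s", "r/s", "w/s", "rMB/s", "wMB/s",
           "avgrq-sz", "avgqu-sz", "await", "svctm", "%util"]

def parse_iostat_line(line: str) -> dict:
    """Map one iostat device line to its latency/utilization fields."""
    row = dict(zip(COLUMNS, line.split()))
    return {key: float(row[key]) for key in ("await", "svctm", "%util")}

# The sda line from the SAS-disk run shown above:
sample = "sda 0.00 2.50 3605.50 24.50 28.16 0.21 16.01 121.56 33.41 0.28 100.10"
print(parse_iostat_line(sample))
# {'await': 33.41, 'svctm': 0.28, '%util': 100.1}
```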
  • 7. Flashcache 8k randread
    Summary: for 8k random reads, comparing flashcache against SAS disks with orion and fio,
    the IOPS, latency, and svctm figures all show flashcache storage outperforming SAS by 2-3x.
  • 8. Flashcache 8k-randrw (20% write)
    8k random read/write (20% writes); orion results as shown in the chart at right.
    SAS disk monitoring:
    # iostat -mx 2
    Device: rrqm/s wrqm/s r/s     w/s     rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda     0.00   0.00   1120.00 288.00  8.75  2.25  16.00    29.96    21.09 0.71  100.05
    sda2    0.00   0.00   1120.00 288.00  8.75  2.25  16.00    29.96    21.09 0.71  100.05
    Hybrid storage monitoring:
    # iostat -mx 2
    Device: rrqm/s wrqm/s r/s     w/s     rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda     0.00   3.00   2507.50 1.00    19.59 0.02  16.01    27.89    11.12 0.40  100.05
    sda5    0.00   0.00   2507.50 0.00    19.59 0.00  16.00    27.89    11.13 0.40  100.05
    fioa    0.00   1.00   1141.00 4361.00 8.91  30.50 14.67    1.79     0.33  0.15  82.90
    dm-0    0.00   0.00   3649.50 938.50  28.51 7.33  16.00    29.73    6.48  0.22  100.05
    # ./utils/flashstat
    06-04 09:51:28 3586 886 2436 0 1149 4187 0 0 863 0 2436 616 32|29 30|19 2|3
    06-04 09:51:29 3699 901 2538 0 1160 4330 0 0 889 0 2538 635 31|29 29|19 1|3
    06-04 09:51:30 3722 929 2556 0 1165 4383 0 0 898 0 2556 657 31|29 29|19 3|3
    Orion command:
    #./orion -run advanced -testname disk1 -num_disks 6 -write 20 -size_small 8 -size_large 8 -duration 10 -type rand
  • 9. Flashcache 8k randrw (20% write)
    The results above are from orion, but single-process, so the disks were not pushed to their limit.
    Below, fio runs a multithreaded concurrent test.
    SAS disk results:
    [root@localhost tmp]# fio fio_test.bak
    ...
    Jobs: 1 (f=1): [________m_] [10.2% done] [13615K/3373K /s] [1662 /411 iops] [eta 08m:59s]
      read : io=947664KB, bw=15735KB/s, iops=1966 , runt= 60225msec
      lat (msec): min=1 , max=1614 , avg=73.64, stdev=80.75
      write: io=239600KB, bw=3978.5KB/s, iops=497 , runt= 60225msec
      lat (msec): min=1 , max=756 , avg=29.58, stdev=23.41
    ...
    Disk stats (read/write):
      sda: ios=118458/30027, merge=1/157, ticks=6879977/24496, in_queue=6904842, util=100.00%
    # iostat -mx 2
    Device: rrqm/s wrqm/s r/s     w/s    rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda     0.00   0.00   2052.00 491.50 16.03 3.84  16.00    115.67   45.82 0.39  100.05
    sda2    0.00   0.00   2052.00 491.50 16.03 3.84  16.00    115.67   45.82 0.39  100.05
    The fio configuration file:
    [root@localhost tmp]# vi fio_test.bak
    [global]
    description=Emulation of Intel IOmeter File Server Access Pattern
    [iometer]
    blocksize=8k
    #bssplit=8k/40:16k/60
    rw=randread
    #rwmixwrite=20
    runtime=60
    direct=1
    size=1g
    ioengine=libaio
    directory=/data2/
    iodepth=16
    iodepth_batch=8
    iodepth_low=8
    iodepth_batch_complete=8
    numjobs=10
    group_reporting
  • 10. Flashcache 8k-randrw (20% write)
    Flashcache hybrid storage test:
    [root@localhost tmp]# fio fio_test.bak
    Jobs: 10 (f=10): [mmmmmmmmmm] [100.0% done] [23114K/5831K /s] [2821 /711 iops] [eta 00m:00s]
    Description : [Emulation of Intel IOmeter File Server Access Pattern]
      read : io=1936.8MB, bw=32968KB/s, iops=4120 , runt= 60158msec
      lat (usec): min=194 , max=1258.4K, avg=35234.09, stdev=44695.80
      write: io=496224KB, bw=8248.7KB/s, iops=1031 , runt= 60158msec
      lat (usec): min=201 , max=553885 , avg=14041.82, stdev=11965.61
    Disk stats (read/write):
      fioa: ios=118923/53295, merge=285/22, ticks=38827/20509, in_queue=59320, util=32.94%
      sda: ios=137829/35566, merge=114958/26989, ticks=6584524/27631, in_queue=6612130, util=100.00%
    # iostat -mx 2
    Device: rrqm/s  wrqm/s r/s     w/s     rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda     1968.00 457.50 2305.50 624.00  16.69 4.22  14.62    110.19   37.64 0.34  100.10
    sda5    1968.00 457.50 2305.50 624.00  16.69 4.22  14.62    110.19   37.64 0.34  100.10
    fioa    7.00    0.00   2150.00 849.00  16.85 5.34  15.15    0.79     0.26  0.10  29.60
    dm-0    0.00    0.00   4299.50 1058.50 33.59 8.26  16.00    103.71   19.34 0.19  100.30
    # ./utils/flashstat
    time           read/s write/s diskr/s diskw/s ssdr/s ssdw/s uread/s uwrit/s metaw/s clean/s repl/s wrepl/s hit%  whit% dwhit%
    06-04 15:54:30 6126   1524    4033    981     2092   832    4033    981     289     0       0      0       34|22 35|1  16|0
    06-04 15:54:31 6447   1561    4311    1038    2136   787    4311    1038    264     0       0      0       33|22 33|1  16|0
    06-04 15:54:32 5932   1558    4009    1065    1922   750    4009    1065    257     0       0      0       32|22 31|1  15|0
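The flashstat rows above report the hit-rate columns (hit%, whit%, dwhit%) as pipe-separated pairs such as `34|22`. A tiny hypothetical parser; treating each half as a percentage is safe, but what each half means (per-interval vs. cumulative) is an assumption not confirmed by this deck:

```python
# Hypothetical helper for the flashstat hit-rate fields shown above.
# Each field is a pipe-separated pair of percentages, e.g. "34|22".
# The semantics of the two halves (assumed: per-interval vs. cumulative)
# are not stated in this deck.
def parse_hit_field(field: str) -> tuple:
    left, right = field.split("|")
    return int(left), int(right)

# One row's hit-rate columns from the output above:
row = {"hit%": "34|22", "whit%": "35|1", "dwhit%": "16|0"}
parsed = {name: parse_hit_field(value) for name, value in row.items()}
print(parsed)  # {'hit%': (34, 22), 'whit%': (35, 1), 'dwhit%': (16, 0)}
```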
  • 11. Flashcache 8k-randrw (20% write)
    The fio configuration file:
    [root@localhost tmp]# vi fio_test.bak
    [global]
    description=Emulation of Intel IOmeter File Server Access Pattern
    [iometer]
    blocksize=8k
    #bssplit=8k/40:16k/60
    rw=randread
    #rwmixwrite=20
    runtime=60
    direct=1
    size=1g
    ioengine=libaio
    directory=/ssdsas/
    iodepth=16
    iodepth_batch=8
    iodepth_low=8
    iodepth_batch_complete=8
    numjobs=10
    group_reporting
    Summary (mixed read/write):
    Flashcache: read IOPS 4120, write IOPS 1031, read latency 35 ms,   write latency 14 ms
    SAS disks:  read IOPS 1966, write IOPS 497,  read latency 73.6 ms, write latency 29.5 ms
    Close to the orion results: again roughly a 2-3x difference.
  • 12. Flashcache 8k-randwrite
    8k random write; orion results as shown in the chart.
    SAS IOPS is fairly stable, close to 800, while its latency grows very large as the load increases.
    Flashcache IOPS sits between 2000 and 3000, with latency under 10 ms.
    Note: when data is first loaded into flashcache, IOPS is very high, around 24000; once stable,
    IOPS settles at around 2000. By the orion test, both IOPS and latency improve 2-3x.
    Next, the fio test.
    Flashcache test:
    [root@localhost tmp]# fio fio_test.bak
    ...
    Jobs: 3 (f=3): [_www______] [92.3% done] [0K/87228K /s] [0 /10.7K iops] [eta 00m:05s]
      write: io=10079MB, bw=171374KB/s, iops=21421 , runt= 60226msec
      lat (usec): min=94 , max=5478.1K, avg=5554.25, stdev=88163.96
    ...
    Disk stats (read/write):
      fioa: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
      sda: ios=0/118861, merge=0/849912, ticks=0/9324836, in_queue=9333573, util=98.77%
  • 13. Flashcache 8k-randwrite
    # iostat -mx 2
    Device: rrqm/s wrqm/s   r/s  w/s      rMB/s wMB/s avgrq-sz avgqu-sz await  svctm %util
    sda     0.00   16700.50 0.00 1522.50  0.00  69.77 93.85    161.72   104.36 0.66  100.10
    sda5    0.00   16698.50 0.00 1521.00  0.00  69.75 93.92    161.60   104.39 0.66  100.10
    dm-0    0.00   0.00     0.00 18224.50 0.00  71.19 8.00     2261.76  122.05 0.05  100.20
    SAS disk test:
    [root@localhost tmp]# fio fio_test.bak
    iometer: (g=0): rw=randrw, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=16
    ...
    Jobs: 3 (f=3): [w_____w_w_] [12.9% done] [0K/130K /s] [0 /15 iops] [eta 06m:59s]
      write: io=327872KB, bw=5304.8KB/s, iops=663 , runt= 61807msec
      lat (usec): min=181 , max=26410K, avg=236365.89, stdev=1546807.76
    ...
    Disk stats (read/write):
      sda: ios=0/57137, merge=0/52418, ticks=0/9499136, in_queue=9500038, util=100.00%
    # iostat -mx 2
    Device: rrqm/s wrqm/s r/s  w/s    rMB/s wMB/s avgrq-sz avgqu-sz await  svctm %util
    sda     0.00   769.00 0.00 823.00 0.00  6.22  15.48    153.99   185.95 1.22  100.05
    sda2    0.00   625.50 0.00 659.00 0.00  5.06  15.72    109.91   169.22 1.52  100.05
    Summary:
    Flashcache: write IOPS 21421, latency 5.5 ms, svctm 0.05
    SAS disks:  write IOPS 663,   latency 236 ms, svctm 1.22
    The SAS IOPS figure looks lower partly because of request merging, but overall the test data show
    hybrid storage has a clear advantage for random writes, improving performance by 20-30x or more.
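The "20-30x or more" figure for random writes can be checked directly from the summary numbers above (a sketch using only figures quoted in this deck):

```python
# Ratio check for 8k random writes, from the summary figures quoted above.
iops_ratio = 21421 / 663     # flashcache write IOPS vs. SAS write IOPS
latency_ratio = 236 / 5.5    # SAS write latency vs. flashcache write latency (ms)

print(f"IOPS ratio: {iops_ratio:.1f}x")        # 32.3x
print(f"Latency ratio: {latency_ratio:.1f}x")  # 42.9x
```

Both ratios land in the "20-30x or more" range claimed in the summary.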
  • 14. Flashcache 1 MB sequential file read
    Orion 1 MB sequential read: the results show no advantage here. Flashcache reads large files more
    slowly than the SAS disks, so if this solution is adopted, large-file reads should be avoided.
    Conclusions:
    1. For small-file I/O, flashcache hybrid storage improves I/O capability by at least 2-3x.
    2. For large-file I/O, flashcache is worse than SAS disks.
    3. Maintenance cost is higher than before.
    4. The price is also considerable.
    Note: the SAS side here is a RAID 5 built from six 300 GB SAS disks.
  • 15. FIO vs SSD vs SAS
    8k random read IOPS; 8k random read latency.
    The three disk types were tested with orion, mainly using the following parameters:
    ./orion -run advanced -testname disk1 -num_disks 3 -size_small 8 -size_large 8 -duration 10 -type rand
    ./orion -run advanced -testname disk1 -num_disks 3 -write 50 -size_small 8 -size_large 8 -duration 10 -type rand
    ./orion -run advanced -testname disk1 -num_disks 3 -size_small 1024 -size_large 1024 -duration 10 -type seq
  • 16. FIO vs SSD vs SAS
    8k random read/write IOPS; 8k random read/write latency.
  • 17. FIO vs SSD vs SAS
    1 MB sequential read IOPS; 1 MB sequential read throughput; 1 MB sequential read latency.
    From the results above, Fusion-io small-file read/write performance is very strong, about 100x
    that of SAS disks; for large files it has no advantage over ordinary SAS and is even slower at
    the start. The SSD is actually worse than SAS for sequential large-file reads, but its small-file
    performance is quite good, about 10x that of SAS. Tests of ext3 versus ext4 show little
    difference; ext4 gives about a 5% improvement.
  • 18. Discussion
    Choice of storage and filesystem (candidate filesystems: xfs, ext4, ext3):
    Storage     SAS      Hybrid    SSD      Fusion-io
    Price/GB    ¥6/GB    ¥35/GB    ¥45/GB   ¥160/GB
    Reference prices:
    300 GB SAS: about ¥2000
    100 GB SSD: about ¥4500
    320 GB Fusion-io: about ¥50000
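The per-GB figures in the table can be cross-checked against the reference unit prices quoted above (the hybrid column mixes Fusion-io and SAS capacity, so it has no single-device check):

```python
# Cross-check of per-GB prices against the reference prices quoted above.
prices = {                      # (total price in CNY, capacity in GB)
    "SAS (300 GB)":       (2000, 300),
    "SSD (100 GB)":       (4500, 100),
    "Fusion-io (320 GB)": (50000, 320),
}
for name, (total_cny, capacity_gb) in prices.items():
    print(f"{name}: {total_cny / capacity_gb:.1f} CNY/GB")
# SAS (300 GB): 6.7 CNY/GB
# SSD (100 GB): 45.0 CNY/GB
# Fusion-io (320 GB): 156.2 CNY/GB
```

These agree with the table's ¥6, ¥45, and ¥160 per GB to within rounding.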
  • 19. Flashcache 8k randread