Hp cloud performance_benchmark
 

Usage Rights: CC Attribution-ShareAlike License

    Hp cloud performance_benchmark: Presentation Transcript

    • HP Cloud Services Performance Testing
      Qingye Jiang (John)
      Email: qjiang@ieee.org
      Weibo: @qyjohn_
    • Introduction
      • Virtual Machines
        • az-1.region-a.geo-1
        • web-created
        • Ubuntu 11.04 64 bit
        • 3+ VMs per model, 20 VMs total
      • Benchmark Suite
        • byte-unixbench
        • mbw
        • iozone
        • iperf
        • pgbench
        • Hadoop wordcount
      • Data Filtering
        • best VM per model
        • average of 10 runs
      VM model specifications:
                      XSmall  Small  Medium  Large  XLarge  XXLarge
        vCPU               1      2       2      4       4        8
        MEM (GB)           1      2       4      8      16       32
        DISK (GB)         30     60     120    240     480      960
        Price ($/hr)    0.04   0.08    0.16   0.32    0.64     1.28
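    The data-filtering step above (keep the best instance per model, averaging each
    instance's runs) can be sketched as follows. The instance names and scores below
    are made up for illustration; only the method comes from the slide.

    ```python
    def filter_results(runs_by_instance):
        """For one VM model: average each instance's runs, keep the best instance.

        Assumes higher scores are better (true for byte-unixbench and pgbench;
        for wordcount completion times you would take the minimum instead).
        """
        averages = {inst: sum(scores) / len(scores)
                    for inst, scores in runs_by_instance.items()}
        best = max(averages, key=averages.get)
        return best, averages[best]

    # Hypothetical byte-unixbench scores for three XSmall instances, 10 runs each.
    xsmall_runs = {
        "vm-a": [510, 505, 498, 512, 501, 507, 503, 509, 500, 505],
        "vm-b": [610, 604, 612, 608, 611, 606, 609, 607, 610, 603],
        "vm-c": [550, 548, 552, 549, 551, 547, 553, 550, 548, 552],
    }
    best_vm, best_score = filter_results(xsmall_runs)
    ```
    
    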
    • byte-unixbench
      [chart: byte-unixbench index, single-thread vs multi-thread, for XSmall through XXLarge]
      • byte-unixbench index measures overall system performance
      • in multi-thread testing, n-Thread = n-vCPU
      • systems with the same number of vCPUs exhibit similar performance
      • memory size does not have much impact on performance
      • 2 x vCPU => 1.5 x performance
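    One way to read the slide's rule of thumb (each doubling of vCPUs gives about
    1.5x performance) is as a sub-linear scaling model. This sketch extrapolates
    that rule across the spec table's vCPU counts; the model is my reading of the
    slide, not something the deck states explicitly.

    ```python
    import math

    # vCPU counts from the VM model specification table.
    VCPUS = {"XSmall": 1, "Small": 2, "Medium": 2,
             "Large": 4, "XLarge": 4, "XXLarge": 8}

    def predicted_speedup(vcpus, factor=1.5):
        """Predicted multi-thread performance relative to a 1-vCPU system,
        assuming each doubling of vCPUs multiplies throughput by `factor`."""
        return factor ** math.log2(vcpus)

    # Under this rule an XXLarge (8 vCPUs) is predicted at 1.5**3 = 3.375x an
    # XSmall, not 8x; and Small vs Medium (same vCPUs, different memory) tie,
    # matching the observation that memory size has little impact.
    ```
    
    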
    • mbw
      [chart: MEMCPY, DUMB, and MCBLOCK bandwidth for XSmall through XXLarge]
      • mbw 128
      • results in MB/s
      • different systems exhibit similar memory performance
    • iozone – OS disk
      [chart: write, rewrite, random write, read, reread, and random read throughput for XSmall through XXLarge]
      • iozone -Mcew -i0 -i1 -i2 -s4g -r256k -f /io.tmp
      • results in KB/s
      • different systems exhibit similar write performance
      • L / XL / XXL systems exhibit much better read performance
      • cgroup blkio throttling? QEMU block throttling? Different disk types?
    • iozone – data disk
      [chart: write, rewrite, random write, read, reread, and random read throughput for XSmall through XXLarge]
      • iozone -Mcew -i0 -i1 -i2 -s4g -r256k -f /mnt/io.tmp
      • results in KB/s
      • different systems exhibit similar write performance
      • XL / XXL systems exhibit much better read performance
      • cgroup blkio throttling? QEMU block throttling? Different disk types?
    • iperf
      Bandwidth between system pairs (Mbps):
                  XSmall  Small  Medium  Large  XLarge  XXLarge
        XSmall        25     25      25     25      25       25
        Small         25     50      50     50      50       50
        Medium        25     50     100    100     100      100
        Large         25     50     100    200     200      200
        XLarge        25     50     100    200     400      400
        XXLarge       25     50     100    200     400      650
      • entry (x, y) represents the bandwidth between the two systems
      • bandwidth is limited by the system with the lower configuration
      • Cisco Quantum plugin?
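    The slide's observation that bandwidth is limited by the lower-configured
    system means the whole pairwise table collapses to one cap per model. A
    minimal sketch of that rule, with the caps read off the table's diagonal:

    ```python
    # Per-model bandwidth cap in Mbps (the diagonal of the iperf table).
    CAP = {"XSmall": 25, "Small": 50, "Medium": 100,
           "Large": 200, "XLarge": 400, "XXLarge": 650}

    def bandwidth(model_a, model_b):
        """Measured bandwidth between two instances is the smaller model's cap."""
        return min(CAP[model_a], CAP[model_b])
    ```

    Every off-diagonal entry in the table matches this min rule, which is why a
    single XSmall endpoint pins any transfer to 25 Mbps.
    
    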
    • hadoop wordcount – single node
      [chart: completion time (s) for a 2 GB workload on XSmall through XXLarge]
      • hadoop wordcount application provided in the official distribution
      • test directory with 3 files, total file size 2 GB
      • test result shows the time needed to finish the calculation (s)
    • hadoop wordcount – multiple nodes
      [chart: completion time (s) for Small, XXLarge, and 1 x / 2 x / 3 x / 4 x XSmall clusters]
      • dfs.replication = nNodes
      • test directory with 3 files, total file size 2 GB
      • test result shows the time needed to finish the calculation (s)
    • pgbench
      [chart: pgbench results, single-thread vs multi-thread, for XSmall through XXLarge]
      • postgresql-9.1, postgresql-contrib-9.1
      • pgbench -i -s 16 pgbench
      • pgbench -t 2000 -c 16 -j n -U postgres pgbench
      • in multi-thread testing, n-Thread = n-vCPU
    • defects – pgbench single thread
      [chart: single-thread pgbench results, normal vs defective instances, for XSmall through XXLarge]
      • defects were observed in all VM models
      • test results were consistent across runs on the same VM instance
      • the following test results were not affected on defective instances:
        • mbw
        • iperf
        • byte-unixbench
    • defects – iozone write results
      [chart: iozone write throughput (KB/s), normal vs defective instances, for XSmall through XXLarge]
      • test performed on OS disks only
      • write performance seems to be the major problem
    • defects – iozone read results
      [chart: iozone read throughput (KB/s), normal vs defective instances, for XSmall through XXLarge]
      • test performed on OS disks only
      • read performance is similar for all instances in both cases
    • defect rate
      7 / 20 = 35%
      • 7 defective instances were found out of 20 total instances
      • the defect rate is too high for deploying production systems
      • extra caution is needed when VMs are auto-generated via APIs
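    The defect-rate arithmetic from the slide, spelled out:

    ```python
    # 7 defective instances observed out of 20 provisioned across all models.
    defective, total = 7, 20
    defect_rate = defective / total  # 0.35, i.e. 35%
    print(f"defect rate: {defect_rate:.0%}")  # prints "defect rate: 35%"
    ```
    
    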
    • conclusion
      "First entice them with desires, then lead them into the wisdom of the Buddha."
      (Vimalakirti Sutra, Chapter 8, "The Buddha Way"; translated by Master Kumarajiva)
      • the HP defects were not directly related to OpenStack
      • OpenStack still lacks key functionality for production deployment
      • building an IaaS service is more complicated than installing OpenStack
      • open source IaaS software => IaaS support and service => $$$
    • Thank You!
      Qingye Jiang (John)
      Email: qjiang@ieee.org
      Weibo: @qyjohn_
      http://www.qyjohn.net/