In this session, we explain how to measure the key performance-impacting metrics of a cloud-based application and cover best practices for a reliable benchmarking process. Measuring application performance correctly can be challenging, and many tools are available to measure and track it. This session provides specific examples of good and bad tests. We make it clear how to get reliable measurements and how to map benchmark results to your application. We also cover the importance of selecting tests wisely, repeating tests, and measuring variability. In addition, a customer provides real-life examples of how they developed their application testing stack, utilize it for repeatable testing, and identify bottlenecks.
(PFC302) Performance Benchmarking on AWS | AWS re:Invent 2014
1.
2. • The best benchmark
• Absolute vs. relative measures
• Fixed time or fixed work
• What’s different?
• Use a good AMI
[Charts: "Average CPU result" (scale 0.00–30.00) and "Coefficient of Variance" (scale 0%–60%) compared across AMIs: Ubuntu 12.04 ami-…, AWS CentOS 5.4 ami-…, and three other CentOS 5.4 ami-… images]
3. • Application runs on premises
• Primary requirement is integer CPU performance
• Application is complex to set up, no benchmark tests exist, limited time
• What instance would work best?
1. Choose a synthetic benchmark
2. Baseline: Build, configure, tune, and run it on premises
3. Run the same test (or tests) on a set of instance types
4. Use results from the instance tests to choose the best match
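Step 4 above can be sketched as a simple comparison against the on-premises baseline. This is a minimal illustration, not the presenters' tooling; the instance names and scores below are hypothetical:

```python
# Sketch of step 4: compare per-instance benchmark scores against the
# on-premises baseline and pick the best match. All numbers are made up.

def relative_scores(baseline, results):
    """Return each instance's score relative to the on-prem baseline."""
    return {name: score / baseline for name, score in results.items()}

def best_match(baseline, results):
    """Pick the instance whose score meets the baseline by the smallest
    margin; if none meets it, fall back to the fastest instance tested."""
    candidates = {n: r for n, r in relative_scores(baseline, results).items()
                  if r >= 1.0}
    if not candidates:
        return max(results, key=results.get)
    return min(candidates, key=candidates.get)

on_prem = 42.0                         # hypothetical on-premises baseline score
instances = {"c3.xlarge": 40.1, "c3.2xlarge": 55.3, "r3.xlarge": 44.7}
print(best_match(on_prem, instances))  # smallest score that still meets baseline
```

Whether "best match" means the cheapest instance that meets the baseline or the fastest one overall depends on your analysis goal; the slide's later point about defining what "best" means applies here too.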
15. Benchmark       Language  Category
    400.perlbench   C         Programming language
    401.bzip2       C         Compression
    403.gcc         C         C compiler
    429.mcf         C         Combinatorial optimization
    445.gobmk       C         Artificial intelligence
    456.hmmer       C         Search gene sequence
    458.sjeng       C         Artificial intelligence
    462.libquantum  C         Physics / quantum computing
    464.h264ref     C         Video compression
    471.omnetpp     C++       Discrete event simulation
    473.astar       C++       Path-finding algorithms
    483.xalancbmk   C++       XML processing
22. • Application runs on premises
• Primary requirement: memory throughput of 20K MB/sec
• What instance would work best?
1. Choose a synthetic benchmark
2. Baseline: Build, configure, tune, and run it on premises
3. Run the same test (or tests) on a set of instance types
4. Use results from the instance tests to choose the best match
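For the memory-throughput case, the selection step reduces to filtering instances against the stated 20K MB/sec requirement. A minimal sketch, with made-up throughput figures for illustration:

```python
# Sketch of the selection step for the memory-throughput case: keep only
# instance types whose measured throughput meets the 20,000 MB/s
# requirement. The throughput numbers below are hypothetical.

REQUIRED_MB_PER_SEC = 20_000

def meets_requirement(results, required=REQUIRED_MB_PER_SEC):
    """Return instance types whose measured memory throughput meets the
    requirement, best first."""
    qualified = {name: mbps for name, mbps in results.items() if mbps >= required}
    return sorted(qualified, key=qualified.get, reverse=True)

measured = {"m3.2xlarge": 14_500, "r3.2xlarge": 23_800, "c3.4xlarge": 21_200}
print(meets_requirement(measured))  # ['r3.2xlarge', 'c3.4xlarge']
```

In practice the measured figures would come from a synthetic memory benchmark (such as a STREAM-style test) run identically on premises and on each candidate instance.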
32. If benchmarking your application is not practical, synthetic benchmarks can be used, provided you are careful.
• Choose the best benchmark that represents your application
• Analysis – what does “best” mean?
• Run enough tests to quantify variability
• Baseline – what is a “good result” ?
• Samples – keep all of your results – more is better!
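"Run enough tests to quantify variability" points at the same statistic the earlier charts use: the coefficient of variance (standard deviation divided by mean) across repeated runs. A minimal sketch, with hypothetical sample scores:

```python
# Quantify run-to-run variability of repeated benchmark results using the
# coefficient of variation (stddev / mean). Sample scores are made up.
import statistics

def coefficient_of_variation(samples):
    """Relative variability of repeated runs (0.0 = perfectly stable)."""
    return statistics.stdev(samples) / statistics.mean(samples)

runs = [98.2, 101.5, 99.7, 100.1, 97.9]  # hypothetical repeated scores
print(f"CV = {coefficient_of_variation(runs):.1%}")
```

A low CV means the benchmark result is repeatable enough to compare across instance types; a high CV (like the worst AMIs in the earlier chart) means more samples are needed before any comparison is meaningful. Keeping all raw samples, as the slide advises, lets you recompute this later.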