Benchmarks, Performance, Scalability, and Capacity: What's Behind the Numbers? (presentation)

Baron Schwartz covers what a good benchmark needs, how to evaluate benchmark results using queueing theory and the Universal Scalability Law (USL), the basics of queueing theory, and the fundamental relationship between performance and throughput.


  1. 1. Beyond The Numbers Baron Schwartz
  2. 2. Who Am I? ● baron@percona.com ● @xaprb ● linkedin.com/in/xaprb ● xaprb.com/blog
  3. 3. Who Am I? ● Maatkit ● Percona Toolkit ● Innotop ● Monitoring Plugins ● Aspersa ● Online Tools ● JavaScript Libraries
  4. 4. ● Consulting ● Percona Server ● Support ● Percona XtraBackup ● Remote DBA ● Percona XtraDB Cluster ● Engineering ● Percona Toolkit ● Conferences & Training ● Many More
  5. 5. Today's Agenda ● Benchmarks ● Aggregation and Distributions ● Performance, Capacity & Utilization ● Rules of Thumb ● Queueing Theory and Scalability
  6. 6. Benchmarks
  7. 7. What's Missing? ● Distribution ● Time Series ● Response Times ● Parameters ● Goals ● System Specs
  8. 8. What's Misleading? ● Logarithmic X-Axis ● Interpolation
  9. 9. What's Good? ● Y-Axis Reaches 0 ● No Fake-Smoothing
  10. 10. Behind a Single Dot
  11. 11. Look At All That Data...
  12. 12. What's With The Grid Lines?!?!?
  13. 13. Better Benchmarks What does an ideal benchmark report look like?
  14. 14. Clear Benchmark Goals ● Validating hardware configuration ● Comparing two systems ● Checking for regressions ● Capacity planning ● Reproducing bad behavior to solve it ● Stress-testing to find bottlenecks
  15. 15. Hardware and Software ● Specs for CPU, disk, memory, network ● Software versions (OS, SUT, benchmark) ● Filesystem, RAID controller ● Disk queue scheduler
  16. 16. Presenting Results ● Ideally, make raw results available ● Include metrics from OS (CPU, RAM, IO, network) ● Generate some plots to summarize ● This is where the rubber meets the road!
  17. 17. Better Aggregate Measures ● Average ● Percentiles ● 95th ● 99th ● Maximum ● Observation Duration ● Question: how bad can the 95th percentile be?
  18. 18. More Aggregate Measures ● Median (50th Percentile) ● Standard Deviation ● Index of Dispersion
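The aggregate measures on slides 17 and 18 are easy to compute; a minimal Python sketch (not from the slides), using the nearest-rank percentile method as an assumption:

```python
import statistics

def summarize(response_times):
    """Aggregate measures for a list of response-time samples (seconds)."""
    rt = sorted(response_times)
    # Nearest-rank percentile: the smallest sample >= p percent of the data.
    def pct(p):
        return rt[max(0, -(-p * len(rt) // 100) - 1)]
    mean = statistics.mean(rt)
    var = statistics.pvariance(rt)
    return {
        "average": mean,
        "median": statistics.median(rt),
        "95th": pct(95),
        "99th": pct(99),
        "max": rt[-1],
        "std_dev": var ** 0.5,
        "index_of_dispersion": var / mean,  # variance-to-mean ratio
    }
```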
  19. 19. Better...
  20. 20. Better Still...
  21. 21. Keep It Coming...
  22. 22. Throughput AND Response Time
  23. 23. Performance ● What is Performance? ● Two Metrics ● Response Time (time per task) ● Throughput (tasks per time) ● They're not reciprocals ● More on this later
  24. 24. What Performance Isn't ● CPU Usage ● Load Average ● Other metrics of resource consumption
  25. 25. Performance ● I often focus on response time ● It represents user experience ● Throughput indicates capacity rather than performance ● For benchmarking, throughput is primary
  26. 26. Utilization ● The portion of time during which the resource is busy ● i.e. there is at least one thing in progress
  27. 27. Utilization is Confusing ● Be very careful with tools that report utilization ● From the Linux iostat man page: ● “%util: Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%.” ● Can you parse that? Is it true?
  28. 28. Capacity ● What is Capacity?
  29. 29. Capacity
  30. 30. Capacity – My Definition Capacity is the maximum throughput ... at achievable concurrency ... with acceptable performance ... as defined by response time ... meeting specified constraints ... over specified observation intervals.
  31. 31. Capacity Example ● What is the capacity of the system at a concurrency of 32, with the 10-second 95th-percentile response time not to exceed 2 ms, over a 60-minute duration? ● To determine this, we need goal-seeking benchmark software ● Most benchmark software can't do this
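Since most tools can't goal-seek, here is a purely hypothetical sketch of what a goal-seeking driver could look like; `run_load` is an assumed callback, not any real benchmark tool's API:

```python
def find_capacity(run_load, rt_limit=0.002, concurrency=32,
                  interval_s=10, duration_s=3600):
    """Double the offered rate until the worst per-interval 95th-percentile
    response time exceeds rt_limit; report the last rate that met the goal.
    run_load(rate, concurrency, interval_s, duration_s) is a hypothetical
    helper returning a list of per-interval response-time sample lists."""
    def p95(samples):
        s = sorted(samples)
        return s[max(0, -(-95 * len(s) // 100) - 1)]

    rate, capacity = 100, 0
    while True:
        intervals = run_load(rate, concurrency, interval_s, duration_s)
        if max(p95(s) for s in intervals) > rt_limit:
            return capacity  # the previous rate was the last one to meet the goal
        capacity, rate = rate, rate * 2
```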
  32. 32. Benchmarks, etc. Recap ● Most benchmarks reveal very little ● Benchmark reports reveal even less ● It's good to go beyond the surface
  33. 33. Amdahl's Law ● “The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program.” - Wikipedia ● It's basically a law of diminishing returns.
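A quick sketch of the diminishing returns Amdahl's Law describes, where `p` is the parallelizable fraction of the work (the example numbers are illustrative only):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup with n processors when fraction p is parallelizable."""
    return 1 / ((1 - p) + p / n)

# With 95% of the work parallelizable, 64 CPUs give only ~15x, not 64x:
print(amdahl_speedup(0.95, 64))    # ~15.4
print(amdahl_speedup(0.95, 1024))  # ~19.6; the limit is 1/(1-p) = 20
```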
  34. 34. Should I Defragment My Disk? ● Method 1: Google “defragment” ● Method 2: Try it and see ● Method 3: Measure if the disk is a bottleneck
  35. 35. Spolsky -vs- Millsap
  36. 36. Spolsky -vs- Millsap
  37. 37. Amdahl's Law ● Don't try to optimize little things.
  38. 38. Little's Law ● N = XR ● That is, ● Concurrency = Throughput * Response Time ● This holds regardless of queueing, arrival rate distribution, response time distribution, etc.
  39. 39. Little's Law Example ● If disk I/Os average 4 ms... ● And there are 280 I/Os per second... ● Then the disks' average concurrency is: ● N = 280 * .004 ● N = 1.12 ● Do you believe this? ● When might it not be true?
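The arithmetic from slide 39 as a one-line sketch, using the numbers on the slide:

```python
def littles_law_concurrency(throughput, response_time):
    """Little's Law: N = X * R, independent of arrival/service distributions."""
    return throughput * response_time

# 280 I/Os per second, each averaging 4 ms:
print(littles_law_concurrency(280, 0.004))  # about 1.12 requests in progress, on average
```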
  40. 40. Little's Law Example #2 ● If disk utilization is 98% ● And there are 280 I/Os per second ● What do we know?
  41. 41. Utilization Law ● U = SX ● Also independent of distributions, etc. ● That is, ● Utilization = Service Time * Throughput ● Utilization = 98% and Throughput = 280 ● S = U/X ● Service Time = .98 / 280 = .0035
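And slide 41's answer to slide 40, sketched with the Utilization Law:

```python
def service_time(utilization, throughput):
    """Utilization Law: U = S * X, so S = U / X."""
    return utilization / throughput

# 98% busy at 280 I/Os per second implies an average service time of about 3.5 ms:
print(service_time(0.98, 280))  # ~0.0035 s
```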
  42. 42. Queueing Theory ● How can we predict the amount of queueing in a system? ● How can we predict its response times? ● How can we predict capacity?
  43. 43. Erlang Queueing ● Erlang's formulas model the probability of queueing for a given arrival rate, service time, and number of servers. ● A “server” is anything capable of serving a request. ● CPUs ● Disks
  44. 44. CPU -vs- Disk Queueing ● Scenario: 4-CPU, 4-disk (RAID0) server ● Thought experiment: ● How do processes queue for CPU? ● How do I/O requests queue on disks?
  45. 45. Notation ● Typically see something like M/M/1 ● Each letter is a placeholder in A/S/n ● A = Arrival distribution ● S = Service-time distribution ● n = Number of servers ● A and S can be one of: ● Markov ● Deterministic ● General
  46. 46. CPUs -vs- Disks ● CPUs: M/M/4 ● Disks: 4 x {M/M/1}
  47. 47. M/M/1 Queueing cmg.org
  48. 48. M/M/n Queueing cmg.org
  49. 49. Erlang C Function ● M/M/n queueing is modeled by Erlang C ● See http://en.wikipedia.org/wiki/Erlang_(unit)
  50. 50. What's Wrong With Erlang C? ● You must validate your arrival times. ● You must validate your service times. ● The equation is hard to work with. ● In practice, it's hard to use Erlang C.
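Slide 50 notes the equation is hard to work with by hand; a minimal Python sketch of Erlang C for an M/M/n system (the CPU example numbers and the mean-wait formula are standard textbook material, not from the slides):

```python
from math import factorial

def erlang_c(servers, arrival_rate, service_time):
    """Erlang C: probability an arriving request must queue in an M/M/n system."""
    a = arrival_rate * service_time          # offered load in erlangs
    if a >= servers:
        return 1.0                           # unstable: the queue grows without bound
    top = (a ** servers / factorial(servers)) * (servers / (servers - a))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

# 4 CPUs (M/M/4) at 75% utilization: how often does a runnable task wait?
p_wait = erlang_c(4, arrival_rate=300, service_time=0.01)  # 3 erlangs offered
print(p_wait)                    # ~0.51
# Mean time spent waiting in the queue: C * S / (n - a)
print(p_wait * 0.01 / (4 - 3))   # ~5.1 ms
```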
  51. 51. Scalability ● Queueing causes non-linear scaling. ● But first, let's talk about linearity.
  52. 52. System Scalability [plot: throughput vs. concurrency; the slide asks why the curve bends]
  53. 53. Universal Scalability Law [plot: throughput vs. concurrency, comparing Linear, Amdahl, and USL curves]
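The USL curve on slide 53 comes from Gunther's formula, in which σ models contention (the Amdahl term) and κ models coherency (crosstalk) cost; a small sketch with made-up coefficients:

```python
def usl_throughput(n, lam, sigma, kappa):
    """Universal Scalability Law: throughput at concurrency n.
    lam   = throughput of a single worker (ideal slope)
    sigma = contention coefficient (kappa = 0 reduces this to Amdahl's Law)
    kappa = coherency (crosstalk) coefficient"""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Illustrative coefficients only: throughput peaks near n = sqrt((1 - sigma)/kappa),
# here about 44, and then retrogrades.
for n in (1, 8, 32, 64, 128):
    print(n, round(usl_throughput(n, lam=1000, sigma=0.02, kappa=0.0005), 1))
```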
  54. 54. Amdahl Scalability
  55. 55. USL Scalability
  56. 56. USL Scalability Modeling
  57. 57. USL Performance Modeling
  58. 58. Scalability Limitations ● Locks ● Synchronization points ● Shared resources ● Duplicated data to be kept in sync ● Weakest-link problems
  59. 59. RAID10 On EBS ● Which is faster? ● RAID 10 over 10 EBS volumes ● RAID 10 over 20 EBS volumes ● Hint: http://goo.gl/Xm92Y ● Also, http://goo.gl/fAEIL
  60. 60. Debunking “Linear” ● Ask to see the actual numbers. ● They shouldn't be rounded off suspiciously. ● They must be truly linear. ● They must intersect the point (0, 0).
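One way to apply slide 60's checklist: fit a straight line through the claimed (concurrency, throughput) points and check that the intercept is essentially zero. A plain-Python sketch with made-up data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Claimed "linear" scaling numbers (made up for illustration):
conc = [1, 2, 4, 8, 16]
tput = [1000, 1950, 3700, 6800, 11500]
slope, intercept = fit_line(conc, tput)
print(slope, intercept)
# A large positive intercept is a red flag: the best-fit line does not pass
# through (0, 0), so the claimed scaling is not truly linear.
```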
  61. 61. Debunking, Example #1
  62. 62. Is it Linear?
  63. 63. It's Not Linear
  64. 64. Resources ● Naomi Robbins' Blog ● http://blogs.forbes.com/naomirobbins/ ● Percona White Papers ● http://www.percona.com/ ● Neil J. Gunther ● Guerrilla Capacity Planning ● http://www.contextneeded.com/
  65. 65. Questions?
  66. 66. baron@percona.com @xaprb
