The Art of Performance Evaluation

A talk presented at PGCon 2015.
This talk presents principles for designing performance evaluations and shows how to put them into practice, drawing on the speaker's experiences of evaluating database and storage systems.

See also: http://www.pgcon.org/2015/schedule/events/821.en.html


  1. 1. The Art of Performance Evaluation Yuto Hayamizu (The University of Tokyo) 2015/06/19 1 PGCon 2015 @ University of Ottawa
  2. 2. Q: What is the role of IT? 2015/06/19 2
  3. 3. The Role of Information Technology •  “Transforming” computing power into business/social values 3 Tabulating Machine (Herman Hollerith, 1890)
  4. 4. Performance •  The key criterion of “transformation” efficiency 4 Tabulating Machine (Herman Hollerith, 1890) National Population Census in the U.S. 1880: 8 years (by humans) 1890: 13 years (by humans, expected) → 18 months
  5. 5. And Today 5Data Is The Source of Values ✓ Big Data ✓ Internet of Things ✓ Cyber Physical System
  6. 6. Database System •  Foundation of “transformation” •  The value of an engineer == Knowledge of how “transformation” works 6 Hardware App App App App Database System
  7. 7. Performance of Database Systems •  End-to-end study –  From physics of hardware to logics of applications •  Learning every component technology is NOT enough –  A component technology changes rapidly –  An architecture could change drastically •  Acquire the way of learning how “transformation” works 2015/06/19 7
  8. 8. Performance Evaluation •  The key process to understand performance of systems – Identify performance goals which contribute to user value – Design and execute an evaluation procedure 2015/06/19 8
  9. 9. Common Misconception 2015/06/19 9
  10. 10. The Heart of Performance Evaluation Is Not Individual Skills 2015/06/19 10 (word cloud: kernel probing and tracing, hardware simulation, statistical analysis, workload characterization, workload generation, factorial experiments design, analytical modeling, metrics selection, application monitoring, middleware monitoring, OS monitoring, DB benchmarking, CPU benchmarking, storage benchmarking, memory benchmarking, designing network, designing storage, integrating cloud environments, kernel tuning, DB tuning, network monitoring)
  11. 11. But A Philosophy of Orchestrating Individual Skills 2015/06/19 11 (word cloud: kernel probing and tracing, hardware simulation, statistical analysis, workload characterization, workload generation, factorial experiments design, analytical modeling, metrics selection, application monitoring, middleware monitoring, OS monitoring, DB benchmarking, CPU benchmarking, storage benchmarking, memory benchmarking, designing network, designing storage, kernel tuning, DB tuning, network monitoring)
  12. 12. Example Story •  “Find the best price/performance SSD for our PostgreSQL database” 2015/06/19 12 List all possible SSDs → Build test systems for all SSDs → Run pg_bench on all test systems → “XX tps/$ with SSD Z!”
  13. 13. Smarter Way •  “Find the best price/performance SSD for our PostgreSQL system” 2015/06/19 13 Characterize workload → Model performance with some metrics → Estimate performance with datasheets → Validate estimation w/ pg_bench on several SSDs → Narrow down candidate SSDs → Run pg_bench on candidates and get the result
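
The datasheet-estimation step can be sketched in a few lines. The example below is only an illustration: it assumes the workload is bound by random-read IOPS and that each transaction issues a fixed number of I/Os; the SSD names, prices, and IOPS figures are made-up placeholders, not values from the talk.

# Hypothetical datasheet-based screening of SSD candidates.
# Assumption: transactions are bound by random-read IOPS, so
# estimated tps = datasheet IOPS / I/Os per transaction.
IOS_PER_TRANSACTION = 4  # assumed value from workload characterization

candidates = [
    # (name, datasheet random-read IOPS, price in USD) -- all placeholder values
    ("ssd_a", 90_000, 300),
    ("ssd_b", 250_000, 900),
    ("ssd_c", 500_000, 2_500),
]

def estimated_tps(iops):
    return iops / IOS_PER_TRANSACTION

ranked = sorted(candidates, key=lambda c: estimated_tps(c[1]) / c[2], reverse=True)
for name, iops, price in ranked:
    tps = estimated_tps(iops)
    print(f"{name}: ~{tps:,.0f} tps estimated, {tps / price:.1f} tps/$")
# Only the top-ranked candidates then need real pg_bench runs for validation.
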
  14. 14. Nature of Performance Evaluation •  A goal could change during evaluation – Progress of evaluation deepens insight into target systems •  A philosophy will guide you 2015/06/19 14
  15. 15. 2015/06/19 15 Contrary to common belief, performance evaluation is an art. Like a work of art, successful evaluation cannot be produced mechanically. Every evaluation requires an intimate knowledge of the system being modeled and a careful selection of the methodology, workload, and tools. — Raj Jain
  16. 16. How can one develop his/her own philosophy? 2015/06/19 16 Learn principles and individual techniques Put into practice and get experiences
  17. 17. The First Step to Performance Evaluation •  Fundamentals of Performance Evaluation – Principles – Basic Techniques •  Modeling •  Measurement •  Simulation ... with my practices and experiences 2015/06/19 17
  18. 18. Principles •  Define a goal first – Understand the system and identify the problem to be solved •  The most important, and usually the most difficult part •  Stay goal-oriented – Do not start with a tool or a technique – Evaluation could change a goal itself •  Redesign the process for a new goal 2015/06/19 18 Principles are so simple :)
  19. 19. Basic Techniques 2015/06/19 19 Modeling MeasurementSimulation
  20. 20. Modeling 2015/06/19 20 Modeling MeasurementSimulation
  21. 21. Modeling •  Building a model: approximation – Understand how it works – Select variables which affect performance – Ignore unnecessary details – Formulate performance •  The core of the scientific approach 2015/06/19 21
  22. 22. Example: Throughput of HDD •  Hard disk drive –  Platters rotate at a constant speed –  A cylinder is selected by moving heads •  Important variables –  Rotating speed: R [rpm] –  Radius of a cylinder: r [m] –  Density of a cylinder: D [byte/m] •  Performance model: 2πr × D × R / 60 [byte/sec] 2015/06/19 22
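
As a rough illustration of this model, here is a minimal Python sketch. The rotational speed, radii, and linear density below are assumed example values, not measurements from the talk.

import math

def hdd_seq_throughput(rpm, cylinder_radius_m, density_byte_per_m):
    """Sequential read throughput of one track: 2*pi*r * D * R / 60 [byte/sec]."""
    bytes_per_rotation = 2 * math.pi * cylinder_radius_m * density_byte_per_m
    rotations_per_sec = rpm / 60.0
    return bytes_per_rotation * rotations_per_sec

# Outer tracks are longer than inner tracks, so sequential throughput
# falls off toward the inner platter (compare with the next slide's chart).
outer = hdd_seq_throughput(10_000, 0.045, 5.0e6)  # assumed outer radius 45 mm
inner = hdd_seq_throughput(10_000, 0.020, 5.0e6)  # assumed inner radius 20 mm
print(f"outer: {outer / 1e6:.0f} MB/s, inner: {inner / 1e6:.0f} MB/s")
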
  23. 23. Seq. Read Throughput of HDD 2015/06/19 23 (chart: sequential read throughput, outer platter vs. inner platter)
  24. 24. Example: Latency of HDD •  Important variables – Rotating speed: R [rpm] – Max rotational latency (one rotation): Δt = 60/R [sec] – Max seek time: ts [sec] •  Latency model: ts < (latency) < ts + Δt [sec] 2015/06/19 24
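
A tiny sketch of this bound; the rpm and seek time are assumed example values:

def hdd_latency_bounds(rpm, max_seek_time_s):
    """Latency bound from the slide: ts < latency < ts + Δt, with Δt = 60/R."""
    rotation_period = 60.0 / rpm  # Δt: time for one full rotation [sec]
    return max_seek_time_s, max_seek_time_s + rotation_period

lo, hi = hdd_latency_bounds(10_000, 0.008)  # assumed 10k rpm, 8 ms max seek
print(f"latency between {lo * 1000:.1f} ms and {hi * 1000:.1f} ms")
# 60 / 10000 = 6 ms of rotational latency, matching the next slide's annotation.
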
  25. 25. I/O Latency of HDD 2015/06/19 25 (chart annotation: ~6 ms, the rotational latency at 10k rpm)
  26. 26. Queueing Model •  Simple queueing model –  Avg. arrival rate of customers: λ –  Avg. service rate: µ •  Modeling computer system performance –  CPU: customer = CPU instructions, server = execution units in CPU –  Storage: customer = I/O requests, server = disks and controllers –  Database system: customer = queries, server = query executors –  ... 2015/06/19 26
  27. 27. Performance Modeling with Queueing Model •  M/M/1: simple but useful model – 1 server and 1 infinite queue – Customers arrive at rate λ according to a Poisson process 2015/06/19 27 (figure: queue with arrival rate λ and service rate μ; average response time (sojourn time) = waiting time + service time)
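
For reference, a minimal sketch using the standard M/M/1 result (not spelled out on the slide): with utilization ρ = λ/μ, the average response time is W = 1/(μ − λ) for ρ < 1. The service rate below is an assumed example value.

def mm1_response_time(arrival_rate, service_rate):
    """Average response (sojourn) time of an M/M/1 queue: 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1000.0  # assumed: the server completes 1000 requests/sec
for arrival_rate in (100, 500, 800, 900, 950, 990):
    rho = arrival_rate / service_rate
    w_ms = mm1_response_time(arrival_rate, service_rate) * 1000
    print(f"rho = {rho:.2f}  avg response time = {w_ms:6.2f} ms")
# Response time stays close to the bare service time (1 ms) at low load and
# climbs steeply as utilization approaches 1 -- the curve sketched on the slide.
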
  28. 28. Modeling OLTP Performance with Various CPU Frequencies 2015/06/19 28 (chart: theoretical vs. measured response time)
  29. 29. Modeling Deepens Understanding •  A good model provides good view – Performance values – Scaling trends – Highlight bugs •  A bad model is also informative – Missing key variables – Complexity of systems 2015/06/19 29
  30. 30. Measurement 2015/06/19 30 Modeling MeasurementSimulation
  31. 31. Measurement •  Measurement without modeling is pointless – Modeling validates measurement – Measurement validates modeling 2015/06/19 31
  32. 32. Guide For Fruitful Measurement •  Selecting workload –  Does it characterize your application well? •  Selecting performance metrics –  Which metrics define your goal? •  Modeling performance •  Selecting environments –  Is it designed for measuring your metrics? •  Selecting measurement methods and tools •  Conducting measurements •  Validating results with models 2015/06/19 32 Goalden rule: Decide methods and tools based on your goal Measurement != just picking a benchmark tool and running it
  33. 33. Example story: Storage for OLTP System •  Goal – Build a storage system for OLTP system which can attain XXXX tps •  Workload – Random I/O from many clients •  Metric – IOPS of storage system (random access) 2015/06/19 33
  34. 34. Example story: Storage for OLTP System •  Modeling performance – What affects your metrics? – IOPS (random access) •  I/O block size: 8KB ~ 16KB •  I/O concurrency: 1 ~ 1000 •  The length of I/O queue in: –  I/O scheduler of OS –  HBA driver –  Storage controller –  Storage device 2015/06/19 34
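
One way to tie these variables together, as a rough first-order illustration (not the model from the talk), is Little's law: sustained IOPS ≈ outstanding I/Os ÷ average per-I/O latency. The latency figure below is an assumed example.

def achievable_iops(io_concurrency, avg_io_latency_s):
    """Little's law, first order: throughput = concurrency / latency."""
    return io_concurrency / avg_io_latency_s

avg_latency = 0.0002  # assumed 200 µs per random 8 KB I/O
for concurrency in (1, 8, 32, 128, 1000):
    print(f"concurrency = {concurrency:>4}: ~{achievable_iops(concurrency, avg_latency):,.0f} IOPS")
# The queue depths in the OS scheduler, HBA driver, and storage controller cap
# the concurrency the device actually sees, and per-I/O latency itself grows
# under load, so treat this only as an upper-bound estimate.
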
  35. 35. Example story: Storage for OLTP System •  Measurement environment –  Carefully design the flow of resources –  Your metrics: IOPS of the storage system •  Ensure that I/O requests really reach the storage system 2015/06/19 35 (diagram labels: Server, Processors, Memory, Storage Controller (with cache), Storage)
  36. 36. Example story: Storage for OLTP System •  Measurement methods and tools –  Select an appropriate tool or make it yourself •  What should be measured? •  What can be measured? –  How does the measured value represent your metric? •  Ex) iostat value != IOPS of storage devices 2015/06/19 36 (diagram labels: Server, Processors, Memory, Storage Controller (with cache), Storage)
  37. 37. Example story: Storage for OLTP System •  Conduct measurement – Plan your experiments with minimal steps •  Exhaustive parameter combinations ...? •  A good model tells you where to measure next 2015/06/19 37 (chart annotation: “Next measurement should be ...”)
  38. 38. Example story: Storage for OLTP System •  Validate measurement results with your model –  Results match the model •  Right understanding of system performance –  Results do not match the model •  Missing something •  Improve your model or measurement method 2015/06/19 38
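
In practice this validation step can be as simple as comparing measured numbers against the model's prediction and flagging large deviations. A minimal sketch; the 20% tolerance and the IOPS numbers are arbitrary assumed placeholders, not values from the talk.

def validate(measured, predicted, tolerance=0.20):
    """Return a short verdict on whether measurement and model agree."""
    error = abs(measured - predicted) / predicted
    if error <= tolerance:
        return f"OK: measured {measured:,.0f} is within {error:.0%} of the model ({predicted:,.0f})"
    return (f"MISMATCH: measured {measured:,.0f} is {error:.0%} off the model "
            f"({predicted:,.0f}) -- revisit the model or the measurement method")

print(validate(measured=41_000, predicted=45_000))   # placeholder IOPS numbers
print(validate(measured=12_000, predicted=45_000))
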
  39. 39. Simulation 2015/06/19 39 Modeling MeasurementSimulation
  40. 40. Simulation •  Simulating unavailable systems – Coming hardware devices – Unimplemented software components •  Useful technique for – More detailed behaviour analysis than modeling 2015/06/19 40
  41. 41. I/O Replay •  Performance evaluation with realistic I/O workload –  Trace I/O requests during actual execution –  Replay traced I/O patterns on hypothetical systems 2015/06/19 41 TPC-C benchmark (SF=100)
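
The replay idea can be sketched in a few lines of Python. The trace format below (one “offset_bytes length_bytes” pair per line) and the file paths are assumptions for illustration; the actual capture tooling and trace format used in the talk are not specified here.

import os
import time

def replay_reads(trace_path, target_path):
    """Re-issue traced read requests against a (possibly different) target device or file."""
    fd = os.open(target_path, os.O_RDONLY)
    count = 0
    start = time.monotonic()
    try:
        with open(trace_path) as trace:
            for line in trace:
                offset, length = (int(field) for field in line.split())
                os.pread(fd, length, offset)  # replay one traced read
                count += 1
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    print(f"replayed {count} reads in {elapsed:.2f} s (~{count / elapsed:,.0f} IOPS)")

# replay_reads("tpcc_io_trace.txt", "/dev/sdb")  # illustrative paths only
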
  42. 42. I/O Replay (TPC-C) on Various Devices 2015/06/19 42 (chart: HDD, SATA SSD, PCIe SSD)
  43. 43. Tips on Simulation •  Technique of what-if analysis based on some assumptions – Underlying models, workload generation (random number distribution), omitted details, ... – Only proper assumptions can lead to meaningful results •  Validating assumptions and results with modeling/measurement is important 2015/06/19 43
  44. 44. Basic Techniques 2015/06/19 44 Modeling MeasurementSimulation Validate
  45. 45. Wrap Up •  Why performance evaluation matters – The key process to understand systems •  Stay goal-oriented •  Use basic techniques effectively – Modeling, measurement and simulation – Validate each other •  Experiences develop your philosophy 2015/06/19 45