An Introduction to Load Testing: A Blackboard Primer



1. An Introduction to Load Testing: A Blackboard Primer
Steve Feldman, Director of Performance Engineering and Architecture, Blackboard Inc.
July 18th, 3pm
2. What is Software Performance Engineering?
[Diagram: SPE comprises the Software Execution Model (software) and the System Execution Model (system).]
3. The Software Execution Model
- Response time is the absolutely critical performance measurement.
- Emphasis is on optimizing the application business logic.
- Design pattern implementation is the primary concern.
4. The System Execution Model
- Response time remains the critical performance measurement.
- Emphasis is on optimizing the deployment environment.
- System resource utilization is the primary concern.
5. The System Execution Model and the Performance Maturity Model
- Awareness of system performance peaks and valleys.
- Knowledge of capacity planning needs.
- All of the data is available, but little is done with it other than basic optimization.
- Looking to extend performance management via environmental optimization.
[Diagram: the five levels of the Performance Maturity Model. Level 1: Reactive Fire Fighting; Level 2: Monitoring and Instrumenting; Level 3: Performance Optimizing; Level 4: Business Optimizing; Level 5: Process Optimizing.]
6. How Do We Optimize Our Environment Using the System Execution Model?
- Study existing behavior, adoption, growth, and system resource utilization patterns.
- Measure live system response times during periods of variation for watermark analysis.
- Simulate synthetic load that mimics the usage patterns of the deployment.
7. Introduction to Load Testing
- What is load testing?
- Why do we load test?
- What tools do we use?
- How do we prepare for load testing?
- How do we load test?
- What do we do with the results of a load test?
8. What is Load Testing?
- Load testing is a controlled method of exercising artificial workload against a running system.
  - A system can be hardware-oriented, software-oriented, or both.
- Load testing can be executed in a manual or automated fashion.
  - Automated load testing mitigates inconsistencies without compromising the scientific reliability of the data.
9. Why Do We Load Test?
- Most load tests are executed with false intentions (as a crude performance barometer).
- To understand the impact on response times of predictable behavioral conditions and scenarios.
- To understand the impact on response times of patterns of adoption and growth.
- To understand the resource demands of the deployment environment.
10. What Tools Do We Use?
- Commercial tools
  - Mercury LoadRunner (currently used at Blackboard)
  - Segue SilkPerformer (formerly used at Blackboard)
  - Rational Performance Studio
- Freeware tools
  - Grinder (occasionally used at Blackboard)
  - Apache JMeter (occasionally used at Blackboard)
  - OpenSTA
- Great resources for picking a load testing tool
  - Performance Analysis for Java Web Sites by Stacy Joines (ISBN 0201844540)
  - http://www.testingfaqs.org/t-load.html
11. Preparing for Load Testing
- Define performance objectives
- Use case definition
- Performance scenarios
- Data modeling
- Scripting and parameterization
12. Define Performance Objectives
- Every load test should have a purpose of measurement (a worked example follows below).
- Common objectives:
  - Sessions per hour
  - Transactions per hour
  - Throughput per transaction and per hour
  - Response time calibration
  - Resource saturation calibration
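As a worked example with illustrative numbers (not Blackboard targets): if the behavioral model shows an average session of 12 transactions and the objective is 2,400 sessions per hour, the test must sustain 28,800 transactions per hour, or 8 transactions per second, and throughput and saturation should then be calibrated against that rate.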
13. Define Use Cases
- Use cases should be prioritized based on the following:
  - Criticality of execution
  - Observation of execution (behavioral modeling)
  - Expectation of adoption
  - Baseline analysis
14. Define Performance Scenarios
- A collection of one or more use cases sequenced in a logical manner (the compilation of a user session).
- Scenarios should be realistic in nature and based on recurring patterns identified in session behavior models.
  - Avoid simulating extraneous workload.
  - Iterate when necessary.
15. Design an Accurate Data Model
- Uniform in construction
  - Naming conventions (sequential): User000000001, LargeCourse000000001, 25-MC-QBQ-Assessment, XSmallMsg000000001, etc.
  - Data construction (uniform for testability): XSmallMsg000000001 contains 100 characters of text; XLargeMsg000000001 contains 1,000 characters of text (a factor of 10x).
- Multi-dimensional data model
  - Fragmented in nature: data conditions for testability, scenarios for testability.
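A minimal sketch of how a sequential, uniform data set like this might be generated (illustrative C, not a Blackboard tool; the nine-digit sequence numbers and 10x sizing mirror the conventions above):

    #include <stdio.h>

    /* Emit sequential, uniformly sized message records:
     * XSmallMsg000000001 = 100 chars, XLargeMsg000000001 = 1000 chars. */
    int main(void)
    {
        char body[1001];
        int i, n;

        for (n = 1; n <= 3; n++) {
            /* XSmall: exactly 100 characters of filler text. */
            for (i = 0; i < 100; i++) body[i] = 'a' + (i % 26);
            body[100] = '\0';
            printf("XSmallMsg%09d,%s\n", n, body);

            /* XLarge: exactly 1000 characters (a factor of 10x). */
            for (i = 0; i < 1000; i++) body[i] = 'a' + (i % 26);
            body[1000] = '\0';
            printf("XLargeMsg%09d,%s\n", n, body);
        }
        return 0;
    }

Because both the names and the sizes are uniform, any response-time difference between XSmall and XLarge runs can be attributed to payload size rather than data irregularities.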
16. Scripting and Parameterization
- Script programmatically
  - Focus on reusability, encapsulation, and testability.
  - Componentize the action step of the use case.
  - Use explicit naming conventions.
  - Example: measure the performance of opening a Word document: Authenticate(), NavPortal(), NavCourse(), NavCourseMenu(Docs), ReadDoc()
17. Scripting and Parameterization: Example

    /**
     * Quick Navigation: Click on Course Menu
     * Params Required: course_id
     * Params Saved: none
     */
    CourseMenu()
    {
        static char *status = "Course Menu: Course Menu";

        bb_status(status);
        lr_start_transaction(status);
        bb_web_url("{bb_target_url}/webapps/blackboard/content/courseMenu.jsp?mini=Y&course_id={bb_course_pk}", 'l');
        lr_end_transaction(status, LR_AUTO);
        lr_think_time(navigational);
    }

Slide callouts: code comments; a reusable action name; echoed status and transaction; the HTTP representation with parameterization and abandonment.
18. Scripting and Parameterization
- Parameterize dynamically (a sketch follows below)
  - Realistic load simulations test against unique data conditions.
  - Avoid hard-coding dynamic or user-defined data elements.
  - Work with uniform, well-constructed data sets (sequences).
  - Example: parameterize the username for Authenticate(): student000000001, student000000002, instructor000000001, admin000000001, observer000000001, hs_student000000001
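A minimal sketch of a parameterized Authenticate() action in LoadRunner's C scripting (the form action and field names are illustrative assumptions, not Blackboard's actual login form; {username} and {password} are assumed to be backed by a sequential data file):

    Authenticate()
    {
        static char *status = "Login";

        lr_start_transaction(status);

        /* {username} draws the next sequential value, e.g.
         * student000000001, student000000002, ... */
        web_submit_data("login",
            "Action={bb_target_url}/webapps/login",   /* illustrative URL */
            "Method=POST",
            ITEMDATA,
            "Name=user_id",  "Value={username}", ENDITEM,
            "Name=password", "Value={password}", ENDITEM,
            LAST);

        lr_end_transaction(status, LR_AUTO);
        lr_think_time(navigational);
    }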
19. Scripting and Parameterization: Example

    // Save various folder pks and go to the course menu folder
    web_reg_save_param("course_assessments_pk", "NotFound=Warning",
        "LB=content_id=_",
        "RB=_1&mode=reset\" target=\"main\">Assessments", LAST);
    web_reg_save_param("course_documents_pk", "NotFound=Warning",
        "LB=content_id=_",
        "RB=_1&mode=reset\" target=\"main\">Course Documents", LAST);

Slide callouts: the parameterization name is course_assessments_pk; "NotFound=Warning" issues a warning if the value is absent; the value is captured between the left boundary (LB) and the right boundary (RB). These are LoadRunner terminology references; however, other scripting tools use the same constructs.
20. Scripting and Parameterization: Blackboard Gotchas
- RDBMS authentication (a sketch follows below)
  - One-time token
  - MD5-encrypted password
  - MD5(MD5-encrypted password + one-time token)
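A hedged sketch of how that challenge-response might be scripted. Here md5_hex() stands in for a locally supplied MD5 helper (it is not a LoadRunner built-in), and {login_token} is assumed to have been captured from the login page with web_reg_save_param:

    /* Build the login hash described above: MD5(MD5(password) + token). */
    BuildAuthHash()
    {
        char hashed_pw[33], response[33], buf[256];

        /* Step 1: MD5 of the clear-text password.
         * md5_hex() is a locally supplied helper, not a LoadRunner built-in. */
        md5_hex(lr_eval_string("{password}"), hashed_pw);

        /* Step 2: MD5 of (encrypted password + one-time token). */
        sprintf(buf, "%s%s", hashed_pw, lr_eval_string("{login_token}"));
        md5_hex(buf, response);

        /* Expose the result to the login POST as {auth_hash}. */
        lr_save_string(response, "auth_hash");
        return 0;
    }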
21. Scripting and Parameterization: Blackboard Gotchas
- Navigational concerns
  - Dynamic IDs: tab IDs, content IDs, course IDs, tool IDs
  - Modes: reset, quick, view
  - Action steps: Manage, CaretManage, Copy, Remove_Proc, Family
22. Scripting and Parameterization: Blackboard Gotchas
- Transactional concerns
  - HTTP POST: multiple ID submissions, action steps, data values, permissions, metadata
23. Scripting and Parameterization: Blackboard Gotchas (Example)

    /**
     * User Modifies Grade and Submits Change
     * Params Required: random number from 0 to 100
     * Params Saved: none
     */
    ModifyGrade()
    {
        static char *status = "Modify Grades";

        bb_status(status);
        start_timer();
        lr_start_transaction(status);
        web_submit_form("itemGrades",
            "Snapshot=t28.inf",
            ITEMDATA,
            "Name=grade[0]", "Value={randGrade}", ENDITEM,
            "Name=grade[1]", "Value={randGrade}", ENDITEM,
            "Name=grade[2]", "Value={randGrade}", ENDITEM,
            "Name=grade[3]", "Value={randGrade}", ENDITEM,
            "Name=submit.x", "Value=41", ENDITEM,
            "Name=submit.y", "Value=5", ENDITEM, LAST);
        lr_end_transaction(status, LR_AUTO);
        stop_timer();
        abandon('h');
        lr_think_time(transactional);
    }

Slide callouts: a transaction timer; dynamically parameterized values; an explicit abandonment policy and parameterized think time.
24. How Do We Load Test?
- Initial configuration
- Calibration
- Baseline
- Environmental clean-up
- Collecting enough samples
- Optimization
25. Load Testing: How to Initially Configure
- Optimize the environment from the start.
  - Consider it your baseline configuration.
  - Draw on knowledge of embedded sub-systems.
  - Draw on previous experience with Blackboard and/or the current deployment configuration.
  - Think twice about using the out-of-the-box configuration.
26. Load Testing: Calibration
- Definition: the process of identifying an ideal workload to execute against a system (a worked example follows below).
- Blackboard Performance Engineering uses two types of calibration to identify the peak of concurrency (the key metric for identifying sessions per hour):
  - Calibrate to response time
  - Calibrate to system saturation
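For example (illustrative numbers): step the workload up in increments of 25 virtual users, holding each step long enough to collect stable samples. If response times stay below the threshold at 250 users but cross it at 275, then 250 concurrent users is the calibrated peak from which sessions per hour are derived.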
27. Load Testing: Response Time Calibration
[Chart: response time (y-axis) vs. iterations (x-axis); the optimal workload is the largest workload whose response times remain below the response time threshold line.]
28. Load Testing: Resource Saturation Calibration
[Chart: resource utilization (y-axis) vs. iterations (x-axis); the optimal workload is the point where CPU utilization approaches the resource saturation threshold.]
29. Load Testing: How to Baseline
- The baseline is the starting point, or comparative measurement:
  - Defined use cases
  - Arrival rate, departure rate, and run-time iterations
  - Software/system configuration
- Arrival rate: the rate at which virtual users are introduced onto a system.
- Departure rate: the rate at which virtual users exit the system.
- Run-time iterations: the number of unique, iterative sessions executed during a measured test.
30. Load Testing: How to Baseline
[Chart: users (y-axis) vs. time (x-axis); the workload ramps up during the arrival period, holds steady through the run-time iterations, and ramps down during the departure period.]
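As a worked example (illustrative numbers): introducing 100 virtual users at an arrival rate of 4 users per minute gives a 25-minute arrival period; if each user then executes 10 run-time iterations before departing at the same rate, the plateau between arrivals and departures supplies 1,000 cleanly measured sessions.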
31. Load Testing: How to Clean Up Between Tests
- Tests should be identical in every way.
  - Restore the environment to its previous state: remove data added by the test; truncate logs.
  - Keep the environment pristine: shut down and restart sub-systems.
- Remove all guessing and what-if questions.
- Automate these steps, because you will test more than once, and hopefully more than twice.
32. Load Testing: Samples and Measurements
[Chart: users (y-axis) vs. time (x-axis).] Samples = iterations. Response time measurement begins after arrivals complete and ends before departures begin; calibrated data is more reliable.
33. Load Testing: How to Optimize
- Measure against the baseline.
  - Instrument one data element at a time.
  - Never use results from an instrumentation run.
- Introduce one change at a time.
- Run comparative regressions against the baseline.
- Changes should be based on quantifiable measurement and explanation.
  - Avoid guessing.
  - Cut down on Googling (not everything you read on the net is true).
  - Validate improvements through repeatability.
34. What to Do with the Results of a Load Test
- Advanced capacity planning
- Operational efficiencies
- Business process optimization
35. Advanced Topic: Behavioral Modeling
- Behavioral modeling is a form of trend analysis:
  - Study navigational and transactional patterns of user activity within the application.
  - Study session lengths and click-path depths.
  - Study patterns of resource utilization and peak usage.
  - Develop a deep understanding of seasonal usage versus general usage adoption.
- Many tools exist for analyzing the collected data: Sherlog, Webalizer, WebTrends.
36. Advanced Topic: User Abandonment
- User abandonment is the simulation of a user's psychological patience when transacting with a software application.
- Two types of abandonment (a sketch follows below):
  - Uniform (all things equal)
  - Utility (an element of randomness)
- Load tests that do not simulate abandonment are flawed.
- Two great reads:
  - http://www-106.ibm.com/developerworks/rational/library/4250.html
  - http://www.keynote.com/downloads/articles/tradesecrets.pdf
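A minimal sketch of a uniform abandonment check in LoadRunner's C scripting (the 8-second patience threshold is an illustrative assumption; in practice it would come from behavioral data):

    #define PATIENCE_SECS 8.0

    /* Call while a transaction is still open; abandon the session
     * if the simulated user's patience has been exhausted. */
    void check_abandon(char *txn)
    {
        /* Duration of the still-open transaction so far. */
        if (lr_get_transaction_duration(txn) > PATIENCE_SECS) {
            lr_end_transaction(txn, LR_FAIL);
            /* The simulated user gives up and leaves the system. */
            lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
        }
    }

For utility abandonment, the threshold would instead be drawn per user from a distribution around the mean patience, introducing the element of randomness noted above.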
37. Resources and Citations
- Joines, Stacy. Performance Analysis for Java Web Sites, First Edition, Addison-Wesley, ISBN 0201844540.
- Maddox, Michael. "A Performance Process Maturity Model," 2004 CMG Proceedings.
- Barber, Scott. "Beyond Performance Testing Part 4: Accounting for User Abandonment," http://www-128.ibm.com/developerworks/rational/library/4250.html, April 2004.
- Savoia, Alberto. "Trade Secrets from a Web Site Performance Pioneer," http://www.keynote.com/downloads/articles/tradesecrets.pdf, May 2001.
