An Introduction To Load Testing: A Blackboard Primer

    Presentation Transcript

    • An Introduction to Load Testing: A Blackboard Primer. Steve Feldman, Director of Performance Engineering and Architecture, Blackboard Inc. July 18th, 3pm
    • What is Software Performance Engineering? SPE divides performance work into two views: the Software Execution Model and the System Execution Model.
      • Response time is the absolutely critical performance measurement.
      • Emphasis on optimizing the application business logic.
      • Design pattern implementation is the primary concern.
      The Software Execution Model
      • Response time remains a critical performance measurement.
      • Emphasis on optimizing the deployment environment.
      • System resource utilization is the primary concern.
      The System Execution Model
      • Awareness of system performance peaks and valleys.
      • Knowledge of capacity planning needs.
      • All of the data is available, but little is done other than basic optimization.
      • Looking to extend performance management via environmental optimization.
      The System Execution Model and the Performance Maturity Model:
        • Level 1: Reactive Fire Fighting
        • Level 2: Monitoring and Instrumenting
        • Level 3: Performance Optimizing
        • Level 4: Business Optimizing
        • Level 5: Process Optimizing
    • How Do We Optimize our Environment Using the System Execution Model?
      • Study existing behavior, adoption, growth and system resource utilization patterns.
      • Measure live system response times during periods of variations for watermark analysis.
      • Simulate synthetic load mocking the usage patterns of the deployment.
    • Introduction to Load Testing
      • What is load testing?
      • Why do we load test?
      • What tools do we use?
      • Preparing for load testing?
      • How do we load test?
      • What to do with the results of a load test?
    • What is Load Testing?
      • Load testing is a controlled method of exercising artificial workload against a running system.
        • A system can be hardware or software oriented.
        • A system can be both.
      • Load testing can be executed in a manual or automated fashion.
        • Automated load testing mitigates inconsistencies without compromising the scientific reliability of the data.
    • Why do we Load Test?
      • Most load tests are executed with false intentions, treating the test as a vague performance barometer rather than measuring against defined objectives.
      • Understanding the impact of response times for predictable behavioral conditions and scenarios.
      • Understanding the impact of response times for patterns of adoption and growth.
      • Understanding the resource demands of the deployment environment.
    • What tools do we use?
      • Commercial Tools
        • Mercury LoadRunner (Currently Used at Blackboard)
        • Segue SilkPerformer (Formerly Used at Blackboard)
        • Rational Performance Studio
      • Freeware Tools
        • Grinder (Occasionally Used at Blackboard)
        • Apache JMeter (Occasionally Used at Blackboard)
        • OpenSTA
      • Great Resources for Picking a Load Testing Tool
        • Performance Analysis for Java Web Sites by Stacy Joines (ISBN: 0201844540)
        • http://www.testingfaqs.org/t-load.html
    • Preparing for load testing?
      • Define Performance Objectives
      • Use Case Definition
      • Performance Scenarios
      • Data Modeling
      • Scripting and Parameterization
    • Define Performance Objectives
      • Every Load Test Should Have a Purpose of Measurement.
      • Common Objectives
        • Sessions Per Hour
        • Transactions Per Hour
        • Throughput Per Transaction and Per Hour
        • Response Time Calibration
        • Resource Saturation Calibration
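The objectives above are arithmetic at heart. A minimal sketch, in Python with purely hypothetical numbers, of how sessions-per-hour and transactions-per-hour objectives can be derived from response time, think time, and session length (none of these figures come from the slides):

```python
def transactions_per_hour(avg_response_s: float, think_time_s: float) -> float:
    """Transactions one virtual user can complete per hour:
    each transaction costs response time plus think time."""
    return 3600.0 / (avg_response_s + think_time_s)

def sessions_per_hour(concurrent_users: int, session_length_s: float) -> float:
    """Little's-law style estimate: concurrent users divided by
    session length, scaled to an hour."""
    return concurrent_users * 3600.0 / session_length_s

# Hypothetical: 2 s responses with 8 s think time -> 360 transactions/hour/user.
tph = transactions_per_hour(2.0, 8.0)
# Hypothetical: 200 concurrent users with 15-minute sessions -> 800 sessions/hour.
sph = sessions_per_hour(200, 900.0)
print(tph, sph)
```

Working the objective backward like this also gives a sanity check on calibration results: if the measured sessions-per-hour falls far below the arithmetic ceiling, the system, not the workload, is the limit.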
    • Define Use Cases
      • Use Cases should be prioritized based on the following:
        • Criticality of Execution
        • Observation of Execution (Behavioral Modeling)
        • Expectation of Adoption
        • Baseline Analysis
    • Define Performance Scenarios
      • Collection of one or more use cases sequenced in a logical manner (compilation of a user session)
      • Scenarios should be realistic in nature and based on recurring patterns identified in session behavior models.
        • Avoid simulating extraneous workload.
        • Iterate when necessary.
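A scenario as described above is just an ordered sequence of use cases replayed as one user session. A minimal Python sketch (the step names mirror the slide's later example functions and are otherwise hypothetical):

```python
# Hypothetical use-case steps; a real script would issue HTTP requests here.
def authenticate(session): session.append("Authenticate")
def nav_portal(session):   session.append("NavPortal")
def read_doc(session):     session.append("ReadDoc")

def run_scenario(steps, iterations=1):
    """A scenario: one or more use cases sequenced in a logical manner,
    iterated to form a complete user session."""
    session = []
    for _ in range(iterations):
        for step in steps:
            step(session)
    return session

clicks = run_scenario([authenticate, nav_portal, read_doc], iterations=2)
print(clicks)
```

Keeping each use case its own function is what makes "iterate when necessary" cheap: the scenario, not the steps, decides repetition.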
    • Design an Accurate Data Model
      • Uniform in Construction
        • Naming Conventions (Sequential)
          • User000000001, LargeCourse000000001, 25-MC-QBQ-Assessment, XSmallMsg000000001, etc…
        • Data Constructions (Uniform for Testability)
          • XSmallMsg000000001 Contains 100 Characters of Text
          • XLargeMsg000000001 Contains 1000 Characters of Text (Factor of 10X)
      • Multi-Dimensional Data Model
        • Fragmented in Nature
          • Data Conditions for Testability
          • Scenarios for Testability
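The uniform naming and sizing conventions above are easy to generate rather than hand-author. A small Python sketch (widths and sizes follow the slide's examples; the generator itself is illustrative):

```python
def make_names(prefix: str, count: int, width: int = 9) -> list:
    """Sequential, zero-padded names: User000000001, User000000002, ..."""
    return [f"{prefix}{i:0{width}d}" for i in range(1, count + 1)]

def make_message(body_chars: int) -> str:
    """Uniform message body of exactly body_chars characters, so test
    data sizes differ by a known factor rather than by accident."""
    return "x" * body_chars

users = make_names("User", 3)
xsmall = make_message(100)    # XSmallMsg payload: 100 characters
xlarge = make_message(1000)   # XLargeMsg payload: a factor of 10x
print(users[0], len(xsmall), len(xlarge))
```

Sequential names are what make parameterization trivial later: a script can derive student000000042 from an iteration counter instead of maintaining a lookup table.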
    • Scripting and Parameterization
      • Script Programmatically
        • Focus on Reusability, Encapsulation and Testability
        • Componentize the Action Step of the Use Case
        • Use Explicit Naming Conventions
        • Example: Measure the Performance of Opening a Word Document.
          • Authenticate(), NavPortal(), NavCourse(), NavCourseMenu(Docs), ReadDoc()
    • Scripting and Parameterization: Example
      /**
       * Quick Navigation: Click on Course Menu
       * Params Required: course_id
       * Params Saved: none
       */
      CourseMenu()
      {
          static char *status = "Course Menu: Course Menu";
          bb_status(status);
          lr_start_transaction(status);
          bb_web_url("{bb_target_url}/webapps/blackboard/content/courseMenu.jsp?mini=Y&course_id={bb_course_pk}", 'l');
          lr_end_transaction(status, LR_AUTO);
          lr_think_time(navigational);
      }
      Callouts: code comments; reusable action name; echoed status and transaction; HTTP request with parameterization and abandonment.
    • Scripting and Parameterization
      • Parameterize Dynamically
        • Realistic load simulations test against unique data conditions.
        • Avoid hard-coding dynamic or user-defined data elements.
        • Work with uniform, well-constructed data sets (sequences)
        • Example: Parameterize the username for Authentication().
          • student000000001, student000000002, instructor000000001, admin000000001, observer000000001, hs_student000000001
    • Scripting and Parameterization: Example
      // Save various folder pks and go to the course menu folder
      web_reg_save_param("course_assessments_pk", "NotFound=Warning", "LB=content_id=_", "RB=_1&mode=reset\" target=\"main\">Assessments", LAST);
      web_reg_save_param("course_documents_pk", "NotFound=Warning", "LB=content_id=_", "RB=_1&mode=reset\" target=\"main\">Course Documents", LAST);
      Callouts: the parameter name is course_assessments_pk; NotFound=Warning issues a warning instead of failing the script; the value is captured between the left boundary (LB) and right boundary (RB). These are LoadRunner terms, but other scripting tools use the same constructs.
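The left-boundary/right-boundary capture that web_reg_save_param performs is simple to express outside LoadRunner. A minimal Python sketch of the same idea (the function name and HTML snippet are hypothetical):

```python
def save_param(body: str, lb: str, rb: str):
    """Capture the text between a left and right boundary, in the spirit of
    web_reg_save_param. Returns None (the 'NotFound' case) when either
    boundary is absent from the response body."""
    start = body.find(lb)
    if start < 0:
        return None
    start += len(lb)
    end = body.find(rb, start)
    if end < 0:
        return None
    return body[start:end]

# Hypothetical response fragment containing a dynamic course content id.
html = '<a href="courseMenu.jsp?content_id=_42_1&mode=reset" target="main">Assessments</a>'
pk = save_param(html, "content_id=_", "_1&mode=reset")
print(pk)  # -> 42
```

Capturing dynamic ids this way, per iteration, is what keeps a script working when the deployment regenerates its primary keys.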
    • Scripting and Parameterization: Blackboard Gotchas
      • RDBMS Authentication
        • One Time Token
        • MD5 Encrypted Password
        • MD5 (MD5 Encrypted Password + One Time Token)
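The one-time-token scheme above can be sketched in a few lines of Python. MD5 appears here only because the slide names it; this is an illustration of the digest construction, not a recommendation, and the inputs are hypothetical:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def login_digest(password: str, one_time_token: str) -> str:
    """MD5(MD5(password) + token): the MD5 password hash is re-hashed with
    a per-request one-time token, so a load script must capture the token
    from the login page and recompute this digest on every iteration."""
    return md5_hex(md5_hex(password) + one_time_token)

digest = login_digest("s3cret", "token123")  # hypothetical credentials
print(len(digest))  # 32 hex characters
```

This is exactly the kind of "gotcha" that breaks naive record/replay: a recorded digest embeds a stale token, so the replayed login fails until the token is parameterized.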
    • Scripting and Parameterization: Blackboard Gotchas
      • Navigational Concerns
        • Dynamic ID’s
          • Tab IDs
          • Content IDs
          • Course IDs
          • Tool IDs
        • Modes
          • Reset
          • Quick
          • View
        • Action Steps
          • Manage, CaretManage, Copy, Remove_Proc
          • Family
    • Scripting and Parameterization: Blackboard Gotchas
      • Transactional Concerns
        • HTTP Post
          • Multiple ID submissions
          • Action Steps
          • Data Values
          • Permissions
          • Metadata
    • Scripting and Parameterization: Blackboard Gotchas (Example)
      /**
       * User Modifies Grade and Submits Change
       * Params Required: Random Number from 0 to 100
       * Params Saved: none
       */
      ModifyGrade()
      {
          static char *status = "Modify Grades";
          bb_status(status);
          start_timer();
          lr_start_transaction(status);
          web_submit_form("itemGrades",
              "Snapshot=t28.inf",
              ITEMDATA,
              "Name=grade[0]", "Value={randGrade}", ENDITEM,
              "Name=grade[1]", "Value={randGrade}", ENDITEM,
              "Name=grade[2]", "Value={randGrade}", ENDITEM,
              "Name=grade[3]", "Value={randGrade}", ENDITEM,
              "Name=submit.x", "Value=41", ENDITEM,
              "Name=submit.y", "Value=5", ENDITEM, LAST);
          lr_end_transaction(status, LR_AUTO);
          stop_timer();
          abandon('h');
          lr_think_time(transactional);
      }
      Callouts: transaction timer; dynamic parameterized values; explicit abandonment policy and parameterized think time.
    • How do we load test?
      • Initial Configuration
      • Calibration
      • Baseline
      • Environmental Clean-Up
      • Collecting Enough Samples
      • Optimization
    • Load Testing: How To Initially Configure
      • Optimize the Environment from the Start
        • Consider it your baseline configuration
        • Knowledge of embedded sub-systems
        • Previous Experience with Blackboard and/or current deployment Configuration
        • Think twice about using the out of the box configuration.
    • Load Testing: Calibration
      • Definition: The process of identifying an ideal workload to execute against a system.
      • Blackboard Performance Engineering uses two types of Calibration.
        • Identify Peak of Concurrency (Key Metric for Identifying Sessions per Hour)
          • Calibrate to Response Time
          • Calibrate to System Saturation
    • Load Testing: Response Time Calibration
      [Chart: iterations on the x-axis, response time on the y-axis; the optimal workload is where the curve meets the response time threshold line.]
    • Load Testing: System Saturation Calibration
      [Chart: iterations on the x-axis, resource utilization on the y-axis; the optimal workload is where CPU usage meets the resource saturation threshold.]
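Response-time calibration reduces to scanning measured workload steps for the last one still under the threshold. A minimal Python sketch with hypothetical measurements (a real calibration would use your own stepped test results):

```python
def calibrate(samples, threshold_s):
    """Largest workload whose measured response time stays at or under
    the threshold. samples: (virtual_users, response_time_s) pairs,
    ordered by increasing workload."""
    optimal = 0
    for users, rt in samples:
        if rt <= threshold_s:
            optimal = users
        else:
            break
    return optimal

# Hypothetical stepped-load measurements: response time degrades as users grow.
measured = [(50, 0.8), (100, 1.1), (150, 1.9), (200, 3.5), (250, 7.2)]
print(calibrate(measured, threshold_s=2.0))  # -> 150
```

Saturation calibration is the same scan with resource utilization (e.g. CPU percentage) in place of response time.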
    • Load Testing: How To Baseline
      • The baseline is the starting point or comparative measurement
        • Defined Use Cases
        • Arrival Rate, Departure Rate and Run-Time iterations
        • Software/System Configuration.
      • Arrival Rate: The rate at which virtual users are introduced to the system.
      • Departure Rate: The rate at which virtual users exit the system.
      • Run-Time Iterations: The number of unique, iterative sessions executed during a measured test.
    • Load Testing: How To Baseline
      [Chart: time on the x-axis, users on the y-axis; the workload ramps up during the arrival period, holds steady through the run-time iterations, and ramps down during the departure period.]
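The arrival/steady/departure shape of a baseline run is easy to model. A Python sketch computing active virtual users at any point in the run (rates and durations below are hypothetical):

```python
def active_users(t, total_users, arrival_rate, steady_s, departure_rate):
    """Active virtual users at time t (seconds) for a ramp-up / steady /
    ramp-down workload. Rates are in users per second."""
    ramp_up = total_users / arrival_rate
    if t < ramp_up:                      # arrival period
        return int(t * arrival_rate)
    if t < ramp_up + steady_s:           # run-time iterations
        return total_users
    gone = (t - ramp_up - steady_s) * departure_rate
    return max(0, int(total_users - gone))  # departure period

# Hypothetical: 100 users arriving at 1/s, 600 s steady, leaving at 2/s.
print(active_users(50,  100, 1.0, 600, 2.0))   # mid ramp-up
print(active_users(300, 100, 1.0, 600, 2.0))   # steady plateau
print(active_users(725, 100, 1.0, 600, 2.0))   # mid ramp-down
```

Making the three parameters explicit is what makes the baseline repeatable: a later run with the same arrival rate, iterations, and departure rate is comparable; one with different ramps is not.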
    • Load Testing: How To Clean-Up between Tests
      • Tests Should Be Identical in Every Way
        • Restore the environment to its previous state
          • Remove Data Added from Test
          • Truncate Logs
        • Keep the Environment Pristine
          • Shutdown and Restart Sub-Systems
      • Remove All Guessing and What If Questions
      • Automate these steps, because you will test more than once and hopefully more than twice.
    • Load Testing: Samples and Measurements
      [Chart: time on the x-axis, users on the y-axis; samples = iterations. Response time measurement begins after arrivals complete and ends before departures start; calibrated data is more reliable.]
    • Load Testing: How To Optimize
      • Measure Against Baseline
        • Instrument 1 data element at a time
        • Never use results from an instrumentation run
      • Introduce One Change at a Time
      • Comparative Regression Against Baseline
      • Changes Should be Based on Quantifiable Measurement and Explanation
        • Avoid Guessing
        • Cut Down on Googling (Not Everything You Read on the Net is True)
        • Validate improvements through repeatability
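"Comparative regression against baseline" is, mechanically, a per-transaction percent-change report on runs that differ by exactly one configuration change. A minimal Python sketch with hypothetical transaction timings:

```python
def compare_to_baseline(baseline, candidate, tolerance=0.05):
    """Percent change per transaction vs. the baseline run; a transaction
    is flagged as a regression when it slows beyond the tolerance.
    Only meaningful when the two runs differ by a single change."""
    report = {}
    for name, base_rt in baseline.items():
        delta = (candidate[name] - base_rt) / base_rt
        report[name] = (round(delta * 100, 1), delta > tolerance)
    return report

# Hypothetical mean response times (seconds) from two otherwise-identical runs.
baseline  = {"Authenticate": 0.40, "ReadDoc": 1.20}
candidate = {"Authenticate": 0.38, "ReadDoc": 1.50}
print(compare_to_baseline(baseline, candidate))
```

A quantified report like this is what replaces guessing: an improvement only counts once the same delta shows up across repeated runs.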
    • What to do with the Results of a Load Test
      • Advanced Capacity Planning
      • Operational Efficiencies
      • Business Process Optimization
    • Advanced Topic: Behavioral Modeling
      • Behavior modeling is a form of trend analysis.
        • Study navigational and transactional patterns of user activity within the application.
        • Session lengths and click path depths
        • Study patterns of resource utilization and peak usage
        • Deep understanding of seasonality usage versus general usage adoption.
      • Many tools to diagnose collected data.
        • Sherlog, Webalizer, WebTrends
    • Advanced Topic: User Abandonment
      • User Abandonment is the simulation of a user’s psychological patience when transacting with a software application.
      • Two Types of Abandonment:
        • Uniform (All things equal)
        • Utility (Element of randomness)
      • Load tests that do not simulate abandonment are flawed.
      • Two Great Reads
        • http://www-106.ibm.com/developerworks/rational/library/4250.html
        • http://www.keynote.com/downloads/articles/tradesecrets.pdf
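The two abandonment models above can be sketched directly. In the uniform model every virtual user shares one patience threshold; in the utility model each user draws a random patience, adding the element of randomness the slide mentions. All thresholds below are hypothetical:

```python
import random

def abandons(response_time_s, patience_s):
    """A virtual user abandons when the response outlasts their patience."""
    return response_time_s > patience_s

def uniform_abandonment(response_times, patience_s=8.0):
    """Uniform model: all things equal, one shared patience threshold."""
    return sum(abandons(rt, patience_s) for rt in response_times)

def utility_abandonment(response_times, low_s=4.0, high_s=12.0, seed=1):
    """Utility model: each user draws a random patience in [low_s, high_s].
    Seeded so a test run remains repeatable."""
    rng = random.Random(seed)
    return sum(abandons(rt, rng.uniform(low_s, high_s)) for rt in response_times)

# Hypothetical page response times (seconds) across four users.
times = [2.0, 5.0, 9.0, 15.0]
print(uniform_abandonment(times))   # -> 2 (the 9 s and 15 s responses exceed 8 s)
print(utility_abandonment(times))
```

A script that never abandons keeps slow virtual users in the system forever, which is exactly why the slide calls such tests flawed: real users leave, and their leaving changes the load.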
    • Resources and Citations
      • Joines, Stacy. Performance Analysis for Java Web Sites, First Edition, Addison-Wesley, ISBN: 0201844540.
      • Maddox, Michael. “A Performance Process Maturity Model,” 2004 CMG Proceedings.
      • Barber, Scott. “Beyond Performance Testing Part 4: Accounting for User Abandonment,” http://www-128.ibm.com/developerworks/rational/library/4250.html , April 2004.
      • Savoia, Alberto. http://www.keynote.com/downloads/articles/tradesecrets.pdf , May 2001.