Top ten secret weapons for performance testing in an agile environment

  • In a conventional project, we focus on the functionality that needs to be delivered.
    Performance might be important, but performance requirements are considered quite separate from functional requirements.
    One approach is to attach “conditions” to story cards, i.e. this functionality must handle a certain load.
    In our experience, where performance is of critical concern, pull out the performance requirement as its own story…
  • Calling out performance requirements as their own stories allows you to:
    - validate the benefit you expect from delivering the performance
    - prioritise performance work against other requirements
    - know when you’re done
  • Not sure if you like this picture; I was really looking for a good shot looking out over no-man’s land at the Berlin Wall.
    I want to convey the idea of divisions along skill lines breeding hostility and non-cooperation.
  • Everything should be based on some foreseeable scenario, and who benefits from it

    Harder to do without repetition (involvement and feedback) [not sure if this makes sense anymore]

    Extremely important to keep people focused as it’s easy to drift

    Capture different profiles

    Separate simulation from optimisation -> Problem Identification vs Problem Resolution (or broken down further: Solution Brainstorm -> Solution Investigation)

    Linking back to why is even more essential -> map to existing problems or fears

    Latency vs throughput -> determine which is the most useful metric and define service level agreements (a small analysis sketch follows this note)
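  • A minimal analysis sketch for the latency/throughput point above (not from the talk; the results.csv layout of epoch_seconds,latency_ms and the percentile choice are assumptions for illustration):

      # Hypothetical sketch: summarise throughput and latency from a results log.
      # Assumes results.csv has one line per request: epoch_seconds,latency_ms
      # (file name and format are illustrative, not from the talk).
      import csv
      import statistics

      def summarise(results_file="results.csv"):
          timestamps, latencies = [], []
          with open(results_file) as f:
              for epoch_seconds, latency_ms in csv.reader(f):
                  timestamps.append(float(epoch_seconds))
                  latencies.append(float(latency_ms))

          duration = (max(timestamps) - min(timestamps)) or 1.0
          throughput = len(latencies) / duration          # requests per second
          p50 = statistics.median(latencies)
          p95 = sorted(latencies)[int(0.95 * (len(latencies) - 1))]

          print(f"throughput: {throughput:.1f} req/s")
          print(f"latency p50: {p50:.0f} ms, p95: {p95:.0f} ms")
          return throughput, p95

      if __name__ == "__main__":
          summarise()

    Whichever of the two numbers maps most directly onto an existing pain or fear is usually the one to write the service level agreement against.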
  • http://www.flickr.com/photos/denniskatinas/2183690848/

    Not sure which one you like better
  • Here’s an example... (in the style of Feature Injection) “What’s our upper limit?”
  • Here’s another example... (in the style of Feature Injection), “Can we handle peaks in traffic again?”

    So that we have confidence in meeting our SLA
    As the Operations Manager
    I want to ensure that a sustained peak load does not take out our service
  • It helps to be clear about who is going to benefit from any performance testing (tuning and optimisation) that is going to take place. Ensure that they get a stake in prioritisation; that will help with the next point...
  • Evidence-based decision-making. Don’t commit to a code change until you know it’s the right thing to do.
  • It helps to have the customer (mentioned in the previous slide) be a key stakeholder to prioritise.
  • Performance testing early shapes the application so that it is easier to performance test,
    just as TDD changes the design/architecture of a system
    Need to find a reference for this

    Measuring it early helps show which changes contribute to slowness

    Performance work takes longer
    Lead times potentially large and long (sequential) – think of where a Gantt chart may actually be useful
    Run it as a parallel track of work alongside normal functionality (not sequential)


    Minimal environment availability (expensive, non-concurrent use)

    Need minimal functionality or at least clearly defined interfaces to operate against (a stub sketch follows this note)

    Want to have some time to respond to feedback -> work that into the process as early as possible and potentially change architecture/design
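  • One way to act on “clearly defined interfaces to operate against” is to stand up a stub of the interface early, so load scripts and deployment automation can be driven out before the real functionality exists. A minimal sketch, assuming an HTTP/JSON endpoint (the talk does not specify the interface):

      # Hypothetical sketch: a stub of the interface under test, so performance
      # scripts and deployment automation can be exercised before the real
      # functionality exists. The endpoint and payload are illustrative.
      from http.server import BaseHTTPRequestHandler, HTTPServer
      import json

      class StubPortfolioHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Serve a canned response of roughly realistic size.
              body = json.dumps({"portfolio_value": 123456.78}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

          def log_message(self, fmt, *args):
              pass  # keep the stub quiet under load

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 8000), StubPortfolioHandler).serve_forever()

    Point the load generator at the stub first; swap in the real application later without changing the test harness.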
  • Start with the simplest performance test scenarios (a minimal smoke-test sketch follows this note)
    -> Sanity test/smoke test
    -> Hit all aspects
    -> Use to drive out automated deployment (environment limitations, configuration issues, minimal set of reporting needs – green/red)
    -> Hit integration boundaries but with a small problem rather than everything

    Next story might be a more complex script or something that drives out more of the infrastructure

    Performance stories should not be:
    -> build-out tasks
    -> things that do not enhance anything without other stories

    Log files -> decide contents early. Consumer-driven contracts for analysis. Keep them around. Keep notes on what was varied

    INVEST stories
    Avoid the large “performance test” story
    Separate types of stories
    Optimise vs Measure
    Optimise stories are riskier. Less known. “Done” is difficult to estimate
    Measure is clearer. Allows you to make better informed choices
    Know when to stop
    When enough is enough
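  • A sketch of the kind of “simplest possible” smoke test meant above: hit each aspect a handful of times and report green/red. The endpoints, request count and threshold are illustrative assumptions:

      # Hypothetical "simplest possible" performance smoke test: hit each
      # endpoint a handful of times and report green/red against a generous
      # threshold. URLs, counts and threshold are illustrative assumptions.
      import time
      import urllib.request

      ENDPOINTS = [
          "http://localhost:8000/portfolio",
          "http://localhost:8000/login",
      ]
      REQUESTS_PER_ENDPOINT = 10
      THRESHOLD_MS = 500

      def smoke_test():
          all_green = True
          for url in ENDPOINTS:
              worst_ms = 0.0
              for _ in range(REQUESTS_PER_ENDPOINT):
                  start = time.time()
                  urllib.request.urlopen(url).read()
                  worst_ms = max(worst_ms, (time.time() - start) * 1000)
              status = "GREEN" if worst_ms <= THRESHOLD_MS else "RED"
              all_green = all_green and status == "GREEN"
              print(f"{status}  {url}  worst {worst_ms:.0f} ms")
          return all_green

      if __name__ == "__main__":
          raise SystemExit(0 if smoke_test() else 1)

    A script this small is still enough to drive out environment, deployment and reporting issues before any heavier scenario is attempted.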


  • The best lessons are learned from iterating, not from incrementing. Iterate over your performance test harness, framework and test fixtures. Make it easier to increment into new areas by incrementing in a different direction each time.
    - Start with simple performance test scenarios
    - Don’t build too much infrastructure at once
    - Refine the test harness and things used to create more tests
    - Should always be delivering value
    - Identify useful features in performance testing and involve the stakeholder(s) to help prioritise them in

    Prioritise and schedule in analysis stories (metrics and graphs)

    Some of this work will still be big
  • Sashimi is nice and bite sized. You don’t eat the entire fish at once. You’re eating a part of it. Sashimi slices are nice and thin. There are a couple of different strategies linking this in.

    Think of sashimi as the thinnest possible slice.
  • Number of requests over time
  • Latency over time
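  • A sketch of the two graphs above (requests per second and latency over time), reading the same illustrative results.csv format and assuming matplotlib is available:

      # Sketch of the two graphs: requests per second and latency over time,
      # read from the same illustrative results.csv (epoch_seconds,latency_ms).
      import csv
      from collections import Counter
      import matplotlib.pyplot as plt

      seconds, latencies = [], []
      with open("results.csv") as f:
          for epoch_seconds, latency_ms in csv.reader(f):
              seconds.append(int(float(epoch_seconds)))
              latencies.append(float(latency_ms))

      per_second = Counter(seconds)                # requests binned per second
      xs = sorted(per_second)

      fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
      top.plot(xs, [per_second[x] for x in xs])
      top.set_ylabel("requests/s")
      bottom.plot(seconds, latencies, ".", markersize=2)
      bottom.set_ylabel("latency (ms)")
      bottom.set_xlabel("time (epoch seconds)")
      fig.savefig("performance_trends.png")

    Putting both plots on one figure is a small step towards the single page of results asked for next.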
  • “I don’t want to click through to each graph”
  • Automated build is a key XP practice.
    The first stage of automating a build is often to automate compilation
    However, for a typical project, we go on after compilation to run tests, as another automated step.
    In fact we may have a whole series of automated steps that chain on after each other, automating many aspects of the development process, all the way from compiling source to deploying a complete application into the production environment.
  • Automation is a powerful lever in software projects because:
    it gives us reproducible, consistent processes
    we get faster feedback when something goes wrong
    we get overall higher productivity – we can repeat an automated build much more often than we could if it were manual
  • In performance testing we can automate many of the common tasks in a similar way to how we automate a software build.
    For any performance test, there is a linear series of activities that can be automated (first row of slide)
    In our recent projects we’ve been using the build tool ant for most of our performance scripting. You could use any scripting language, but here are some very basic scripts to show you the kind of thing we mean… [possibly animate transitions to the 4 following slides] (a rough sketch of the idea follows this note)
    Once we’ve automated the running of a single test, we can move on to even more aspects of automation such as scheduling and result archiving, which leads us into…
    Continuous Performance testing.
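  • Not the ant scripts from the talk, just a rough Python sketch of the same chain (deploy, generate load, analyse), with script names and arguments made up for illustration:

      # Hedged sketch of a chained performance run: deploy the application,
      # generate load, then analyse the results. Script names and arguments
      # are illustrative, not from the talk.
      import subprocess
      import sys

      STEPS = [
          ["./deploy_application.sh", "performance-env"],
          ["python", "generate_load.py", "--users", "100", "--minutes", "30"],
          ["python", "analyse_results.py", "results.csv"],
      ]

      def run_performance_test():
          for step in STEPS:
              print("running:", " ".join(step))
              result = subprocess.run(step)
              if result.returncode != 0:
                  # Fail fast, just as a broken compile fails an automated build.
                  sys.exit(result.returncode)

      if __name__ == "__main__":
          run_performance_test()

    Once this single chain is reliable, scheduling and result archiving can hang off it (next notes).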
  • Performance tests can take a long time to run; you need all the time you can get to produce good results.
    Lean on your automation to have tests running all the time, automatically using more hardware when it is available (in the evening or at the weekend, for example).
  • For faster feedback, set up your CI server so that performance tests are always running against the latest version of the application.
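  • A hedged sketch of the scheduling and archiving end: a job the CI server (or cron) triggers overnight, running the chain against the latest build and filing results under a timestamped directory. Paths and script names are illustrative:

      # Hypothetical scheduling/archiving wrapper, intended to be triggered by
      # the CI server (or cron) overnight: run the performance chain against
      # the latest build and archive results in a timestamped directory.
      import shutil
      import subprocess
      import sys
      from datetime import datetime
      from pathlib import Path

      ARCHIVE_ROOT = Path("performance-archive")

      def nightly_run():
          run_dir = ARCHIVE_ROOT / datetime.now().strftime("%Y-%m-%d_%H%M")
          run_dir.mkdir(parents=True, exist_ok=True)

          # Run the full chain (deploy, load generation, analysis).
          result = subprocess.run(["python", "run_performance_test.py"])

          # Archive whatever the run produced, pass or fail, so trends survive.
          for artefact in ("results.csv", "performance_trends.png"):
              if Path(artefact).exists():
                  shutil.copy(artefact, run_dir / artefact)

          sys.exit(result.returncode)

      if __name__ == "__main__":
          nightly_run()

    Keeping every run’s artefacts makes the weekly “here is what we learned” showcase much easier to prepare.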
  • Transcript

    • 1. Top ten secret weapons for performance testing in an agile environment by Alistair Jones & Patrick Kua agile2009@thoughtworks.com http://connect.thoughtworks.com/agile2009/ © ThoughtWorks 2009
    • 2. Make Performance Explicit © ThoughtWorks 2009
    • 3. So that I can make better investment decisions As an investor I want to see the value of my portfolio presented on a single web page must have “good” performance, less than 0.2s page load for about 10,000 concurrent users © ThoughtWorks 2009
    • 4. © ThoughtWorks 2009 So that investors have a high- quality experience as the business grows As the Operations Manager I want the portfolio value page to render within 0.2s when 10,000 users are logged in
    • 5. One Team © ThoughtWorks 2009
    • 6. Team Dynamics © ThoughtWorks 2009
    • 7. Performance Testers Part of Team © ThoughtWorks 2009
    • 8. © ThoughtWorks 2009
    • 9. Performance Testers Part of Team © ThoughtWorks 2009
    • 10. Pair on Performance Test Stories © ThoughtWorks 2009
    • 11. Rotate Pairs © ThoughtWorks 2009
    • 12. Customer Driven © ThoughtWorks 2009
    • 13. What was a good source of requirements? © ThoughtWorks 2009
    • 14. © ThoughtWorks 2009 Existing Pain Points
    • 15. An example... © ThoughtWorks 2009
    • 16. So that we can budget for future hardware needs as we grow As the data centre manager I want to know how much traffic we can handle now © ThoughtWorks 2009
    • 17. Another example © ThoughtWorks 2009
    • 18. © ThoughtWorks 2009 So that we have confidence in meeting our SLA As the Operations Manager I want to ensure that a sustained peak load does not take out our service
    • 19. Personas © ThoughtWorks 2009
    • 20. Who is the customer? © ThoughtWorks 2009 End Users Operations Power Users Marketing Investors
    • 21. Discipline © ThoughtWorks 2009
    • 22. © ThoughtWorks 2009 Is the hypothesis valid? Change the application code Observe test results Formulate an hypothesis Design an experiment Run the experiment What do you see? Why is it doing that? How can I prove that’s what’s happening? Take the time to gather the evidence. Safe in the knowledge that I’m making it faster
    • 23. © ThoughtWorks 2009
    • 24. © ThoughtWorks 2009 Saw tooth pattern (1 minute intervals) Directory structure of (yyyy/mm/minuteofday)?. Slow down due to # of files in directory? 1 directory should result in even worse performance... We ran the test… Is the hypothesis valid? Change the application code Observe test results Formulate an hypothesis Design an experiment Run the experiment
    • 25. One Directory © ThoughtWorks 2009
    • 26. Play Performance Early © ThoughtWorks 2009
    • 27. © ThoughtWorks 2009 End Start End Other projects start performance testing here Start Agile projects start performance testing as early as possible
    • 28. Iterate Don’t (Just) Increment © ThoughtWorks 2009
    • 29. © ThoughtWorks 2009
    • 30. We Sashimi © ThoughtWorks 2009
    • 31. Sashimi Slice By... Presentation © ThoughtWorks 2009
    • 32. © ThoughtWorks 2009 So that I can better see trends in performance As the Operations Manager I want a graph of requests per second
    • 33. © ThoughtWorks 2009 So that I can better see trends in performance As the Operations Manager I want a graph of average latency per second
    • 34. © ThoughtWorks 2009 So that I can easily scan results at a single glance As the Operations Manager I want a one page showing all results
    • 35. Sashimi Slice By... Scenario © ThoughtWorks 2009
    • 36. © ThoughtWorks 2009 So that we never have a day like “October 10” As the Operations Manager I want to ensure that a sustained peak load does not take out our service
    • 37. © ThoughtWorks 2009 So that we never have a day like “November 12” As the Operations Manager I want to ensure that an escalating load up to xxx requests/second does not take out our service
    • 38. Automate, Automate, Automate © ThoughtWorks 2009
    • 39. © ThoughtWorks 2009 Automated Compilation Automated Tests Automated Packaging Automated Deployment
    • 40. Automation => Reproducible and Consistent Automation => Faster Feedback Automation => Higher Productivity Why Automation? © ThoughtWorks 2009
    • 41. © ThoughtWorks 2009 Automated Application Deployment Automated Load Generation Automated Test Orchestration Automated Analysis Automated Scheduling Automated Result Archiving
    • 42. Continuous Performance Testing © ThoughtWorks 2009
    • 43. © ThoughtWorks 2009
    • 44. Application Build Pipelines © ThoughtWorks 2009 Performance Build RPM Functional Test Compile & Unit Test
    • 45. © ThoughtWorks 2009
    • 46. Test Drive Your Performance Test Code © ThoughtWorks 2009
    • 47. V Model Testing © ThoughtWorks 2009 http://en.wikipedia.org/wiki/V-Model_(software_development) Performance Testing Slower + Longer Fast Speed
    • 48. We make mistakes © ThoughtWorks 2009
    • 49. V Model Testing © ThoughtWorks 2009 http://en.wikipedia.org/wiki/V-Model_(software_development) Performance Testing Slower + Longer Fast Speed Unit test performance code to fail faster
    • 50. Fail Fast! © ThoughtWorks 2009 Fast feedback! Faster learning Faster results
    • 51. Classic Performance Areas to Test © ThoughtWorks 2009 Analysis Information Collection Visualisation Publishing Presentation
    • 52. Get Feedback © ThoughtWorks 2009
    • 53. Frequently (Weekly) Showcase © ThoughtWorks 2009 Here is what we learned this week....
    • 54. Frequently (Weekly) Showcase © ThoughtWorks 2009 And based on this... We changed our directory structure.
    • 55. Frequently (Weekly) Showcase © ThoughtWorks 2009 Should we do something different knowing this new information?
    • 56. List of All Secret Weapons 1. Make Performance Explicit 2. One Team 3. Customer Driven 4. Discipline 5. Play Performance Early 6. Iterate Don't (Just) Increment 7. Automate, Automate, Automate 8. Test Drive Your Performance Code 9. Continuous Performance Testing 10. Get Feedback © ThoughtWorks 2009
    • 57. • Talk to us tonight at... ThoughtWorks’ Agile Open Office @ 7pm: http://tiny.cc/uqtLa • Email us... agile2009@thoughtworks.com • Visit our website: http://www.thoughtworks.com • Leave your business card at the back Photo Credits (Creative Commons licence) • Barbed wire picture: http://www.flickr.com/photos/lapideo/446201948/ • Eternal clock: http://www.flickr.com/photos/robbie73/3387189144/ • Sashimi from http://www.flickr.com/photos/mac-ash/3719114621/ For more information © ThoughtWorks 2009
