Top Ten Secret Weapons for Agile Performance Testing
by Patrick Kua
patrick.kua@thoughtworks.com
http://www.thekua.com/atwork/presentations-and-papers/
© ThoughtWorks 2010

1. Make Performance Explicit

So that I can make better investment decisions
As an investor
I want to see the value of my portfolio presented on a single web page
Must have "good" performance: less than 0.2s page load for about 10,000 concurrent users

So that investors have a high-quality experience as the business grows
As the Operations Manager
I want the portfolio value page to render within 0.2s when 10,000 users are logged in
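With the target stated explicitly on the story, it can also be checked mechanically. A minimal sketch, assuming a hypothetical portfolio endpoint and a deliberately scaled-down user count (a real run at 10,000 concurrent users would use a proper load-testing tool and environment):

```python
# Hypothetical sketch: turn the explicit target ("render within 0.2s")
# into an automated check. The URL, user count, and request volume are
# illustrative assumptions, not the deck's actual setup.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/portfolio"   # assumed endpoint
CONCURRENT_USERS = 50                      # scaled down from 10,000 for illustration
TARGET_SECONDS = 0.2

def timed_request(_):
    start = time.time()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = sorted(pool.map(timed_request, range(CONCURRENT_USERS * 10)))

# Check the 95th percentile against the explicit target from the story.
p95 = latencies[int(len(latencies) * 0.95)]
print(f"95th percentile: {p95:.3f}s (target {TARGET_SECONDS}s)")
assert p95 <= TARGET_SECONDS, "Performance story not satisfied"
```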
2. One Team

Team Dynamics
Performance Testers Part of Team
Pair on Performance Test Stories
Rotate Pairs

3. Customer Driven

What was a good source of requirements?
Existing Pain Points
An example...

So that we can budget for future hardware needs as we grow
As the data centre manager
I want to know how much traffic we can handle now

Another example

So that we have confidence in meeting our SLA
As the Operations Manager
I want to ensure that a sustained peak load does not take out our service

Personas
Who is the customer? Investors, Marketing, End Users, Power Users, Operations

4. Discipline
Observe test results: What do you see?
Formulate a hypothesis: Why is it doing that?
Design an experiment: How can I prove that's what's happening?
Run the experiment: Take the time to gather the evidence. Is the hypothesis valid?
Change the application code, safe in the knowledge that I'm making it faster.

??????????
Observe test results: a saw-tooth pattern (1 minute intervals).
Formulate a hypothesis: directory structure of (yyyy/mm/minuteofday)? Slowdown due to the number of files in a directory?
Design an experiment: a single directory should result in even worse performance...
Run the experiment: we ran the test... Is the hypothesis valid?
Change the application code.
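A minimal sketch of the kind of experiment that step describes, assuming the hypothesis is that writes slow down as a single directory fills with files; the paths, file counts, and flat-file workload are illustrative, not the original project's:

```python
# Hedged experiment sketch: does write time degrade as the number of files
# in one directory grows? If each batch takes noticeably longer than the
# last, the hypothesis gains support.
import os
import tempfile
import time

directory = tempfile.mkdtemp()
batch_size = 5_000
for batch in range(5):
    start = time.time()
    for i in range(batch_size):
        name = os.path.join(directory, f"file_{batch * batch_size + i}.dat")
        with open(name, "wb") as f:
            f.write(b"x")
    elapsed = time.time() - start
    total = (batch + 1) * batch_size
    print(f"{total:>6} files in directory: batch took {elapsed:.2f}s")
```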
One Directory

5. Play Performance Early
Other projects start performance testing near the end of the project timeline. Agile projects start performance testing as early as possible.
6. Iterate Don't (Just) Increment
We ♥ Sashimi
Sashimi Slice By... Presentation

So that I can better see trends in performance
As the Operations Manager
I want a graph of requests per second

So that I can better see trends in performance
As the Operations Manager
I want a graph of average latency per second
So that I can easily scan results at a single glance
As the Operations Manager
I want a single page showing all results
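One possible shape for the analysis behind these three stories, assuming a hypothetical results log of "epoch_second,latency_ms" lines (an assumption, not the deck's format): it derives requests per second and average latency per second, and writes both graphs to a single page.

```python
# Illustrative analysis sketch: requests/second and average latency/second
# from a simple results log, plotted together on one page.
from collections import defaultdict
import matplotlib.pyplot as plt

counts = defaultdict(int)
latency_totals = defaultdict(float)

with open("results.log") as log:          # hypothetical results file
    for line in log:
        second, latency_ms = line.strip().split(",")
        counts[int(second)] += 1
        latency_totals[int(second)] += float(latency_ms)

seconds = sorted(counts)
requests_per_second = [counts[s] for s in seconds]
avg_latency = [latency_totals[s] / counts[s] for s in seconds]

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(seconds, requests_per_second)
ax1.set_ylabel("requests/second")
ax2.plot(seconds, avg_latency)
ax2.set_ylabel("avg latency (ms)")
ax2.set_xlabel("time (s)")
fig.savefig("results_summary.png")        # all results at a single glance
```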
Sashimi Slice By... Scenario

So that we never have a day like "October 10"
As the Operations Manager
I want to ensure that a sustained peak load does not take out our service

So that we never have a day like "November 12"
As the Operations Manager
I want to ensure that an escalating load up to xxx requests/second does not take out our service
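A rough sketch of the escalating-load scenario, stepping up the number of worker threads each interval and counting failures; the URL, step sizes, and durations are illustrative assumptions:

```python
# Minimal step-load generator: raise the number of concurrent workers each
# step and watch whether the service keeps answering.
import threading
import time
import urllib.request

URL = "http://localhost:8080/portfolio"    # assumed endpoint
STEP_DURATION = 30                         # seconds per load step
STEPS = [10, 50, 100, 200]                 # concurrent workers per step

def worker(stop_event, errors):
    while not stop_event.is_set():
        try:
            urllib.request.urlopen(URL, timeout=5).read()
        except Exception:
            errors.append(time.time())

for workers in STEPS:
    stop_event = threading.Event()
    errors = []
    threads = [threading.Thread(target=worker, args=(stop_event, errors))
               for _ in range(workers)]
    for t in threads:
        t.start()
    time.sleep(STEP_DURATION)
    stop_event.set()
    for t in threads:
        t.join()
    print(f"{workers} workers: {len(errors)} errors in {STEP_DURATION}s")
```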
7. Automate, Automate, Automate

Automated Compilation → Automated Tests → Automated Packaging → Automated Deployment

Why Automation?
Automation => Reproducible and Consistent
Automation => Faster Feedback
Automation => Higher Productivity

Automated Test Orchestration, Automated Analysis, Automated Scheduling, Automated Load Generation, Automated Application Deployment, Automated Result Archiving
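The editor's notes mention driving these steps with very basic Ant scripts; as an illustration only, here is an equivalent orchestration sketch in Python. Every script name and path is hypothetical:

```python
# Rough orchestration sketch: chain the automated steps of a performance
# run (deploy, generate load, analyse, archive), failing fast if any step
# breaks. All commands and paths are placeholders.
import shutil
import subprocess
import time

def run(step, command):
    print(f"== {step} ==")
    subprocess.run(command, check=True)    # stop the run if a step fails

run("deploy application", ["./deploy.sh", "perf-environment"])
run("generate load",      ["python", "generate_load.py", "--scenario", "peak"])
run("analyse results",    ["python", "analyse_results.py", "results.log"])

# Archive the run so results stay comparable over time.
archive = f"archive/perf-run-{time.strftime('%Y%m%d-%H%M%S')}"
shutil.copytree("results", archive)
print(f"results archived to {archive}")
```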
8. Continuous Performance Testing
Build Pipelines: Application and Performance
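A hedged sketch of the "continuous" part: a CI stage compares the latest performance results with a stored baseline and fails the build on a regression. The metric files and the 10% tolerance are illustrative assumptions:

```python
# Hypothetical CI gate: fail the pipeline when a key metric regresses
# beyond a tolerance relative to the stored baseline.
import json
import sys

TOLERANCE = 1.10   # allow up to 10% slowdown before failing the pipeline

with open("baseline.json") as f:           # hypothetical stored baseline
    baseline = json.load(f)
with open("latest_results.json") as f:     # produced by the latest test run
    latest = json.load(f)

failures = []
for metric, allowed in baseline.items():
    observed = latest.get(metric)
    if observed is not None and observed > allowed * TOLERANCE:
        failures.append(f"{metric}: {observed} vs baseline {allowed}")

if failures:
    print("Performance regression detected:")
    print("\n".join(failures))
    sys.exit(1)                            # break the build, get feedback now
print("Performance within tolerance of baseline")
```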
9. Test Drive Your Performance Test Code
V Model Testing: performance testing sits at the slower, longer-running end of the feedback-speed spectrum. http://en.wikipedia.org/wiki/V-Model_(software_development)
We make mistakes
V Model Testing: unit test your performance code to fail faster, rather than waiting for slow, long-running performance test runs. http://en.wikipedia.org/wiki/V-Model_(software_development)
Classic Performance Areas to Test: Analysis, Information Collection, Presentation, Publishing, Visualisation
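As an illustration of test-driving the analysis area: a unit test for a tiny (hypothetical) result-line parser fails in milliseconds, long before a slow performance run would expose the same bug.

```python
# Sketch of unit-testing performance test code itself. The parser and its
# log format are hypothetical, not the deck's actual tooling.
import unittest

def parse_result_line(line):
    """Parse 'epoch_second,latency_ms' into (int, float)."""
    second, latency_ms = line.strip().split(",")
    return int(second), float(latency_ms)

class ParseResultLineTest(unittest.TestCase):
    def test_parses_timestamp_and_latency(self):
        self.assertEqual(parse_result_line("1270000000,12.5\n"), (1270000000, 12.5))

    def test_rejects_malformed_lines(self):
        with self.assertRaises(ValueError):
            parse_result_line("not a result line")

if __name__ == "__main__":
    unittest.main()
```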
10. Get Feedback

Frequently (Weekly) Showcase: "Here is what we learned this week..."
Frequently (Weekly) Showcase: "And based on this... we changed our directory structure."
Frequently (Weekly) Showcase: "Should we do something different knowing this new information?"

List of All Secret Weapons:
Make Performance Explicit
One Team
Customer Driven
Discipline
Play Performance Early
Iterate Don't (Just) Increment
Automate, Automate, Automate
Test Drive Your Performance Code
Continuous Performance Testing
Get Feedback

Photo Credits (Creative Commons licence):
Barbed wire picture: http://www.flickr.com/photos/lapideo/446201948/
Eternal clock: http://www.flickr.com/photos/robbie73/3387189144/
Sashimi: http://www.flickr.com/photos/mac-ash/3719114621/

Questions?


Editor's Notes

  • #4 In a conventional project, we focus on the functionality that needs to be delivered. Performance might be important, but performance requirements are considered quite separate from functional requirements. One approach is to attach "conditions" to story cards, i.e. this functionality must handle a certain load. In our experience, where performance is of critical concern, pull out the performance requirement as its own story…
  • #5 Calling out performance requirements as their own stories allows you to: validate the benefit you expect from delivering the performance; prioritise performance work against other requirements; and know when you're done.
  • #9 Not sure if you like this picture; I was really looking for a good shot looking out over no-man's land at the Berlin Wall. I want to convey the idea of divisions along skill lines breeding hostility and non-cooperation.
  • #13 Everything should be based on some foreseeable scenario, and on who benefits from it. Harder to do without repetition (involvement and feedback) [not sure if this makes sense anymore]. Extremely important to keep people focused, as it's easy to drift. Capture different profiles. Separate simulation from optimisation: Problem Identification vs Problem Resolution (or broken down further, Solution Brainstorm -> Solution Investigation). Linking back to why is even more essential: map to existing problems or fears. Latency vs throughput: determine which is the most useful metric and define service level agreements.
  • #15 http://www.flickr.com/photos/denniskatinas/2183690848/ (Not sure which one you like better.)
  • #17 Here’s an example... (in the style of Feature Injection) “What’s our upper limit?”
  • #19 Here's another example... (in the style of Feature Injection): "Can we handle peaks in traffic again?" So that we have confidence in meeting our SLA, as the Operations Manager I want to ensure that a sustained peak load does not take out our service.
  • #21 It helps to be clear about who is going to benefit from any performance testing (tuning and optimisation) that is going to take place. Ensure that they get a stake in prioritisation; that will help with the next point...
  • #23 Evidence-based decision-making. Don’t commit to a code change until you know it’s the right thing to do.
  • #25 Evidence-based decision-making. Don’t commit to a code change until you know it’s the right thing to do.
  • #27 It helps to have the customer (mentioned in the previous slide) be a key stakeholder to prioritise.
  • #28 Starting early makes the application better able to be performance tested, much like TDD changes the design/architecture of a system (need to find a reference for this). Measuring early helps reveal which changes contribute to slowness. Performance work takes longer: lead times are potentially large and sequential (think of where a Gantt chart may actually be useful), so run it as a parallel track of work alongside normal functionality rather than sequentially. Environment availability is minimal (expensive, non-concurrent use). You need minimal functionality, or at least clearly defined interfaces, to operate against. You want some time to respond to feedback, so work that into the process as early as possible and potentially change the architecture/design.
  • #29 Start with the simplest performance test scenarios: a sanity/smoke test that hits all aspects; use it to drive out automated deployment (environment limitations, configuration issues, a minimal set of reporting needs: green/red); hit integration boundaries, but with a small problem rather than everything. The next story might be a more complex script or something that drives out more of the infrastructure. Performance stories should not be build-out tasks that do not enhance anything without other stories. Log files: settle their contents early; be consumer driven; agree contracts for analysis; keep them around, along with notes about what was varied. Write INVEST stories and avoid the large "performance test" story. Separate types of stories: Optimise vs Measure. Optimise covers riskier, less-known components where "done" is difficult to estimate; Measure is clearer and allows you to make better informed choices. Know when to stop: when enough is enough.
  • #30 The best lessons are learned from iterating, not from incrementing. Iterate over your performance test harness, framework and test fixtures. Make it easier to increment into new areas by incrementing in a different direction each time. Start with simple performance test scenarios; don't build too much infrastructure at once; refine the test harness and the things used to create more tests; always be delivering value; identify useful features in performance testing and involve the stakeholder(s) to help prioritise them. Prioritise and schedule in analysis stories (metrics and graphs). Some of this work will still be big.
  • #31 Sashimi is nice and bite sized. You don’t eat the entire fish at once. You’re eating a part of it. Sashimi slices are nice and thin. There are a couple of different strategies linking this in. Think of sashimi as the thinnest possible slice.
  • #33 Number of requests over time
  • #34 Latency over time
  • #35 “I don’t want to click through to each graph”
  • #37 “I don’t want to click through to each graph”
  • #38 “I don’t want to click through to each graph”
  • #40 An automated build is a key XP practice. The first stage of automating a build is often to automate compilation. However, for a typical project, we go on after compilation to run tests as another automated step. In fact, we may have a whole series of automated steps that chain on after each other, automating many aspects of the development process, all the way from compiling source to deploying a complete application into the production environment.
  • #41 Automation is a powerful lever in software projects because: it gives us reproducible, consistent processes; we get faster feedback when something goes wrong; and we gain overall higher productivity, since we can repeat an automated build much more often than we could if it were manual.
  • #42 In performance testing we can automate many of the common tasks in a similar way to how we automate a software build. For any performance test, there is a linear series of activities that can be automated (first row of slide). In our recent projects we've been using the build tool Ant for most of the performance scripting. You could use any scripting language, but here are some very basic scripts to show you the kind of thing we mean… [possibly animate transitions to the 4 following slides]. Once we've automated the running of a single test, we can move on to even more aspects of automation, such as scheduling and result archiving, which leads us into… Continuous Performance Testing.
  • #44 For faster feedback, set up your CI server so that performance tests are always running against the latest version of the application.