Automating Performance Improvements
 


We automate build and unit testing with continuous integration. We can also automate functional regression testing. The next step is clear - automating continuous performance tests.

We will take a deep look at how to automate performance regression tests as part of continuous integration. We will learn how to leverage our existing test suites in development, test and beyond. But we can take this further still: we will learn how performance analysis can be automated and how to detect flaws in our architecture. Finally, we will see how we can apply this technique to live production data and thereby anticipate performance problems before they arise.

Performance Testing in CI (special focus on Web)
Performance regression checks of load tests, in-depth in code
Ajax Edition performance analysis --> SOTW?
Automatic architecture validation, anti-pattern checking
Leveraging production data, UEM
This one basically has the same theme as the DevOps talk, but with a focus on automating performance tests early on: leveraging existing unit tests and functional tests, covering web tests, and going into automatic architecture validation, automatic regression testing deep in the code, and automatic analysis in production.

Presented by Michael Kopp, Product Evangelist, dynaTrace-Compuware, at the Jazoon conference in Zurich, June 2011.

  • Dev timeline without automated testing; dev timeline with continuous integration. Performance is missed, big problem. Load testing costs a lot of time: reruns, analysis, regressions. Solution: leverage CI, browser testing, what to take care of, automated checks. Leverage functional testing (if available). Automated load testing (why, how, advantages). Automated comparison. Automated regression checks. Automated architecture checks: what can be checked, examples. Production regression check. Why DevOps, agile delivery. How: before/after delivery; this way it is far easier to analyse problems and analyse change, automatically. Automating improvements? Not really automatable of course, but trending and feedback, automated regression checks, be strict.
  • The problem is that the performance requirement is often ignored during development. Many say that one cannot test performance until we have a product, and that we need to do load tests for this.
  • One of the key things to understand is that there is a difference between performance tracking and performance testing. Performance testing is a one-shot thing: we look at absolute numbers and whether we meet the goal. The problem is that if we don't, we have to investigate why. Tracking, on the other hand, is about change. If we track performance and track the change, we don't really have to investigate: if something gets slower, we know that this is not good.
  • Automatic, detect change – reuse your existing unit tests. BE AWARE that we are not necessarily measuring performance/execution time here -> that doesn't make sense for short-running unit tests. On those we focus on the internals, e.g. database statements, and see how they change over time (regression). Perf early, inside the cycle: this way it is part of the sprint, not outside. It is automated, so it does not introduce additional effort.
  • Besides focusing on the internal dynamics of the tested components, we can run certain sets of real performance tests in CI. This example shows how dynaTrace internally runs performance tests on critical features on every build. Explain the identified problems …
  • Add a picture of Selenium tests (a minimal browser-driven test sketch follows after this list).
  • Why is this important? Automatically detect changes.
  • The more interesting use case in CI is architecture validation, where we take a very close look at what is going on within the tested components. By analyzing the dynamic code execution we can validate certain rules defined by the software architects, e.g.: only make asynchronous remoting calls, only X DB calls per transaction, … Having the tools in place allows you to automatically verify these rules against your unit or functional tests; a hand-rolled sketch of one such rule check follows after this list.
  • In order to run performance tests in CI it is important to follow some ground rules. Unit tests need to have a certain execution time length. Make sure that you have a stable environment -> tip: run several warm-up tests to ensure the environment is stable, then run the real test. A sketch after this list shows warm-up runs combined with a baseline comparison.
  • You cannot run all tests all the time, so you have to come up with a strategy for how to test; bigger tests in particular take more time. We run three types of tests: JUnit-based tests that run with every build, total system tests that run automatically at specified intervals (e.g. once a day), and long-running stability tests to ensure we introduced no memory leaks or hidden problems. This brings us to load testing.
  • In traditional development we hoped that load testing was the silver bullet for finding performance problems. Today we see that this is not the case. Let's look into the details of why we need a new way of testing.
  • Too much effort for script maintenance and setting up the test environment; costs for testing tools and the test environment; the test cycle takes longer than reserved in the sprint, so feedback reaches the developers when they are already in the next sprint. That is why, even with agile development, load tests are still done at the end of the release. The solution is automation. But automation only helps to a degree: we also have to make the cycles shorter, and that means removing the reasons for reruns. And once we avoid reruns, we have to avoid the manual regression checks, which again means automation. Besides all that, we have to admit something: load tests are important and useful, but they are always a simulation, so we have to weigh the trade-off. The closer we move towards the real world, the higher the effort and the better our coverage; at some point the effort is no longer worth it. That is a decision you have to make.
  • Reruns, additional data. Adding log statements and statistics in code is error-prone. First, automate the stuff. Second, capture enough from the beginning (traces, BTM). Dynamic capture all the time. Overhead discussion 5-10. Benefits of Agile and DevOps: shorter iterations with fewer changes lead to reduced effort in testing, fewer script maintenance changes, a higher-quality smoke test, easier regression analysis, and more focused testing on the areas that have changed.
  • Sample data: small databases with sample data are not realistic; you can't test caching, access time to the database, … Overloaded agents: load testing agents are just used to produce load without verifying whether they can actually produce the requested load. Wrong think times: think times are either not set correctly to reflect real users, or they are removed to increase load; that's not realistic either. Infrastructure issues: does the test infrastructure provide enough bandwidth, resources, … to handle the load? Is there anything else running on the infrastructure that may interfere? No error detection: missing content verification leads to incorrect results; a page that responds super fast but always says "please come back later" is a problem. No server-side data: very often people forget to collect server-side metrics such as CPU, network, I/O, log files, …
  • You cannot do this for all parts of your application, so you have to create focus. We categorize tests into three types: those that we have to automate, those we run regularly, and those we never do. We decide based on risk and impact on the business: the highest risk and highest impact must be automated; medium risk or impact should be tested regularly.
  • Make sure we capture enough infrastructure data, not just response times. This helps us get a quick overview of the problematic areas.
  • In order to identify problematic components or services it is recommended to run different load levels and see which components scale and which don't.
  • instability
  • You do not run only one load test in a dev cycle; you run many and always want to see what has changed. It is therefore important to compare test results. Critical here is that we have tests that we can compare and that we have data that is comparable.
  • In the end it is the developer who needs to look at the problem. The more granular the transactional data we can provide, the easier it is to analyze and fix the problem. Profiling and tracing solutions have become very powerful and allow deep tracing even under load.
  • Classical load testing: a test environment that tries to be as close to production as possible (size/users/data vs. reality); test scripts with transactions that we expect (but we know we should expect the unexpected; never happens). TRIES TO REBUILD THE REAL WORLD -> HUGE EFFORT!! Result: no real guarantee that what has been tested really works in production. Problems: testing on a code base that is not frozen -> results of testing are not available before mid or end of the NEXT sprint -> results are outdated! Hard to get real-life production load behavior. More changes in the application lead to more effort in testing (script maintenance, regression analysis, ...). Size of environment, number of users, transaction mix.
  • All of this leads to … "I am done – now you deploy it."
  • A circle with arrows and a tool in the middle: communication barrier, objectiveness, less maintenance. Performance analysis, performance testing, performance monitoring, validation.
  • Business transactions: what matters. Trend analysis. Analyzing the business impact of your application's performance is important to prioritize your troubleshooting activities.
  • Before and after deployment.
  • Instead of troubleshooting and running into the same problems over and over again, we fix once and constantly improve. Agile + DevOps + CAPM. Changed testing mindset -> WE ARE NEVER DONE -> WE CONSTANTLY IMPROVE.
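The following is a minimal, hand-rolled sketch of the "tracking, not testing" idea from the notes above: an existing JUnit test is reused, warm-up runs stabilize the environment, and the measured average is compared against the value recorded by the previous CI build rather than against an absolute goal. OrderService, the properties-file baseline and the 15% threshold are illustrative assumptions; in the talk this is done with dynaTrace's CI integration rather than hand-written code.

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.Properties;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class OrderSearchTrackingTest {

    private static final int WARMUP_RUNS = 5;           // stabilize JIT, caches, connection pools
    private static final int MEASURED_RUNS = 20;        // average out noise
    private static final double ALLOWED_CHANGE = 1.15;  // flag a slowdown of more than 15%
    private static final File BASELINE = new File("target/perf-baseline.properties");

    @Test
    public void orderSearchShouldNotGetSlower() throws Exception {
        OrderService service = new OrderService();       // placeholder for the component under test

        // Warm-up runs are discarded so that the real measurement happens in a stable environment.
        for (int i = 0; i < WARMUP_RUNS; i++) {
            service.searchOrders("smith");
        }

        long start = System.nanoTime();
        for (int i = 0; i < MEASURED_RUNS; i++) {
            service.searchOrders("smith");
        }
        double avgMillis = (System.nanoTime() - start) / 1000000.0 / MEASURED_RUNS;

        // Compare against the average recorded by the previous build (tracking change,
        // not an absolute goal) and fail only on a significant regression.
        Properties baseline = new Properties();
        if (BASELINE.exists()) {
            try (FileReader in = new FileReader(BASELINE)) {
                baseline.load(in);
            }
            String previous = baseline.getProperty("orderSearch.avgMillis");
            if (previous != null) {
                assertTrue("orderSearch regressed: " + avgMillis + " ms vs. baseline " + previous + " ms",
                        avgMillis <= Double.parseDouble(previous) * ALLOWED_CHANGE);
            }
        }

        // Record the new value so the next build has something to compare against.
        baseline.setProperty("orderSearch.avgMillis", String.valueOf(avgMillis));
        try (FileWriter out = new FileWriter(BASELINE)) {
            baseline.store(out, "per-build performance baseline");
        }
    }
}
```

In a real setup the baseline would be persisted per build (e.g. as a CI artifact) so the trend can also be charted, which is the point of tracking rather than one-shot testing.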
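The next sketch illustrates the kind of architecture rule mentioned in the notes ("only X DB calls per transaction") as a plain JUnit check: a dynamic proxy counts the JDBC statements created while an existing use case runs, and the test fails if the budget is exceeded. CustomerService, TestDatabase and the limit of 10 are made-up names for illustration; in practice a tracing/APM tool evaluates such rules automatically against the unit or functional tests.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class DatabaseCallBudgetTest {

    private static final int MAX_DB_CALLS_PER_TRANSACTION = 10;  // the architect-defined rule

    private final AtomicInteger statementCount = new AtomicInteger();

    /** Wraps a JDBC connection so that every created or prepared statement is counted. */
    private Connection countingConnection(final Connection real) {
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            if (name.equals("createStatement") || name.equals("prepareStatement")
                    || name.equals("prepareCall")) {
                statementCount.incrementAndGet();
            }
            return method.invoke(real, args);
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[] { Connection.class }, handler);
    }

    @Test
    public void customerOverviewStaysWithinDbCallBudget() throws Exception {
        // TestDatabase and CustomerService are placeholders for the existing test fixture.
        Connection connection = countingConnection(TestDatabase.openConnection());
        CustomerService service = new CustomerService(connection);

        service.loadCustomerOverview(42L);  // reuse the existing functional scenario

        assertTrue("Architecture rule violated: " + statementCount.get()
                        + " DB statements in one transaction (limit " + MAX_DB_CALLS_PER_TRANSACTION + ")",
                statementCount.get() <= MAX_DB_CALLS_PER_TRANSACTION);
    }
}
```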
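Finally, a minimal sketch of a browser-driven use-case test as referenced for the Selenium note, assuming Selenium WebDriver (2.x/3.x API) and JUnit on the classpath. The URL, element ids and the 3-second budget are placeholders; in the talk the client-side analysis comes from dynaTrace Ajax Edition rather than a stopwatch around WebDriver calls.

```java
import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class SearchUseCaseBrowserTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
        // Wait up to 10 seconds for elements to appear instead of failing immediately.
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }

    @Test
    public void searchUseCaseCompletesWithinBudget() {
        long start = System.currentTimeMillis();

        driver.get("http://test.example.com/shop");             // placeholder URL
        driver.findElement(By.id("query")).sendKeys("camera");  // placeholder element ids
        driver.findElement(By.id("searchButton")).click();
        driver.findElement(By.id("resultList"));                // waits for the result page to render

        long durationMillis = System.currentTimeMillis() - start;
        assertTrue("Search use case took " + durationMillis + " ms (budget 3000 ms)",
                durationMillis <= 3000);
    }
}
```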

Automating Performance Improvements: Presentation Transcript

  • Automating Performance… Continuous Integration, Load Testing and beyond
  • How we track our goals [chart: Story Points vs. Development Time; Release Estimate, Remaining, Team Velocity]
  • We are getting things done [same chart]
  • Then we see what we missed [same chart]
  • Performance Testing [chart: Performance Threshold; Performance Management vs. Traditional over Time, across Development, Testing, Production]
  • Continuous and Automated
  • Performance Tracking vs. Performance Testing
  • Performance in Continuous Integration
  • Identify Performance Regressions Early: Version Control History Lookup
  • Browser-driven Use Case Testing
  • See what happens on the client side
  • Architecture Validation: Analyze your existing Unit Tests, Validate against your Arch Rules
  • Tip: Stable Environment
  • Tip: Understand your measurements [charts: Response Time only vs. Response Time and GC]
  • Frequency and Granularity [chart: Granularity vs. Frequency; JUnit-based Tests (2x day), Total System Tests, Long-running Stability Tests]
  • Load Testing ≠ Silver Bullet
  • Traditional Load Testing takes too long in an agile process [timeline: Sprint 1 starts with dev and dedicated time for test at the end: Update Test Scripts, Setup Environment, Run Tests, Adapt Scripts, Re-Run Tests, Feedback to Dev. Testing extends sprints and consumes too many dev resources. Sprint 2: Re-Run Tests, Feedback to Dev, Ready for Ops; either postponed or impacted because dev resources are still occupied with testing of the previous sprint]
  • Avoid Re-Runs: What could happen? Which information do you want? What describes your system? What is different from the last run?
  • Minimize and automate real Load Tests [timeline diagram: Developing Test, Test Run, Reproduction, Refine Capturing, Re-Run Tests, Reproduction, Refine Capturing, Re-Run Tests, Reproduction, Problem Analysis, Problem Solving; multiple test iterations needed to analyze the root cause. With automation: eliminates test iterations, go directly to problem analysis, frees up resources for other projects]
  • Avoid Common Mistakes: Only use Sample Data, No Server-Side Data, Wrong Think Times, Overloaded Load Agents, No Error Detection, Infrastructure Issues
  • Create Focus [matrix: Risk vs. Impact]
  • Measure more than Response Time
  • Define KPIs: Throughput, Response Time, Memory Consumption, Other KPIs …
  • Performance vs. Scalability
  • Tip: Identify Scalability Problems [chart: response time per layer (Presentation Layer, Business Logic, Web Service/Remoting, Data Access) at Minimum, Medium and Maximum Load]
  • Increasing Load Test is your friend
  • "… adding some volatility increases the likelihood of discovering problems …"
  • Tip: Identify Regressions on Code Level
  • Tip: Get as granular data as possible
  • It's still just a Simulation: Real Size and Content, Real Third-Party Code, Real Infrastructure Setup, Real Load, Real Number of Users. No Guarantees!
  • Don't Forget the Real World
  • Communication is Key (Source: http://dev2ops.org/blog/2010/2/22/what-is-devops.html)
  • Feedback Communication Collaboration
  • Facts, Facts, Facts
  • Focus on areas with biggest impact [chart: Number of Sales and No. of Users vs. Response Time and Transaction Load]
  • Compare real Data
  • Performance Engineering vs. Troubleshooting: Fix Once, Improve Always
  • Michael Kopp, michael.kopp@dynaTrace.com, http://blog.dynatrace.com, @mikopp
  • Download the latest Application Performance Almanac: Web | Cloud | DevOps | Mobile | Automation | Tuning (www.dynatrace.com/almanac)