Automated Performance Testing

If you are like most test-driven developers, you write automated tests for your software to get fast feedback about potential problems. Most of these tests verify the functional behaviour of the software: when we call this function or press this button, the expected result is that value or that message.

But what about the non-functional behaviour, such as performance: when we run this query, the results should come back in no more than that many milliseconds. It is important to be able to write automated performance tests as well, because they give us early feedback about potential performance problems. But expected performance is not as clear-cut as expected results. Expected results are either correct or wrong; expected performance is more like a threshold: if the performance is worse than this, we want the test to fail.


Automated Performance Testing

  1. Automated Performance Testing
     Lars Thorup, ZeaLake Software Consulting
     May, 2012
  2. Who is Lars Thorup?
     ● Software developer/architect
       ● C++, C# and JavaScript
       ● Test Driven Development
     ● Coach: teaching agile and automated testing
     ● Advisor: assesses software projects and companies
     ● Founder and CEO of BestBrains and ZeaLake
  3. We want to know when performance drops
     ● ...or improves :-)
     ● Examples
       ● this refactoring means the cache is no longer used for lookups
       ● introducing this database index on that foreign key is way faster
     ● Write a test to measure performance

         var stopwatch = Stopwatch.StartNew();
         for (int i = 0; i < 200; ++i)
         {
             var url = string.Format("Vote?text={0}", Guid.NewGuid());
             var response = client.DownloadString(url);
             Assert.That(response, Is.Not.Empty);
         }
         stopwatch.Stop();

     ● When and how should the test fail?
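The elapsed time from such a run is usually turned into a throughput figure before it is judged. A minimal sketch of that conversion (the helper name and the sample numbers are illustrative, not from the deck):

```csharp
using System;

// Illustrative helper (not from the deck): convert an elapsed
// measurement into a requests-per-second figure.
double RequestsPerSecond(int requests, double elapsedMilliseconds)
{
    return requests / (elapsedMilliseconds / 1000.0);
}

// 200 requests measured at 4,000 ms -> 50 requests/second.
Console.WriteLine(RequestsPerSecond(200, 4000));
```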
  4. We cannot use assert for this

         Assert.That(requestsPerSecond, Is.InRange(40, 50));

     ● The resulting time can vary widely
       ● range too narrow: many false positives
       ● range too broad: many false negatives
  5. Use trend curves instead
     ● Does not fail automatically :-(
       ● unless we add automated trend line analysis
     ● Needs manual inspection
       ● weekly, before every release
       ● and it takes only 10 seconds
     ● So the feedback is not fast
       ● but it shows which commit caused the performance issue
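The "automated trend line analysis" mentioned above could be as simple as a least-squares fit over the last few builds, failing when the slope is clearly positive. A hedged sketch (the function, the slope threshold and the sample timings are all assumptions, not from the deck):

```csharp
using System;
using System.Linq;

// Sketch of automated trend-line analysis: fit a least-squares line
// through the last N measurements (one per build) and flag a
// regression when response times are trending upward faster than
// maxSlopeMsPerBuild.
static bool TrendIsWorsening(double[] timingsMs, double maxSlopeMsPerBuild)
{
    int n = timingsMs.Length;
    double meanX = (n - 1) / 2.0;        // build indices 0..n-1
    double meanY = timingsMs.Average();
    double num = 0, den = 0;
    for (int i = 0; i < n; i++)
    {
        num += (i - meanX) * (timingsMs[i] - meanY);
        den += (i - meanX) * (i - meanX);
    }
    double slope = num / den;            // ms per build
    return slope > maxSlopeMsPerBuild;
}

// Clearly worsening trend vs. ordinary noise around a stable mean.
Console.WriteLine(TrendIsWorsening(new[] { 660.0, 655, 670, 700, 745 }, 5.0));
Console.WriteLine(TrendIsWorsening(new[] { 660.0, 662, 658, 661, 659 }, 5.0));
```

Because the threshold applies to the slope rather than to a single run, a one-off slow build does not fail the test, but a sustained drift does.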
  6. Demo: TeamCity from JetBrains
     ● Make the tests output their results

         stopwatch.Stop();
         PerformanceTest.Report(stopwatch.ElapsedMilliseconds);

     ● In an .xml file

         <build>
           <statisticValue key="Voting" value="667"/>
           <statisticValue key="PerfGetEvent" value="3689"/>
         </build>

     ● Configure TeamCity to convert the data to graphs
     ● Read more here
       ● http://www.zealake.com/2011/05/19/automated-performance-trends/
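The deck does not show PerformanceTest.Report itself. One plausible implementation appends statisticValue entries to a teamcity-info.xml build artifact, which TeamCity reads for custom build statistics; the key and path parameters below are assumptions (the slide's version takes only the elapsed time):

```csharp
using System;
using System.IO;
using System.Xml.Linq;

// Hypothetical sketch of the Report helper from the slide: append a
// <statisticValue> entry to a teamcity-info.xml artifact so TeamCity
// can chart the value across builds.
PerformanceTest.Report("Voting", 667, "teamcity-info.xml");
PerformanceTest.Report("PerfGetEvent", 3689, "teamcity-info.xml");
Console.WriteLine(File.ReadAllText("teamcity-info.xml"));

static class PerformanceTest
{
    public static void Report(string key, long elapsedMilliseconds, string path)
    {
        // Load the existing report or start a fresh <build> root.
        var doc = File.Exists(path)
            ? XDocument.Load(path)
            : new XDocument(new XElement("build"));
        doc.Root.Add(new XElement("statisticValue",
            new XAttribute("key", key),
            new XAttribute("value", elapsedMilliseconds)));
        doc.Save(path);
    }
}
```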
  7. Demo: Jenkins
     ● Make the tests output their results in CSV files
     ● Use the Plot plugin
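A hedged sketch of the CSV side (the file name, helper, and layout are assumptions): the Jenkins Plot plugin can chart a CSV whose first row holds series labels and whose second row holds the values for the current build:

```csharp
using System;
using System.Globalization;
using System.IO;

// Assumed layout: one CSV per build, header row of series labels,
// data row of measured values, suitable for the Jenkins Plot plugin.
static void ReportCsv(string path, string label, double milliseconds)
{
    File.WriteAllLines(path, new[]
    {
        $"\"{label}\"",
        milliseconds.ToString(CultureInfo.InvariantCulture)
    });
}

ReportCsv("voting-performance.csv", "Voting (ms)", 667);
Console.WriteLine(File.ReadAllText("voting-performance.csv"));
```

Each build overwrites the file; the plugin keeps the per-build history itself and renders it as a trend graph on the job page.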
