Neotys PAC - Stuart Moncrieff

Neotys organized its first Performance Advisory Council in Scotland, on the 14th and 15th of November.
With 15 Load Testing experts from several countries (UK, France, New Zealand, Germany, USA, Australia, India…), we explored several themes around Load Testing, such as DevOps, Shift Right, and AI.
By discussing their experiences, the methods they used, their data analyses, and their interpretations, we created a lot of high-value content that you can use to discover the future of Load Testing.

Want to know more about this event? https://www.neotys.com/performance-advisory-council



  1. The Top 7 Mistakes in Performance Testing (Stuart Moncrieff)
  2. Good performance testing is really important… …but so many people do it really badly… …and that really bothers me.
  3. Stuart Moncrieff: Web Performance Evangelist
     • How am I qualified to be here?
     • Web performance specialist since 2002
     • 15 years of consulting experience
     • I have seen almost all the ways that people mess up when they are doing performance testing.
  4. • What I think I look like when I’m talking about Performance Testing:
     • What I actually look like when I’m talking about Performance Testing:
  5. The “we want the advanced training” paradox
     • Companies usually ask to include “expert level” content in training courses for their staff, even when their staff don’t understand the basics.
     • But their staff are all searching Google for entry-level training material.
     • See also: the Dunning-Kruger effect
  6. The structural problem: most companies can’t differentiate between good and bad performance testing
     • Performance Testing is treated as an empty ritual to be performed before deployment.
     • Patterns of dysfunction:
       • Not even noticing problems in Production because performance monitoring is inadequate.
       • Problems discovered in Production, even though they could have been discovered in Test.
       • The testing team can blame anything missed on “differences between Test and Production environments”.
       • The Ops team tries to solve performance problems in Production without involving performance testers: tuning in Production, throwing hardware at the problem.
  7. Mistake 1: Not adding validation checks in your scripts
     Load testing tools will automatically detect HTTP error codes, but error pages are often served with an HTTP 200 response code. How do you know your application is not throwing errors under load?
     What you should do:
     • Add verification checks for each page request (see the sketch below).
     • Check for something that indicates success and is unique to the page.
     • Bonus points: static analysis of scripts as they are checked into version control.
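
As an illustration of such a check, here is a minimal sketch using Locust as the load testing tool. The slides don’t name a tool, and the host, endpoint, and success marker below are hypothetical:

```python
# Minimal sketch of a per-page verification check using Locust.
# The host, endpoint, and marker text are hypothetical examples.
from locust import HttpUser, task

class CatalogueUser(HttpUser):
    host = "https://shop.example.com"  # hypothetical application under test

    @task
    def view_item(self):
        # catch_response=True lets us decide pass/fail ourselves instead of
        # trusting the HTTP status code alone.
        with self.client.get("/item/42", catch_response=True) as response:
            # An error page can still return HTTP 200, so check for content
            # that only appears on a successful "view item" page.
            if "Add to basket" not in response.text:
                response.failure("Success marker not found: possible error page")
```

The point of `catch_response=True` is that the request is only counted as passed when the script explicitly confirms the page content, not merely because the server answered 200.
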
  8. Mistake 2: Not monitoring the Test environment
     “Setting up monitoring will take too long, so we won’t include infrastructure metrics like CPU utilisation in our report.”
     How do you diagnose the root cause of load-related problems found during testing?
     What you should do:
     • Make the effort to set up infrastructure monitoring in the Test environment (a simple sketch follows).
     • Bonus points: have exactly the same monitoring in Test and Production environments.
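
Where no monitoring stack exists yet, even a throwaway sampler is better than nothing. This is a rough sketch using Python’s psutil library; the sampling interval, duration, and output file are arbitrary choices, and a real setup would use a proper monitoring agent configured identically in Test and Production:

```python
# Rough sketch: sample CPU and memory on a test server during a load test.
# The interval, duration, and output file are arbitrary illustrative choices.
import csv
import time

import psutil

SAMPLE_INTERVAL_SECONDS = 5     # how often to sample
TEST_DURATION_SECONDS = 3600    # stop after a one-hour test window

with open("infrastructure_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    deadline = time.time() + TEST_DURATION_SECONDS
    while time.time() < deadline:
        writer.writerow([
            time.strftime("%Y-%m-%d %H:%M:%S"),
            psutil.cpu_percent(interval=None),  # CPU use since last call
            psutil.virtual_memory().percent,    # memory in use, percent
        ])
        f.flush()
        time.sleep(SAMPLE_INTERVAL_SECONDS)
```
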
  9. Mistake 3: Bad Workload Modelling (or none at all)
     “We are going to generate 1000 concurrent users worth of load.” What does that even mean? Web apps care more about requests per minute than they do about how many virtual users are configured in your testing tool.
     What you should do:
     • Use real-world usage data as an input.
     • Define a Peak Hour usage model that includes a transaction rate (e.g. orders per hour); a worked example follows this list.
     • Include network conditions (Network Virtualization).
     • Bonus points: what % of your traffic is generated by bots?
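
The slides don’t give a worked calculation, but the relationship between a transaction rate and a virtual user count can be sketched with Little’s Law (N = X × R). All figures below are invented for illustration:

```python
# Worked example (invented figures): converting a Peak Hour transaction
# rate into a virtual user count using Little's Law, N = X * R, where
# X is throughput and R is the time one user spends on one iteration
# (think time plus response times).
orders_per_hour = 3_000      # target rate, taken from real-world usage data
iteration_seconds = 120      # time per order, including think time

throughput_per_second = orders_per_hour / 3600             # X ≈ 0.83 orders/s
virtual_users = throughput_per_second * iteration_seconds  # N = X * R

print(f"Throughput: {throughput_per_second:.2f} orders/second")
print(f"Virtual users needed: {virtual_users:.0f}")        # ≈ 100 users
```

Framed this way, “1000 concurrent users” is meaningless until you also fix the iteration time: the same user count can produce wildly different request rates.
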
 10. Mistake 4: “We only do testing” (silos)
     “Our job is all about finding defects in the Test environment.”
     Performance testers can have a stake in system architecture, capacity planning, monitoring, non-functional requirements, contracts with vendors (SLAs), and incidents in Production.
     What you should do:
     • Don’t ignore the app after go-live.
     • Keep looking for defects in the Production environment (using monitoring tools).
     • Be an advocate for application performance at every stage of the software lifecycle.
 11. Mistake 5: “We’re response time testers” (tunnel vision)
     “Our job is to measure response times with our load testing tool.”
     Your job is to find load and performance-related problems… preferably before they reach Production.
     What you should do:
     • Check for errors under load.
     • Test failover under load.
     • Test system behaviour when there are interface outages under load.
     • Ensure that performance problems and system metrics will be visible to the Ops team in Production.
 12. Mistake 6: Ignoring errors and other defects
     “We tried to run our test last night, but the web server crashed. It’s okay though; they’ve restarted everything and deleted the logs, so we can re-run the test.”
     So it sounds like you actually found a few defects last night.
     What you should do:
     • Constantly ask yourself: “If this happened in Production, would it be a problem?”
     • If you see something, say something.
     • Some of your most interesting days as a tester will start out with “hmmm… that’s odd.”
 13. Mistake 7: Miscalculating error rates
     If you are reporting your error rate as:
         Error Rate = Failed Transactions / Total Transactions
     …then you are doing it wrong.
     What you should do:
     • Report on the probability of the user completing the entire business process, not just individual steps (see the worked example below).
     • Bonus points: under controlled conditions, anything above a 0% error rate indicates a problem that should be investigated.

     Transaction        Passed    Failed   Total
     Front page         99,041       959   100,000
     Search catalogue   99,002       998   100,000
     Browse catalogue   99,033       967   100,000
     View item          98,966     1,034   100,000
     Checkout               51        49       100
     TOTAL             396,093     4,007   400,100
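
To see why the aggregate figure misleads, this sketch recomputes both numbers from the table above (the per-step counts are the slide’s own figures):

```python
# End-to-end completion probability vs. aggregate error rate,
# using the per-step figures from the table on this slide.
steps = {
    "Front page":       (99_041,   959),
    "Search catalogue": (99_002,   998),
    "Browse catalogue": (99_033,   967),
    "View item":        (98_966, 1_034),
    "Checkout":         (51,        49),
}

# Aggregate error rate across all transactions: looks harmless.
total_passed = sum(p for p, _ in steps.values())
total_failed = sum(f for _, f in steps.values())
aggregate_error_rate = total_failed / (total_passed + total_failed)
print(f"Aggregate error rate: {aggregate_error_rate:.1%}")      # ~1.0%

# Probability of completing the whole business process: the product
# of each step's pass rate. The checkout step drags it down.
completion = 1.0
for passed, failed in steps.values():
    completion *= passed / (passed + failed)
print(f"End-to-end completion probability: {completion:.1%}")   # ~49%
```

The aggregate error rate is about 1%, which looks acceptable, yet only about 49% of users can complete an order once the checkout step is taken into account.
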
 14. Summary of Mistakes
     What have we covered?
     1. Scripts without enough verification checks
     2. Not monitoring the Test environment
     3. Bad Workload Modelling
     4. Focusing on “performance testing”, rather than “performance”
     5. Thinking your responsibility starts and ends with response times
     6. Ignoring defects that are right in front of you
     7. Miscalculating error rates
