Performance Testing And Beyond

A presentation that aims to move the discussion on performance testing from a simple "will it support x users?" to a focus on application optimisation.

  • Earlier this year at the Future of Web Apps conference (http://futureofwebapps.com/) in Miami, Fred Wilson, from the VC firm behind Twitter, del.icio.us, FeedBurner, Heyzap, Indeed.com, Tacoda, Oddcast, Disqus, Zemanta, Clickable, Covestor, Etsy, etc., was asked to present his top ten list of what makes a great web app. Number one, at the top of his list, was speed: “First and foremost, we believe that speed is more than a feature. Speed is the most important feature. If your application is slow, people won’t use it. I see this more with mainstream users than I do with power users. I think that power users sometimes have a bit of a sympathetic eye to the challenges of building really fast web apps, and maybe they’re willing to live with it, but when I look at my wife and kids, they’re my mainstream view of the world. If something is slow, they’re just gone.” – Fred Wilson
  • This is one of the first performance tests with actual data (rather than anecdote). Bing delayed server responses by amounts ranging from 50ms to 2000ms for groups of users; the results are shown on the slide. Though the numbers may seem small, they represent large shifts in usage, and applied across millions of users they are very significant to both usage and revenue. The results were so clear that the test was ended earlier than originally planned. The Time To Click metric is particularly interesting: as the delay gets longer, Time To Click increases at a more extreme rate (a 1000ms delay increases it by 1900ms). The theory is that the user gets distracted and disengages from the page; in other words, Bing has lost the user's full attention and has to win it back. http://en.oreilly.com/velocity2009/public/schedule/detail/8523
  • Google's Test: Google ran a similar experiment, testing delays ranging from 50ms to 400ms. The chart on the slide shows the impact on users over the 7 weeks of the test. The most interesting thing to note is the continued effect the experiment had on users even after it had ended: some users never recovered, especially those with the larger 400ms delay. Google tracked the users for an additional 5 weeks (for a total of 12). (A minimal sketch of this kind of server-side delay injection appears after these notes.) http://en.oreilly.com/velocity2009/public/schedule/detail/8523
  • This is the use case on everyone’s mind. What if I launch this application and it crashes and burns?! What if we run that marketing campaign and it can’t take the additional user load? What if we switch over to this new system and employees can’t do their jobs? This is such a compelling use case for load testing that it has somewhat drowned out the other areas we are going to talk about. The people who have invested time and money in the application want to know if it is going to work when it goes live. The deliverable everyone talks about is whether the application will work when we hit x users.
  • So the performance testing team focuses on the key deliverable: working out whether the application will support the number of users it should. If it does, it passes; if not, it doesn’t. If an application goes live and then goes pear-shaped, the performance testing team gets to answer all of the hard questions. So it is not surprising that the go/no-go decision gets a lot of attention, and not surprising that the performance testing team can be careful about what goes out.
  • Google "performance anti-patterns" – the first line of the first hit is: Fixing Performance at the End of the Project (http://highscalability.com/blog/2009/4/5/performance-anti-pattern.html)
  • Performance architecture; performance engineering; performance monitoring – understand use patterns, identify low-hanging performance gains; performance testing; performance optimisation
  • http://www.microfocus.com/aboutmicrofocus/pressroom/releases/pr20100707322277.asp
    https://h10079.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126-17^44030_4000_100__&jumpid=ex_r11374_us/en/large/eb/go_loadrunnercloud
    www.keynote.com/products/web_performance/web-performance-testing.html
  • Performance Testing And Beyond
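
The Bing and Google experiments described in the notes above worked by holding responses back by a small, fixed amount of extra time for some users and observing how their behaviour changed. The sketch below shows what that kind of delay injection can look like; the Flask app, the bucket assignment and the delay values are illustrative assumptions, not the code either company used.

    # Illustrative sketch only: a fixed extra server-side delay applied per
    # experiment bucket, so the effect on user behaviour can be measured.
    # The Flask app, bucket assignment and delay values are assumptions,
    # not the code Bing or Google actually used.
    import hashlib
    import time

    from flask import Flask, request

    app = Flask(__name__)

    # Extra delay (milliseconds) per experiment bucket; 0 is the control group.
    DELAY_BUCKETS_MS = [0, 50, 200, 500, 2000]


    def bucket_for(user_id: str) -> int:
        # Hash the user id so each user stays in the same bucket for the
        # whole experiment rather than being re-assigned on every request.
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % len(DELAY_BUCKETS_MS)


    @app.before_request
    def inject_delay():
        user_id = request.args.get("uid", "anonymous")   # hypothetical user key
        delay_ms = DELAY_BUCKETS_MS[bucket_for(user_id)]
        if delay_ms:
            time.sleep(delay_ms / 1000.0)                # hold the response back


    @app.route("/search")
    def search():
        return "results"

Keeping the assignment stable per user is what makes it possible to track the effect over weeks, as Google did.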

    1. Performance Testing and Beyond
       Peter Brown, CEO, Ecetera
    2. Performance is the number 1 feature
       1. Speed
       2. Instant Utility
       3. Software is Media
       4. Less is More
       5. Make it Programmable
       6. Make it Personal
       7. RESTful
       8. Discoverability
       9. Clean
       10. Playful
    3. Imperceptible differences have an effect (1)
       Data-driven results
       Strong linear correlation
       Users become less engaged
    4. Imperceptible differences have an effect (2)
       Number of searches per day decreases in proportion to the delay
       Effect persists even after the delay is removed
    5. Perceptible differences have an effect too!
    6. Common view of Performance Testing
    7. Why do performance testing?
       So you know, ahead of time and across varying user loads, the system's
       - Responsiveness
       - Throughput
       - Reliability
       after every change that could affect performance, and before real users get access to the system.
       So you can
       - Know whether it will meet operational objectives
       - Gauge the effect of architectural decisions
       - Tune the environment for optimal performance
       - Identify code hotspots
       - Etc.
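
As a concrete illustration of the first half of that slide, the sketch below drives a target URL at several concurrency levels and reports throughput and 95th-percentile response time. It is a minimal, standard-library-only sketch: the target URL and load levels are placeholders, and a real engagement would use a proper tool (LoadRunner, SilkPerformer, JMeter, and so on) with realistic transaction mixes and think times.

    # Minimal, stdlib-only load sketch: drive a placeholder URL at several
    # concurrency levels and report throughput and p95 response time.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET = "http://localhost:8080/search"   # placeholder system under test


    def one_request() -> float:
        """Issue a single request and return its response time in seconds."""
        start = time.perf_counter()
        with urlopen(TARGET) as resp:
            resp.read()
        return time.perf_counter() - start


    def run_load(concurrent_users: int, requests_per_user: int = 20) -> None:
        started = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            times = sorted(pool.map(lambda _: one_request(),
                                    range(concurrent_users * requests_per_user)))
        elapsed = time.perf_counter() - started
        p95 = times[int(len(times) * 0.95)]
        print(f"{concurrent_users:>4} users: "
              f"throughput {len(times) / elapsed:6.1f} req/s, "
              f"p95 response {p95 * 1000:6.0f} ms")


    if __name__ == "__main__":
        for users in (1, 5, 10, 25, 50):   # varying user loads
            run_load(users)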
    8. The effect of architectural decisions
       Does the application behave the way it was architected?
       In the context of the transaction, are any anti-patterns evident?
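
One way to answer the second question is to count what a single logical transaction actually does to its downstream resources. The sketch below is an illustrative assumption, not taken from the presentation: it wraps a hypothetical data-access call and flags a transaction that issues far more queries than its design intends (the classic N+1 / chatty-interface pattern).

    # Illustrative only: wrap a hypothetical data-access call, count how many
    # times one logical transaction hits it, and flag suspiciously chatty
    # transactions (the classic N+1 pattern).
    import functools

    call_counts = {"queries": 0}


    def counted(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            call_counts["queries"] += 1
            return fn(*args, **kwargs)
        return wrapper


    @counted
    def run_query(sql):               # hypothetical data-access call
        pass


    def list_orders(order_ids):
        # One query per order: fine in a demo, scales badly under load.
        return [run_query(f"SELECT * FROM orders WHERE id = {i}") for i in order_ids]


    call_counts["queries"] = 0
    list_orders(range(200))
    if call_counts["queries"] > 10:
        print(f"possible N+1 pattern: {call_counts['queries']} queries in one transaction")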
    9. Environment optimisation
       Business processes
       JVM/App Server: garbage collection, threading, clustering, caching
       Database
       Web proxy
       VM tuning
       Database
       Frontend engineering
       Load balancing: protocol offload, TOE, SSLisation
       Storage
       Misc black boxes
    10. Identify code hotspots
        Where is the transaction spending most of its time?
        Which component is using the most CPU time?
        Which components are memory hogs?
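
The sketch below shows one way those questions get answered in practice, using Python's built-in profiler to rank functions by cumulative time. process_order() and its helpers are hypothetical stand-ins for a real transaction; in a JVM environment the equivalent would be a profiler or APM tool attached while the load test runs.

    # Sketch of hotspot identification with Python's built-in profiler.
    # process_order() and its helpers are hypothetical stand-ins for a real
    # transaction driven under load.
    import cProfile
    import pstats


    def load_customer():
        ...


    def price_items():
        ...


    def write_audit_log():
        ...


    def process_order():
        load_customer()
        price_items()
        write_audit_log()


    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(1000):              # drive the transaction while profiling
        process_order()
    profiler.disable()

    # Rank functions by cumulative time to expose where the transaction
    # spends most of its time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)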
    11. Application/Testing Lifecycle
        The last thing between a great idea and launch is SVT
    12. #1 Performance anti-pattern
    13. Performance should be addressed across the lifecycle
        Performance COE
    14. Cloud based testing
        Load injection in the Cloud
        - SilkPerformer CloudBurst
        - Gomez Reality Load
        - LoadRunner in the Cloud
        - Keynote LoadPro
        - Amazon + software
        Load Test Environment
        - Amazon
        - Rackspace
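
To illustrate the "Amazon + software" option, here is a hedged sketch that launches short-lived load injector instances on EC2 with the AWS boto3 API and removes them when the run is finished. The AMI id, instance type and key pair name are placeholders, not values from the presentation, and the same create/run/tear-down pattern applies to Rackspace or any other IaaS provider.

    # Hedged sketch: launch short-lived load injector instances on EC2 and
    # remove them when the run is finished. The AMI id, instance type and
    # key pair name are placeholders, not values from the presentation.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a handful of injector instances built from a prepared image.
    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder injector image
        InstanceType="t3.medium",
        MinCount=5,
        MaxCount=5,
        KeyName="loadtest-key",            # placeholder key pair
    )
    instance_ids = [i["InstanceId"] for i in result["Instances"]]
    print("Injectors started:", instance_ids)

    # ... run the load test from the injectors ...

    # Tear everything down again once the test run is finished.
    ec2.terminate_instances(InstanceIds=instance_ids)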
    15. Summary
        Performance matters – a lot
        Even imperceptible performance improvements can make a big difference
        Performance testing can add a lot of value across the application lifecycle
        The cloud makes it easy to create and remove test environments and load injectors
