Software testing 2012 - A Year in Review
Published in: Technology
Transcript

  • 1. Software Testing 2012 - A Year in Review
  • 2. Introduction
       – What are the latest trends within software testing?
       – What has happened during 2012 that is worth highlighting?
       – Very subjective, but here is my view
  • 3. Software Testing Trends Overview
       – Google Testing 2.0
       – Good Test Automation
       – Test Roles
       – Context-driven Testing
  • 4. Google Testing 2.0 (Google)
       – “This brings us to the current chapter in test which I call Testing 1.5. This chapter is being written by computer scientists, applied scientists, engineers, developers, statisticians, and many other disciplines. These people come together in the Software Engineer in Test (SET) and Test Engineer (TE) roles at Google. SET/TEs focus on: developing software faster, building it better the first time, testing it in depth, releasing it quicker, and making sure it works in all environments. We often put deep test focus on Security, Reliability and Performance. I sometimes think of the SET/TEs as risk assessors whose role is to figure out the probability of finding a bug, and then working to reduce that probability. Super interesting computer science problems where we take a solid engineering approach, rather than a process oriented / manual / people intensive based approach. We always look to scale with machines wherever possible.” [1]
       – In short: a lot of “art” has turned into “science”
  • 5. Good Test Automation (Microsoft)
       – Designing good tests is one of the hardest tasks in software development. [2]
       – Relying on test automation for any part of your testing is pointless if you don’t care about the results and look at failed tests every time they fail. [3]
       – “I’m all for saving time and money, but I have concerns with an automation approach based entirely (or largely) on automating a bunch of manual tests. Good test design considers manual and computer-assisted testing as two different attributes – not sequential tasks. That concept is such an ingrained approach to me (and the testers I get to work with), that the idea of write tests -> run tests manually -> automate those tests seems fundamentally broken.” [4]
       – Key benefits of API testing include [7]:
         • Reduced testing costs
         • Improved productivity
         • Higher functional (business logic) quality
       – “First off, let me state my main points for disliking GUI automation [8]:
         • It’s (typically) fragile – tests tend to break / stop working / work unsuccessfully often
         • It rarely lasts through multiple versions of a project (another aspect of fragility)
         • It’s freakin’ hard to automate UI (and keep track of state, verify, etc.)
         • Available tools are weak to moderate (this is arguable, depending on what you want to do with the tools).”
       – Only those problems which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing. [9]
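The API-over-GUI argument on this slide can be sketched concretely: tests that exercise business logic directly through its programmatic interface survive UI redesigns and fail only when behavior actually changes. The example below is a hypothetical illustration, not taken from any of the referenced posts; `DiscountCalculator` and all its rules are invented stand-ins for a real API under test.

```python
# Hypothetical API-level tests: DiscountCalculator stands in for a real
# business-logic component. Nothing here touches a GUI, so the tests do
# not break when screens or widgets change.

class DiscountCalculator:
    """Toy business-logic component standing in for a real API under test."""

    def discount(self, order_total):
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        if order_total >= 100:
            return round(order_total * 0.10, 2)  # 10% from 100 upward
        return 0.0


def test_no_discount_below_threshold():
    assert DiscountCalculator().discount(99.99) == 0.0

def test_ten_percent_at_threshold():
    assert DiscountCalculator().discount(100) == 10.0

def test_negative_total_rejected():
    try:
        DiscountCalculator().discount(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the API enforces its own input validation


if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_ten_percent_at_threshold()
    test_negative_total_rejected()
```

A GUI test of the same rule would need to launch the application, navigate to an order form, type values and scrape labels – every one of those steps is a fragility point the API-level version avoids.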
  • 6. Test Roles (Google + Microsoft)
       – Roles that testers play on teams vary. They vary a lot. You can’t compare them. That’s OK, and (IMO) part of the growth of the role. [5]
       – Which leads me to two test roles that, while they definitely exist, are dead to me. [5]
         • The first is the test-automation-only role. Taking manual test scripts written by one person and then automating those steps is a bad practice.
         • I’ll call the final role the “waterfall tester” – even though I know this role exists at some (fr)agile shops as well. This is the when-I’m-done-writing-it-you-can-test-it role.
       – Google Tester Roles [6]
         • The SWE, or Software Engineer, is the traditional developer role.
         • The SET, or Software Engineer in Test, is also a developer role, except their focus is on testability. They review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE code base but are more concerned with increasing quality and test coverage than with adding new features or increasing performance.
         • The TE, or Test Engineer, is the exact reverse of the SET: a role that puts testing first and development second. Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers and analyzers of risk.
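The SET’s “refactor code to make it more testable” work typically means breaking hard-wired dependencies so logic can be verified without real infrastructure. A minimal hypothetical sketch, with all names invented for illustration (this is not code from Google or the referenced book):

```python
# Hypothetical testability refactoring: instead of reading the system clock
# directly inside the logic, the clock is passed in, so a test can substitute
# a fake. FakeClock and is_quiet_hours are invented example names.

import datetime


class SystemClock:
    """Production dependency: reads the real wall-clock hour."""
    def current_hour(self):
        return datetime.datetime.now().hour


class FakeClock:
    """Test double: returns whatever hour the test configures."""
    def __init__(self, hour):
        self.hour = hour

    def current_hour(self):
        return self.hour


def is_quiet_hours(clock):
    """Pure logic, testable with any object exposing current_hour()."""
    hour = clock.current_hour()
    return hour < 7 or hour >= 22


# Deterministic unit tests, no dependency on the machine's actual time:
assert is_quiet_hours(FakeClock(23)) is True
assert is_quiet_hours(FakeClock(6)) is True
assert is_quiet_hours(FakeClock(12)) is False
```

Before this kind of refactoring, the only way to exercise the night-time branch would be to run the suite at night; after it, the logic is covered by fast, deterministic unit tests, which is exactly the coverage-and-quality focus the slide attributes to the SET role.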
  • 7. Context-driven Testing
       – Cem Kaner: “Rather than calling it a ‘school’, I prefer to refer to a context-driven approach.” [10]
       – “One of the striking themes in what I saw was a mistrust of test automation. Hey, I agree that regression test automation is a poor basis for an effective comprehensive testing strategy, but the mistrust went beyond that. Manual (session-based, of course) exploratory testing had become a Best Practice.” [11]
       – “I think we need to look more sympathetically at more contexts and more solutions. To ask more about what is right with alternative ideas and what we can learn from them. And to develop batteries of skills to work with them. For that, I think we need to get past the politics of The One School of context-driven testing.” [11]
       – Many great ideas from the context-driven world:
         • 37 Sources for Test Ideas [12]
         • The scripted and exploratory testing continuum [13]
         • Where does all the time go? [14]
         • A Testing Landscape [15]
         • 8 Layer Model for Exploratory Testing [16]
         • Silent Evidence in Testing [17]
         • Etc.
       – Mostly improvements, not innovations?
  • 8. Summary
       Is there any consensus in the testing world?
       – Testing requires qualified professionals who take ownership; hiring many unqualified testers will only result in higher costs and waste.
       – Reduce the time spent on useless artefacts, either through tool support or by removing them completely.
       – Always look for ways to support your testing activities with tools – not only test execution.
       – Everything is risk-based! The more data you have (qualitative or quantitative), the better.
       – Mindless automation is not valuable.
       – Manual scripted regression testing is not an effective way to find bugs; there should always be a good reason to run scripted manual tests, such as specific legal or customer requirements.
       – Customer involvement is important – both as input and as live user testers.
       – Focus on testing in agile projects.
  • 9. References
       [1] Testing 2.0: http://googletesting.blogspot.se/#!/2012/08/testing-20.html
       [2] Orchestrating Test Automation: http://angryweasel.com/blog/?p=496
       [3] Oops I Did It Again: http://angryweasel.com/blog/?p=432
       [4] Exploring Test Automation: http://angryweasel.com/blog/?p=412
       [5] Exploring Test Roles: http://angryweasel.com/blog/?p=444
       [6] How Google Tests Software – Part 2: http://googletesting.blogspot.se/search?q=test+roles#!/2011/02/how-google-tests-software-part-two.html
       [7] API Testing – How It Can Help: http://www.testingmentor.com/imtesty/2011/12/01/api-testinghow-can-it-help/
       [8] Design for *GUI* Automation: http://angryweasel.com/blog/?p=332
       [9] How Google Tests Software – Part 5: http://googletesting.blogspot.se/search?q=exploratory+testing#!/2011/03/how-google-tests-software-part-five.html
       [10] Context-driven Testing: http://context-driven-testing.com/?page_id=9
       [11] Context-driven Testing Is Not a Religion: http://context-driven-testing.com/?p=23
       [12] 37 Sources for Test Ideas: http://thetesteye.com/blog/2012/02/announcing-37-sources-for-test-ideas/
       [13] The Scripted and Exploratory Testing Continuum: http://thetesteye.com/blog/2012/01/the-scripted-and-exploratory-testing-continuum/
       [14] Where Does All the Time Go?: http://www.developsense.com/blog/2012/10/where-does-all-that-time-go/
       [15] A Testing Landscape: http://www.shino.de/2012/11/07/a-testing-landscape/
       [16] 8 Layer Model for Exploratory Testing: http://www.shino.de/2012/03/23/testbash-an-8-layer-model-for-exploratory-testing/
       [17] Silent Evidence in Testing: http://testers-headache.blogspot.se/2012/03/silent-evidence-in-testing.html
