
[Paul Holland] Trends in Software Testing


Trends in Software Testing: There has been a slow realization among top executives that simply outsourcing testing to the lowest bidder does not yield a sufficient level of quality in their software products. In this session, Paul Holland will discuss how American companies are starting to reconsider "factory school" testing and are no longer satisfied with simply outsourcing their "checking". As software development continues its dramatic shift toward Agile, what role can testers play, and how can they still add value?


  1. Copyright © 1995-2014, Doran Jones, Inc. and Satisfice, Inc. Paul Holland, Doran Jones, Inc.
  2. My Background
     - Managing Director, Testing Practice at Doran Jones
     - Independent S/W testing consultant, 4/2012 - 3/2014
     - 16+ years testing telecommunications equipment and reworking test methodologies at Alcatel-Lucent
     - 10+ years as a test manager
     - Presenter at STAREast, STARWest, Let's Test, EuroSTAR and CAST
     - Facilitator at 50+ peer conferences and workshops
     - Teacher of S/W testing for the past 6 years
     - Teacher of Rapid Software Testing
     - Military helicopter pilot - Canadian Sea Kings
  3. Attributions
     - Over the past 10 years I have spoken with many people regarding Exploratory Testing and metrics. Much of this talk comes from the Rapid Software Testing class developed by James Bach and Michael Bolton.
     - For the rest, I cannot directly attribute any specific aspects to any individual, but all of these people (and more) have influenced my opinions and thoughts on metrics: Cem Kaner, Ross Collard, Doug Hoffman, Scott Barber, John Hazel, Eric Proegler, Dan Downing, Greg McNelly, Ben Yaroch
  4. What I'm seeing in New York City
     - "C"-level executives are aware that:
       - They have quality issues
       - The current approach of outsourcing doesn't work
       - When you go with the lowest bidder, you often get the lowest skill level and the lowest quality of work
     - Software is becoming more:
       - Expected by customers, everywhere
       - Integrated with business needs
       - Complicated (increasing the risk of failure)
  5. Important Change in Approach
     - There is no such thing as a "Best Practice" in software testing:
       - The word "best" means there is nothing better
       - If something is the best, we think it can't be improved
       - Something that works well for one project may be a bad practice for another project or company
     - There are only "good" practices for a particular context
     - All practices can be improved
  6. Important Change in Approach
     - Stop using the term "Quality Assurance"
       - We can't "A" the "Q"
       - The term comes from manufacturing, a different industry with different challenges
       - Software development is as much about relationships and social interaction as it is about technology
       - Could you assure the quality of a meeting between intelligent and empowered individuals?
     - Use "Quality Assistance" or "Question Asker" or simply "Tester"
  7. Call this "Checking", not Testing. Checking means operating a product to check specific facts about it:
     - Observe: interact with the product in specific ways to collect specific observations
     - Evaluate: apply algorithmic decision rules to those observations
     - Report: report any failed checks
  8. Testing is… acquiring the competence, motivation, and credibility to… create the conditions necessary to… evaluate a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference, including… operating a product to check specific facts about it… …so that you help your clients to make informed decisions about risk.
  9. Testing is made up of many activities (a mind map on the original slide): Tacit Test Procedures, Consistency Oracles, Prospective Testing, Learning and Teaching, Commitment Management (incl. estimation), Recruiting Helpers, Managing Testing Logistics, Test Tooling and Artifact Development, Test Framing, Bug Advocacy & Triage, Project Post Mortem, Creating Archival Documentation, Guiding Helpers, Discovery of Curios, Issues & Risks, Building the Test Team, Designing Checks and Tests, Playing with the Product, Studying Results, Galumphing, Configuring Product & Tools, Schedule Management, Studying Customer Feedback, Relationship Building, Making Failure Productive, Sympathetic Testing, Maintaining Personal Health and Motivation, Team Leadership, Quasi-Functional Testing, Playing Programmer, Testing with Simulated Conditions, Testing a Simulation, Creating the Test Lab, Studying Specs, Managing Records, Playing Business Analyst, Opposition Research, Testability Advocacy, Cultivating Credibility
  10. Testing vs. Checking
     - TESTING (think "what testers do"): the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
     - CHECKING (think "fact checking"): the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
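The distinction can be made concrete in code: a check is nothing more than a specific observation of the product plus an algorithmic decision rule, with failures reported. A minimal sketch in Python (the product function `add` and the rule are invented for illustration):

```python
# A "check" in the sense above: observe a specific fact about the product,
# apply an algorithmic decision rule, and report any failures.
# No learning, questioning, or human judgment is involved.

def add(a, b):
    """Stand-in for the product under test (hypothetical)."""
    return a + b

def check_addition():
    observation = add(2, 2)      # interact with the product, observe
    return observation == 4      # algorithmic decision rule

failed = [] if check_addition() else ["check_addition"]
print("failed checks:", failed)  # report any failed checks
```

Everything a machine can do here is checking; deciding *which* facts are worth checking, and noticing what the rule does not cover, remains testing.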
  11. Exploratory Testing is…
     - an approach to testing… (applicable to any test technique)
     - that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work… (optimize how?)
     - by treating learning, test design, test execution and result evaluation as mutually supportive activities that run in parallel throughout the project.
  12. Session-Based Test Management
     - SBTM allows Exploratory Testing to be managed in an effective way that stands up to scrutiny
     - It allows planning, estimating, tracking and reporting of testing projects
     - It does NOT involve counting test cases or reporting on pass/fail counts
     - Read more online
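The unit of planning and tracking in SBTM is the session: a time-boxed block of uninterrupted exploratory testing guided by a charter, producing a reviewable report. As a rough sketch of the kind of record involved (field names are illustrative, not taken from the source):

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One SBTM session: time-boxed, chartered, reviewable.
    Note what is absent: no test-case counts, no pass/fail tallies."""
    charter: str                  # the mission guiding this session
    tester: str
    duration_minutes: int = 90    # a typical time box
    notes: list = field(default_factory=list)   # what was tried and learned
    bugs: list = field(default_factory=list)    # problems found
    issues: list = field(default_factory=list)  # questions and obstacles

session = TestSession(
    charter="Explore login error handling with malformed input",
    tester="paul",
)
session.bugs.append("500 error on empty password field")
print(f"{session.charter}: {len(session.bugs)} bug(s) logged")
```

Estimation and reporting then work in sessions ("this area needs about six sessions") rather than in test-case counts.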
  13. The Fallacy of Repeated Tests: Clearing Mines (diagram of a minefield)
  14. Totally Repeatable Tests Won't Clear the Minefield (diagram: mines and fixes)
  15. Variable Tests Are Therefore More Effective (diagram: mines and fixes)
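The minefield analogy can be illustrated with a toy simulation (all numbers invented): a perfectly repeatable test walks the same path on every run and can only ever hit the same mines, while varied tests cover new ground each run and keep finding mines.

```python
import random

random.seed(1)
FIELD = set(random.sample(range(1000), 50))  # 50 hidden "mines" (bugs)

def run_tests(path):
    """Return the mines (bugs) the given test path steps on."""
    return FIELD.intersection(path)

# Totally repeatable: the identical 20 steps, run after run.
repeated_path = list(range(0, 200, 10))
first_run = run_tests(repeated_path)
tenth_run = run_tests(repeated_path)   # deterministic: identical every time

# Variable: 20 fresh steps on each of 10 runs.
varied_found = set()
for _ in range(10):
    varied_found |= run_tests(random.sample(range(1000), 20))

print("repeated path, run 1 vs run 10:", len(first_run), len(tenth_run))
print("varied paths after 10 runs:", len(varied_found))
```

The repeated path's yield is frozen after the first run; the varied runs' cumulative yield keeps growing because each run samples parts of the field no earlier run touched.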
  16. Agile™ Development: claims often heard
     - Everyone on the SCRUM team must be able to perform all tasks for the team
     - Test-Driven Development (TDD) fulfils the testing needs of the project
     - Acceptance Test-Driven Development (ATDD) really fulfils the testing needs
     - Designers can test their own code just as well as a tester
  17. Agile™ Development
     - The BEST expected result from an excellent automation framework is an awesome SANITY CHECK of the software
     - Testing by testers is still required on most projects (with some exceptions, like Facebook)
     - Dedicated testers should support SCRUM teams from within the team and/or as "undone" work
  18. The Agile "Testing" Pyramid (diagram; its layers are labeled "Checking")
  19. A Better "Testing" Pyramid, from Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory
     - Automation is in the pyramid while manual testing is not; manual testing is performed as required
     - Manual testing is a cloud and is not attached to the pyramid: it can occur at any phase and be any size
  20. Allow some disposable time
     - How often do you account for your progress?
     - If you have any autonomy at all, you can risk investing some time in:
       - learning
       - thinking
       - refining approaches
       - better tests
     - Self-management is good!
  21. Allow some disposable time
     - If it turns out that you've made a bad investment… oh well
     - If it turns out that you've made a good investment, you might have:
       - learned something about the product
       - invented a more powerful test
       - found a bug
       - done a better job
       - avoided going down a dead end for too long
       - surprised and impressed your manager
  22. Copyright © 1995-2014, Doran Jones, Inc. and Satisfice, Inc.