
About the Authors

With over a quarter-century of software and systems engineering experience, Rex Black is President of Rex Black Consulting Services (RBCS), a leader in software, hardware, and systems testing. RBCS delivers consulting, outsourcing, and training services, employing the industry's most experienced and recognized consultants. RBCS' worldwide clientele save time and money through improved product development, decreased tech support calls, improved corporate reputation, and more. Rex is the most prolific author practicing in the field of software testing today. His popular first book, Managing the Testing Process, has sold over 40,000 copies around the world, and is now in its third edition. His six other books—Advanced Software Testing: Volumes I, II, and III, Critical Testing Processes, Foundations of Software Testing, and Pragmatic Software Testing—have also sold tens of thousands of copies. He has written over thirty articles; presented hundreds of papers, workshops, and seminars; and given about fifty keynote and other speeches at conferences and events around the world. Rex is the immediate past President of the International Software Testing Qualifications Board (ISTQB) and a Director of the American Software Testing Qualifications Board (ASTQB).

Jamie L. Mitchell has over 28 years of experience in developing and testing both hardware and software. He is a pioneer in the test automation field and has worked with a variety of vendor and open-source test automation tools since the first Windows tools were released with Windows 3.0. He has also written test tools for several platforms. Jamie specializes in increasing the productivity of automation and test teams through innovative ideas and custom tool extensions. In addition, he provides training, mentoring, process auditing, and expert technical support in all aspects of testing and automation. Jamie holds a Master of Computer Science degree from Lehigh University in Bethlehem, PA, and a Certified Software Test Engineer certification from QAI. He was an instructor and board member of the International Institute of Software Testing (IIST) and a contributing editor, technical editor, and columnist for the Journal of Software Testing Professionals. He has been a frequent speaker on testing and automation at several international conferences, including STAR, QAI, and PSQT.
Rex Black · Jamie L. Mitchell

Advanced Software Testing—Vol. 3
Guide to the ISTQB Advanced Certification as an Advanced Technical Test Analyst
Rex Black
rex_black@rbcs-us.com
Jamie L. Mitchell
jamie@go-tac.com

Editor: Dr. Michael Barabas
Project manager: Matthias Rossmanith
Copyeditor: Judy Flynn
Layout and Type: Josef Hegele
Proofreader: James Johnson
Cover Design: Helmut Kraus, www.exclam.de
Printer: Courier
Printed in USA
ISBN: 978-1-933952-39-0
1st Edition © 2011 by Rex Black and Jamie L. Mitchell
16 15 14 13 12 11    1 2 3 4 5

Rocky Nook
802 East Cota Street, 3rd Floor
Santa Barbara, CA 93103
www.rockynook.com

Library of Congress Cataloging-in-Publication Data
Black, Rex, 1964-
Advanced software testing : guide to the ISTQB advanced certification as an advanced technical test analyst / Rex Black, Jamie L. Mitchell. - 1st ed.
p. cm. - (Advanced software testing)
ISBN 978-1-933952-19-2 (v. 1 : alk. paper) - ISBN 978-1-933952-36-9 (v. 2 : alk. paper)
1. Electronic data processing personnel - Certification. 2. Computer software - Examinations - Study guides. 3. Computer software - Testing. I. Title.
QA76.3.B548 2008
005.14-dc22
2008030162

Distributed by O'Reilly Media
1005 Gravenstein Highway North
Sebastopol, CA 95472-2811

All product names and services identified throughout this book are trademarks or registered trademarks of their respective companies. They are used throughout this book in editorial fashion only and for the benefit of such companies. No such uses, or the use of any trade name, is intended to convey endorsement or other affiliation with the book. No part of the material protected by this copyright notice may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.

This book is printed on acid-free paper.
Rex Black's Acknowledgements

A complete list of people who deserve thanks for helping me along in my career as a test professional would probably make up its own small book. Here I'll confine myself to those who had an immediate impact on my ability to write this particular book.

First of all, I'd like to thank my colleagues on the American Software Testing Qualifications Board and the International Software Testing Qualifications Board, and especially the Advanced Syllabus Working Party, who made this book possible by creating the process and the material from which this book grew. Not only has it been a real pleasure sharing ideas with and learning from each of the participants, but I have had the distinct honor of being elected president of both the American Software Testing Qualifications Board and the International Software Testing Qualifications Board twice. I spent two terms in each of these roles, and I continue to serve as a board member, as the ISTQB Governance Officer, and as a member of the ISTQB Advanced Syllabus Working Party. I look back with pride at our accomplishments so far, I look forward with pride to what we'll accomplish together in the future, and I hope this book serves as a suitable expression of the gratitude and professional pride I feel toward what we have done for the field of software testing.

Next, I'd like to thank the people who helped us create the material that grew into this book. Jamie Mitchell co-wrote the materials in this book, our Advanced Technical Test Analyst instructor-led training course, and our Advanced Technical Test Analyst e-learning course. These materials, along with related materials in the corresponding Advanced Test Analyst book and courses, were reviewed, re-reviewed, and polished with hours of dedicated assistance by José Mata, Judy McKay, and Pat Masters. In addition, James Nazar, Corne Kruger, John Lyerly, Bhavesh Mistry, and Gyorgy Racz provided useful feedback on the first draft of this book. The task of assembling the e-learning
and live courseware from the constituent bits and pieces fell to Dena Pauletti, RBCS' extremely competent and meticulous systems engineer.

Of course, the Advanced syllabus could not exist without a foundation, specifically the ISTQB Foundation syllabus. I had the honor of working with that Working Party as well. I thank them for their excellent work over the years, creating the fertile soil from which the Advanced syllabus and thus this book sprang.

In the creation of the training courses and the materials that I contributed to this book, I have drawn on all the experiences I have had as an author, practitioner, consultant, and trainer. So, I have benefited from individuals too numerous to list. I thank those of you who have bought one of my previous books, for you contributed to my skills as a writer. I thank those of you who have worked with me on a project, for you have contributed to my abilities as a test manager, test analyst, and technical test analyst. I thank those of you who have hired me to work with you as a consultant, for you have given me the opportunity to learn from your organizations. I thank those of you who have taken a training course from me, for you have collectively taught me much more than I taught each of you. I thank my readers, colleagues, clients, and students, and hope that my contributions to you have repaid the debt of gratitude that I owe you.

For over a dozen years, I have run a testing services company, RBCS. From humble beginnings, RBCS has grown into an international consulting, training, and outsourcing firm with clients on six continents. While I have remained a hands-on contributor to the firm, over 100 employees, subcontractors, and business partners have been the driving force of our ongoing success. I thank all of you for your hard work for our clients. Without the success of RBCS, I could hardly avail myself of the luxury of writing technical books, which is a source of great pride but not a whole lot of money. Again, I hope that our mutual successes together have repaid the debt of gratitude that I owe each of you.

Finally, I thank my family, especially my wife, Laurel, and my daughters, Emma and Charlotte. Tolstoy was wrong: It is not true that all happy families are exactly the same. Our family life is quite hectic, and I know I miss a lot of it thanks to the endless travel and work demands associated with running a global testing services company and writing books. However, I've been able to enjoy seeing my daughters grow up as citizens of the world, with passports given to them before their first birthdays and full of stamps before they started losing
their baby teeth. Laurel, Emma, and Charlotte, I hope the joys of December beach sunburns in the Australian summer sun of Port Douglas, learning to ski in the Alps, hikes in the Canadian Rockies, and other experiences that frequent flier miles and an itinerant father can provide, make up in some way for the limited time we share together. I have won the lottery of life to have such a wonderful family.

Jamie Mitchell's Acknowledgements

What a long, strange trip it's been. The last 25 years has taken me from being a bench technician, fixing electronic audio components, to this time and place, where I have cowritten a book on some of the most technical aspects of software testing. It's a trip that has been both shared and guided by a host of people that I would like to thank.

To the many at both Moravian College and Lehigh University who started me off in my "Exciting Career in Computers": for your patience and leadership that instilled in me a burning desire to excel, I thank you.

To Terry Schardt, who hired me as a developer but made me a tester, thanks for pushing me to the dark side. To Tom Mundt and Chuck Awe, who gave me an incredible chance to lead, and to Barindralal Pal, who taught me that to lead was to keep on learning new techniques, thank you.

To Dean Nelson, who first asked me to become a consultant, and Larry Decklever, who continued my training, many thanks. A shout-out to Beth and Jan, who participated with me in choir rehearsals at Joe Senser's when things were darkest. Have one on me.

To my colleagues at TCQAA, SQE, and QAI who gave me chances to develop a voice while I learned how to speak, my heartfelt gratitude. To the people I am working with at ISTQB and ASTQB: I hope to be worthy of the honor
of working with you and expanding the field of testing. Thanks for the opportunity.

In my professional life, I have been tutored, taught, mentored, and shown the way by a host of people whose names deserve to be mentioned, but they are too abundant to recall. I would like to give all of you a collective thanks; I would be poorer for not knowing you.

To Rex Black for giving me a chance to coauthor the Advanced Technical Test Analyst course and this book: Thank you for your generosity and the opportunity to learn at your feet. For my partner in crime, Judy McKay: Even though our first tool attempt did not fly, I have learned a lot from you and appreciate both your patience and kindness. Hoist a fruity drink from me. To Laurel, Dena, and Michelle: Your patience with me is noted and appreciated. Thanks for being there.

And finally, to my family, who have seen so much less of me over the last 25 years than they might have wanted, as I strove to become all that I could be in my chosen profession: words alone cannot convey my thanks. To Beano, who spent innumerable hours helping me steal the time needed to get through school and set me on the path to here, my undying love and gratitude. To my loving wife, Susan, who covered for me at many of the real-life tasks while I toiled, trying to climb the ladder, my love and appreciation. I might not always remember to say it, but I do think it. And to my kids, Christopher and Kimberly, who have always been smarter than me but allowed me to pretend that I was the boss of them, thanks. Your tolerance and enduring support have been much appreciated.

Last, and probably least, to "da boys," Baxter and Boomer, Bitzi and Buster. Whether sitting in my lap while I was trying to learn how to test or sitting at my feet while writing this book, you guys have been my sanity check. You never cared how successful I was, as long as the doggie-chow appeared in your bowls, morning and night. Thanks.
Contents

Rex Black's Acknowledgements
Jamie Mitchell's Acknowledgements
Introduction

1 Test Basics
  1.1 Introduction
  1.2 Testing in the Software Lifecycle
  1.3 Specific Systems
  1.4 Metrics and Measurement
  1.5 Ethics
  1.6 Sample Exam Questions

2 Testing Processes
  2.1 Introduction
  2.2 Test Process Models
  2.3 Test Planning and Control
  2.4 Test Analysis and Design
    2.4.1 Non-functional Test Objectives
    2.4.2 Identifying and Documenting Test Conditions
    2.4.3 Test Oracles
    2.4.4 Standards
    2.4.5 Static Tests
    2.4.6 Metrics
  2.5 Test Implementation and Execution
    2.5.1 Test Procedure Readiness
    2.5.2 Test Environment Readiness
    2.5.3 Blended Test Strategies
    2.5.4 Starting Test Execution
    2.5.5 Running a Single Test Procedure
    2.5.6 Logging Test Results
    2.5.7 Use of Amateur Testers
    2.5.8 Standards
    2.5.9 Metrics
  2.6 Evaluating Exit Criteria and Reporting
    2.6.1 Test Suite Summary
    2.6.2 Defect Breakdown
    2.6.3 Confirmation Test Failure Rate
    2.6.4 System Test Exit Review
    2.6.5 Standards
    2.6.6 Evaluating Exit Criteria and Reporting Exercise
    2.6.7 System Test Exit Review
    2.6.8 Evaluating Exit Criteria and Reporting Exercise Debrief
  2.7 Test Closure Activities
  2.8 Sample Exam Questions

3 Test Management
  3.1 Introduction
  3.2 Test Management Documentation
  3.3 Test Plan Documentation Templates
  3.4 Test Estimation
  3.5 Scheduling and Test Planning
  3.6 Test Progress Monitoring and Control
  3.7 Business Value of Testing
  3.8 Distributed, Outsourced, and Insourced Testing
  3.9 Risk-Based Testing
    3.9.1 Risk Management
    3.9.2 Risk Identification
    3.9.3 Risk Analysis or Risk Assessment
    3.9.4 Risk Mitigation or Risk Control
    3.9.5 An Example of Risk Identification and Assessment Results
    3.9.6 Risk-Based Testing throughout the Lifecycle
    3.9.7 Risk-Aware Testing Standards
    3.9.8 Risk-Based Testing Exercise 1
    3.9.9 Risk-Based Testing Exercise Debrief 1
    3.9.10 Project Risk By-Products
    3.9.11 Requirements Defect By-Products
    3.9.12 Risk-Based Testing Exercise 2
    3.9.13 Risk-Based Testing Exercise Debrief 2
    3.9.14 Test Case Sequencing Guidelines
  3.10 Failure Mode and Effects Analysis
  3.11 Test Management Issues
  3.12 Sample Exam Questions

4 Test Techniques
  4.1 Introduction
  4.2 Specification-Based
    4.2.1 Equivalence Partitioning
      Avoiding Equivalence Partitioning Errors
      Composing Test Cases with Equivalence Partitioning
      Equivalence Partitioning Exercise
      Equivalence Partitioning Exercise Debrief
    4.2.2 Boundary Value Analysis
      Examples of Equivalence Partitioning and Boundary Values
      Non-functional Boundaries
      A Closer Look at Functional Boundaries
      Integers
      Floating Point Numbers
      Testing Floating Point Numbers
      How Many Boundaries?
      Boundary Value Exercise
      Boundary Value Exercise Debrief
    4.2.3 Decision Tables
      Collapsing Columns in the Table
      Combining Decision Table Testing with Other Techniques
      Nonexclusive Rules in Decision Tables
      Decision Table Exercise
      Decision Table Exercise Debrief
    4.2.4 State-Based Testing and State Transition Diagrams
      Superstates and Substates
      State Transition Tables
      Switch Coverage
      State Testing with Other Techniques
      State Testing Exercise
      State Testing Exercise Debrief
    4.2.5 Requirements-Based Testing Exercise
    4.2.6 Requirements-Based Testing Exercise Debrief
  4.3 Structure-Based
    4.3.1 Control-Flow Testing
      Building Control-Flow Graphs
      Statement Coverage
      Decision Coverage
      Loop Coverage
      Hexadecimal Converter Exercise
      Hexadecimal Converter Exercise Debrief
      Condition Coverage
      Decision/Condition Coverage
      Modified Condition/Decision Coverage (MC/DC)
      Multiple Condition Coverage
      Control-Flow Exercise
      Control-Flow Exercise Debrief
    4.3.2 Path Testing
      LCSAJ
      Basis Path/Cyclomatic Complexity Testing
      Cyclomatic Complexity Exercise
      Cyclomatic Complexity Exercise Debrief
    4.3.3 A Final Word on Structural Testing
    4.3.4 Structure-Based Testing Exercise
    4.3.5 Structure-Based Testing Exercise Debrief
  4.4 Defect- and Experience-Based
    4.4.1 Defect Taxonomies
    4.4.2 Error Guessing
    4.4.3 Checklist Testing
    4.4.4 Exploratory Testing
      Test Charters
      Exploratory Testing Exercise
      Exploratory Testing Exercise Debrief
    4.4.5 Software Attacks
      An Example of Effective Attacks
      Other Attacks
      Software Attack Exercise
      Software Attack Exercise Debrief
    4.4.6 Specification-, Defect-, and Experience-Based Exercise
    4.4.7 Specification-, Defect-, and Experience-Based Exercise Debrief
    4.4.8 Common Themes
  4.5 Static Analysis
    4.5.1 Complexity Analysis
    4.5.2 Code Parsing Tools
    4.5.3 Standards and Guidelines
    4.5.4 Data-Flow Analysis
    4.5.5 Set-Use Pairs
    4.5.6 Set-Use Pair Example
    4.5.7 Data-Flow Exercise
    4.5.8 Data-Flow Exercise Debrief
    4.5.9 Data-Flow Strategies
    4.5.10 Static Analysis for Integration Testing
    4.5.11 Call-Graph Based Integration Testing
    4.5.12 McCabe Design Predicate Approach to Integration Testing
    4.5.13 Hex Converter Example
    4.5.14 McCabe Design Predicate Exercise
    4.5.15 McCabe Design Predicate Exercise Debrief
  4.6 Dynamic Analysis
    4.6.1 Memory Leak Detection
    4.6.2 Wild Pointer Detection
    4.6.3 API Misuse Detection
  4.7 Sample Exam Questions

5 Tests of Software Characteristics
  5.1 Introduction
  5.2 Quality Attributes for Domain Testing
    5.2.1 Accuracy
    5.2.2 Suitability
    5.2.3 Interoperability
    5.2.4 Usability
    5.2.5 Usability Test Exercise
    5.2.6 Usability Test Exercise Debrief
  5.3 Quality Attributes for Technical Testing
    5.3.1 Technical Security
    5.3.2 Security Issues
    5.3.3 Timely Information
    5.3.4 Reliability
    5.3.5 Efficiency
    5.3.6 Multiple Flavors of Efficiency Testing
    5.3.7 Modeling the System
    5.3.8 Efficiency Measurements
    5.3.9 Examples of Efficiency Bugs
    5.3.10 Exercise: Security, Reliability, and Efficiency
    5.3.11 Exercise: Security, Reliability, and Efficiency Debrief
    5.3.12 Maintainability
    5.3.13 Subcharacteristics of Maintainability
    5.3.14 Portability
    5.3.15 Maintainability and Portability Exercise
    5.3.16 Maintainability and Portability Exercise Debrief
  5.4 Sample Exam Questions

6 Reviews
  6.1 Introduction
  6.2 The Principles of Reviews
  6.3 Types of Reviews
  6.4 Introducing Reviews
  6.5 Success Factors for Reviews
    6.5.1 Deutsch's Design Review Checklist
    6.5.2 Marick's Code Review Checklist
    6.5.3 The OpenLaszlo Code Review Checklist
  6.6 Code Review Exercise
  6.7 Code Review Exercise Debrief
  6.8 Deutsch Checklist Review Exercise
  6.9 Deutsch Checklist Review Exercise Debrief
  6.10 Sample Exam Questions

7 Incident Management
  7.1 Introduction
  7.2 When Can a Defect Be Detected?
  7.3 Defect Lifecycle
  7.4 Defect Fields
  7.5 Metrics and Incident Management
  7.6 Communicating Incidents
  7.7 Incident Management Exercise
  7.8 Incident Management Exercise Debrief
  7.9 Sample Exam Questions

8 Standards and Test Process Improvement

9 Test Tools and Automation
  9.1 Introduction
  9.2 Test Tool Concepts
    9.2.1 The Business Case for Automation
    9.2.2 General Test Automation Strategies
    9.2.3 An Integrated Test System Example
  9.3 Test Tool Categories
    9.3.1 Test Management Tools
    9.3.2 Test Execution Tools
    9.3.3 Debugging, Troubleshooting, Fault Seeding, and Injection Tools
    9.3.4 Static and Dynamic Analysis Tools
    9.3.5 Performance Testing Tools
    9.3.6 Monitoring Tools
    9.3.7 Web Testing Tools
    9.3.8 Simulators and Emulators
  9.4 Keyword-Driven Test Automation
    9.4.1 Capture/Replay Exercise
    9.4.2 Capture/Replay Exercise Debrief
    9.4.3 Evolving from Capture/Replay
    9.4.4 The Simple Framework Architecture
    9.4.5 Data-Driven Architecture
    9.4.6 Keyword-Driven Architecture
    9.4.7 Keyword Exercise
    9.4.8 Keyword Exercise Debrief
  9.5 Performance Testing
    9.5.1 Performance Testing Exercise
    9.5.2 Performance Testing Exercise Debrief
  9.6 Sample Exam Questions

10 People Skills and Team Composition
  10.1 Introduction
  10.2 Individual Skills
  10.3 Test Team Dynamics
  10.4 Fitting Testing within an Organization
  10.5 Motivation
  10.6 Communication
  10.7 Sample Exam Questions

11 Preparing for the Exam
  11.1 Learning Objectives
    11.1.1 Level 1: Remember (K1)
    11.1.2 Level 2: Understand (K2)
    11.1.3 Level 3: Apply (K3)
    11.1.4 Level 4: Analyze (K4)
    11.1.5 Where Did These Levels of Learning Objectives Come From?
  11.2 ISTQB Advanced Exams
    11.2.1 Scenario-Based Questions
    11.2.2 On the Evolution of the Exams

Appendix A – Bibliography
  Advanced Syllabus Referenced Standards
  Advanced Syllabus Referenced Books
  Other Referenced Books
  Other References

Appendix B – HELLOCARMS: The Next Generation of Home Equity Lending System Requirements Document
  I Table of Contents
  II Versioning
  III Glossary
  000 Introduction
  001 Informal Use Case
  003 Scope
  004 System Business Benefits
  010 Functional System Requirements
  020 Reliability System Requirements
  030 Usability System Requirements
  040 Efficiency System Requirements
  050 Maintainability System Requirements
  060 Portability System Requirements
  A Acknowledgement

Appendix C – Answers to Sample Questions

Index
Introduction

This is a book on advanced software testing for technical test analysts. By that we mean that we address topics that a technical practitioner who has chosen software testing as a career should know. We focus on those skills and techniques related to test analysis, test design, test tools and automation, test execution, and test results evaluation. We take these topics in a more technical direction than in the earlier volume for test analysts by including details of test design using structural techniques and details about the use of dynamic analysis to monitor internal status. We assume that you know the basic concepts of test engineering, test design, test tools, testing in the software development lifecycle, and test management. You are ready to mature your level of understanding of these concepts and to apply these advanced concepts to your daily work as a test professional.

This book follows the International Software Testing Qualifications Board's (ISTQB) Advanced Level Syllabus, with a focus on the material and learning objectives for the advanced technical test analyst. As such, this book can help you prepare for the ISTQB Advanced Level Technical Test Analyst exam. You can use this book to self-study for this exam or as part of an e-learning or instructor-led course on the topics covered in those exams. If you are taking an ISTQB-accredited Advanced Level Technical Test Analyst training course, this book is an ideal companion text for that course.

However, even if you are not interested in the ISTQB exams, you will find this book useful to prepare yourself for advanced work in software testing. If you are a test manager, test director, test analyst, technical test analyst, automated test engineer, manual test engineer, programmer, or in any other field where a sophisticated understanding of software testing is needed—especially an understanding of the particularly technical aspects of testing such as white-box testing and test automation—then this book is for you.
This book focuses on technical test analysis. It consists of 11 chapters, addressing the following material:

1. Basic aspects of software testing
2. Testing processes
3. Test management
4. Test techniques
5. Testing of software characteristics
6. Reviews
7. Incident (defect) management
8. Standards and test process improvement
9. Test tools and automation
10. People skills (team composition)
11. Preparing for the exam

Since the structure follows the structure of the ISTQB Advanced syllabus, some of the chapters address the material in great detail because they are central to the technical test analyst role. Some of the chapters address the material in less detail because the technical test analyst need only be familiar with it. For example, we cover in detail test techniques—including highly technical techniques like structure-based testing and dynamic analysis and test automation—in this book because these are central to what a technical test analyst does, while we spend less time on test management and no time at all on test process improvement.

If you have already read Advanced Software Testing: Volume 1, you will notice that there is overlap in some chapters in the book, especially chapters 1, 2, 6, 7, and 10. (There is also some overlap in chapter 4, in the sections on black-box and experience-based testing.) This overlap is inherent in the structure of the ISTQB Advanced syllabus, where both learning objectives and content are common across the two analysis modules in some areas. We spent some time grappling with how to handle this commonality and decided to make this book completely free-standing. That meant that we had to include common material for those who have not read volume 1. If you have read volume 1, you may choose to skip chapters 1, 2, 6, 7, and 10, though people using this book to prepare for the Technical Test Analysis exam should read those chapters for review.

If you have also read Advanced Software Testing: Volume 2, which is for test managers, you'll find parallel chapters that address the material in detail but
with different emphasis. For example, technical test analysts need to know quite a bit about incident management. Technical test analysts spend a lot of time creating incident reports, and you need to know how to do that well. Test managers also need to know a lot about incident management, but they focus on how to keep incidents moving through their reporting and resolution lifecycle and how to gather metrics from such reports.

What should a technical test analyst be able to do? Or, to ask the question another way, what should you have learned to do—or learned to do better—by the time you finish this book?

■ Structure the tasks defined in the test strategy in terms of technical requirements (including the coverage of technically related quality risks)
■ Analyze the internal structure of the system in sufficient detail to meet the expected quality level
■ Evaluate the system in terms of technical quality attributes such as performance, security, etc.
■ Prepare and execute adequate activities, and report on their progress
■ Conduct technical testing activities
■ Provide the necessary evidence to support evaluations
■ Implement the necessary tools and techniques to achieve the defined goals

In this book, we focus on these main concepts. We suggest that you keep these high-level objectives in mind as we proceed through the material in each of the following chapters.

In writing this book, we've kept foremost in our minds the question of how to make this material useful to you. If you are using this book to prepare for an ISTQB Advanced Level Technical Test Analyst exam, then we recommend that you read chapter 11 first, then read the other 10 chapters in order. If you are using this book to expand your overall understanding of testing to an advanced and highly technical level but do not intend to take the ISTQB Advanced Level Technical Test Analyst exam, then we recommend that you read chapters 1 through 10 only. If you are using this book as a reference, then feel free to read only those chapters that are of specific interest to you.

Each of the first 10 chapters is divided into sections. For the most part, we have followed the organization of the ISTQB Advanced syllabus to the point of section divisions, but subsections and sub-subsection divisions in the syllabus
might not appear. You'll also notice that each section starts with a text box describing the learning objectives for this section. If you are curious about how to interpret those K2, K3, and K4 tags in front of each learning objective, and how learning objectives work within the ISTQB syllabus, read chapter 11.

Software testing is in many ways similar to playing the piano, cooking a meal, or driving a car. How so? In each case, you can read books about these activities, but until you have practiced, you know very little about how to do it. So we've included practical, real-world exercises for the key concepts. We encourage you to practice these concepts with the exercises in the book. Then, make sure you take these concepts and apply them on your projects. You can become an advanced testing professional only by applying advanced test techniques to actual software testing.

ISTQB Copyright

This book is based on the ISTQB Advanced Syllabus version 2007. It also references the ISTQB Foundation Syllabus version 2011. It uses terminology definitions from the ISTQB Glossary version 2.1. These three documents are copyrighted by the ISTQB and used by permission.
1 Test Basics

"Read the directions and directly you will be directed in the right direction."
A doorknob in Lewis Carroll's surreal fantasy, Alice in Wonderland.

The first chapter of the Advanced syllabus is concerned with contextual and background material that influences the remaining chapters. There are five sections.

1. Introduction
2. Testing in the Software Lifecycle
3. Specific Systems
4. Metrics and Measurement
5. Ethics

Let's look at each section and how it relates to technical test analysis.

1.1 Introduction

Learning objectives
Recall of content only

This chapter, as the name implies, introduces some basic aspects of software testing. These central testing themes have general relevance for testing professionals. There are four major areas:

■ Lifecycles and their effects on testing
■ Special types of systems and their effects on testing
■ Metrics and measures for testing and quality
■ Ethical issues
ISTQB Glossary
software lifecycle: The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note that these phases may overlap or be performed iteratively.

Many of these concepts are expanded upon in later chapters. This material expands on ideas introduced in the Foundation syllabus.

1.2 Testing in the Software Lifecycle

Learning objectives
Recall of content only

Chapter 2 in the Foundation syllabus discusses integrating testing into the software lifecycle. As with the Foundation syllabus, in the Advanced syllabus, you should understand that testing must be integrated into the software lifecycle to succeed. This is true whether the particular lifecycle chosen is sequential, incremental, iterative, or spiral.

Proper alignment between the testing process and other processes in the lifecycle is critical for success. This is especially true at key interfaces and handoffs between testing and lifecycle activities such as these:

■ Requirements engineering and management
■ Project management
■ Configuration and change management
■ Software development and maintenance
■ Technical support
■ Technical documentation
Let's look at two examples of alignment.

In a sequential lifecycle model, a key assumption is that the project team will define the requirements early in the project and then manage the (hopefully limited) changes to those requirements during the rest of the project. In such a situation, if the team follows a formal requirements process, an independent test team in charge of the system test level can follow an analytical requirements-based test strategy.

Using a requirements-based strategy in a sequential model, the test team would start—early in the project—planning and designing tests following an analysis of the requirements specification to identify test conditions. This planning, analysis, and design work might identify defects in the requirements, making testing a preventive activity. Failure detection would start later in the lifecycle, once system test execution began.

However, suppose the project follows an incremental lifecycle model, adhering to one of the agile methodologies like Scrum. The test team won't receive a complete set of requirements early in the project, if ever. Instead, the test team will receive requirements at the beginning of each sprint, which typically lasts anywhere from two to four weeks.

Rather than analyzing extensively documented requirements at the outset of the project, the test team can instead identify and prioritize key quality risk areas associated with the content of each sprint; i.e., they can follow an analytical risk-based test strategy. Specific test designs and implementation will occur immediately before test execution, potentially reducing the preventive role of testing. Failure detection starts very early in the project, at the end of the first sprint, and continues in repetitive, short cycles throughout the project. In such a case, testing activities in the fundamental testing process overlap and are concurrent with each other as well as with major activities in the software lifecycle.

No matter what the lifecycle—and indeed, especially with the more fast-paced agile lifecycles—good change management and configuration management are critical for testing. A lack of proper change management results in an inability of the test team to keep up with what the system is and what it should do. As was discussed in the Foundation syllabus, a lack of proper configuration management may lead to loss of artifact changes, an inability to say what was tested at what point in time, and severe lack of clarity around the meaning of the test results.
ISTQB Glossary
system of systems: Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple interconnected domains, addressing large-scale interdisciplinary common problems and purposes, usually without a common management structure.

The Foundation syllabus cited four typical test levels:

■ Unit or component
■ Integration
■ System
■ Acceptance

The Foundation syllabus mentioned some reasons for variation in these levels, especially with integration and acceptance. Integration testing can mean component integration testing—integrating a set of components to form a system, testing the builds throughout that process. Or it can mean system integration testing—integrating a set of systems to form a system of systems, testing the system of systems as it emerges from the conglomeration of systems. As discussed in the Foundation syllabus, acceptance test variations include user acceptance tests and regulatory acceptance tests.

Along with these four levels and their variants, at the Advanced level you need to keep in mind additional test levels that you might need for your projects. These could include the following:

■ Hardware-software integration testing
■ Feature interaction testing
■ Customer product integration testing

You should expect to find most if not all of the following for each level:

■ Clearly defined test goals and scope
■ Traceability to the test basis (if available)
■ Entry and exit criteria, as appropriate both for the level and for the system lifecycle
■ Test deliverables, including results reporting
■ Test techniques that will be applied, as appropriate for the level, for the team and for the risks inherent in the system
■ Measurements and metrics
■ Test tools, where applicable and as appropriate for the level
■ And, if applicable, compliance with organizational or other standards

When RBCS associates perform assessments of test teams, we often find organizations that use test levels but that perform them in isolation. Such isolation leads to inefficiencies and confusion. While these topics are discussed more in Advanced Software Testing: Volume 2, test analysts should keep in mind that using documents like test policies and frequent contact between test-related staff can coordinate the test levels to reduce gaps, overlap, and confusion about results.

Let's take a closer look at this concept of alignment. We'll use the V-model shown in figure 1-1 as an example. We'll further assume that we are talking about the system test level.

[Figure 1-1 is a V-model diagram. Its legible labels include Concept, Capture Requirements, Design/System, Implement Component, Develop Tests, System Test, System Integration Test, and Acceptance Tests.]

Figure 1-1 V-model
In the V-model, with a well-aligned test process, test planning occurs concurrently with project planning. In other words, the moment that the testing team becomes involved is at the very start of the project. Once the test plan is approved, test control begins. Test control continues through to test closure. Analysis, design, implementation, execution, evaluation of exit criteria, and test results reporting are carried out according to the plan. Deviations from the plan are managed.

Test analysis starts immediately after or even concurrently with test planning. Test analysis and test design happen concurrently with requirements, high-level design, and low-level design. Test implementation, including test environment implementation, starts during system design and completes just before test execution begins.

Test execution begins when the test entry criteria are met. More realistically, test execution starts when most entry criteria are met and any outstanding entry criteria are waived. In V-model theory, the entry criteria would include successful completion of both component test and integration test levels. Test execution continues until the test exit criteria are met, though again some of these may often be waived. Evaluation of test exit criteria and reporting of test results occur throughout test execution. Test closure activities occur after test execution is declared complete.

This kind of precise alignment of test activities with each other and with the rest of the system lifecycle will not happen simply by accident. Nor can you expect to instill this alignment continuously throughout the process, without any forethought. Rather, for each test level, no matter what the selected software lifecycle and test process, the test manager must perform this alignment. Not only must this happen during test and project planning, but test control includes acting to ensure ongoing alignment.

No matter what test process and software lifecycle are chosen, each project has its own quirks. This is especially true for complex projects such as the systems of systems projects common in the military and among RBCS's larger clients. In such a case, the test manager must plan not only to align test processes, but also to modify them. Off-the-rack process models, whether for testing alone or for the entire software lifecycle, don't fit such complex projects well.
  • 28. 1.3 Specific Systems 71.3 Specific Systems Learning objectives Recall of content onlyIn this section, we are going to talk about how testing affects—and is affectedby—the need to test two particular types of systems. The first type is systems ofsystems. The second type is safety-critical systems. Systems of systems are independent systems tied together to serve a com-mon purpose. Since they are independent and tied together, they often lack asingle, coherent user or operator interface, a unified data model, compatibleexternal interfaces, and so forth.Systems of systems projects include the following characteristics and risks:■ The integration of commercial off-the-shelf (COTS) software along with some amount of custom development, often taking place over a long period.■ Significant technical, lifecycle, and organizational complexity and heterogeneity. This organizational and lifecycle complexity can include issues of confidentiality, company secrets, and regulations.■ Different development lifecycles and other processes among disparate teams, especially—as is frequently the case—when insourcing, outsourcing, and offshoring are involved.■ Serious potential reliability issues due to intersystem coupling, where one inherently weaker system creates ripple-effect failures across the entire system of systems.■ System integration testing, including interoperability testing, is essential. Well-defined interfaces for testing are needed.At the risk of restating the obvious, systems of systems projects are more com-plex than single-system projects. The complexity increase applies organization-ally, technically, processwise, and teamwise. Good project management, formaldevelopment lifecycles and processes, configuration management, and qualityassurance become more important as size and complexity increase. Let’s focus on the lifecycle implications for a moment.
  • 29. 8 1 Test Basics As mentioned earlier, with systems of systems projects, we are typically going to have multiple levels of integration. First, we will have component inte- gration for each system, and then we’ll have system integration as we build the system of systems. We will also typically have multiple version management and version con- trol systems and processes, unless all the systems happen to be built by the same (presumably large) organization and that organization follows the same approach throughout its software development team. This kind of unified approach to such systems and processes is not something that we commonly see during assessments of large companies, by the way. The duration of projects tends to be long. We have seen them planned for as long as five to seven years. A system of systems project with five or six systems might be considered relatively short and relatively small if it lasted “only” a year and involved “only” 40 or 50 people. Across this project, there are multiple test levels, usually owned by different parties. Because of the size and complexity of the project, it’s easy for handoffs and transfers of responsibility to break down. So, we need formal information trans- fer among project members (especially at milestones), transfers of responsibility within the team, and handoffs. (A handoff, for those of you unfamiliar with the term, is a situation in which some work product is delivered from one group to another and the receiving group must carry out some essential set of activities with that work product.) Even when we’re integrating purely off-the-shelf systems, these systems are evolving. That’s all the more likely to be true with custom systems. So we have the management challenge of coordinating development of the individual sys- tems and the test analyst challenge of proper regression testing at the system of systems level when things change. Especially with off-the-shelf systems, maintenance testing can be trig- gered—sometimes without much warning—by external entities and events such as obsolescence, bankruptcy, or upgrade of an individual system. If you think of the fundamental test process in a system of systems project, the progress of levels is not two-dimensional. Instead, imagine a sort of pyrami- dal structure, as shown in figure 1-2.
[Figure 1–2 is a pyramid diagram: component test, component integration test, and system test levels for each individual system (System A, System B) form the base, with the system integration test, systems test, and user acceptance test levels for the system of systems above them.]
Figure 1–2 Fundamental test process in a system of systems project
At the base, we have component testing. A separate component test level exists for each system.
Moving up the pyramid, you have component integration testing. A separate component integration test level exists for each system.
Next, we have system testing. A separate system test level exists for each system.
Note that, for each of these test levels, we have separate organizational ownership if the systems come from different vendors. We also probably have separate team ownership since multiple groups often handle component, integration, and system test.
Continuing to move up the pyramid, we come to system integration testing. Now, finally, we are talking about a single test level across all systems. Next above that is systems testing, focusing on end-to-end tests that span all the systems. Finally, we have user acceptance testing. For each of these test levels, while we have single organizational ownership, we probably have separate team ownership.
Let's move on to safety-critical systems. Simply put, safety-critical systems are those systems upon which lives depend. Failure of such a system—or even temporary performance or reliability degradation or undesirable side effects as support actions are carried out—can injure or kill people or, in the case of military systems, fail to injure or kill people at a critical juncture of a battle.
ISTQB Glossary
safety-critical system: A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.
Safety-critical systems, like systems of systems, have certain associated characteristics and risks:
■ Since defects can cause death, and deaths can cause civil and criminal penalties, proof of adequate testing can be and often is used to reduce liability.
■ For obvious reasons, various regulations and standards often apply to safety-critical systems. The regulations and standards can constrain the process, the organizational structure, and the product. Unlike the usual constraints on a project, though, these are constructed specifically to increase the level of quality rather than to enable trade-offs to enhance schedule, budget, or feature outcomes at the expense of quality. Overall, there is a focus on quality as a very important project priority.
■ There is typically a rigorous approach to both development and testing. Throughout the lifecycle, traceability extends all the way from regulatory requirements to test results. This provides a means of demonstrating compliance. This requires extensive, detailed documentation but provides high levels of auditability, even by non-test experts. Audits are common if regulations are imposed. Demonstrating compliance can involve tracing from the regulatory requirement through development to the test results. An outside party typically performs the audits. Therefore, traceability must be established up front and maintained as the work is carried out, from both a people and a process point of view.
During the lifecycle—often as early as design—the project team uses safety analysis techniques to identify potential problems. As with quality risk analysis, safety analysis will identify risk items that require testing. Single points of failure are often resolved through system redundancy, and the ability of that redundancy to alleviate the single point of failure must be tested.
In some cases, safety-critical systems are complex systems or even systems of systems. In other cases, non-safety-critical components or systems are integrated into safety-critical systems or systems of systems.
ISTQB Glossary
metric: A measurement scale and the method used for measurement.
measurement scale: A scale that constrains the type of data analysis that can be performed on it.
measurement: The process of assigning a number or category to an entity to describe an attribute of that entity.
measure: The number or category assigned to an attribute of an entity by making a measurement.
For example, networking or communication equipment is not inherently a safety-critical system, but if integrated into an emergency dispatch or military system, it becomes part of a safety-critical system.
Formal quality risk management is essential in these situations. Fortunately, a number of such techniques exist, such as failure mode and effect analysis; failure mode, effect, and criticality analysis; hazard analysis; and software common cause failure analysis. We'll look at a less formal approach to quality risk analysis and management in chapter 3.
1.4 Metrics and Measurement
Learning objectives
Recall of content only
Throughout this book, we use metrics and measurement to establish expectations and guide testing by those expectations. You can and should apply metrics and measurements throughout the software development lifecycle because well-established metrics and measures, aligned with project goals and objectives, will enable technical test analysts to track and report test and quality results to management in a consistent and coherent way.
A lack of metrics and measurements leads to purely subjective assessments of quality and testing. This results in disputes over the meaning of test results toward the end of the lifecycle. It also results in a lack of clearly perceived and communicated value, effectiveness, and efficiency for testing.
Not only must we have metrics and measurements, we also need goals. What is a "good" result for a given metric? An acceptable result? An unacceptable result? Without defined goals, successful testing is usually impossible. In fact, when we perform assessments for our clients, we more often than not find ill-defined metrics of test team effectiveness and efficiency with no goals and thus bad and unrealistic expectations (which of course aren't met). We can establish realistic goals for any given metric by establishing a baseline measure for that metric and checking current capability, comparing that baseline against industry averages, and, if appropriate, setting realistic targets for improvement to meet or exceed the industry average.
There's just about no end to what can be subjected to a metric and tracked through measurement. Consider the following:
■ Planned schedule and coverage
■ Requirements and their schedule, resource, and task implications for testing
■ Workload and resource usage
■ Milestones and scope of testing
■ Planned and actual costs
■ Risks, both quality and project risks
■ Defects, including total found, total fixed, current backlog, average closure periods, and configuration, subsystem, priority, or severity distribution
During test planning, we establish expectations in the form of goals for the various metrics. As part of test control, we can measure actual outcomes and trends against these goals. As part of test reporting, we can consistently explain to management various important aspects of the process, product, and project, using objective, agreed-upon metrics with realistic, achievable goals.
When thinking about a testing metrics and measurement program, there are three main areas to consider: definition, tracking, and reporting.
Let's start with definition. In a successful testing metrics program, you define a useful, pertinent, and concise set of quality and test metrics for a project. You avoid too large a set of metrics because this will prove difficult and perhaps expensive to measure while often confusing rather than enlightening the viewers and stakeholders.
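Before moving on, here is a minimal sketch of what measuring actual outcomes against defined goals can look like in practice. The metric names and target values below are invented purely for illustration; real goals would come from the baselining and industry comparison described earlier.

# Minimal sketch: checking measured test metrics against agreed goals.
# The metric names and target values are invented for illustration only.

GOALS = {
    "requirements_covered_pct": {"target": 95.0, "higher_is_better": True},
    "defect_closure_days_avg":  {"target": 14.0, "higher_is_better": False},
    "defect_backlog":           {"target": 50,   "higher_is_better": False},
}

def evaluate(actuals):
    """Compare actual measurements against goals and report a status per metric."""
    report = {}
    for name, goal in GOALS.items():
        actual = actuals.get(name)
        if actual is None:
            report[name] = "not measured"
            continue
        met = actual >= goal["target"] if goal["higher_is_better"] else actual <= goal["target"]
        report[name] = "meets goal" if met else "below goal"
    return report

if __name__ == "__main__":
    # Example measurements captured during test control.
    print(evaluate({"requirements_covered_pct": 91.2,
                    "defect_closure_days_avg": 9.5,
                    "defect_backlog": 73}))

Whether the goals live in a spreadsheet, a dashboard, or a script like this matters far less than that they are defined, agreed upon, and checked regularly as part of test control.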
  • 34. 1.4 Metrics and Measurement 13 You also want to ensure uniform, agreed-upon interpretations of these met-rics to minimize disputes and divergent opinions about the meaning of certainmeasures of outcomes, analyses, and trends. There’s no point in having a met-rics program if everyone has an utterly divergent opinion about what particularmeasures mean. Finally, define metrics in terms of objectives and goals for a process or task,for components or systems, and for individuals or teams. Victor Basili’s well-known Goal Question Metric technique is one way toevolve meaningful metrics. (We prefer to use the word objective where Basiliuses goal.) Using this technique, we proceed from the objectives of the effort—in this case, testing—to the questions we’d have to answer to know if we wereachieving those objectives to, ultimately, the specific metrics. For example, one typical objective of testing is to build confidence. One nat-ural question that arises in this regard is, How much of the system has beentested? Metrics for coverage include percentage requirements covered by tests,percentage of branches and statements covered by tests, percentage of interfacescovered by tests, percentage of risks covered by tests, and so forth. Let’s move on to tracking. Since tracking is a recurring activity in a metrics program, the use of auto-mated tool support can reduce the time required to capture, track, analyze,report, and measure the metrics. Be sure to apply objective and subjective analyses for specific metrics overtime, especially when trends emerge that could allow for multiple interpreta-tions of meaning. Try to avoid jumping to conclusions or delivering metrics thatencourage others to do so. Be aware of and manage the tendency for people’s interests to affect theinterpretation they place on a particular metric or measure. Everyone likes tothink they are objective—and, of course, right as well as fair!—but usuallypeople’s interests affect their conclusions. Finally, let’s look at reporting. Most importantly, reporting of metrics and measures should enlightenmanagement and other stakeholders, not confuse or misdirect them. In part,this is achieved through smart definition of metrics and careful tracking, butit is possible to take perfectly clear and meaningful metrics and confusepeople with them through bad presentation. Edward Tufte’s series of books,
ISTQB Glossary
ethics: No definition provided in the ISTQB Glossary.
starting with The Visual Display of Quantitative Information, is a treasure trove of ideas about how to develop good charts and graphs for reporting purposes.1
Good testing reports based on metrics should be easily understood, not overly complex and certainly not ambiguous. The reports should draw the viewer's attention toward what matters most, not toward trivialities. In that way, good testing reports based on metrics and measures will help management guide the project to success.
Not all types of graphical displays of metrics are equal—or equally useful. A snapshot of data at a moment in time, as shown in a table, might be the right way to present some information, such as the coverage planned and achieved against certain critical quality risk areas. A graph of a trend over time might be a useful way to present other information, such as the total number of defects reported and the total number of defects resolved since the start of testing. An analysis of causes or relationships might be a useful way to present still other information, such as a scatter plot showing the correlation (or lack thereof) between years of tester experience and percentage of bug reports rejected.
1.5 Ethics
Learning objectives
Recall of content only
Many professions have ethical standards. In the context of professionalism, ethics are "rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc."2
1. The three books of Tufte's that Rex has read and can strongly recommend on this topic are The Visual Display of Quantitative Information, Visual Explanations, and Envisioning Information (all published by Graphics Press, Cheshire, CT).
2. Definition from
  • 36. 1.5 Ethics 15Since, as a technical test analyst, you’ll often have access to confidential andprivileged information, ethical guidelines can help you to use that informationappropriately. In addition, you should use ethical guidelines to choose the bestpossible behaviors and outcomes for a given situation, given your constraints.The phrase “best possible” means for everyone, not just you. Here is an example of ethics in action. One of the authors, Rex Black, ispresident of three related international software testing consultancies, RBCS,RBCS AU/NZ, and Software TestWorx. He also serves on the ISTQB andASTQB boards of directors. As such, he might have and does have insight intothe direction of the ISTQB program that RBCS’ competitors in the softwaretesting consultancy business don’t have. In some cases, such as helping to develop syllabi, Rex has to make thosebusiness interests clear to people, but he is allowed to help do so. Rex helpedwrite both the Foundation and Advanced syllabi. In other cases, such as developing exam questions, Rex agreed, along withhis colleagues on the ASTQB, that he should not participate. Direct access to theexam questions would make it all too likely that, consciously or unconsciously,RBCS would warp its training materials to “teach the exam.” As you advance in your career as a tester, more and more opportunities toshow your ethical nature—or to be betrayed by a lack of it—will come your way.It’s never too early to inculcate a strong sense of ethics.The ISTQB Advanced syllabus makes it clear that the ISTQB expects certificateholders to adhere to the following code of ethics.PUBLIC – Certified software testers shall act consistently with the public inter-est. For example, if you are working on a safety-critical system and are asked toquietly cancel some defect reports, it’s an ethical problem if you do so.CLIENT AND EMPLOYER – Certified software testers shall act in a mannerthat is in the best interests of their client and employer and consistent with thepublic interest. For example, if you know that your employer’s major project isin trouble and you short-sell the stock and then leak information about theproject problems to the Internet, that’s a real ethical lapse—and probably acriminal one too.PRODUCT – Certified software testers shall ensure that the deliverables theyprovide (on the products and systems they test) meet the highest professional
  • 37. 16 1 Test Basics standards possible. For example, if you are working as a consultant and you leave out important details from a test plan so that the client has to hire you on the next project, that’s an ethical lapse. JUDGMENT – Certified software testers shall maintain integrity and indepen- dence in their professional judgment. For example, if a project manager asks you not to report defects in certain areas due to potential business sponsor reac- tions, that’s a blow to your independence and an ethical failure on your part if you comply. MANAGEMENT – Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing. For example, favoring one tester over another because you would like to establish a romantic relationship with the favored tester’s sister is a serious lapse of mana- gerial ethics. PROFESSION – Certified software testers shall advance the integrity and repu- tation of the profession consistent with the public interest. For example, if you have a chance to explain to your child’s classmates or your spouse’s colleagues what you do, be proud of it and explain the ways software testing benefits society. COLLEAGUES – Certified software testers shall be fair to and supportive of their colleagues and promote cooperation with software developers. For exam- ple, it is unethical to manipulate test results to arrange the firing of a program- mer whom you detest. SELF – Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. For example, attending courses, reading books, and speaking at conferences about what you do help to advance yourself—and the profession. This is called doing well while doing good, and fortunately, it is very ethical! 1.6 Sample Exam Questions To end each chapter, you can try one or more sample exam questions to rein- force your knowledge and understanding of the material and to prepare for the ISTQB Advanced Level Technical Test Analyst exam.
  • 38. 1.6 Sample Exam Questions 171 You are working as a test analyst at a bank. At the bank, technical test analysts work closely with users during user acceptance testing. The bank has bought two financial applications as commercial off-the-shelf (COTS) software from large software vendors. Previous history with these vendors has shown that they deliver quality applications that work on their own, but this is the first time the bank will attempt to integrate applications from these two vendors. Which of the following test levels would you expect to be involved in? [Note: There might be more than one right answer.] A Component test B Component integration test C System integration test D Acceptance test2 Which of the following is necessarily true of safety-critical systems? A They are composed of multiple COTS applications. B They are complex systems of systems. C They are systems upon which lives depend. D They are military or intelligence systems.
  • 39. 18 1 Test Basics
  • 40. 192 Testing Processes Do not enter. If the fall does not kill you, the crocodile will. A sign blocking the entrance to a parapet above a pool in the Sydney Wildlife Centre, Australia, guiding people in a safe viewing process for one of many dangerous-fauna exhibits.The second chapter of the Advanced syllabus is concerned with the process oftesting and the activities that occur within that process. It establishes a frame-work for all the subsequent material in the syllabus and allows you to visualizeorganizing principles for the rest of the concepts. There are seven sections.1. Introduction2. Test Process Models3. Test Planning and Control4. Test Analysis and Design5. Test Implementation and Execution6. Evaluating Exit Criteria and Reporting7. Test Closure ActivitiesLet’s look at each section and how it relates to technical test analysis.2.1 Introduction Learning objectives Recall of content only
The ISTQB Foundation syllabus describes the ISTQB fundamental test process. It provides a generic, customizable test process, shown in figure 2-1. That process consists of the following activities:
■ Planning and control
■ Analysis and design
■ Implementation and execution
■ Evaluating exit criteria and reporting
■ Test closure activities
For technical test analysts, we can focus on the middle three activities in the bullet list above.
[Figure 2–1 shows these activities arranged along a project timeline, with control running alongside execution, evaluating exit criteria, and reporting of test results.]
Figure 2–1 ISTQB fundamental test process
2.2 Test Process Models
Learning objectives
Recall of content only
The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section. In the course of studying for the exam, read this section in chapter 2 of the Advanced syllabus for general recall and familiarity only.
  • 42. 2.3 Test Planning and Control 21 ISTQB Glossary test planning: The activity of establishing or updating a test plan. test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, among other test items, the features to be tested, the testing tasks, who will do each task, the degree of tester inde- pendence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.2.3 Test Planning and Control Learning objectives Recall of content onlyThe concepts in this section apply primarily for test managers. There are nolearning objectives defined for technical test analysts in this section. In thecourse of studying for the exam, read this section in chapter 2 of the Advancedsyllabus for general recall and familiarity only.2.4 Test Analysis and Design Learning objectives (K2) Explain the stages in an application’s lifecycle where non- functional tests and architecture-based tests may be applied. Explain the causes of non-functional testing taking place only in specific stages of an application’s lifecycle. (K2) Give examples of the criteria that influence the structure and level of test condition development. (K2) Describe how test analysis and design are static testing techniques that can be used to discover defects. (K2) Explain by giving examples the concept of test oracles and how a test oracle can be used in test specifications.
  • 43. 22 2 Testing Processes ISTQB Glossary test case: A set of input values, execution preconditions, expected results, and execution postconditions developed for a particular objective or test condi- tion, such as to exercise a particular program path or to verify compliance with a specific requirement. test condition: An item or event of a component or system that could be veri- fied by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element. During the test planning activities in the test process, test leads and test manag- ers work with project stakeholders to identify test objectives. In the IEEE 829 test plan template—which was introduced at the Foundation Level and which we’ll review later in this book—the lead or manager can document these in the section “Features to be Tested.” The test objectives are a major deliverable for technical test analysts because without them, we wouldn’t know what to test. During test analysis and design activities, we use these test objectives as our guide to carry out two main subac- tivities: ■ Identify and refine the test conditions for each test objective ■ Create test cases that exercise the identified test conditions However, test objectives are not enough by themselves. We not only need to know what to test, but in what order and how much. Because of time con- straints, the desire to test the most important areas first, and the need to expend our test effort in the most effective and efficient manner possible, we need to prioritize the test conditions. When following a risk-based testing strategy—which we’ll discuss in detail in chapter 3—the test conditions are quality risk items identified during quality risk analysis. The assignment of priority for each test condition usually involves determining the likelihood and impact associated with each quality risk item; i.e., we assess the level of risk for each risk item. The priority determines the allocation of test effort (throughout the test process) and the order of design, implementation, and execution of the related tests.
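As a rough illustration of how such priority assignment might drive sequencing, the following sketch (our own example, not something mandated by the syllabus) rates each quality risk item for likelihood and impact on a 1-to-5 scale and orders the resulting test conditions by the product of the two:

# Sketch: ordering test conditions by risk level (likelihood x impact).
# The risk items and ratings are invented examples; a 1 (low) to 5 (high)
# scale is one common convention, not a requirement.

risk_items = [
    {"condition": "Transfer posts to the wrong account",   "likelihood": 2, "impact": 5},
    {"condition": "Slow response time during login",       "likelihood": 4, "impact": 4},
    {"condition": "Statement layout renders incorrectly",  "likelihood": 3, "impact": 2},
]

for item in risk_items:
    item["risk_level"] = item["likelihood"] * item["impact"]

# Higher risk level first: design, implement, and execute these tests earlier
# and allocate more effort to them.
for item in sorted(risk_items, key=lambda r: r["risk_level"], reverse=True):
    print(f'{item["risk_level"]:>2}  {item["condition"]}')

Many teams record this in a risk analysis worksheet rather than code, of course; the point is simply that the likelihood and impact ratings, however captured, determine the order and depth of the subsequent test work.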
ISTQB Glossary
exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered complete when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
Throughout the process, the specific test conditions and their associated priorities can change as the needs—and our understanding of the needs—of the project and project stakeholders evolve.
This prioritization, use of prioritization, and reprioritization occurs regularly in the test process. It starts during risk analysis and test planning, of course. It continues throughout the process, from analysis and design to implementation and execution. It influences evaluation of exit criteria and reporting of test results.
2.4.1 Non-functional Test Objectives
Before we get deeper into this process, let's look at an example of non-functional test objectives.
First, it's important to remember that non-functional test objectives can apply to any test level and exist throughout the lifecycle. Too often major non-functional test objectives are not addressed until the very end of the project, resulting in much wailing and gnashing of teeth when show-stopping failures are found.
Consider a video game as an example. For a video game, the ability to interact with the screen in real time, with no perceptible delays, is a key non-functional test objective. Every subsystem of the game must interact and perform efficiently to achieve this goal.
To be smart about this testing, execution efficiency to enable timely processing should be tested at the unit, integration, system, and acceptance levels. Finding a serious bottleneck during system test would affect the schedule, and that's not good for a consumer product like a game—or any other kind of software or system, for that matter.
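As a hedged example of testing execution efficiency early, a unit-level check might time a critical routine against a latency budget long before end-to-end performance testing is possible. The update_frame function and the 50-millisecond budget below are invented for illustration:

# Sketch: a unit-level latency check for a time-critical routine.
# update_frame() and the 50 ms budget are placeholders; a real project would
# time its own critical components against budgets derived from requirements.
import time

def update_frame(state):
    # Stand-in for the real game-loop work.
    return {key: value + 1 for key, value in state.items()}

def test_update_frame_meets_budget():
    budget_seconds = 0.050
    state = {f"entity_{i}": 0 for i in range(1000)}
    start = time.perf_counter()
    update_frame(state)
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, f"frame update took {elapsed * 1000:.1f} ms"

if __name__ == "__main__":
    test_update_frame_meets_budget()
    print("frame update within budget")

A check like this will not replace system-level performance testing, but it can surface a bottleneck while it is still cheap to fix, which is exactly the point of addressing non-functional objectives early.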
ISTQB Glossary
test execution: The process of running a test on the component or system under test, producing actual result(s).
Furthermore, why wait until test execution starts at any level, early or late? Instead, start with reviews of requirements specifications, design specifications, and code to assess this characteristic as well. Many times, non-functional quality characteristics can be quantified; in this case, we might have actual performance requirements which we can test throughout the various test levels. For example, suppose that key events must be processed within 3 milliseconds of input to a specific component to be able to meet performance standards; we can test if the component actually meets that measure. In other cases, the requirements might be implicit rather than explicit: the system must be "fast enough."
Some types of non-functional testing should clearly be performed as early as possible. As an example, for many projects we have worked on, performance testing was only done late in system testing. The thought was that it could not be done earlier because the functional testing had not yet been done, so end-to-end testing would not be possible. Then, when serious bottlenecks resulting in extremely slow performance were discovered in the system, the release schedule was severely impacted.
Other projects targeted performance testing as critical. At the unit and component testing level, measurements were made as to the time required for processing through the objects. At integration testing, subsystems were benchmarked to make sure they could perform in an optimal way. As system testing started, performance testing was a crucial piece of the planning and started as soon as some functionality was available, even though the system was not yet feature complete.
All of this takes time and resources, planning and effort. Some performance tools are not really useful too early in the process, but measurements can still be taken using simpler tools.
Other non-functional testing may not make sense until late in the Software Development Life Cycle (SDLC). While error and exception handling can be tested in unit and integration testing, full-blown recoverability testing
  • 46. 2.4 Test Analysis and Design 25really makes sense only in the system, acceptance, and system integrationphases when the response of the entire system can be measured.2.4.2 Identifying and Documenting Test ConditionsTo identify test conditions, we can perform analysis of the test basis, the testobjectives, the quality risks, and so forth using any and all information inputsand sources we have available. For analytical risk-based testing strategies, we’llcover exactly how this works in chapter 3. If you’re not using analytical risk-based testing, then you’ll need to selectthe specific inputs and techniques according to the test strategy or strategiesyou are following. Those strategies, inputs, and techniques should align withthe test plan or plans, of course, as well as with any broader test policies ortest handbooks. Now, in this book, we’re concerned primarily with the technical test ana-lyst role. So we address both functional tests (especially from a technical per-spective) and non-functional tests. The analysis activities can and shouldidentify functional and non-functional test conditions. We should considerthe level and structure of the test conditions for use in addressing functionaland non-functional characteristics of the test items.There are two important choices when identifying and documenting test condi-tions:■ The structure of the documentation for the test conditions■ The level of detail we need to describe the test conditions in the documen- tationThere are many common ways to determine the level of detail and structure ofthe test conditions. One is to work in parallel with the test basis documents. For example, ifyou have a marketing requirements document and a system requirementsdocument in your organization, the former is usually high level and the latteris low level. You can use the marketing requirements document to generatethe high-level test conditions and then use the system requirements docu-ment to elaborate one or more low-level test conditions underneath eachhigh-level test condition.
  • 47. 26 2 Testing Processes Another approach is often used with quality risk analysis (sometimes called product risk analysis). In this approach, we can outline the key features and quality characteristics at a high level. We can then identify one or more detailed quality risk items for each feature or characteristic. These quality risk items are thus the test conditions. Another approach, if you have only detailed requirements, is to go directly to the low-level requirements. In this case, traceability from the detailed test conditions to the requirements (which impose the structure) is needed for management reporting and to document what the test is to estab- lish. Yet another approach is to identify high-level test conditions only, some- times without any formal test bases. For example, in exploratory testing some advocate the documentation of test conditions in the form of test charters. At that point, there is little to no additional detail created for the unscripted or barely scripted tests. Again, it’s important to remember that the chosen level of detail and the structure must align with the test strategy or strategies, and those strategies should align with the test plan or plans, of course, as well as with any broader test policies or test handbooks. Also, remember that it’s easy to capture traceability information while you’re deriving test conditions from test basis documents like requirements, designs, use cases, user manuals, and so forth. It’s much harder to re-create that information later by inspection of test cases. Let’s look at an example of applying a risk-based testing strategy to this step of identifying test conditions. Suppose you are working on an online banking system project. During a risk analysis session, system response time, a key aspect of system perfor- mance, is identified as a high-risk area for the online banking system. Several different failures are possible, each with its own likelihood and impact. So, discussions with the stakeholders lead us to elaborate the system perfor- mance risk area, identifying three specific quality risk items: ■ Slow response time during login ■ Slow response time during queries ■ Slow response time during a transfer transaction
  • 48. 2.4 Test Analysis and Design 27 ISTQB Glossary test design: (1) See test design specification. (2) The process of transforming general testing objectives into tangible test conditions and test cases. test design specification: A document specifying the test conditions (cover- age items) for a test item and the detailed test approach and identifying the associated high-level test cases. high-level test case: A test case without concrete (implementation-level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. low-level test case: A test case with concrete (implementation-level) values for input data and expected results. Logical operators from high-level test cases are replaced by actual values that correspond to the objectives of the logical operators.At this point, the level of detail is specific enough that the risk analysis team canassign specific likelihood and impact ratings for each risk item. Now that we have test conditions, the next step is usually to elaborate thoseinto test cases. We say “usually” because some test strategies, like the reactiveones discussed in the Foundation syllabus and in this book in chapter 4, don’talways use written test cases. For the moment, let’s assume that we want to spec-ify test cases that are repeatable, verifiable, and traceable back to requirements,quality risk, or whatever else our tests are based on. If we are going to create test cases, then, for a given test condition—ortwo or more related test conditions—we can apply various test design tech-niques to create test cases. These techniques are covered in chapter 4. Keep inmind that you can and should blend techniques in a single test case. We mentioned traceability to the requirements, quality risks, and other testbases. Some of those other test bases for technical test analysts can includedesigns (high or low level), architectural documents, class diagrams, objecthierarchies, and even the code itself. We can capture traceability directly, byrelating the test case to the test basis element or elements that gave rise to thetest conditions from which we created the test case. Alternatively, we can relatethe test case to the test conditions, which are in turn related to the test basiselements.
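To illustrate how traceability can be captured while it is being created, here is a small sketch in which invented identifiers link each test condition to the test basis elements that gave rise to it and each test case to the conditions it covers, so that coverage can later be reported in either direction:

# Sketch: capturing traceability from test basis elements to test conditions
# to test cases. All identifiers (REQ-..., RISK-..., TC-..., COND-...) are
# invented for illustration.

condition_to_basis = {
    "COND-01 Slow response during login":   ["REQ-PERF-003"],
    "COND-02 Slow response during queries": ["REQ-PERF-004", "RISK-17"],
}

case_to_conditions = {
    "TC-101": ["COND-01 Slow response during login"],
    "TC-102": ["COND-02 Slow response during queries"],
    "TC-103": ["COND-02 Slow response during queries"],
}

def cases_for_basis(basis_id):
    """Report which test cases ultimately trace back to a given requirement or risk."""
    return sorted(
        case
        for case, conditions in case_to_conditions.items()
        for cond in conditions
        if basis_id in condition_to_basis.get(cond, [])
    )

if __name__ == "__main__":
    print(cases_for_basis("REQ-PERF-004"))   # -> ['TC-102', 'TC-103']

Recording these links as the conditions and cases are written is cheap; reconstructing them later by inspecting test cases, as noted above, is much harder.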
ISTQB Glossary
test implementation: The process of developing and prioritizing test procedures, creating test data, and, optionally, preparing test harnesses and writing automated test scripts.
As with test conditions, we'll need to select a level of detail and structure for our test cases. It's important to remember that the chosen level of detail and the structure must align with the test strategy or strategies. Those strategies should align with the test plan or plans, of course, as well as with any broader test policies or test handbooks.
So, can we say anything else about the test design process? Well, the specific process of test design depends on the technique. However, it typically involves defining the following:
■ Preconditions
■ Test environment requirements
■ Test inputs and other test data requirements
■ Expected results
■ Postconditions
Defining the expected result of a test can be tricky, especially as expected results are not only screen outputs, but also data and environmental postconditions. Solving this problem requires that we have what's called a test oracle, which we'll look at in a moment.
First, though, notice the mention of test environment requirements in the preceding bullet list. This is an area of fuzziness in the ISTQB fundamental test process. Where is the line between test design and test implementation, exactly? The Advanced syllabus says, "[D]uring test design the required detailed test infrastructure requirements may be defined, although in practice these may not be finalized until test implementation." Okay, but maybe we're doing some implementation as part of the design? Can't the two overlap? To us, trying to draw sharp distinctions results in many questions along the lines of, How many angels can dance on the head of a pin?
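One way to picture the elements listed a moment ago (preconditions, test environment requirements, inputs and test data, expected results, and postconditions) is as fields of a structured record. The sketch below uses invented content and is not an IEEE 829 template; it is just an illustration of what a designed test case typically needs to capture.

# Sketch: the typical elements defined during test design, captured as a record.
# The example values are invented; real content comes from the test basis and oracle.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DesignedTestCase:
    identifier: str
    preconditions: List[str] = field(default_factory=list)
    environment_requirements: List[str] = field(default_factory=list)
    inputs: Dict[str, str] = field(default_factory=dict)
    expected_results: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)

login_performance_case = DesignedTestCase(
    identifier="TC-101",
    preconditions=["Test account exists", "System under normal load"],
    environment_requirements=["Staging environment", "Load generation tool available"],
    inputs={"user": "perf_test_01", "action": "login"},
    expected_results=["Login completes within the agreed response time"],
    postconditions=["Session created", "No residual locks on the test account"],
)

However such a record is stored, the hard part is usually the expected_results field, which is where the test oracle discussed next comes in.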
  • 50. 2.4 Test Analysis and Design 29 Whatever we call defining test environments and infrastructures—design,implementation, environment setup, or some other name—it is vital to remem-ber that testing involves more than just the test objects and the testware. Thereis a test environment, and this isn’t just hardware. It includes rooms, equipment,personnel, software, tools, peripherals, communications equipment, userauthorizations, and all other items required to run the tests.2.4.3 Test OraclesOkay, let’s look at what test oracles are and what oracle-related problems thetechnical test analyst faces. A test oracle is a source we use to determine the expected results of a test.We can compare these expected results with the actual results when we run atest. Sometimes the oracle is the existing system. Sometimes it’s a user man-ual. Sometimes it’s an individual’s specialized knowledge. Rex usually says thatwe should never use the code itself as an oracle, even for structural testing,because that’s simply testing that the compiler, operating system, and hard-ware work. Jamie feels that the code can serve as a useful partial oracle, say-ing it doesn’t hurt to consider it, though he agrees with Rex that it should notserve as the sole oracle. So, what is the oracle problem? Well, if you haven’t experienced this first-hand, ask yourself, in general, how we know what “correct results” are for atest? The difficulty of determining the correct result is the oracle problem. If you’ve just entered the workforce from the ivory towers of academia,you might have learned about perfect software engineering projects. You mayhave heard stories about detailed, clear, and consistent test bases like require-ments and design specifications that define all expected results. Those storieswere myths. In the real world, on real projects, test basis documents like requirementsare vague. Two documents, such as a marketing requirements document anda system requirements document, will often contradict each other. These doc-uments may have gaps, omitting any discussion of important characteristics ofthe product—especially non-functional characteristics, and especially usabil-ity and user interface characteristics. Sometimes these documents are missing entirely. Sometimes they existbut are so superficial as to be useless. One of our clients showed Rex a hand-
written scrawl on a letter-size piece of paper, complete with crude illustrations, which was all the test team had received by way of requirements on a project that involved 100 or so person-months of effort. We have both worked on projects where even that would be an improvement over what we actually received!
When test basis documents are delivered, they are often delivered late, often too late to wait for them to be done before we begin test design (at least if we want to finish test design before we start test execution). Even with the best intentions on the part of business analysts, sales and marketing staff, and users, test basis documents won't be perfect. Real-world applications are complex and not entirely amenable to complete, unambiguous specification.
So we have to augment the written test basis documents we receive with tester expertise or access to expertise, along with judgment and professional pessimism. Using all available oracles—written and mental, provided and derived—the tester can define expected results before and during test execution.
Since we've been talking a lot about requirements, you might assume that the oracle problem applies only to high-level test levels like system test and acceptance test. Nope. The oracle problem—and its solutions—apply to all test levels. The test bases will vary from one level to another, though. Higher test levels like user acceptance test and system test rely more on requirements specifications, use cases, and defined business processes. Lower test levels like component test and integration test rely more on low-level design specifications.
While this is a hassle, remember that you must solve the oracle problem in your testing. If you run tests with no way to evaluate the results, you are wasting your time. You will provide low, zero, or negative value to the team. Such testing generates false positives and false negatives. It distracts the team with spurious results of all kinds. It creates false confidence in the system.
By the way, as for our sarcastic aside about the "ivory tower of academia" a moment ago, let us mention that, when Rex studied computer science at UCLA quite a few years ago, one of his software engineering professors told him about this problem right from the start. One of Jamie's professors at Lehigh said that complete requirements were more mythological than unicorns. Neither of us could say we weren't warned!
Let's look at an example of a test oracle, from the real world.
  • 52. 2.4 Test Analysis and Design 31 Rex and his associates worked on a project to develop a banking applica-tion to replace a legacy system. There were two test oracles. One was therequirements specification, such as it was. The other was the legacy system.They faced two challenges. For one thing, the requirements were vague. The original concept of theproject, from the vendor’s side, was “Give the customer whatever the cus-tomer wants,” which they then realized was a good way to go bankrupt giventhe indecisive and conflicting ideas about what the system should do amongthe customer’s users. The requirements were the outcome of a belated effortto put more structure around the project. For another thing, sometimes the new system differed from the legacysystem in minor ways. In one infamous situation, there was a single bugreport that they opened, then deferred, then reopened, then deferred again, atleast four or five times. It described situations where the monthly paymentvaried by $0.01. The absence of any reliable, authoritative, consistent set of oracles led to alot of “bug report ping-pong.” They also had bug report prioritization issuesas people argued over whether some problems were problems at all. They hadhigh rates of false positives and negatives. The entire team—including the testteam—was frustrated. So, you can see that the oracle problem is not someabstract concept; it has real-world consequences.2.4.4 StandardsAt this point, let’s review some standards from the Foundation that will be use-ful in test analysis and design. First, let’s look at two documentation templates you can use to captureinformation as you analyze and design your tests, assuming you intend todocument what you are doing, which is usually true. The first is the IEEE 829test design specification. Remember from the Foundation course that a test condition is an item orevent of a component or system that could be verified by one or more testcases, e.g., a function, transaction, feature, quality attribute, identified risk, orstructural element. The IEEE 829 test design specification describes a condi-tion, feature or small set of interrelated features to be tested and the set oftests that cover them at a very high or logical level. The number of tests
  • 53. 32 2 Testing Processes required should be commensurate with the risks we are trying to mitigate (as reflected in the pass/fail criteria.) The design specification template includes the following sections: ■ Test design specification identifier (following whatever standard your company uses for document identification) ■ Features to be tested (in this test suite) ■ Approach refinements (specific techniques, tools, etc.) ■ Test identification (tracing to test cases in suites) ■ Feature pass/fail criteria (e.g., how we intend to determine whether a fea- ture works, such as via a test oracle, a test basis document, or a legacy sys- tem) The collection of test cases outlined in the test design specification is often called a test suite. The sequencing of test suites and cases within suites is often driven by risk and business priority. Of course, project constraints, resources, and progress must affect the sequencing of test suites. Next comes the IEEE 829 test case specification. A test case specification describes the details of a test case. This template includes the following sections: ■ Test case specification identifier ■ Test items (what is to be delivered and tested) ■ Input specifications (user inputs, files, etc.) ■ Output specifications (expected results, including screens, files, timing, behaviors of various sorts, etc.) ■ Environmental needs (hardware, software, people, props, and so forth) ■ Special procedural requirements (operator intervention, permissions, etc.) ■ Intercase dependencies (if needed to set up preconditions) While this template defines a standard for contents, many other attributes of a test case are left as open questions. In practice, test cases vary significantly in effort, duration, and number of test conditions covered. We’ll return to the IEEE 829 standard again in the next section. However, let us also review another related topic from the Foundation syllabus, on the matter of documentation.
  • 54. 2.4 Test Analysis and Design 33In the real world, the extent of test documentation varies considerably. It wouldbe hard to list all the different reasons for this variance, but they include the fol-lowing:■ Risks to the project created by documenting or not documenting.■ How much value, if any, the test documentation creates—and is meant to create.■ Any standards that are or should be followed, including the possibility of an audit to ensure compliance with those standards.■ The software development lifecycle model used. Advocates of agile approaches try to minimize documentation by ensuring close and frequent team communication.■ The extent to which we must provide traceability from the test basis to the test cases.The key idea here is to remember to keep an open mind and a clear head whendeciding how much to document. Now, since we focus on both functional and non-functional characteris-tics as part of this technical test analyst volume, let’s review the ISO 9126standard. The ISO 9126 quality standard for software defines six software qualitycharacteristics: functionality, reliability, usability, efficiency, maintainability,and portability. Each characteristic has three or more subcharacteristics, asshown in figure 2-2. Tests that address functionality and its subcharacteristics are functionaltests. These were the main topics in the first volume of this series, for test ana-lysts. We will revisit them here, but primarily from a technical perspective. Teststhat address the other five characteristics and their subcharacteristics are non-functional tests. These are among the main topics for this book. Finally, keep inmind that, when you are testing hardware/software systems, additional qualitycharacteristics can and will apply.
Characteristics and subcharacteristics:
■ Functionality: suitability, accuracy, interoperability, security, compliance (addressed by functional tests)
■ Reliability: maturity (robustness), fault tolerance, recoverability, compliance (addressed by non-functional tests)
■ Usability: understandability, learnability, operability, attractiveness, compliance (addressed by non-functional tests)
■ Efficiency: time behaviour, resource utilization, compliance (addressed by non-functional tests)
■ Maintainability: analyzability, changeability, stability, testability, compliance (addressed by non-functional tests)
■ Portability: adaptability, installability, coexistence, replaceability, compliance (addressed by non-functional tests)
Figure 2–2 ISO 9126 quality standard
2.4.5 Static Tests
Now, let's review three important ideas from the Foundation syllabus. One is the value of static testing early in the lifecycle to catch defects when they are cheap and easy to fix. The next is the preventive role testing can play when involved early in the lifecycle. The last is that testing should be involved early in the project.
These three ideas are related because technical test analysis and design is a form of static testing; it is synergistic with other forms of static testing, and we can exploit that synergy only if we are involved at the right time.
Notice that, depending on when the analysis and design work is done, you could possibly define test conditions and test cases in parallel with reviews and static analyses of the test basis. In fact, you could prepare for a requirements review meeting by doing test analysis and design on the requirements. Test analysis and design can serve as a structured, failure-focused static test of a requirements specification that generates useful inputs to a requirements review meeting.
Of course, we should also take advantage of the ideas of static testing, and early involvement if we can, to have test and non-test stakeholders participate in reviews of various test work products, including risk analyses, test designs,
test cases, and test plans. We should also use appropriate static analysis techniques on these work products.
Let's look at an example of how test analysis can serve as a static test. Suppose you are following an analytical risk-based testing strategy. If so, then in addition to quality risk items—which are the test conditions—a typical quality risk analysis session can provide other useful deliverables.
We refer to these additional useful deliverables as by-products, along the lines of industrial by-products, in that they are generated along the way as you create the target work product, which in this case is a quality risk analysis document. These by-products are generated when you and the other participants in the quality risk analysis process notice aspects of the project you haven't considered before.
These by-products include the following:
■ Project risks—things that could happen and endanger the success of the project
■ Identification of defects in the requirements specification, design specification, or other documents used as inputs into the quality risk analysis
■ A list of implementation assumptions and simplifications, which can improve the design as well as set up checkpoints you can use to ensure that your risk analysis is aligned with actual implementation later
By directing these by-products to the appropriate members of the project team, you can prevent defects from escaping to later stages of the software lifecycle. That's always a good thing.
2.4.6 Metrics
To close this section, let's look at metrics and measurements for test analysis and design. To measure completeness of this portion of the test process, we can measure the following:
■ Percentage of requirements or quality (product) risks covered by test conditions
■ Percentage of test conditions covered by test cases
■ Number of defects found during test analysis and design
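As a simple sketch of how the first two completeness metrics above might be computed, consider the following; the counts are invented and would in practice come from the traceability information gathered during analysis and design.

# Sketch: computing analysis/design completeness metrics from traceability counts.
# The example numbers are invented for illustration.

def coverage_pct(covered, total):
    return 100.0 * covered / total if total else 0.0

requirements_total = 120
requirements_with_conditions = 104
conditions_total = 260
conditions_with_cases = 233

print(f"Requirements covered by test conditions: "
      f"{coverage_pct(requirements_with_conditions, requirements_total):.1f}%")
print(f"Test conditions covered by test cases: "
      f"{coverage_pct(conditions_with_cases, conditions_total):.1f}%")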
  • 57. 36 2 Testing Processes We can track test analysis and design tasks against a work breakdown structure, which is useful in determining whether we are proceeding according to the esti- mate and schedule. 2.5 Test Implementation and Execution Learning objectives (K2) Describe the preconditions for test execution, including testware, test environment, configuration management, and defect management. Test implementation includes all the remaining tasks necessary to enable test case execution to begin. At this point, remember, we have done our analysis and design work, so what remains? For one thing, if we intend to use explicitly specified test procedures— rather than relying on the tester’s knowledge of the system—we’ll need to organize the test cases into test procedures (or, if using automation, test scripts). When we say “organize the test cases,” we mean, at the very least, document the steps to carry out the test. How much detail do we put in these procedures? Well, the same considerations that lead to more (or less) detail at the test condition and test case level would apply here. For example, if a regulatory standard like the United States Federal Aviation Administration’s DO-178B applies, that’s going to require a high level of detail. Since testing frequently requires test data for both inputs and the test envi- ronment itself, we need to make sure that data is available now. In addition, we must set up the test environments. Are both the test data and the test environ- ments in a state such that we can use them for testing now? If not, we must resolve that problem before test execution starts. In some cases test data require the use of data generation tools or production data. Ensuring proper test environment configuration can require the use of configuration management tools. With the test procedures in hand, we need to put together a test execu- tion schedule. Who is to run the tests? In what order should they run them? What environments are needed for which tests? When should we run the
ISTQB Glossary
test procedure: See test procedure specification.
test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
test script: Commonly used to refer to a test procedure specification, especially an automated one.
automated tests? If automated tests run in the same environment as manual tests, how do we schedule the tests to prevent undesirable interactions between the automated and manual tests? We need to answer these questions.
Finally, since we're about to start test execution, we need to check whether all explicit and implicit entry criteria are met. If not, we need to work with project stakeholders to make sure they are met before the scheduled test execution start date.
Now, keep in mind that you should prioritize and schedule the test procedures to ensure that you achieve the objectives in the test strategy in the most efficient way. For example, in risk-based testing, we usually try to run tests in risk priority order. Of course, real-world constraints like availability of test configurations can change that order. Efficiency considerations like the amount of data or environment restoration that must happen after a test is over can change that order too.
Let's look more closely at two key areas, readiness of test procedures and readiness of test environments.
  • 59. 38 2 Testing Processes So, based on all the practical considerations as well as the theoretical ideal of test procedure order—from most important to least important—we need to finalize the order of the test procedures. That includes confirming that order with the test team and other stakeholders. In the process of confirming the order of test procedures, you might find that the order you think you should follow is in fact impossible or perhaps unacceptably less efficient than some other possible sequencing. We also might have to take steps to enable test automation. Of course, we say “might have to take steps” rather than “must take steps” because not all test efforts involve automation. However, as a technical test analyst, imple- menting automated testing is a key responsibility, one which we’ll discuss in detail later in this book. If some tests are automated, we’ll have to determine how those fit into the test sequence. It’s very easy for automated tests, if run in the same environ- ment as manual tests, to damage or corrupt test data, sometimes in a way that causes both the manual and automated tests to generate huge numbers of false positives and false negatives. Guess what? That means you get to run the tests all over again. We don’t want that! Now, the Advanced syllabus says that we will create the test harness and test scripts during test implementation. Well, that’s theoretically true, but as a practical matter we really need the test harness ready weeks, if not months, before we start to use it to automate test scripts. We definitely need to know all the test procedure dependencies. If we find that there are reasons—due to these dependencies—we can’t run the test procedures in the sequence we established earlier, we have two choices: One, we can change the sequence to fit the various obstacles we have discovered. Or, two, we can remove the obstacles. Let’s look more closely at two very common categories of test procedure dependencies—and thus obstacles. The first is the test environment. You need to know what is required for each test procedure. Now, check to see if that environment will be available during the time you have that test procedure scheduled to run. Notice that “available” means not only is the test environment configured, but also no other test procedure—or any other test activity for that matter—that would interfere with the test procedure under consideration is scheduled to use that test environment during the same period of time.
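Checking “available” in this sense is largely a matter of looking for overlapping reservations. The following is a minimal sketch, our own illustration with invented procedure and environment names, of flagging two test procedures booked into the same test environment at overlapping times.

from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    """True if the two half-open intervals [start, end) intersect."""
    return start_a < end_b and start_b < end_a

def find_environment_conflicts(reservations):
    """reservations: list of (procedure_id, environment, start, end) tuples."""
    conflicts = []
    for i, (id_a, env_a, s_a, e_a) in enumerate(reservations):
        for id_b, env_b, s_b, e_b in reservations[i + 1:]:
            if env_a == env_b and overlaps(s_a, e_a, s_b, e_b):
                conflicts.append((id_a, id_b, env_a))
    return conflicts

res = [
    ("TP-010", "sys-test-1", datetime(2011, 3, 1, 9), datetime(2011, 3, 1, 12)),
    ("TP-011", "sys-test-1", datetime(2011, 3, 1, 11), datetime(2011, 3, 1, 14)),
    ("TP-012", "sys-test-2", datetime(2011, 3, 1, 9), datetime(2011, 3, 1, 17)),
]
print(find_environment_conflicts(res))  # [('TP-010', 'TP-011', 'sys-test-1')]

The same check, with data sets in place of environments, applies to the test data dependencies discussed next.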
  • 60. 2.5 Test Implementation and Execution 39 The interference question is usually where the obstacles emerge. How-ever, for complex and evolving test environments, the mere configuration ofthe test environment can become a problem. Rex worked on a project a whileback that was so complex that he had to construct a special database to track,report, and manage the relationships between test procedures and the testenvironments they required. The second category of test procedure dependencies is the test data. Youneed to know what data each test procedure requires. Now, similar to the pro-cess before, check to see if that data will be available during the time you havethat test procedure scheduled to run. As before, “available” means not only isthe test data created, but also no other test procedure—or any other test activ-ity for that matter—that would interfere with the viability and accessibility ofthe data is scheduled to use that test data during the same period of time. With test data, interference is again often a large issue. We had a clientwho tried to run manual tests during the day and automated tests overnight.This resulted in lots of problems until a process was evolved to properlyrestore the data at the handover points between manual testing and auto-mated testing (at the end of the day) and between automated testing andmanual testing (at the start of the day).2.5.2 Test Environment ReadinessAre the test environments ready to use? Let’s examine some of the issues weneed to address before we know the answer. First, let’s make clear the importance of a properly configured test environ-ment. If we run the test procedures perfectly but use an improperly configuredtest environment, we obtain useless test results. Specifically, we get many falsepositives. A false positive in software testing is analogous to one in medicine—atest that should have passed instead fails, leading to wasted time analyzing“defects” that turn out to be test environment problems. Often the false positivesare so large in number that we also get false negatives. This happens when a testthat should have failed instead passes, often in this case because we didn’t see ithiding among the false positives. The overall outcomes are low defect detectioneffectiveness; high field or production failure rates; high defect report rejectionrates; a lot of wasted time for testers, managers, and developers; and a severeloss of credibility for the test team. Obviously those are all very bad outcomes.
ISTQB Glossary
false negative: See false-pass result.
false-pass result: A test result which fails to identify the presence of a defect that is actually present in the test object.
false positive: See false-fail result.
false-fail result: A test result in which a defect is reported although no such defect actually exists in the test object.

So, we have to ensure properly configured test environments. Now, in the ISTQB fundamental test process, implementation is the point where this happens. As with automation, though, we feel this is probably too late, at least if implementation is an activity that starts after analysis and design. If, instead, implementation of the test environment is a subset of the overall implementation activity and can start as soon as the test plan is done, then we are in better shape.
What is a properly configured test environment and what does it do for us? For one thing, a properly configured test environment enables finding defects under the test conditions we intend to run. For example, if we want to test for performance, it allows us to find unexpected bottlenecks that would slow down the system.
For another thing, a properly configured test environment operates normally when failures are not occurring. In other words, it doesn’t generate many false positives.
Additionally, at higher levels of testing such as system test and system integration test, a properly configured test environment replicates the production or end-user environment. Many defects, especially non-functional defects like performance and reliability problems, are hard if not impossible to find in scaled-down environments.
There are some other things we need for a properly configured test environment. We’ll need someone to set up and support the environment. For complex environments, this person is usually someone outside the test team. (Jamie and Rex both would prefer the situation in which the person is part of the test team, due to the problems of environment support availability that seem to arise with reliance on external resources, but the preferences of the test manager don’t
always result in the reallocation of the individual.) We also need to make sure someone—perhaps a tester, perhaps someone else—has loaded the testware, test support tools, and associated processes on the test environment.

ISTQB Glossary
test log: A chronological record of relevant details about the execution of tests.
test logging: The process of recording information about tests executed into a test log.

Test support tools include, at the least, configuration management, incident management, test logging, and test management. Also, you’ll need procedures to gather data for exit criteria evaluation and test results reporting. Ideally, your test management system will handle some of that for you.

2.5.3 Blended Test Strategies
It is often a good idea to use a blend of test strategies, leading to a balanced test approach throughout testing, including during test implementation. For example, when RBCS associates run test projects, we typically blend analytical risk-based test strategies with analytical requirements-based test strategies and dynamic test strategies (also referred to as reactive test strategies). We reserve some percentage (often 10 to 20 percent) of the test execution effort for testing that does not follow predetermined scripts.
Analytical strategies follow the ISTQB fundamental test process nicely, with work products produced along the way. However, the risk with blended strategies is that the reactive portion can get out of control. Testing without scripts should not be ad hoc or aimless. Such tests are unpredictable in duration and coverage.
Some techniques like session-based test management, which is covered in the companion volume on test management, can help deal with that inherent control problem in reactive strategies. In addition, we can use experience-based test techniques such as attacks, error guessing, and exploratory testing to structure reactive test strategies. We’ll discuss these topics further in chapter 4.
  • 63. 42 2 Testing Processes The common trait of a reactive test strategy is that most of the testing involves reacting to the actual system presented to us. This means that test analysis, test design, and test implementation occur primarily during test exe- cution. In other words, reactive test strategies allow—indeed, require—that the results of each test influence the analysis, design, and implementation of the subsequent tests. As discussed in the Foundation syllabus, these reactive strategies are lightweight in terms of total effort both before and during test execution. Experience-based test techniques are often efficient bug finders, sometimes 5 or 10 times more efficient than scripted techniques. However, being experi- ence based, naturally enough, they require expert testers. As mentioned ear- lier, reactive test strategies result in test execution periods that are sometimes unpredictable in duration. Their lightweight nature means they don’t provide much coverage information and are difficult to repeat for regression testing. Some claim that tools can address this coverage and repeatability problem, but we’ve never seen that work in actual practice. That said, when reactive test strategies are blended with analytical test strategies, they tend to balance each other’s weak spots. This is analogous to blended scotch whiskey. Blended scotch whiskey consists of malt whiskey— either a single malt or more frequently a combination of various malt whis- keys—blended with grain alcohol (basically, vodka). It’s hard to imagine two liquors more different than vodka and single malt scotch, but together they produce a liquor that many people find much easier to drink and enjoy than the more assertive, sometimes almost medicinal single malts. 2.5.4 Starting Test Execution Before we start test execution, it’s a best practice to measure readiness based on some predefined preconditions, often called entry criteria. Entry criteria were discussed in the Foundation syllabus, in the chapter on test management. Often we will have formal entry criteria for test execution. Consider the fol- lowing entry criteria, taken from an actual project: 1. Bug tracking and test tracking systems are in place. 2. All components are under formal, automated configuration management and release management control.
  • 64. 2.5 Test Implementation and Execution 433. The operations team has configured the system test server environment, including all target hardware components and subsystems. The test team has been provided with appropriate access to these systems.These were extracted from the entry criteria section of the test plan for thatproject. They focus on three typical areas that can adversely affect the value oftesting if unready: the test environment (as discussed earlier), configurationmanagement, and defect management.On the other hand, sometimes projects have less-formal entry criteria for exe-cution. For example, test teams often simply assume that the test cases and testdata will be ready, and there is no explicit measurement of them.Whether the entry criteria are formal or informal, we need to determine if we’reready to start test execution. To do so, we need the delivery of a testable testobject or objects and the satisfaction (or waiver) of entry criteria. Of course, this presumes that the entry criteria alone are enough toensure that the various necessary implementation tasks discussed earlier inthis section are complete. If not, then we have to go back and check thoseissues of test data, test environments, test dependencies, and so forth. Now, during test execution, people will run the manual test cases via thetest procedures. To execute a test procedure to completion, we’d expect that atleast two things had happened. First, we covered all of the test conditions orquality risk items traceable to the test procedure. Second, we carried out all ofthe steps of the test procedure. You might ask, “How could I carry out the test procedure without cover-ing all the risks or conditions?” In some cases, the test procedure is written ata high level. In that case, you would need to understand what the test wasabout and augment the written test procedure with on-the-fly details thatensure that you cover the right areas. You might also ask, “How could I cover all the risks and conditions with-out carrying out the entire test procedure?” Some steps of a test procedureserve to enable testing rather than covering conditions or risks. For example,some steps set up data or other preconditions, some steps capture logginginformation, and some steps restore the system to a known good state at theend.
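A simple traceability check can tell us whether an executed test procedure really covered everything traced to it. Here is a minimal sketch, our own illustration with invented condition and risk item identifiers, of that check.

TRACEABILITY = {
    # test procedure -> test conditions / quality risk items it is meant to cover
    "TP-001": {"COND-01", "COND-02", "RISK-07"},
    "TP-002": {"COND-03"},
}

def uncovered_items(procedure_id, items_covered_during_execution):
    """Return the traced items that the tester did not actually cover."""
    return TRACEABILITY.get(procedure_id, set()) - set(items_covered_during_execution)

# The tester ran TP-001 but, perhaps because the procedure is written at a high
# level, only exercised two of the three traced items.
missing = uncovered_items("TP-001", {"COND-01", "RISK-07"})
print(missing)  # {'COND-02'}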
  • 65. 44 2 Testing Processes A third kind of activity can apply during manual test execution. We can incorporate some degree of reactive testing into the procedures. One way to accomplish this is to leave the procedures somewhat vague and to tell the tester to select their favorite way of carrying out a certain task. Another way is to tell the testers, as Rex often does, that a test script is a road map to inter- esting places and, when they get somewhere interesting, they should stop and look around. This has the effect of giving them permission to transcend, to go beyond, the scripts. We have found it very effective. Finally, during execution, tools will run any existing automated tests. These tools follow the defined scripts without deviation. That can seem like an unalloyed “good thing” at first. However, if we did not design the scripts properly, that can mean that the script can get out of sync with the system under test and generate a bunch of false positives. We’ll talk more about that problem—and how to solve it—in chapter 9. 2.5.5 Running a Single Test Procedure Let’s zoom in on the act of a tester running a single test procedure. After the logistical issues of initial setup are handled, the tester starts running the specific steps of the test. These yield the actual results. Now we have come to the heart of test execution. We compare actual results with expected results. This is indeed the moment when testing either adds value or removes value from the project. Everything up to this point—all of the work designing and implementing our tests—was about getting us to this point. Everything after this point is about using the value this compari- son has delivered. Since running a test procedure is so critical, so central to good testing, attention and focus on your part is essential at this moment. But what if we notice a mismatch between the expected results and the actual results? The ISTQB glossary refers to each difference between the expected results and the actual results as an anomaly. There can be multiple differences, and thus multiple anomalies, in a mismatch. When we observe an anomaly, we have an incident. Some incidents are failures. A failure occurs when the system misbehaves due to one or more defects. This is the ideal situation when an incident has occurred. If we are looking at a failure—a symptom of a true defect—we should start to gather data to help the developer resolve the defect. We’ll talk more about incident reporting and management in detail in chapter 7.
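The comparison at the heart of test execution can be sketched in a few lines. This is our illustration, not a prescribed mechanism; the field names are invented, and on real projects the comparison is often made by a person or by a test tool rather than by code like this.

def compare(expected, actual):
    """expected/actual: dicts mapping each checked field to its value.
    Each difference between expected and actual results is an anomaly."""
    anomalies = []
    for field, expected_value in expected.items():
        actual_value = actual.get(field)
        if actual_value != expected_value:
            anomalies.append((field, expected_value, actual_value))
    return anomalies

anomalies = compare(
    expected={"status_code": 200, "balance": "1000.00"},
    actual={"status_code": 200, "balance": "999.00"},
)
if anomalies:
    # An incident: investigation decides whether it is a failure (a true defect)
    # or a false positive (bad test data, wrong expected result, setup mistake).
    for field, want, got in anomalies:
        print(f"Anomaly in {field}: expected {want}, observed {got}")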
ISTQB Glossary
anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
incident: Any event occurring that requires investigation.
failure: Deviation of the component or system from its expected delivery, service, or result.

Some incidents are not failures but rather are false positives. False positives occur when the expected and actual results don’t match due to bad test specifications, invalid test data, incorrectly configured test environments, a simple mistake on the part of the person running the test, and so forth.
If we can catch a false positive right away, the moment it happens, the damage is limited. The tester should fix the test, which might involve some configuration management work if the tests are checked into a repository. The tester should then rerun the test. Thus, the damage done was limited to the tester’s wasted time along with the possible impact of that lost time on the schedule plus the time needed to fix the test plus the time needed to rerun the test.
All of those activities, all of that lost time, and the impact on the schedule would have happened even if the tester had simply assumed the failure was valid and reported it as such. It just would have happened later, after an additional loss of time on the part of managers, other testers, developers, and so forth.
Here’s a cautionary note on these false positives too. Just because a test has never yielded a false positive before, in all the times it’s been run before, doesn’t mean you’re not looking at one this time. Changes in the test basis, the proper expected results, the test object, and so forth can obsolete or invalidate a test specification.

2.5.6 Logging Test Results
Most testers like to run tests—at least the first few times they run them—but they don’t always like to log results. “Paperwork!” they snort. “Bureaucracy and red tape!” they protest.
  • 67. 46 2 Testing Processes If you are one of those testers, get over it. We said previously that all the planning, analysis, design, and implementation was about getting to the point of running a test procedure and comparing actual and expected results. We then said that everything after that point is about using the value the compar- ison delivered. Well, you can’t use the value if you don’t capture it, and the test logs are about capturing the value. So remember that, as testers run tests, testers log results. Failure to log results means either doing the test over (most likely) or losing the value of running the tests. When you do the test over, that is pure waste, a loss of your time running the test. Since test execution is usually on the critical path for project completion, that waste puts the planned project end date at risk. Peo- ple don’t like that much. A side note here, before we move on. We mentioned reactive test strate- gies and the problems they have with coverage earlier. Note that, with ade- quate logging, while you can’t ascertain reactive test coverage in advance, at least you can capture it afterward. So again, log your results, both for scripted and unscripted tests. During test execution, there are many moving parts. The test cases might be changing. The test object and each constituent test item are often chang- ing. The test environment might be changing. The test basis might be chang- ing. Logging should identify the versions tested. The military strategist Clausewitz referred famously to the “fog of war.” What he meant was not a literal fog—though black-powder cannons and fire- arms of his day created plenty of that!—but rather a metaphorical fog whereby no one observing a battle, be they an infantryman or a general, could truly grasp the whole picture. Clausewitz would recognize his famous fog if he were to come back to life and work as a tester. Test execution periods tend to have a lot of fog. Good test logs are the fog-cutter. Test logs should provide a detailed, rich chronol- ogy of test execution. To do so, test logs need to be test by test and event by event. Each test, uniquely identified, should have status information logged against it as it goes through the test execution period. This information should support not only understanding the overall test status but also the overall test coverage.
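As a minimal sketch of test-by-test and event-by-event logging, the following appends entries to a simple CSV test log and records the versions in play. The field names are our own invention; your test management tool probably captures the equivalent for you.

import csv
import os
from datetime import datetime

LOG_FIELDS = ["timestamp", "entry_type", "test_id", "status",
              "build_version", "environment", "tester", "notes"]

def log_entry(path, entry_type, **fields):
    """Append one test or event entry to a simple CSV test log."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    row = {"timestamp": datetime.now().isoformat(timespec="seconds"),
           "entry_type": entry_type, **fields}
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# One entry per test, with the versions tested...
log_entry("test_log.csv", "test", test_id="TP-001", status="pass",
          build_version="1.2.0-rc3", environment="sys-test-1", tester="JLM")
# ...and one entry per event that affects test execution.
log_entry("test_log.csv", "event", environment="sys-test-1", tester="JLM",
          notes="Test server rebooted; testing blocked for 40 minutes")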
  • 68. 2.5 Test Implementation and Execution 47 ISTQB Glossary test control: A test management task that deals with developing and apply- ing a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.You should also log events that occur during test execution and affect the testexecution process, whether directly or indirectly. You should document any-thing that delays, interrupts, or blocks testing. Test analysts are not always also test managers, but they should workclosely with the test managers. Test managers need logging information fortest control, test progress reporting, and test process improvement. Test ana-lysts need logging information too, along with the test managers, for measure-ment of exit criteria, which we’ll cover later in this chapter. Finally, note that the extent, type, and details of test logs will vary basedon the test level, the test strategy, the test tools, and various standards andregulations. Automated component testing results in the automated test gath-ering logging information. Manual acceptance testing usually involves the testmanager compiling the test logs or at least collating the information comingfrom the testers. If we’re testing regulated, safety-critical systems like pharma-ceutical systems, we might have to log certain information for audit purposes.2.5.7 Use of Amateur TestersAmateur testers. This term is rather provocative, so here’s what we mean. A per-son who primarily works as a tester to earn a living is a professional tester. Any-one else engaged in testing is an amateur tester. Rex is a professional tester now—and has been since 1987. Before that, hewas a professional programmer. Rex still writes programs from time to time,but now he’s an amateur programmer. He makes many typical amateur-pro-grammer mistakes when he does it. Before Rex was a professional tester, heunit-tested his code as a programmer. He made many typical amateur-testermistakes when he did that. Since one of the companies he worked for as aprogrammer relied entirely on programmer unit testing, that sometimesresulted in embarrassing outcomes for their customers—and for Rex.
  • 69. 48 2 Testing Processes Jamie is also a professional tester, and he has been one since 1991. Jamie also does some programming from time to time, but he considers himself an amateur programmer. He makes many typical amateur-programmer mistakes when he does it, mistakes that full-time professional programmers often do not make. There are two completely differing mind-sets for testing and pro- gramming. Companies that rely entirely on programmer testing sometimes find that it results in embarrassing outcomes for their customers. Professional testers could have solved some of those problems. There’s nothing wrong with involving amateur testers. Sometimes, we want to use amateur testers such as users or customers during test execution. It’s important to understand what we’re trying to accomplish with this and why it will (or won’t) work. For example, often the objective is to build user confidence in the system, but that can backfire! Suppose we involve them too early, when the system is still full of bugs. Oops! 2.5.8 Standards Let’s look at some standards that relate to implementation and execution as well as to other parts of the test process. Let’s start with the IEEE 829 standard. Most of this material about IEEE 829 should be a review of the Foundation syllabus for you, but it might be a while since you’ve looked at it. The first IEEE 829 template, which we’d use during test implementation, is the IEEE 829 test procedure specification. A test procedure specification describes how to run one or more test cases. This template includes the following sec- tions: ■ Test procedure specification identifier ■ Purpose (e.g., which tests are run) ■ Special requirements (skills, permissions, environment, etc.) ■ Procedure steps (log, set up, start, proceed [the steps themselves], measure results, shut down/suspend, restart [if needed], stop, wrap up/tear down, contingencies) While the IEEE 829 standard distinguishes between test procedures and test cases, in practice test procedures are often embedded in test cases.
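If you want something lighter than a full document, the IEEE 829 sections can be captured in a structured, machine-readable form. Here is a minimal sketch with invented content; the keys follow the template sections above, but nothing about this particular representation is mandated by the standard.

test_procedure_spec = {
    "identifier": "TPS-PAY-042",
    "purpose": "Runs test cases TC-110 and TC-111 (payment authorization limits).",
    "special_requirements": {
        "skills": "Tester familiar with the payments back office",
        "permissions": "Supervisor role in the test environment",
        "environment": "sys-test-1 with the payment simulator attached",
    },
    "procedure_steps": {
        "log": "Record build number and environment in the test log.",
        "set_up": "Load account data set ACC-SMALL; start the payment simulator.",
        "start": "Log in as supervisor.",
        "proceed": [
            "Submit a payment at the authorization limit; expect acceptance.",
            "Submit a payment just above the limit; expect rejection with message E-403.",
        ],
        "measure_results": "Compare responses and ledger entries with expected results.",
        "shut_down": "Log out; stop the simulator.",
        "restart": "If the simulator hangs, restart it and resume at the failed step.",
        "stop": "Stop after both cases have a pass/fail verdict.",
        "wrap_up": "Restore the baseline account data.",
        "contingencies": "If the environment is unavailable, mark the procedure blocked.",
    },
}
print(test_procedure_spec["identifier"], test_procedure_spec["purpose"])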
  • 70. 2.5 Test Implementation and Execution 49 A test procedure is sometimes referred to as a test script. A test script canbe manual or automated. The IEEE 829 standard for test documentation also includes ideas on whatto include in a test log. According to the standard, a test log should record therelevant details about test execution. This template includes the following sec-tions:■ Test log identifier.■ Description of the testing, including the items under test (with version numbers), the test environments being used, and the like.■ Activity and event entries. These should be test by test and event by event. Events include things like test environments becoming unavailable, people being out sick, and so forth. You should capture information on the test exe- cution process; the results of the tests; environmental changes or issues; bugs, incidents, or anomalies observed; the testers involved; any suspension or blockage of testing; changes to the plan and the impact of change; and so forth.For those who find the IEEE 829 templates too daunting, simple spreadsheetscould also be used to capture execution logging information. The British Standards Institute produces the BS 7925/2 standard. It has twomain sections: test design techniques and test measurement techniques. For testdesign, it reviews a wide range of techniques, including black box, white box,and others. It covers the following black-box techniques that were also coveredin the Foundation syllabus:■ Equivalence partitioning■ Boundary value analysis■ State transition testingIt also covers a black-box technique called cause-effect graphing, which is agraphical version of a decision table, and a black-box technique called syntaxtesting. It covers the following white-box techniques that were also covered in theFoundation syllabus:■ Statement testing■ Branch and decision testing
  • 71. 50 2 Testing Processes It also covers some additional white-box techniques that were covered only briefly or not at all in the Foundation syllabus: ■ Data flow testing ■ Branch condition testing ■ Branch condition combination testing ■ Modified condition decision testing ■ Linear Code Sequence and Jump (LCSAJ) testing Rounding out the list are two sections, “Random Testing” and “Other Testing Techniques.” Random testing was not covered in the Foundation syllabus, but we’ll talk about the use of randomness in relation to reliability testing in chapters 5 and 9. The section on other testing techniques doesn’t provide any examples but merely talks about rules on how to select them. You might be thinking, “Hey, wait a minute, that was too fast. Which of those do I need to know for the Advanced Level Technical Test Analyst (ATTA) exam?” The answer is in two parts. First, you need to know any test design technique that was on the Foundation syllabus. Such techniques may be covered on the Advanced Level Technical Test Analyst exam. Second, we’ll cover the new test design techniques that might be on the Advanced Level Test Analyst (ATA) exam in detail in chapter 4. BS 7925/2 provides one or more coverage metrics for each of the test measurement techniques. These are covered in the measurement part of the standard. The choice of organization for this standard is curious indeed because there is no clear reason why the coverage metrics weren’t covered at the same time as the design techniques. However, from the point of view of the ISTQB fundamental test process, perhaps it is easier that way. For example, our entry criteria might require some particular level of test coverage, as it would if we were testing safety-critical avionics software subject to the United States Federal Aviation Administration’s standard DO-178B. (We’ll cover that standard in a moment.) So during test design, we’d employ the test design techniques. During test implementation, we’d use the test measurement techniques to ensure adequate coverage. In addition to these two major sections, this document also includes two annexes. Annex B brings the dry material in the first two major sections to life by showing an example of applying them to realistic situations. Annex A covers
process considerations, which is perhaps closest to our area of interest here. It discusses the application of the standard to a test project, following a test process given in the document. To map that process to the ISTQB fundamental test process, we can say the following:
■ Test analysis and design along with test implementation in the ISTQB process is equivalent to test specification in the BS 7925/2 process.
■ BS 7925/2 test execution, logically enough, corresponds to test execution in the ISTQB process. Note, though, that the ISTQB process includes that as part of a larger activity, test implementation and execution. Note also that the ISTQB process includes test logging as part of test execution, while BS 7925/2 has a separate test recording process.
■ Finally, BS 7925/2 has checking for test completion as the final step in its process. That corresponds roughly to the ISTQB’s evaluating test criteria and reporting.
Finally, as promised, let’s talk about the DO-178B standard. This standard is promulgated by the United States Federal Aviation Administration. As you might guess, it’s for avionics systems. In Europe, it’s called ED-12B. The standard assigns a criticality level, based on the potential impact of a failure. Based on the criticality level, a certain level of white-box test coverage is required, as shown in table 2-1.

Table 2–1 FAA DO-178B mandated coverage
Level A (Catastrophic): Software failure can result in a catastrophic failure of the system. Required coverage: Modified Condition/Decision, Decision, and Statement.
Level B (Hazardous/Severe): Software failure can result in a hazardous or severe/major failure of the system. Required coverage: Decision and Statement.
Level C (Major): Software failure can result in a major failure of the system. Required coverage: Statement.
Level D (Minor): Software failure can result in a minor failure of the system. Required coverage: None.
Level E (No Effect): Software failure cannot have an effect on the system. Required coverage: None.
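Read as a lookup, table 2-1 is easy to automate, for example to report which required coverage types a component has not yet achieved. This is a minimal sketch of that lookup; the function name and the form of the “achieved” input are our own.

REQUIRED_COVERAGE = {
    "A": ["modified condition/decision", "decision", "statement"],  # Catastrophic
    "B": ["decision", "statement"],                                  # Hazardous/Severe
    "C": ["statement"],                                              # Major
    "D": [],                                                         # Minor
    "E": [],                                                         # No effect
}

def coverage_gap(criticality_level, achieved):
    """Return the required coverage types not yet achieved for this level."""
    required = REQUIRED_COVERAGE[criticality_level.upper()]
    return [c for c in required if c not in achieved]

print(coverage_gap("B", achieved={"statement"}))  # ['decision']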
  • 73. 52 2 Testing Processes Let us explain table 2-1 a bit more thoroughly: Criticality level A, or Catastrophic, applies when a software failure can result in a catastrophic failure of the system. For software with such criti- cality, the standard requires Modified Condition/Decision, Decision, and Statement coverage. Criticality level B, or Hazardous and Severe, applies when a software failure can result in a hazardous, severe, or major failure of the system. For software with such criticality, the standard requires Decision and Statement coverage. Criticality level C, or Major, applies when a software failure can result in a major failure of the system. For software with such criticality, the standard requires only Statement coverage. Criticality level D, or Minor, applies when a software failure can only result in a minor failure of the system. For software with such criticality, the standard does not require any level of coverage. Finally, criticality level E, or No Effect, applies when a software fail- ure cannot have an effect on the system. For software with such critical- ity, the standard does not require any level of coverage. This makes a certain amount of sense. You should be more concerned about software that affects flight safety, such as rudder and aileron control modules, than about software that doesn’t, such as video entertainment systems. How- ever, there is a risk of using a one-dimensional white-box measuring stick to determine how much confidence we should have in a system. Coverage metrics are a measure of confidence, it’s true, but we should use multiple coverage met- rics, both white box and black box. By the way, if you found this a bit confusing, note that two of the white- box coverage metrics we mentioned, statement and decision coverage, were discussed in the Foundation syllabus, in chapter 4. Modified condition/deci- sion coverage was mentioned briefly but not described in any detail in the Foundation; Modified condition/decision coverage will be covered in detail in this book. If you don’t remember what statement and decision coverage mean, you should go back and review the material in that chapter on white-box cov- erage metrics. We’ll return to black-box and white-box testing and test cover- age in chapter 4 of this book, and we’ll assume that you understand the
  • 74. 2.6 Evaluating Exit Criteria and Reporting 53relevant Foundation-level material. Chapter 4 will be hard to follow if you arenot fully conversant with the Foundation-level test design techniques.2.5.9 MetricsFinally, what metrics and measurements can we use for the test implementationand execution of the ISTQB fundamental test process? Different people use dif-ferent metrics, of course. Typical metrics during test implementation are the percentage of testenvironments configured, the percentage of test data records loaded, and thepercentage of test cases automated. During test execution, typical metrics look at the percentage of test condi-tions covered, test cases executed, and so forth. We can track test implementation and execution tasks against a work-breakdown-structure, which is useful in determining whether we are proceed-ing according to the estimate and schedule. Note that here we are discussing metrics to measure the completeness ofthese processes; i.e., the progress we have made. You should use a different setof metrics for test reporting.2.6 Evaluating Exit Criteria and Reporting Learning objectives (K3) Determine from a given set of measures if a test completion criterion has been fulfilled.In one sense, the evaluation of exit criteria and reporting of results is a test man-agement activity. And, in the volume on advanced test management, we exam-ine ways to analyze, graph, present, and report test results as part of the testprogress monitoring and control activities. However, in another sense, there is a key piece of this process thatbelongs to the technical test analyst. As technical test analysts, in this book weare more interested in two main areas. First, we need to collect the informa-tion necessary to support test management reporting of results. Second, weneed to measure progress toward completion, and by extension, we need to
  • 75. 54 2 Testing Processes detect deviation from the plan. (Of course, if we do detect deviation, it is the test manager’s role, as part of the control activities, to get us back on track.) To measure completeness of the testing with respect to exit criteria, and to generate information needed for reporting, we can measure properties of the test execution process such as the following: ■ Number of test conditions, cases, or test procedures planned, executed, passed, and failed ■ Total defects, classified by severity, priority, status, or some other factor ■ Change requests proposed, accepted, and tested ■ Planned versus actual costs, schedule, effort ■ Quality risks, both mitigated and residual ■ Lost test time due to blocking events ■ Confirmation and regression test results We can track evaluation and reporting milestones and tasks against a work- breakdown-structure, which is useful in determining whether we are proceed- ing according to the estimate and schedule. We’ll now look at examples of test metrics and measures that you can use to evaluate where you stand with respect to exit criteria. Most of these are drawn from actual case studies, projects where RBCS helped a client with their testing. 2.6.1 Test Suite Summary Figure 2-3 shows a test suite summary worksheet. Such a worksheet summa- rizes the test-by-test logging information described in a previous section. As a test analyst, you can use this worksheet to track a number of important proper- ties of the test execution so far: ■ Test case progress ■ Test case pass/fail rates ■ Test suite defect priority/severity (weighted failure) ■ Earned value As such, it’s a useful chart for the test analyst to understand the status of the entire test effort.
Figure 2–3 Test suite summary worksheet. (The worksheet lists, for each test suite—Functionality, Performance, Reliability, Robustness, Installation, Localization, Security, Documentation, Integration, Usability, and Exploratory, plus a Total row—the total test cases; the planned tests fulfilled, broken down into count, skip, pass, and fail, along with the weighted failure score; the planned tests unfulfilled, broken down into count, queued, in progress, and blocked; and the earned value figures: planned and actual hours, %effort, and %exec.)

Down the left side of figure 2-3, you see two columns, the test suite name and the number of test cases it contains. Again, in some test log somewhere, we have detailed information for each test case.
On the middle-left side, you see four columns under the general heading of “Planned Tests Fulfilled.” These are the tests for which no more work remains, at least during this pass of testing.
The weighted failure numbers for each test suite, found in the column near the middle of the table, give a metric of historical bug finding effectiveness. Each bug is weighted by priority and severity—only the severity one, priority one bugs count for a full weighted failure point, while lower severity and priority can reduce the weighted failure point count for a bug to as little as 0.04. So this is a metric of the technical risk, the likelihood of finding problems, associated with each test suite based on historical bug finding effectiveness.
On the middle-right side, you see four columns under the general heading of “Planned Tests Unfulfilled.” These are the tests for which more work remains during this pass of testing. IP, by the way, stands for “in progress.”
Finally, on the right side you see four columns under the general heading of “Earned Value.” Earned value is a simple project management concept. It
says that, in a project, we accomplish tasks by expending resources. So if the percentage of tasks accomplished is about equal to the percentage of resources expended, we’re on track. If the percentage of tasks accomplished is greater than the percentage of resources expended, we’re on our way to being under budget. If the percentage of tasks accomplished is less than the percentage of resources expended, we’re on our way to being over budget.
Similarly, from a schedule point of view, the percentage of tasks accomplished should be approximately equal to the percentage of project time elapsed. As with effort, if the tasks percentage is over the schedule percentage, we’re happy. If the tasks percentage is below the schedule percentage, we’re sad and worried.
In test execution, we can consider the test case or test procedure to be our basic task. The resources—for manual testing, anyway—are usually the person-hours required to run the tests. That’s the way this chart shows earned value, test suite by test suite.
Take a moment to study figure 2-3. Think about how you might be able to use it on your current (or future) projects.

2.6.2 Defect Breakdown
We can analyze defect (or bug) reports in a number of ways. There’s just about no end to the analysis that we have done as technical test analysts, test managers, and test consultants. As a test professional, you should think of good, proper analysis of bug reports much the same way as a doctor thinks of good, proper analysis of blood samples.
Just one of these many ways to examine bug reports is by looking at the breakdown of the defects by severity, priority, or some combination of the two. If you use a numerical scale for severity and priority, it’s easy to multiply them together to get a weighted metric of overall bug importance, as you just saw. Figure 2-4 shows an example of such a chart. What can we do with it? Well, for one thing, we can compare this chart with previous projects to get a sense of whether we’re in better or worse shape. Remember, though, that the distribution will change over the life of the project. Ideally, the chart will start skewed toward high-priority bugs—at least it will if we’re doing proper risk-based testing, which we’ll discuss in chapter 3.
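As a minimal sketch of the two calculations just described, the following computes a weighted failure score from severity and priority (using a weighting consistent with the 1.0-to-0.04 range mentioned for figure 2-3, though any scheme your project has agreed on will do) and the earned-value percentages for a test suite. The numbers in the example are invented.

def weighted_failure_points(bugs):
    """bugs: list of (severity, priority) pairs, each on a 1 (worst) to 5 scale.
    A severity 1, priority 1 bug counts 1.0; a severity 5, priority 5 bug 0.04."""
    return sum(1.0 / (severity * priority) for severity, priority in bugs)

def earned_value(planned_hours, actual_hours, tests_planned, tests_fulfilled):
    """Percentage of effort expended versus percentage of tests fulfilled."""
    percent_effort = actual_hours / planned_hours * 100 if planned_hours else 0.0
    percent_exec = tests_fulfilled / tests_planned * 100 if tests_planned else 0.0
    return percent_effort, percent_exec

print(round(weighted_failure_points([(1, 1), (2, 3), (5, 5)]), 2))  # 1.21
effort, executed = earned_value(40.0, 30.0, 20, 12)
print(f"%Effort={effort:.0f}%  %Exec={executed:.0f}%")  # %Effort=75%  %Exec=60%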
Figure 2–4 Breakdown of defects by severity. (Bar chart of the number of defect reports at each severity level: roughly 622 reports at severity 1, 136 at severity 2, 82 at severity 3, 19 at severity 4, and none at severity 5.)

Figure 2-4 shows lots of severity ones and twos. Usually, severity one is loss of data. Severity two is loss of functionality without a workaround. Either way, bad news. This chart tells us, as test professionals, to take a random sample of 10 or 20 of these severity one and two reports and see if we have severity inflation going on here. Are our severity classifications accurate? If not, the poor test manager’s reporting will be biased and alarmist, which will get her in trouble.
As we mentioned previously, though, if we’re doing risk-based testing, this is probably about how this chart should look during the first quarter or so of the project. Find the scary stuff first, we always tell testers. If these severity classifications are right, the test team is doing just that.

2.6.3 Confirmation Test Failure Rate
As technical test analysts, we can count on finding some bugs. Hopefully many of those bugs will be sent back to us, allegedly fixed. At that point, we confirmation-test the fix. How many of those fixes fail the confirmation test? It can feel like it’s quite a few, but is it really? Figure 2-5 shows the answer, graphically, for an actual project.
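Behind a chart like figure 2-5 is simple arithmetic over the defect reports: count how many were opened more than once. Here is a minimal sketch; the distribution in the example is invented, though it has roughly the shape seen on the project discussed next.

def reopen_rate(open_counts):
    """open_counts: for each defect report, how many times it was opened.
    A report opened more than once failed at least one confirmation test."""
    total = len(open_counts)
    reopened = sum(1 for n in open_counts if n > 1)
    return reopened / total if total else 0.0

# Illustrative distribution: most bugs opened once, a tail reopened repeatedly.
counts = [1] * 710 + [2] * 112 + [3] * 26 + [4] * 6 + [5] * 5 + [6] * 1
print(f"{reopen_rate(counts):.1%} of defect reports were reopened at least once")
# 17.4% of defect reports were reopened at least once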
Figure 2–5 Confirmation test failure analysis. (Case study of banking application bugs: bar chart of the number of bugs plotted against the number of times each bug report was opened, with a cumulative percentage curve. Roughly 710 reports were opened once, 112 twice, 26 three times, and a handful more often.)

On this banking project, we can see that quite a few bug fixes failed the confirmation test. We had to reopen fully one in six defect reports at least once. That’s a lot of wasted time. It’s also a lot of potential schedule delay. Study figure 2-5, thinking about ways you could use this information during test execution.

2.6.4 System Test Exit Review
Finally, let’s look at another case study. Figure 2-6 shows an excerpt of the exit criteria for an Internet appliance project that RBCS provided testing for. You’ll see that we have graded the criteria as part of a system test exit review. Each of the three criteria here is graded on a three-point scale:
■ Green: Totally fulfilled, with little remaining risk
■ Yellow: Not totally fulfilled, but perhaps an acceptable risk
■ Red: Not in any sense fulfilled, and poses a substantial risk
Of course, you’d want to provide additional information and data for the yellows and the reds.
  • 80. 2.6 Evaluating Exit Criteria and Reporting 59 System Test Exit Review Per the Test Plan, System Test was planned to end when following criteria were met: 1. All design, implementation, and feature completion, code completion, and unit test completion commitments made in the System Test Entry meeting were either met or slipped to no later than four (4), three (3), and three (3) weeks, respectively, prior to the proposed System Test Exit date. STATUS: RED. Audio and demo functionality have entered System Test in the last three weeks. The modems entered System Test in the last three weeks. On the margins of a violation, off-hook detection was changed significantly. 2. No panic, crash, halt, wedge, unexpected process termination, or other stoppage of processing has occurred on any server software or hardware for the previous three (3) weeks. STATUS: YELLOW. The servers have not crashed, but we did not complete all the tip-over and fail-over testing we planned, and so we are not satisfied that the servers are stable under peak load or other inclement conditions. 3. Production Devices have been used for all System Test execution for at least three (3) weeks. STATUS: GREEN. Except for the modem situation discussed above, the hardware has been stable.Figure 2–6 Case study of system test exit review2.6.5 StandardsFinally, let’s look at one more IEEE 829 template, one that applies to this part ofthe test process. The IEEE 829 standard for test documentation includes atemplate for test summary reports. A test summary report describes the results of a given level or phase of test-ing. The IEEE 829 template includes the following sections:■ Test summary report identifier■ Summary (e.g., what was tested, what the conclusions are, etc.)■ Variances (from plan, cases, procedures)■ Comprehensive assessment■ Summary of results (e.g., final metrics, counts)■ Evaluation (of each test item vis-à-vis pass/fail criteria)■ Summary of activities (resource use, efficiency, etc.)■ ApprovalsThe summaries can be delivered during test execution as part of a project statusreport or meeting. They can also be used at the end of a test level as part of testclosure activities.
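Whatever the reporting format, the evaluation itself comes down to checking each completion criterion against the measures collected during test execution, which is exactly the K3 learning objective for this section. Here is a minimal sketch; the criteria, thresholds, and measure names are invented for illustration.

measures = {
    "test_cases_executed_pct": 97.0,
    "priority1_defects_open": 0,
    "priority2_defects_open": 3,
    "risks_without_passing_test": 1,
}

exit_criteria = [
    ("All planned test cases executed",
     lambda m: m["test_cases_executed_pct"] >= 100.0),
    ("No open priority 1 defects",
     lambda m: m["priority1_defects_open"] == 0),
    ("At most 5 open priority 2 defects",
     lambda m: m["priority2_defects_open"] <= 5),
    ("All identified quality risks covered by a passing test",
     lambda m: m["risks_without_passing_test"] == 0),
]

for description, fulfilled in exit_criteria:
    status = "MET" if fulfilled(measures) else "NOT MET"
    print(f"{status:7} {description}")

Of course, many real exit criteria, like those in the case study that follows, require judgment as well as numbers; the point of the sketch is only that each criterion needs a measure, or a set of measures, against which it can be checked.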
  • 81. 60 2 Testing Processes ISTQB Glossary test closure: During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts, and num- bers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit crite- ria. 2.6.6 Evaluating Exit Criteria and Reporting Exercise Consider the complete set of actual exit criteria from the Internet appliance project, which is shown in the following subsection. Toward the end of the project, the test team rated each criterion on the following scale: Green: Totally fulfilled, with little remaining risk Yellow: Not totally fulfilled, but perhaps an acceptable risk Red: Not in any sense fulfilled, and poses a substantial risk You can see the ratings we gave each criterion in the STATUS block below the criterion itself. We used this evaluation of the criteria as an agenda for a System Test Exit Review meeting. Rex led the meeting and walked the team through each cri- terion. As you can imagine, the RED ones required more explanation than the YELLOW and GREEN ones. While narrative explanation is provided for each evaluation, perhaps more information and data are needed. So, for each criterion, determine what kind of data and other information you’d want to collect to support the con- clusions shown in the status evaluations for each. 2.6.7 System Test Exit Review Per the Test Plan, System Test was planned to end when the following criteria were met:
  • 82. 2.6 Evaluating Exit Criteria and Reporting 611. All design, implementation, and feature completion, code completion, and unit test completion commitments made in the System Test Entry meeting were either met or slipped to no later than four (4), three (3), and three (3) weeks, respectively, prior to the proposed System Test Exit date. STATUS: RED. Audio and demo functionality have entered System Test in the last three weeks. The modems entered System Test in the last three weeks. On the margins of a violation, off-hook detection was changed sig- nificantly.2. No panic, crash, halt, wedge, unexpected process termination, or other stoppage of processing has occurred on any server software or hardware for the previous three (3) weeks. STATUS: YELLOW. The servers have not crashed, but we did not complete all the tip-over and fail-over testing we planned, and so we are not satisfied that the servers are stable under peak load or other inclement conditions.3. Production devices have been used for all System Test execution for at least three (3) weeks. STATUS: GREEN. Except for the modem situation discussed above, the hardware has been stable.4. No client systems have become inoperable due to a failed update for at least three (3) weeks. STATUS: YELLOW. No system has become permanently inoperable during update, but we have seen systems crash during update and these systems required a reboot to clear the error.5. Server processes have been running without installation of bug fixes, man- ual intervention, or tuning of configuration files for two (2) weeks. STATUS: RED. Server configurations have been altered by Change Com- mittee–approved changes multiple times over the last two weeks.6. The Test Team has executed all the planned tests against the release-candi- date hardware and software releases of the Device, Server, and Client. STATUS: RED. We had planned to test Procurement and Fulfillment, but disengaged from this effort because the systems were not ready. Also, we have just received the release-candidate build; complete testing would take two weeks. In addition, the servers are undergoing Change Committee– approved changes every few days and a new load balancer has been added to the server farm. These server changes have prevented volume, tip-over,
  • 83. 62 2 Testing Processes and fail-over testing for the last week and a half. Finally, we have never had a chance to test the server installation and boot processes because we never received documentation on how to perform these tasks. 7. The Test Team has retested all fixes for priority one and two bug reports over the life of the project against the release-candidate hardware and soft- ware releases of the Device, Server, and Client. STATUS: RED. Testing of the release-candidate software and hardware has been schedule-limited to one week, which does not allow for retesting of all bug fixes. 8. The Development Teams have resolved all “must-fix” bugs. “Must-fix” will be defined by the Project Management Team. STATUS: RED. Referring to the attached open/closed charts and the “Bugs Found Since 11/9” report, we continue to find new bugs in the product, though there is good news in that the find rate for priority one bugs has lev- eled off. Per the closure period charts, it takes on average about two weeks—three weeks for priority one bugs—to close a problem report. In addition, both open/close charts show a significant quality gap between cumulative open and cumulative closed, and it’s hard to believe that taken all together, a quantity of bugs that significant doesn’t indicate a pervasive fit-and-finish issue with the product. Finally, note that WebGuide and E- commerce problems are design issues—the selected browser is basically incompatible with much of the Internet—which makes these problems much more worrisome. 9. The Test Team has checked that all issues in the bug tracking system are either closed or deferred and, where appropriate, verified by regression and confirmation testing. STATUS: RED. A large quality gap exists and has existed for months. Because of the limited test time against the release-candidate build, the risk of regression is significant. 10. The open/close curve indicates that we have achieved product stability and reliability. STATUS: RED. The priority-one curve has stabilized, but not the overall bug-find curve. In addition, the run chart of errors requiring a reboot shows that we are still showing about one crash per eight hours of system operation, which is no more stable than a typical Windows 95/Windows 98 laptop. (One of the ad claims is improved stability over a PC.)
  • 84. 2.6 Evaluating Exit Criteria and Reporting 6311. The Project Management Team agrees that the product, as defined during the final cycle of System Test, will satisfy the customer’s reasonable expecta- tions of quality. STATUS: YELLOW. We have not really run enough of the test suite at this time to give a good assessment of overall product quality.12. The Project Management Team holds a System Test Exit Meeting and agrees that we have completed System Test. STATUS: In progress.2.6.8 Evaluating Exit Criteria and Reporting Exercise DebriefWe have added a section called “ADDITIONAL DATA AND INFORMATION”below each criterion. In that section, you’ll find Rex’s solution to this exercise,based both on what kind of additional data and information he actually hadduring this meeting and what he would have brought if he knew then what heknows now.1. All design, implementation, and feature completion, code completion, and unit test completion commitments made in the System Test Entry meeting were either met or slipped to no later than four (4), three (3), and three (3) weeks, respectively, prior to the proposed System Test Exit date. STATUS: RED. Audio and demo functionality have entered System Test in the last three weeks. The modems entered System Test in the last three weeks. On the margins of a violation, off-hook detection was changed sig- nificantly. ADDITIONAL DATA AND INFORMATION: The specific commitments made in the System Test Entry meeting. The delivery dates for the audio functionality, the demo functionality, the modem, and the off-hook detec- tion functionality.2. No panic, crash, halt, wedge, unexpected process termination, or other stoppage of processing has occurred on any server software or hardware for the previous three (3) weeks. STATUS: YELLOW. The servers have not crashed, but we did not complete all the tip-over and fail-over testing we planned, and so we are not satisfied that the servers are stable under peak load or other inclement conditions. ADDITIONAL DATA AND INFORMATION: Metrics indicate the per- centage completion of tip-over and fail-over tests. Details on which specific
  • 85. 64 2 Testing Processes quality risks remain uncovered due to the tip-over and fail-over tests not yet run. 3. Production devices have been used for all System Test execution for at least three (3) weeks. STATUS: GREEN. Except for the modem situation discussed above, the hardware has been stable. ADDITIONAL DATA AND INFORMATION: None. Good news requires no explanation. 4. No client systems have become inoperable due to a failed update for at least three (3) weeks. STATUS: YELLOW. No system has become permanently inoperable during update, but we have seen systems crash during update and these systems required a reboot to clear the error. ADDITIONAL DATA AND INFORMATION: Details, from bug reports, on the system crashes described. 5. Server processes have been running without installation of bug fixes, man- ual intervention, or tuning of configuration files for two (2) weeks. STATUS: RED. Server configurations have been altered by Change Com- mittee–approved changes multiple times over the last two weeks. ADDITIONAL DATA AND INFORMATION: List of tests that have been run prior to the last change, along with an assessment of the risk posed to each test by the change. (Note: Generating this list and the assessment could be a lot of work unless you have good traceability information.) 6. The Test Team has executed all the planned tests against the release-candi- date hardware and software releases of the Device, Server, and Client. STATUS: RED. We had planned to test Procurement and Fulfillment, but disengaged from this effort because the systems were not ready. Also, we have just received the release-candidate build; complete testing would take two weeks. In addition, the servers are undergoing Change Committee– approved changes every few days and a new load balancer has been added to the server farm. These server changes have prevented volume, tip-over, and fail-over testing for the last week and a half. Finally, we have never had a chance to test the server installation and boot processes because we never received documentation on how to perform these tasks. ADDITIONAL DATA AND INFORMATION: List of Procurement and
  • 86. 2.6 Evaluating Exit Criteria and Reporting 65 Fulfillment tests skipped, along with the risks associated with those tests. List of tests that will be skipped due to time compression of the last pass of testing against the release-candidate, along with the risks associated with those tests. List of changes to the server since the last volume, tip-over, and fail-over tests along with an assessment of reliability risks posed by the change. (Again, this could be a big job.) List of server install and boot pro- cess tests skipped, along with the risks associated with those tests.7. The Test Team has retested all priority one and two bug reports over the life of the project against the release-candidate hardware and software releases of the Device, Server, and Client. STATUS: RED. Testing of the release-candidate software and hardware has been schedule-limited to one week, which does not allow for retesting of all bugs. ADDITIONAL DATA AND INFORMATION: The list of all the priority one and two bug reports filed during the project, along with an assessment of the risk that those bugs might have re-entered the system in a change- related regression not otherwise caught by testing. (Again, potentially a huge job.)8. The Development Teams have resolved all “must-fix” bugs. “Must-fix” will be defined by the Project Management Team. STATUS: RED. Referring to the attached open/closed charts and the “Bugs Found Since 11/ 9” report, we continue to find new bugs in the product, though there is good news in that the find rate for priority one bugs has lev- eled off. Per the closure period charts, it takes on average about two weeks—three weeks for priority one bugs—to close a problem report. In addition, both open/close charts show a significant quality gap between cumulative open and cumulative closed, and it’s hard to believe that taken all together, a quantity of bugs that significant doesn’t indicate a pervasive fit-and-finish issue with the product. Finally, note that Web and E-com- merce problems are design issues—the selected browser is basically incom- patible with much of the Internet—which makes these problems much more worrisome. ADDITIONAL DATA AND INFORMATION: Open/closed charts, list of bugs since November 9, closure period charts, and a list of selected impor-
  • 87. 66 2 Testing Processes tant sites that won’t work with the browser. (Note: The two charts men- tioned are covered in the Advanced Test Manager course, if you’re curious.) 9. The Test Team has checked that all issues in the bug tracking system are either closed or deferred and, where appropriate, verified by regression and confirmation testing. STATUS: RED. A large quality gap exists and has existed for months. Because of the limited test time against the release-candidate build, the risk of regression is significant. ADDITIONAL DATA AND INFORMATION: List of bug reports that are neither closed nor deferred, sorted by priority. Risk of tests that will not be run against the release-candidate software, along with the associated risks for each test. 10. The open/close curve indicates that we have achieved product stability and reliability. STATUS: RED. The priority-one curve has stabilized, but not the overall bug-find curve. In addition, the run chart of errors requiring a reboot shows that we are still showing about one crash per eight hours of system operation, which is no more stable than a typical Windows 95/Windows 98 laptop. (One of the ad claims is improved stability over a PC.) ADDITIONAL DATA AND INFORMATION: Open/closed chart (run for priority one defects only and again for all defects). Run chart of errors requiring a reboot; i.e., a trend chart that shows how many reboot-requiring crashes occurred each day. 11. The Project Management Team agrees that the product, as defined during the final cycle of System Test, will satisfy the customer’s reasonable expecta- tions of quality. STATUS: YELLOW. We have not really run enough of the test suite at this time to give a good assessment of overall product quality. ADDITIONAL DATA AND INFORMATION: List of all the tests not yet run against the release-candidate build, along with their associated risks. 12. The Project Management Team holds a System Test Exit Meeting and agrees that we have completed System Test. STATUS: In progress. ADDITIONAL DATA AND INFORMATION: None.
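The debrief above leans on a few test management metrics: open/closed charts, closure period charts, and the run chart of reboot-requiring crashes. As a rough illustration only (our sketch, not part of the original report; the dates and priorities below are invented), an average closure period figure such as the "about two weeks" cited above can be computed from a bug-tracker export along these lines:

```python
# Sketch only: compute average closure period per priority from a
# bug-tracker export. Dates and priorities below are invented.
from datetime import date

bugs = [
    # (priority, opened, closed); closed is None for reports still open
    (1, date(2001, 10, 1), date(2001, 10, 22)),
    (2, date(2001, 10, 5), date(2001, 10, 18)),
    (1, date(2001, 11, 2), None),
]

# Closure period = days from open to close, summarized per priority.
closed = [(prio, (done - opened).days) for prio, opened, done in bugs if done]
for prio in sorted({p for p, _ in closed}):
    periods = [days for p, days in closed if p == prio]
    print(f"Priority {prio}: average closure period {sum(periods) / len(periods):.1f} days")
```

The same export would also feed the open/closed chart by counting, for each day, how many reports had been opened and how many had been closed up to that date.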
• 88. 2.7 Test Closure Activities

Learning objectives: Recall of content only

The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section. In the course of studying for the exam, read this section in chapter 2 of the Advanced syllabus for general recall and familiarity only.

2.8 Sample Exam Questions

To end each chapter, you can try one or more sample exam questions to reinforce your knowledge and understanding of the material and to prepare for the ISTQB Advanced Level Technical Test Analyst exam.

1 Identify all of the following that can be useful as a test oracle the first time a test case is run.
A Incident report
B Requirements specification
C Test summary report
D Legacy system

2 Assume you are a technical test analyst working on a banking project to upgrade an existing automated teller machine system to allow customers to obtain cash advances from supported credit cards. During test design, you identify a discrepancy between the list of supported credit cards in the requirements specification and the design specification. This is an example of what?
A Test design as a static test technique
B A defect in the requirements specification
C A defect in the design specification
D Starting test design too early in the project
  • 89. 68 2 Testing Processes 3 Which of the following is not always a precondition for test execution? A A properly configured test environment B A thoroughly specified test procedure C A process for managing identified defects D A test oracle 4 Assume you are a technical test analyst working on a banking project to upgrade an existing automated teller machine system to allow customers to obtain cash advances from supported credit cards. One of the exit criteria in the test plan requires documentation of successful cash advances of at least 500 euros for all supported credit cards. The correct list of supported credit cards is American Express, Visa, Japan Credit Bank, Eurocard, and MasterCard. After test execution, a complete list of cash advance test results shows the following: ■ American Express allowed advances of up to 1,000 euros. ■ Visa allowed advances of up to 500 euros. ■ Eurocard allowed advances of up to 1,000 euros. ■ MasterCard allowed advances of up to 500 euros. Which of the following statements is true? A. The exit criterion fails due to excessive advances for American Express and Eurocard. B. The exit criterion fails due to a discrepancy between American Express and Eurocard on the one hand and Visa and MasterCard on the other hand. C. The exit criterion passes because all supported cards allow cash advances of at least the minimum required amount. D. The exit criterion fails because we cannot document Japan Credit Bank results.
  • 90. 693 Test Management Wild Dog...said, “I will go up and see and look, and say; for I think it is good. Cat, come with me.’” “Nenni!” said the Cat. “I am the Cat who walks by himself, and all places are alike to me. I will not come.” Rudyard Kipling, from his story, The Cat That Walked by Himself, a colorful illustration of the reasons for the phrase “herding cats,” which is often applied to describe the task of managing software engineering projects.The third chapter of the Advanced syllabus is concerned with test management.It discusses test management activities from the start to the end of the test pro-cess and introduces the consideration of risk for testing. There are 11 sections.1. Introduction2. Test Management Documentation3. Test Plan Documentation Templates4. Test Estimation5. Scheduling and Test Planning6. Test Progress Monitoring and Control7. Business Value of Testing8. Distributed, Outsourced, and Insourced Testing9. Risk-Based Testing10. Failure Mode and Effects Analysis11. Test Management IssuesLet’s look at the sections that pertain to technical test analysis.
  • 91. 70 3 Test Management ISTQB Glossary test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager. 3.1 Introduction Learning objectives Recall of content only This chapter, as the name indicates, is focused primarily on test management topics. Thus, it is mainly the purview of Advanced Test Manager exam candi- dates. Since this book is for technical test analysts, most of our coverage in this chapter is to support simple recall. However, there is one key area that, as a technical test analyst, you need to understand very well: risk-based testing. In this chapter, you’ll learn how to per- form risk analysis, to allocate test effort based on risk, and to sequence tests according to risk. These are the key tasks for a technical test analyst doing risk- based testing. This chapter in the Advanced syllabus also covers test documentation tem- plates for test managers. It focuses on the IEEE 829 standard. As a technical test analyst, you’ll need to know that standard as well. If you plan to take the Advanced Level Technical Test Analyst exam, remember that all material from the Foundation syllabus, including that related to the use of the IEEE 829 tem- plates and the test management material in chapter 5 of the Foundation sylla- bus, is examinable. 3.2 Test Management Documentation Learning objectives Recall of content only The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section.
  • 92. 3.3 Test Plan Documentation Templates 71 ISTQB Glossary test policy: A high-level document describing the principles, approach, and major objectives of the organization regarding testing. test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects). test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test. level test plan: A test plan that typically addresses one test level. See also test plan. master test plan: A test plan that typically addresses multiple test levels. See also test plan. test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used (and the rationale for their choice), and any risks requiring contingency planning. It is a record of the test planning process.However, the Foundation syllabus covered test management, including thetopic of test management documentation. Anything covered in the Foundationsyllabus is examinable. In addition, all sections of the Advanced syllabus areexaminable in terms of general recall. So, if you are studying for the exam, you’ll want to read this section in chap-ter 3 of the Advanced syllabus for general recall and familiarity and review thetest management documentation material from the Foundation syllabus.3.3 Test Plan Documentation Templates Learning objectives Recall of content onlyThe concepts in this section apply primarily for test managers. There are nolearning objectives defined for technical test analysts in this section.
  • 93. 72 3 Test Management ISTQB Glossary test estimation: The calculated approximation of a result related to various aspects of testing (e.g., effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncer- tain, or noisy. test schedule: A list of activities, tasks, or events of the test process, identify- ing their intended start and finish dates and/or times and interdependencies. test point analysis (TPA): A formula-based test estimation method based on function point analysis. Wideband Delphi: An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team mem- bers. [Note: The glossary spells this as Wide Band Delphi, though the spelling given here is more commonly used.] Earlier, in chapter 2, we reviewed portions of the IEEE 829 test documentation standard. Specifically, we looked at the test design specification template, the test case specification template, the test procedure specification template, the test summary report template, and the test log template. However, the Founda- tion covered the entire IEEE 829 standard, including the test plan template, test item transmittal report template, and the incident report template. Anything covered in the Foundation syllabus is examinable. All sections of the Advanced syllabus are examinable in terms of general recall. If you are studying for the exam, you’ll want to read this section in chapter 3 of the Advanced syllabus for general recall and familiarity and review the test manage- ment documentation material from the Foundation syllabus. 3.4 Test Estimation Learning objectives Recall of content only The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section.
  • 94. 3.5 Scheduling and Test Planning 73 However, chapter 5 of the Foundation syllabus covered test estimation aspart of the material on test management. Anything covered in the Foundationsyllabus is examinable. All sections of the Advanced syllabus are examinable interms of general recall. If you are studying for the exam, you’ll want to read thissection in chapter 3 of the Advanced syllabus for general recall and familiarityand review the test estimation material from the Foundation syllabus.3.5 Scheduling and Test Planning Learning objectives Recall of content onlyThe concepts in this section apply primarily for test managers. There are nolearning objectives defined for technical test analysts in this section. In thecourse of studying for the exam, read this section in chapter 3 of the Advancedsyllabus for general recall and familiarity only.3.6 Test Progress Monitoring and Control Learning objectives Recall of content onlyThe concepts in this section apply primarily for test managers. There are nolearning objectives defined for technical test analysts in this section. However, chapter 5 of the Foundation syllabus covers test progress moni-toring and control as part of the material on test management. Anything cov-ered in the Foundation syllabus is examinable. All sections of the Advancedsyllabus are examinable in terms of general recall. If you are studying for theexam, you’ll want to read this section in chapter 3 of the Advanced syllabus forgeneral recall and familiarity and review the test progress monitoring and con-trol material from the Foundation syllabus.
  • 95. 74 3 Test Management ISTQB Glossary test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management. 3.7 Business Value of Testing Learning objectives Recall of content only The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section. In the course of studying for the exam, read this section in chapter 3 of the Advanced syllabus for general recall and familiarity only. 3.8 Distributed, Outsourced, and Insourced Testing Learning objectives Recall of content only The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section. In the course of studying for the exam, read this section in chapter 3 of the Advanced syllabus for general recall and familiarity only.
• 96. 3.9 Risk-Based Testing

Learning objectives: (K2) Outline the activities of a risk-based approach for planning and executing technical testing.

[Note: While the Advanced syllabus does not include K3 or K4 learning objectives for technical test analysts (unlike for test analysts) in this section, we feel that technical test analysts have an equal need for the ability to perform a risk analysis. Therefore, we will include exercises on this topic from a technical perspective.]

Risk is the possibility of a negative or undesirable outcome or event. A specific risk is any problem that may occur that would decrease customer, user, participant, or stakeholder perceptions of product quality or project success.

In testing, we're concerned with two main types of risks. The first type is product or quality risk. When the primary impact of a potential problem is on product quality, such potential problems are called product risks. A synonym for product risks, which we use most frequently ourselves, is quality risks. An example of a quality risk is a possible reliability defect that could cause a system to crash during normal operation.

The second type is project or planning risk. When the primary impact of a potential problem is on project success, such potential problems are called project risks. Some people also refer to project risks as planning risks. An example of a project risk is a possible staffing shortage that could delay completion of a project.

Of course, you can consider a quality risk as a special type of project risk. While the ISTQB definition of project risk is given here, Jamie likes the informal definition of a project risk as anything that might prevent the project from delivering the right product, on time and on budget. However, the difference is that you can run a test against the system or software to determine whether a quality risk has become an actual outcome. You can test for system crashes, for example. Other project risks are usually not testable. You can't test for a staffing shortage.
  • 97. 76 3 Test Management ISTQB Glossary risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood. risk type: A set of risks grouped by one or more common factors such as a quality attribute, cause, location, or potential effect of risk. A specific set of product risk types is related to the type of testing that can mitigate (control) that risk type. For example, the risk of user interactions being misunderstood can be mitigated by usability testing. risk category: see risk type. [Note: Included because this is also a common term, and because the process of risk analysis as described in the Advanced syllabus includes risk categorization.] product risk: A risk directly related to the test object. See also risk. [Note: We will also use the commonly used synonym quality risk in this book.] project risk: A risk related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements, etc. See also risk. risk-based testing: An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process. Not all risks are equal in importance. There are a number of ways to classify the level of risk. The simplest is to look at two factors: ■ The likelihood of the problem occurring; i.e., being present in the product when it is delivered for testing ■ The impact of the problem should it occur; i.e., being present in the product when it is delivered to customers or users after testing Note the distinction made here in terms of project timeline. Likelihood is assessed based on the likelihood of a problem existing in the software, not on the likelihood of it being encountered by the user. The likelihood of a user encountering the problem influences impact. Likelihood of a problem arises primarily from technical considerations, such as the programming languages used, the bandwidth of connections, and so forth. The impact of a problem arises from business considerations, such as the finan-
  • 98. 3.9 Risk-Based Testing 77cial loss the business will suffer from a problem, the number of users or custom-ers affected by a problem, and so forth. In risk-based testing, we use the risk items identified during risk analysistogether with the level of risk associated with each risk item to guide us. In fact,under a true analytical risk-based testing strategy, risk is the primary basis oftesting.Risk can guide testing in various ways, but there are three very common ones:■ First, during all test activities, test managers, technical test analysts, and test analysts allocate effort for each quality risk item proportionally to the level of risk. Technical test analysts and test analysts select test techniques in a way that matches the rigor and extensiveness of the technique with the level of risk. Test managers, technical test analysts, and test analysts carry out test activities in risk order, addressing the most important quality risks first and only at the very end spending any time at all on less-important ones. Finally, test managers, technical test analysts, and test analysts work with the project team to ensure that the repair of defects is appropriate to the level of risk.■ Second, during test planning and test control, test managers provide both mitigation and contingency responses for all significant, identified project risks. The higher the level of risk, the more thoroughly that project risk is managed.■ Third, test managers, technical test analysts, and test analysts report test results and project status in terms of residual risks. For example, which tests have not yet been run or have been skipped? Which tests have been run? Which have passed? Which have failed? Which defects have not yet been fixed or retested? How do the tests and defects relate back to the risks?When following a true analytical risk-based testing strategy, it’s important thatrisk management not happen only once in a project. The three responses to riskwe just covered—along with any others that might be needed—should occurthroughout the lifecycle. Specifically, we should try to reduce quality risk byrunning tests and finding defects and reduce project risks through mitigationand, if necessary, contingency actions. Periodically in the project we shouldreevaluate risk and risk levels based on new information. This might result inour reprioritizing tests and defects, reallocating test effort, and taking other testcontrol activities.
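To make the first of these responses concrete, here is a minimal sketch (ours, not from the syllabus) of allocating a test-effort budget in proportion to risk and working in risk order. Note the convention: in this sketch a higher number simply means a riskier item; it is not the 1-to-25 risk priority scale used later in this chapter, where lower numbers are riskier. The risk items, ratings, and the 40-hour budget are invented.

```python
# Sketch: allocate a test-effort budget in proportion to risk and work in
# risk order. Here a HIGHER number means a riskier item (not the 1-25 risk
# priority scale used later in the chapter, where lower is riskier).
# The items, ratings, and the 40-hour budget are all invented.

risk_items = [
    ("Server crashes under peak load", 25),
    ("Slow screen-to-screen response", 15),
    ("Archiving ties up the banker's workstation", 9),
    ("Cosmetic layout problems in reports", 3),
]

total_effort_hours = 40
total_risk = sum(rating for _, rating in risk_items)

# Highest-risk items first; each gets a proportional share of the budget.
for item, rating in sorted(risk_items, key=lambda r: r[1], reverse=True):
    share = total_effort_hours * rating / total_risk
    print(f"{item}: risk {rating}, planned effort {share:.1f} hours")
```

The same ordering can drive sequencing: the highest-risk items are designed, implemented, and executed first, so that if testing is cut short, the remaining risk is as low as the available time allowed.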
  • 99. 78 3 Test Management ISTQB Glossary test control: A test management task that deals with developing and apply- ing a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. risk level: The importance of a risk as defined by its characteristics, impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively. One metaphor sometimes used to help people understand risk-based testing is that testing is a form of insurance. In your daily life, you buy insurance when you are worried about some potential risk. You don’t buy insurance for risks that you are not worried about. So, we should test the areas that are worrisome, test for bugs that are worrisome, and ignore the areas and bugs that aren’t worri- some. One potentially misleading aspect of this metaphor is that insurance profes- sionals and actuaries can use statistically valid data for quantitative risk analysis. Typically, risk-based testing relies on qualitative analyses because we don’t have the same kind of data insurance companies have. During risk-based testing, you have to remain aware of many possible sources of risks. There are safety risks for some systems. There are business and economic risks for most systems. There are privacy and data security risks for many systems. There are technical, organizational, and political risks too. 3.9.1 Risk Management Risk management includes three primary activities: ■ Risk identification, figuring out what the different project and quality risks are for the project ■ Risk analysis, assessing the level of risk—typically based on likelihood and impact—for each identified risk item ■ Risk mitigation, which is really more properly called “risk control” because it consists of mitigation, contingency, transference, and acceptance actions for various risks
  • 100. 3.9 Risk-Based Testing 79 ISTQB Glossary risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk. risk identification: The process of identifying risks using techniques such as brainstorming, checklists, and failure history. risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood). risk mitigation or risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.In some sense, these activities are sequential, at least in terms of when they start.They are staged such that risk identification starts first. Risk analysis comes next.Risk control starts once risk analysis has determined the level of risk. However,since risk management should be continuous in a project, the reality is that riskidentification, risk analysis, and risk control are all recurring activities. Everyone has their own perspective on how to manage risks on a project,including what the risks are, the level of risk, and the appropriate controls to putin place for risks. Therefore, risk management should include all project stake-holders. In many cases, though, not all stakeholders can participate or are willing todo so. In such cases, some stakeholders may act as surrogates for other stake-holders. For example, in mass-market software development, the marketingteam might ask a small sample of potential customers to help identify potentialdefects that would affect their use of the software most heavily. In this case, thesample of potential customers serves as a surrogate for the entire eventual cus-tomer base. As another example, business analysts on IT projects can some-times represent the users rather than involving users in potentially distressingrisk analysis sessions that include conversations about what could go wrong andhow bad it would be. Technical test analysts bring particular expertise to risk management due totheir defect-focused outlook, especially as relates to technically based sources ofrisk and likelihood. So they should participate whenever possible. In fact, in
  • 101. 80 3 Test Management many cases, the test manager will lead the quality risk analysis effort, with tech- nical test analysts providing key support in the process. With that overview of risk management in place, let’s look at the three risk management activities more closely. 3.9.2 Risk Identification For proper risk-based testing, we need to identify both product and project risks. We can identify both kinds of risks using techniques like these: ■ Expert interviews ■ Independent assessments ■ Use of risk templates ■ Project retrospectives ■ Risk workshops and brainstorming ■ Checklists ■ Calling on past experience Conceivably, you can use a single integrated process to identify both project and product risks. We usually separate them into two separate processes since they have two separate deliverables. We include the project risk identification pro- cess in the test planning process and thus hand the bulk of the responsibility for these kinds of risks to managers, including test managers. In parallel, the quality risk identification process occurs early in the project. That said, project risks—and not just for testing but also for the project as a whole—are often identified as by-products of quality risk analysis. In addition, if you use a requirements specification, design specification, use cases, or other documentation as inputs into your quality risk analysis process, you should expect to find defects in those documents as another set of by-products. These are valuable by-products, which you should plan to capture and escalate to the proper person. Previously, we encouraged you to include representatives of all possible stakeholder groups in the risk management process. For the risk identification activities, the broadest range of stakeholders will yield the most complete, accu- rate, and precise risk identification. The more stakeholder group representatives you omit from the process, the more you will miss risk items and even whole risk categories.
• 102. ISTQB Glossary

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of possible modes of failure and attempting to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).

Failure Mode, Effect and Criticality Analysis (FMECA): An extension of FMEA. In addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value. See also Failure Mode and Effect Analysis (FMEA).

How far should you take this process? Well, it depends on the technique. In the relatively informal Pragmatic Risk Analysis and Management technique that we frequently use, risk identification stops at the risk items. You have to be specific enough about the risk items to allow for analysis and assessment of each risk item to yield an unambiguous likelihood rating and an unambiguous impact rating.

Techniques that are more formal often look downstream to identify potential effects of the risk item if it were to become an actual negative outcome. These effects include effects on the system—or the system of systems if applicable—as well as effects on the potential users, customers, stakeholders, and even society in general. Failure Mode and Effect Analysis is an example of such a formal risk management technique, and it is commonly used on safety-critical and embedded systems.

Other formal techniques look upstream to identify the source of the risk. Hazard Analysis is an example of such a formal risk management technique. We've never used it ourselves, but Rex has talked to clients who have used it for safety-critical medical systems.1

1. For more information on risk identification and analysis techniques, you can see either of Rex Black's two books, Managing the Testing Process, 3e (for both formal and informal techniques) or Pragmatic Software Testing (for informal techniques). If you want to use Failure Mode and Effect Analysis, then we recommend reading D.H. Stamatis's Failure Mode and Effect Analysis for a thorough discussion of the technique, followed by Managing the Testing Process, 3e for a discussion of how the technique applies to software testing.
  • 103. 82 3 Test Management 3.9.3 Risk Analysis or Risk Assessment The next step in the risk management process is referred to in the Advanced syl- labus as risk analysis. We prefer to call it risk assessment because analysis would seem to us to include both identification and assessment of risk. For example, the process of identifying risk items often includes analysis of work products such as requirements and metrics such as defects found in past projects. Regardless of what we call it, risk analysis or risk assessment involves the study of the identified risks. We typically want to categorize each risk item appropri- ately and assign each risk item an appropriate level of risk. We can use ISO 9126 or other quality categories to organize the risk items. In our opinion—and in the Pragmatic Risk Analysis and Management process described here—it doesn’t matter so much what category a risk item goes into, usually, so long as we don’t forget it. However, in complex projects and for large organizations, the category of risk can determine who has to deal with the risk. A practical implication of categorization like this will make the categorization important. The other part of risk assessment is determining the level of risk. This often involves likelihood and impact as the two key factors. Likelihood arises from technical considerations, typically, while impact arises from business consider- ations. So what technical factors should we consider when assessing likelihood? Here’s a list to get you started: ■ Complexity of technology and teams ■ Personnel and training issues ■ Intrateam and interteam conflict ■ Supplier and vendor contractual problems ■ Geographical distribution of the development organization, as with outsourcing ■ Legacy or established designs and technologies versus new technologies and designs ■ The quality—or lack of quality—in the tools and technology used ■ Bad managerial or technical leadership ■ Time, resource, and management pressure, especially when financial penalties apply
  • 104. 3.9 Risk-Based Testing 83■ Lack of earlier testing and quality assurance tasks in the lifecycle■ High rates of requirements, design, and code changes in the project■ High defect rates■ Complex interfacing and integration issues.What business factors should we consider when assessing impact? Here’s a listto get you started:■ The frequency of use of the affected feature■ Potential damage to image■ Loss of customers and business■ Potential financial, ecological, or social losses or liability■ Civil or criminal legal sanctions■ Loss of licenses, permits, and the like■ The lack of reasonable workaroundsWhen determining the level of risk, we can try to work quantitatively or qualita-tively. In quantitative risk analysis, we have numerical ratings for both likeli-hood and impact. Likelihood is a percentage, and impact is often a monetaryquantity. If we multiply the two values together, we can calculate the cost ofexposure, which is called—in the insurance business—the expected payout orexpected loss. While perhaps some day in the future of software engineering we can dothis routinely, typically we find that we have to determine the level of risk quali-tatively. Why? Because we don’t have statistically valid data on which to performquantitative quality risk analysis. So we can speak of likelihood being very high,high, medium, low, or very low, but we can’t say—at least, not in any meaningfulway—whether the likelihood is 90 percent, 75 percent, 50 percent, 25 percent,or 10 percent. This is not to say—by any means—that a qualitative approach should beseen as inferior or useless. In fact, given the data most of us have to work with,use of a quantitative approach is almost certainly inappropriate on mostprojects. The illusory precision of such techniques misleads the stakeholdersabout the extent to which you actually understand and can manage risk. Whatwe’ve found is that if we accept the limits of our data and apply appropriatequalitative quality risk management approaches, the results are not only per-fectly useful, but also indeed essential to a well-managed test process.
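For completeness, here is what the quantitative calculation described above looks like in practice. It is a sketch only, with invented figures, and as just noted it presupposes statistically valid likelihood and impact data that most test teams do not have.

```python
# Sketch of the quantitative calculation described above: cost of exposure
# (expected loss) = likelihood x monetary impact. All figures are invented.

quality_risks = [
    # (risk item, likelihood as a probability, impact in dollars)
    ("Nightly batch misses its processing window", 0.25, 80_000),
    ("Data corruption during fail-over", 0.05, 500_000),
    ("Report layout defects", 0.60, 2_000),
]

for item, likelihood, impact in quality_risks:
    exposure = likelihood * impact
    print(f"{item}: expected loss ${exposure:,.0f}")
```

When such data is not available, the qualitative ratings discussed above are the appropriate substitute.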
  • 105. 84 3 Test Management In any case, unless your risk analysis is based on extensive and statistically valid risk data, it will reflect perceived likelihood and impact. In other words, personal perceptions and opinions held by the stakeholders will determine the level of risk. Again, there’s absolutely nothing wrong with this, and we don’t bring this up to condemn the technique at all. The key point is that project managers, programmers, users, business analysts, architects, and testers typi- cally have different perceptions and thus possibly different opinions on the level of risk for each risk item. By including all these perceptions, we distill the collec- tive wisdom of the team. However, we do have a strong possibility of disagreements between stake- holders. The risk analysis process should include some way of reaching consen- sus. In the worst case, if we cannot obtain consensus, we should be able to escalate the disagreement to some level of management to resolve. Otherwise, risk levels will be ambiguous and conflicted and thus not useful as a guide for risk mitigation activities—including testing. 3.9.4 Risk Mitigation or Risk Control Having identified and assessed risks, we now must control them. As we men- tioned earlier, the Advanced syllabus refers to this as risk mitigation, but that’s not right. Risk control is a better term. We actually have four main options for risk control: ■ Mitigation, where we take preventive measures to reduce the likelihood of the risk occurring and/or the impact of a risk should it occur ■ Contingency, where we have a plan or perhaps multiple plans to reduce the impact of a risk should it occur ■ Transference, where we get another party to accept the consequences of a risk should it occur ■ Finally, ignoring or accepting the risk and its consequences should it occur For any given risk item, selecting one or more of these options creates its own set of benefits and opportunities as well as costs and, potentially, additional risks associated. Analytical risk-based testing is focused on creating risk mitigation opportu- nities for the test team, including for technical test analysts, especially for qual-
  • 106. 3.9 Risk-Based Testing 85ity risks. Risk-based testing mitigates quality risks via testing throughout theentire lifecycle. Let us mention that, in some cases, there are standards that can apply. We’vealready looked at one such standard, the United States Federal Aviation Admin-istration’s DO-178B. We’ll look at another one of those standards shortly. It’s important too that project risks be controlled. For technical test analysts,we’re particularly concerned with test-affecting project risks like the following:■ Test environment and tools readiness■ Test staff availability and qualification■ Low quality of inputs to testing■ Overly high rates of change for work products delivered to testing■ Lack of standards, rules, and techniques for the testing effort.While it’s usually the test manager’s job to make sure these risks are controlled,the lack of adequate controls in these areas will affect the technical test analyst. One idea discussed in the Foundation syllabus, a basic principle of testing,is the principle of early testing and quality assurance (QA). This principlestresses the preventive potential of testing. Preventive testing is part of analyti-cal risk-based testing. We should try to mitigate risk before test execution starts.This can entail early preparation of testware, pretesting test environments, pre-testing early versions of the product well before a test level starts, insisting ontougher entry criteria to testing, ensuring requirements for and designing fortestability, participating in reviews (including retrospectives for earlier projectactivities), participating in problem and change management, and monitoringof the project progress and quality.In preventive testing, we take quality risk control actions throughout the lifecy-cle. Technical test analysts should look for opportunities to control risk usingvarious techniques:■ Choosing an appropriate test design technique■ Reviews and inspections■ Reviews of test design■ An appropriate level of independence for the various levels of testing■ The use of the most experienced person on test tasks■ The strategies chosen for confirmation testing (retesting) and regression testing
  • 107. 86 3 Test Management Preventive test strategies acknowledge that quality risks can and should be miti- gated by a broad range of activities, many of them not what we traditionally think of as testing. For example, if the requirements are not well written, per- haps we should institute reviews to improve their quality rather than relying on tests that we will run once the badly written requirements become a bad design and ultimately bad, buggy code. Of course, testing is not effective against all kinds of quality risks. In some cases, you can estimate the risk reduction effectiveness of testing in general and for specific test techniques for given risk items. There’s not much point in using testing to reduce risk where there is a low level of test effectiveness. For exam- ple, code maintainability issues related to poor commenting or use of unstruc- tured programming techniques will not tend to show up—at least, not initially—during testing. Once we get to test execution, we run tests to mitigate quality risks. Where testing finds defects, testers reduce risk by providing the awareness of defects and opportunities to deal with them before release. Where testing does not find defects, testing reduces risk by ensuring that under certain conditions the sys- tem operates correctly. Of course, running a test only demonstrates operation under certain conditions and does not constitute a proof of correctness under all possible conditions. We mentioned earlier that we use level of risk to prioritize tests in a risk- based strategy. This can work in a variety of ways, with two extremes, referred to as depth-first and breadth-first. In a depth-first approach, all of the highest- risk tests are run before any lower-risk tests, and tests are run in strict risk order. In a breadth-first approach, we select a sample of tests across all the identified risks using the level of risk to weight the selection while at the same time ensur- ing coverage of every risk at least once. As we run tests, we should measure and report our results in terms of resid- ual risk. The higher the test coverage in an area, the lower the residual risk. The fewer bugs we’ve found in an area, the lower the residual risk.2 Of course, in doing risk-based testing, if we only test based on our risk analysis, this can leave blind spots, so we need to use testing outside the predefined test procedures to 2. You can find examples of how to carry out risk-based test reporting in Rex Black’s book Critical Testing Processes and in the companion volume to this book, Advanced Software Testing: Volume 2, which addresses test management.
• 108. see if we have missed anything. We'll talk about how to accomplish such testing, using blended testing strategies, in chapter 4.

If, during test execution, we need to reduce the time or effort spent on testing, we can use risk as a guide. If the residual risk is acceptable, we can curtail our tests. Notice that, in general, those tests not yet run are less important than those tests already run. If we do curtail further testing, that property of risk-based test execution serves to transfer the remaining risk onto the users, customers, help desk and technical support personnel, or operational staff.

Suppose we do have time to continue test execution. In this case, we can adjust our risk analysis—and thus our testing—for further test cycles based on what we've learned from our current testing. First, we revise our risk analysis. Then, we reprioritize existing tests and possibly add new tests. What should we look for to decide whether to adjust our risk analysis? We can start with the following main factors:

■ Totally new or very much changed product risks
■ Unstable or defect-prone areas discovered during the testing
■ Risks, especially regression risk, associated with fixed defects
■ Discovery of unexpected bug clusters
■ Discovery of business-critical areas that were missed

So, if you have time for new additional test cycles, consider revising your quality risk analysis first. You should also update the quality risk analysis at each project milestone.

3.9.5 An Example of Risk Identification and Assessment Results

In figure 3-1, you see an example of a quality risk analysis document. It is a case study from an actual project. This document—and the approach we used—followed the Failure Mode and Effect Analysis (FMEA) approach.
  • 109. 88 3 Test Management Figure 3–1 An example of a quality risk analysis document using FMEA As you can see, we start—at the left side of the table—with a specific function and then identify failure modes and their possible effects. Criticality is deter- mined based on the effects, as is the severity and priority. Possible causes are listed to enable bug prevention work during requirements, design, and imple- mentation. Next, we look at detection methods—those methods we expect to be applied for this project. The more likely the failure mode is to escape detection, the worse the detection number. We calculate a risk priority number based on the severity, priority, and detection numbers. Smaller numbers are worse. Sever- ity, priority, and detection each range from 1 to 5, so the risk priority number ranges from 1 to 125. This particular table shows the highest-level risk items only because Rex sorted it by risk priority number. For these risk items, we’d expect a lot of addi- tional detection and other recommended risk control actions. You can see that we have assigned some additional actions at this point but have not yet assigned the owners. During testing actions associated with a risk item, we’d expect that the num- ber of test cases, the amount of test data, and the degree of test coverage would
  • 110. 3.9 Risk-Based Testing 89all increase as the risk increases. Notice that we can allow any test proceduresthat cover a risk item to inherit the level of risk from the risk item. That docu-ments the priority of the test procedure, based on the level of risk.3.9.6 Risk-Based Testing throughout the LifecycleWe’ve mentioned that, properly done, risk-based testing manages risk through-out the lifecycle. Let’s look at how that happens, based on our usual approach toa test project. During test planning, risk management comes first. We perform a qualityrisk analysis early in the project, ideally once the first draft of a requirementsspecification is available. From that quality risk analysis, we build an estimatefor negotiation with and, we hope, approval by project stakeholders and man-agement. Once the project stakeholders and management agree on the estimate, wecreate a plan for the testing. The plan assigns testing effort and sequences testsbased on the level of quality risk. It also plans for project risks that could affecttesting. During test control, we will periodically adjust the risk analysis throughoutthe project. That can lead to adding, increasing, or decreasing test coverage;removing, delaying, or accelerating the priority of tests; and other such activi-ties. During test analysis and design, we work with the test team to allocate testdevelopment and execution effort based on risk. Because we want to report testresults in terms of risk, we maintain traceability to the quality risks. During implementation and execution, we sequence the procedures basedon the level of risk. We ensure that the test team, including the test analysts andtechnical test analysts, uses exploratory testing and other reactive techniques todetect unanticipated high-risk areas. During the evaluation of exit criteria and reporting, we work with our testteam to measure test coverage against risk. When reporting test results (andthus release readiness), we talk not only in terms of test cases run and bugsfound, but also in terms of residual risk.
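Because we maintain traceability from tests and defects to quality risks, reporting in terms of residual risk can be largely mechanical. The following is a minimal sketch of such a report; the risk IDs, test statuses, and bug counts are invented, and a real report would also weight the result by the level of risk assigned to each item.

```python
# Minimal sketch of a residual-risk summary built from traceability data.
# Risk IDs, test results, and open-bug counts are invented.

from collections import defaultdict

test_results = [
    # (risk item the test traces to, status after execution)
    ("1.1.001", "passed"), ("1.1.001", "failed"),
    ("1.1.008", "passed"), ("1.1.008", "passed"),
    ("1.1.019", "not run"),
]

open_bugs = {"1.1.001": 2, "1.1.019": 1}  # open defects traced to each risk

summary = defaultdict(lambda: {"passed": 0, "failed": 0, "not run": 0})
for risk_id, status in test_results:
    summary[risk_id][status] += 1

for risk_id, counts in sorted(summary.items()):
    bugs = open_bugs.get(risk_id, 0)
    clean = counts["failed"] == 0 and counts["not run"] == 0 and bugs == 0
    residual = "low" if clean else "elevated"
    print(f"Risk {risk_id}: {counts['passed']} passed, {counts['failed']} failed, "
          f"{counts['not run']} not run, {bugs} open bugs -> residual risk {residual}")
```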
• 111. 3.9.7 Risk-Aware Testing Standards

As you saw in chapter 2, the United States Federal Aviation Administration's DO-178B standard bases the extent of testing—measured in terms of white-box code coverage—on the potential impact of a failure. That makes DO-178B a risk-aware testing standard.

Another interesting example of how risk management, including quality risk management, plays into the engineering of complex and/or safety-critical systems is found in the IEC standard 61508, which is mentioned in the Advanced syllabus. This standard applies to embedded software that controls systems with safety-related implications, as you can tell from its title, "Functional safety of electrical/electronic/programmable electronic safety-related systems."

The standard is very much focused on risks. Risk analysis is required. It considers two primary factors for determining the level of risk: likelihood and impact. During a project, we are to reduce the residual level of risk to a tolerable level, specifically through the application of electrical, electronic, or software improvements to the system.

The standard has an inherent philosophy about risk. It acknowledges that we can't attain a level of zero risk—whether for an entire system or even for a single risk item. It says that we have to build quality, especially safety, in from the beginning, not try to add it at the end. Thus we must take defect-preventing actions like requirements, design, and code reviews.

The standard also insists that we know what constitutes tolerable and intolerable risks and that we take steps to reduce intolerable risks. When those steps are testing steps, we must document them, including a software safety validation plan, a software test specification, software test results, software safety validation, a verification report, and a software functional safety report.

The standard addresses the author-bias problem. As discussed in the Foundation syllabus, this is the problem with self-testing, the fact that you bring the same blind spots and bad assumptions to testing your own work that you brought to creating that work. So the standard calls for tester independence, indeed insisting on it for those performing any safety-related tests. And since testing is most effective when the system is written to be testable, that's also a requirement.
• 112. The standard has a concept of a safety integrity level (or SIL), which is based on the likelihood of failure for a particular component or subsystem. The safety integrity level influences a number of risk-related decisions, including the choice of testing and QA techniques.

Some of the techniques are ones we'll cover in this book and in the companion volume for test analysis (volume I), which addresses various functional and black-box test design techniques. Many of the techniques are ones that we'll cover in this book, including probabilistic testing, dynamic analysis, data recording and analysis, performance testing, interface testing, static analysis, and complexity metrics. Additionally, since thorough coverage, including during regression testing, is important in reducing the likelihood of missed bugs, the standard mandates the use of applicable automated test tools, which we'll also cover here in this book.

Again, depending on the safety integrity level, the standard might require various levels of testing. These levels include module testing, integration testing, hardware-software integration testing, safety requirements testing, and system testing. If a level of testing is required, the standard states that it should be documented and independently verified. In other words, the standard can require auditing or outside reviews of testing activities. In addition, continuing on with the theme of "guarding the guards," the standard also requires reviews for test cases, test procedures, and test results along with verification of data integrity under test conditions.

The standard requires the use of structural test design techniques. Structural coverage requirements are implied, again based on the safety integrity level. (This is similar to DO-178B.) Because the desire is to have high confidence in the safety-critical aspects of the system, the standard requires complete requirements coverage not once, but multiple times, at multiple levels of testing. Again, the level of test coverage required depends on the safety integrity level.

Now, this might seem a bit excessive, especially if you come from a very informal world. However, the next time you step between two pieces of metal that can move—e.g., elevator doors—ask yourself how much risk you want to remain in the software that controls that movement.
• 113. 3.9.8 Risk-Based Testing Exercise 1

Read the HELLOCARMS system requirements document in Appendix B, a hypothetical project derived from a real project that RBCS helped to test. If you're working in a classroom, break into groups of three to five. If you are working alone, you'll need to do this yourself.

Perform a quality risk analysis for this project. Since this is a book focused on non-functional testing, identify and assess risks for efficiency quality characteristics only. Use the templates shown in figure 3-2 and table 3-1 on the following page.

To keep the time spent reasonable, I suggest 30 to 45 minutes identifying quality risks, then 15 to 30 minutes altogether assessing the level of each risk. If you are working in a classroom, once each group has finished its analysis, discuss the results.

[Figure 3-2 shows an annotated version of the informal quality risk analysis template. Its columns are Quality Risk, Likelihood, Impact, Risk Pri. #, Extent of Testing, and Tracing, with risks grouped under risk categories. The annotations explain that quality risks are potential system problems that could reduce user satisfaction; that a hierarchy of risk categories can help organize the list and jog your memory; that likelihood arises from implementation and technical considerations and impact from business, operational, and regulatory considerations, each rated from 1 (very high) to 5 (very low); that the risk priority number is the product of the technical and business risk ratings, from 1 to 25; that the tracing column points back to requirements, design, or other risk bases; and that the risk priority number maps to the extent of testing: 1-5 extensive, 6-10 broad, 11-15 cursory, 16-20 opportunity, 21-25 report bugs.]

Figure 3–2 Annotated template for informal quality risk analysis
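To make the arithmetic behind the template concrete, here is a small sketch (ours, not part of the exercise materials) that computes the risk priority number as the product of the likelihood and impact ratings and maps it to the extent-of-testing bands shown in figure 3-2; the example ratings are invented.

```python
# Sketch of the arithmetic behind the informal template in figure 3-2.
# Likelihood and impact are each rated 1 (very high) to 5 (very low), so the
# risk priority number (their product) runs from 1 to 25, and lower numbers
# call for more testing. The example ratings below are invented.

def extent_of_testing(risk_priority_number: int) -> str:
    """Map a 1-25 risk priority number to the extent-of-testing bands in figure 3-2."""
    if risk_priority_number <= 5:
        return "Extensive"
    if risk_priority_number <= 10:
        return "Broad"
    if risk_priority_number <= 15:
        return "Cursory"
    if risk_priority_number <= 20:
        return "Opportunity"
    return "Report bugs"

# Example: likelihood 2 (high) and impact 4 (low) give a risk priority number of 8 -> Broad.
for likelihood, impact in [(1, 1), (2, 4), (4, 4), (5, 5)]:
    rpn = likelihood * impact
    print(f"likelihood {likelihood} x impact {impact} = {rpn}: {extent_of_testing(rpn)}")
```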
• 114. Table 3–1 Functional quality risk analysis template showing quality risk categories to address

No. | Quality Risk | Likelihood | Impact | Risk Pri. # | Extent of Testing | Tracing
1.1.000 | Efficiency: Time behavior
1.1.001 | [Non-functional risks related to time behavior go in this section.]
1.1.002
1.1.003
1.1.004
1.1.005
1.2.000 | Efficiency: Resource utilization
1.2.001 | [Non-functional risks related to resource utilization go in this section.]
1.2.002
1.2.003

3.9.9 Risk-Based Testing Exercise Debrief 1

You can see our solution to the exercise starting in table 3-2. Immediately after that table are two lists of by-products. One is the list of project risks discovered during the analysis. The other is the list of requirements document defects discovered during the analysis.

As a first pass for this quality risk analysis, Jamie went through each efficiency requirement and identified one or more risk items for it. Jamie assumed that the priority of the requirement was a good surrogate for the impact of the risk, so he used that. Even using all of these shortcuts, it took Jamie about an hour to get through this.

Table 3–2 Functional quality risk analysis for HELLOCARMS

No. | Quality Risk | Likelihood | Impact | Risk Pri. # | Extent of Testing | Tracing
1.1.000 | Efficiency: Time behavior
1.1.001 | Screen-to-screen response is continually slow for home equity loans | 4 | 2 | 8 | Broad | 040-010-010
1.1.002 | Screen-to-screen response is continually slow for home equity lines of credit | 4 | 3 | 12 | Cursory | 040-010-010
1.1.003 | Screen-to-screen response is continually slow for reverse mortgages | 4 | 4 | 16 | Opportunity | 040-010-010
• 115.
1.1.004 | Approval or decline of loan continually slower than required for all applications | 4 | 2 | 8 | Broad | 040-010-020
1.1.005 | Approval or decline of high-value loans slower than required | 3 | 2 | 6 | Broad | 040-010-020
1.1.006 | Loans regularly fail to be processed within one hour | 3 | 3 | 9 | Broad | 040-010-030
1.1.007 | Time overhead on scoring mainframe does not meet requirements when running at 2,000 applications per hour | 3 | 4 | 12 | Cursory | 040-010-040
1.1.008 | Time overhead on scoring mainframe does not meet requirements when running at 4,000 applications per hour | 2 | 4 | 8 | Extensive | 040-010-040
1.1.009 | Credit-worthiness of customers is not determined in timely manner, especially when working at high rates of throughput | 3 | 2 | 6 | Broad | 040-010-050
1.1.010 | Fewer than rated Credit Bureau Mainframe requests are completed within the required timeframe at high rates of throughput | 2 | 2 | 4 | Extensive | 040-010-050
1.1.011 | At high rates of throughput, archiving application is slow, tying up Telephone Banker workstation appreciably | 4 | 2 | 8 | Broad | 040-010-060
1.1.012 | Escalation to Senior Banker or return to standard Telephone Banker takes longer than rated time when working at high throughput | 4 | 3 | 12 | Cursory | 040-010-070, 040-010-080
1.1.013 | Conditional confirmation of acceptance takes appreciably more time at high throughput (2,000 per hour) | 3 | 2 | 6 | Broad | 040-010-090
1.1.014 | Conditional confirmation of acceptance takes appreciably more time at high throughput (4,000 per hour) | 2 | 2 | 4 | Extensive | 040-010-090
  • 116. 3.9 Risk-Based Testing 95 1.1.015 Abort function does not reset 5 4 20 Report 040-010-100 the system to the starting Bugs point on the bankers workstation in preparation for the next customer in a timely manner 1.1.016 Unable to support Internet 5 4 20 Opportun 010-010-170 operations within a timely ity manner 1.1.017 Unable to sustain a 2,000 2 2 4 Extensive 040-010-110 application per hour throughput 1.1.018 Unable to sustain a 4,000 3 3 9 Broad 040-010-120 application per hour throughput 1.1.019 Unable to support 4,000 3 4 12 Cursory 040-010-130 concurrent applications 1.1.020 Unable to sustain a rate of 3 2 6 Broad 040-010-140 throughput that would facilitate 1.2 applications the first year 1.1.021 Turnaround time is 4 4 16 Opportun 040-010-150 prohibitive on transactions ity (including network transmission time) 1.2.000 Efficiency: Resource utilization 1.2.001 Database server unable to 4 3 12 Cursory 040-020-010 handle the rated load 1.2.002 Web server unable to handle 3 3 9 Broad 040-020-020 the rated load 1.2.003 App server unable to handle 2 3 6 Broad 040-020-030 the rated load3.9.10 Project Risk By-ProductsIn the course of preparing the quality risk analysis document, Jamie observedthe following project risk inherent in the requirements. What is Service level agreement (SLA) contract with the Credit Bureau Mainframe? Many efficiency measurements will be dependent on fast turnaround there.3.9.11 Requirements Defect By-ProductsIn the course of preparing the quality risk analysis document, Jamie observedthe following defect in the requirements.
  • 117. 96 3 Test Management 1. Assumption appears to be that a certain percentage of applications will be approved or conditionally approved (see 040-010-140, 040-010-150, and 040-010-160). I see no proof that this is a logical necessity. 3.9.12 Risk-Based Testing Exercise 2 Using the quality risks analysis for HELLOCARMS, outline a set of non-func- tional test suites to run for HELLOCARM system integration and system test- ing. Specifically, the system integration test level should have, as an overall objective, looking for defects in and building confidence around the ability of the HELLOCARMS application to work with the other applications in the data- center efficiently. The system test level should have, as an overall objective, looking for defects in and building confidence around the ability of the HELLOCARMS application to provide the necessary end-to-end capabilities efficiently. Again, if you are working in a classroom, once each group has finished its work on the test suites and guidelines, discuss the results. 3.9.13 Risk-Based Testing Exercise Debrief 2 Based on the quality risk analysis Jamie performed, he created the following lists of system integration test suites and system test suites. System Integration Test Suites ■ HELLOCARMS/Scoring Mainframe Interfaces ■ HELLOCARMS/LoDoPS Interfaces ■ HELLOCARMS/GLADS Interfaces ■ HELLOCARMS/GloboRainBQW Interfaces System Test Suites ■ Time behavior / Home equity loans ■ Time behavior / Home equity lines of credit ■ Time behavior / Reverse mortgages ■ Resource utilization / Mix of all three
  • 118. 3.10 Failure Mode and Effects Analysis 97 ISTQB Glossary session-based test management: A method for measuring and managing session-based testing, e.g., exploratory testing. session-based test mana- gement3.9.14 Test Case Sequencing GuidelinesFor system integration testing, we are going to create a suite for each of the dif-ferent systems that interact with HELLOCARMS. The suite will consist of allthe transactions that are possible between the systems. For system testing, we may be using overkill between the time behaviorsuites. It is unclear to us at this time in the software development lifecycle howmuch commonality there may be between the three different products. By thetime we get ready to test, we may merge those suites to test all loan productstogether. For resource utilization, we definitely will mix the different productstogether to get a realistic test set. The question of sequencing the tests should be addressed. In performancetesting, prioritization must include not only the risk analysis, but also the avail-ability of data, functionality, tools, and resources. Often the resources are prob-lematic because we will be including network, database, and applicationexperts, and perhaps others.3.10 Failure Mode and Effects Analysis Learning objectives Recall of content onlyThe concepts in this section apply primarily for test managers. There are nolearning objectives defined for technical test analysts in this section. In thecourse of studying for the exam, read this section in chapter 3 of the Advancedsyllabus for general recall and familiarity only.
  • 119. 98 3 Test Management 3.11 Test Management Issues Learning objectives Recall of content only The concepts in this section apply primarily for test managers. There are no learning objectives defined for technical test analysts in this section. In the course of studying for the exam, read this section in chapter 3 of the Advanced syllabus for general recall and familiarity only. 3.12 Sample Exam Questions To end each chapter, you can try one or more sample exam questions to rein- force your knowledge and understanding of the material and to prepare for the ISTQB Advanced Level Technical Test Analyst exam. The questions in this section illustrate what is called a scenario question. Scenario Assume you are testing a computer-controlled braking system for an auto- mobile. This system includes the possibility of remote activation to initiate a gradual braking followed by disabling of the motor upon a full stop if the owner or the police report that the automobile is stolen or otherwise being illegally operated. Project documents and the product marketing collateral refer to this feature as emergency vehicle control override. The marketing team is heavily pro- moting this feature in advertisements as a major innovation for an automobile at this price. Consider the following two statements: I. Testing will cover the possibility of the failure of the emergency vehicle control override feature to engage properly and also the possibility of the emergency vehicle control override engaging without proper autho- rization. II. The reliability tests will include sending a large volume of invalid com- mands to the emergency vehicle control override system. Ten percent of these invalid commands will consist of deliberately engineered
  • 120. 3.12 Sample Exam Questions 99 invalid commands that cover all invalid equivalence partitions and/or boundary values that apply to command fields; 10 percent of these invalid commands will consist of deliberately engineered invalid com- mands that cover all pairs of command sequences, both valid and invalid; the remaining invalid commands will be random corruptions of valid commands rendered invalid by the failure to match the check- sum.1. If the project follows the IEEE 829 documentation standard, which of the following statements about the IEEE 829 templates could be correct? A I belongs in the Test Design Specification, while II belongs in the Test Plan B I belongs in the Test Plan, while II belongs in the Test Design Specification C I belongs in the Test Case Specification, while II belongs in the Test Plan D I belongs in the Test Design Specification, while II belongs in the Test Item Transmittal Report2. If the project is following a risk-based testing strategy, which of the follow- ing is a quality risk item that would result in the kind of testing specified in statements I and II above? A The emergency vehicle control override system fails to accept valid commands. B The emergency vehicle control override system is too difficult to install. C The emergency vehicle control override system accepts invalid commands. D The emergency vehicle control override system alerts unauthorized drivers.3. Assume that each quality risk item is assessed for likelihood and impact to determine the extent of testing to be performed. Consider only the infor- mation in the scenario, in questions 1 and 2 above, and in your answers to
  • 121. 100 3 Test Management those questions. Which of the following statements is supported by this information? A The likelihood and impact are both high. B The likelihood is high. C The impact is high. D No conclusion can be reached about likelihood or impact.
  • 122. 1014 Test Techniques “Let’s just say I was testing the bounds of reality. I was curious to see what would happen. That’s all it was: curiosity.” Jim Morrison of the Doors, an early boundary analystThe fourth chapter of the Advanced syllabus is concerned with test techniques.To make the material manageable, the syllabus uses a taxonomy—a hierarchicalclassification scheme—to divide the material into sections and subsections.Conveniently for us, it uses the same taxonomy of test techniques given in theFoundation syllabus, namely specification-based, structure-based, and experi-ence-based, adding additional techniques and further explanation in each cate-gory. In addition, the Advanced syllabus also discusses both static and dynamicanalysis.1This chapter contains six sections.1. Introduction2. Specification-Based3. Structure-Based4. Defect- and Experience-Based5. Static Analysis6. Dynamic AnalysisWe’ll look at each section and how it relates to test analysis.1. You might notice a slight change from the organization of the Foundation syllabus here.The Foundation syllabus covers static analysis in the material on static techniques in chapter 3.The Advanced syllabus covers reviews (one form of static techniques) in chapter 6. The Advancedsyllabus covers static analysis (another form of static techniques) and dynamic analysis in chapter4 and, in terms of tools, in chapter 9.
4.1 Introduction

Learning objectives
Recall of content only

In this chapter and the next two chapters, we cover a number of test design techniques. Let's start by putting some structure around these techniques, and then we'll tell you which ones are in scope for the test analyst and which are in scope for the technical test analyst.

[Figure 4-1 depicts a tree. Testing splits into static and dynamic tests. Static tests split into review and static analysis; dynamic tests split into black-box (with functional and non-functional subtypes), white-box, experience-based, defect-based, and dynamic analysis. Below each technique are one or two boxes labeled ATA and/or ATTA, showing the role or roles to which the technique is commonly assigned.]

Figure 4–1 A taxonomy for Advanced syllabus test techniques

Figure 4-1 shows the highest-level breakdown of testing techniques as the distinction between static and dynamic tests. Static tests, you'll remember from the Foundation syllabus, are those that do not involve running (or executing) the test object itself, while dynamic tests are those that do involve running the test object.

Static tests are broken down into reviews and static analysis. A review is any method where the human being is the primary defect finder and scrutinizer of the item under test, while static analysis relies on a tool as the primary defect finder and scrutinizer.

Dynamic tests are broken down into five main types in the Advanced level:

■ Black-box (also referred to as specification-based or behavioral), where we test based on the way the system is supposed to work. Black-box tests can
  • 124. 4.1 Introduction 103 further be broken down in two main subtypes, functional and non- functional, following the ISO 9126 standard. The easy way to distinguish between functional and non-functional is that functional is testing what the system does and non-functional is testing how or how well the system does what it does.■ White-box (also referred to as structural), where we test based on the way the system is built.■ Experience-based, where we test based on our skills and intuition, along with our experience with similar applications or technologies.■ Defect-based, where we use our understanding of the type of defect targeted by a test as the basis for test design, with tests derived systemati- cally from what is known about the defect.■ Dynamic analysis, where we analyze an application while it is running, usually via some kind of instrumentation in the code.There’s always a temptation, when presented with a taxonomy—a hierarchicalclassification scheme—like this one, to try to put things into neat, exclusive cat-egories. If you’re familiar with biology, you’ll remember that every living being,from herpes virus to trees to chimpanzees, has a unique Latin name based onwhere in the big taxonomy of life it fits. However, when dealing with testing,remember that these techniques are complementary. You should use whicheverand as many as are appropriate for any given test activity, whatever level of test-ing you are doing. Now, in figure 4-1, below each bubble that shows the lowest-level break-down of test techniques, you see one or two boxes. Boxes with ATA in them rep-resents test techniques covered by the book Advanced Software Testing Vol. 1.Boxes with ATTA in them represent test techniques covered by this book. Thefact that most techniques are covered by both books does not, however, meanthat the coverage is the same. You should also keep in mind that this assignment of types of techniquesinto particular roles is based on common usage, which might not correspond toyour own organization. You might have the title of test analyst and be responsi-ble for dynamic analysis. You might have the title of technical test analyst andnot be responsible for static analysis at all.
  • 125. 104 4 Test Techniques Okay, so that gives you an idea of where we’re headed, basically for the rest of this chapter. Figure 4-1 gives an overview of the heart of this book, probably 65 percent of the material covered. In this chapter, we cover dynamic testing, except for non-functional tests, which are covered in the next chapter. A subse- quent chapter covers static testing. You might want to make a copy of figure 4-1 and have it available as you go through this chapter and the next two, as a way of orienting yourself to where we are in the book. 4.2 Specification-Based Learning objectives (K2) List examples of typical defects to be identified by each specific specification-based technique. (K3) Write test cases from a given software model in real life using the following test design techniques (the tests shall achieve a given model coverage): – Equivalence partitioning – Boundary value analysis – Decision tables – State transition testing (K4) Analyze a system, or its requirement specification, in order to determine which specification-based techniques to apply for specific objectives, and outline a test specification based on IEEE 829, focusing on component and non-functional test cases and test procedures. Let’s start with a broad overview of specification-based tests before we dive into the details of each technique. In specification-based testing, we are going to derive and select tests by ana- lyzing the test basis. Remember that the test basis is—or, more accurately, the test bases are—the documents that tell us, directly or indirectly, how the com- ponent or system under test should and shouldn’t behave, what it is required to do, and how it is required to do it. These are the documents we can base the tests on. Hence the name, test basis. Note also that, while the ISTQB glossary
  • 126. 4.2 Specification-Based 105 ISTQB Glossary black-box testing: Testing, either functional or non-functional, without refer- ence to the internal structure of the component or system. specification: A document that specifies, ideally in a complete, precise and ver- ifiable manner, the requirements, design, behavior, or other characteristics of a component or system and, often, the procedures for determining whether these provisions have been satisfied. specification-based technique (or black-box test design technique): Proce- dure to derive and/or select test cases based on an analysis of the specifica- tion, either functional or non-functional, of a component or system without reference to its internal structure. test basis: All documents from which the requirements of a component or sys- tem can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. black-box testing specification-based tech- nique test basisdefines the test basis to consist of documents, it is not uncommon to base testson conversations with users, business analysts, architects, and other stake-holders or on reviews of competitors’ systems. It’s probably worth a quick compare-and-contrast with the term test oracle,which is similar and related but not the same as the test basis. The test oracle isanything we can use to determine expected results that we can compare with theactual results of the component or system under test. Anything that can serve asa test basis can also be a test oracle, of course. However, an oracle can also be anexisting system, either a legacy system being replaced or a competitor’s system.An oracle can also be someone’s specialized knowledge. An oracle should not bethe code because otherwise we are only testing whether the compiler works. For structural tests—which are covered in the next section—the internalstructure of the system is the test basis but not the test oracle. However, forspecification-based tests, we do not consider the internal structure at all—atleast theoretically. Beyond being focused on behavior rather than structure, what’s common inspecification-based test techniques? Well, for one thing, there is some model,whether formal or informal. The model can be a graph or a table. For each
  • 127. 106 4 Test Techniques ISTQB Glossary test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), other software, a user manual, or an individual s specialized knowledge, but it should not be the code. model, there is some systematic way to derive or create tests using it. And, typi- cally, each technique has a set of coverage criteria that tell you, in effect, when test oracle the model has run out of interesting and useful test ideas. It’s important to remember that fulfilling coverage criteria for a particular test design technique does not mean that your tests are in any way complete or exhaustive. Instead, it means that the model has run out of useful tests to suggest based on that tech- nique. There is also typically some family of defects that the technique is particu- larly good at finding. Boris Beizer, in his books on test design, referred to this as the “bug hypothesis”. He meant that, if you hypothesize that a particular kind of bug is likely to exist, you could then select the technique based on that hypothe- sis. This provides an interesting linkage with the concept of defect-based test- ing, which we’ll cover in a later section of this chapter.2 Often, specification-based tests are requirements based. Requirements specifications often describe behavior, especially functional behavior. (The ten- dency to underspecify non-functional requirements creates a set of quality- related problems that we’ll not address at this point.) So, we can use the descrip- tion of the system behavior in the requirements to create the models. We can then derive tests from the models. 2. Boris Beizer’s books on this topic would include Black-Box Testing (perhaps most pertinent to test analysts), Software System Testing and Quality Assurance (good for test analysts, technical test analysts, and test managers), and Software Test Techniques (perhaps most pertinent to technical test analysts).
  • 128. 4.2 Specification-Based 107 ISTQB Glossary requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from require- ments, e.g., tests that exercise specific functions or probe non-functional attributes such as reliability or usability.4.2.1 Equivalence PartitioningWe start with the most basic of specification-based test design techniques,equivalence partitioning. Conceptually, equivalence partitioning involves test-ing various groups that are expected to be handled the same way by the systemand to exhibit similar behavior. The underlying model is a graphical or mathematical one that identifiesequivalent classes—which are also called equivalence partitions—of inputs, out-puts, internal values, time relationships, calculations, or just about anything elseof interest. These classes or partitions are called equivalent because they arelikely to be handled the same way by the system. Some of the classes can becalled valid equivalence classes because they describe valid situations that thesystem should handle normally. Other classes can be called invalid equivalenceclasses because they describe invalid situations that the system should reject orat least escalate to the user for correction or exception handling. Once we’ve identified the equivalence classes, we can derive tests from them.Usually, we are working with more than one set of equivalence classes at one time;for example, each input field on a screen has its own set of valid and invalid equiv-alence classes. So, we can create one set of valid tests by selecting one valid mem-ber from each equivalence partition. We continue this process until each validclass for each equivalence partition is represented in at least one valid test. Next, we can create negative tests (using invalid data). In most applications,for each equivalence partition, we select one invalid class for one equivalencepartition and a valid class for every other equivalence partition. This rule—don’t combine multiple invalid equivalent classes in a single test—prevents usfrom running into a situation where the presence of one invalid value mightmask the incorrect handling of another invalid value. We continue this processuntil each invalid class for each equivalence partition is represented in at leastone invalid test.
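To make these combination rules concrete, here is a minimal sketch in C; the two fields and their representative values are invented purely for illustration and are not taken from any particular application. The first loop keeps pairing valid members until every valid partition of both fields has appeared in at least one test, and the remaining loops build the negative tests, each containing exactly one invalid member so that one rejection cannot mask another.

#include <stdio.h>

/* Representative members of the valid and invalid equivalence partitions
   for two hypothetical input fields. */
static const char *product_valid[]   = { "loan", "line-of-credit" };
static const char *product_invalid[] = { "unknown-product" };
static const char *amount_valid[]    = { "25000", "100000", "500000" };
static const char *amount_invalid[]  = { "-1", "abc" };

#define COUNT(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    size_t i, n = 1;

    /* Valid tests: pair valid members until every valid partition of
       every field is represented at least once. */
    size_t valid_tests =
        COUNT(product_valid) > COUNT(amount_valid) ? COUNT(product_valid)
                                                   : COUNT(amount_valid);
    for (i = 0; i < valid_tests; i++, n++)
        printf("TC%zu (valid):   product=%s amount=%s\n", n,
               product_valid[i % COUNT(product_valid)],
               amount_valid[i % COUNT(amount_valid)]);

    /* Invalid tests: exactly one invalid member per test, valid members
       everywhere else, so one failure cannot mask another. */
    for (i = 0; i < COUNT(product_invalid); i++, n++)
        printf("TC%zu (invalid): product=%s amount=%s\n", n,
               product_invalid[i], amount_valid[0]);
    for (i = 0; i < COUNT(amount_invalid); i++, n++)
        printf("TC%zu (invalid): product=%s amount=%s\n", n,
               product_valid[0], amount_invalid[i]);
    return 0;
}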
  • 129. 108 4 Test Techniques ISTQB Glossary equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification. equivalence partitioning: A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once. However, some applications (particularly web-based applications) are now designed so that, when processing an input screen, they evaluate all of the inputs before responding to the user. With these screens, multiple invalid values are bundled up and returned to the user all at the same time, often by highlight- ing erroneous values or the controls that contain incorrect values. Careful selec- tion of data is still required, even though multiple invalid values may be tested concurrently if we can check that the input validation code acts independently on each field. Notice the coverage criterion implicit in the preceding discussion. Every class member, both valid and invalid, is represented in at least one test case. What is our bug hypothesis with this technique? For the most part, we are looking for a situation where some equivalence class is handled improperly. That could mean the value is accepted when it should have been rejected or vice versa, or that a value is properly accepted or rejected but handled in a way appropriate to another equivalence class, not the class to which it actually belongs. You might find this concept a bit confusing verbally, so let’s try some figures. Figure 4-2 shows a way that we can visualize equivalence partitioning. As you can see in the top half of the figure 4-2, we start with some set of interest. This set of interest can be an input field, an output field, a test precon- dition or postcondition, a configuration, or just about anything we’re interested in testing. The key is that we can apply the operation of equivalence partitioning to split the set into two or more disjoint subsets, where all the members of each subset share some trait in common that was not shared with the members of the other subset.
[Figure 4-2 shows a set being split by equivalence partitioning into two disjoint subsets, A and B (top half), and then one test case value being selected from each of the two subsets (bottom half).]

Figure 4–2 Visualizing equivalence partitioning

For example, if you have a simple drawing program that can fill figures with red, green, or blue, you can split the set of fill colors into three disjoint sets: red, green, and blue.

In the bottom half of figure 4-2, we see the selection of test case values from the subsets. The dots in the subsets represent the value chosen from each subset to be represented in the test case. This involves selecting at least one member from each subset. In pure equivalence partitioning, the logic behind the specific selection is outside the scope of the technique. In other words, you can select any member of the subset you please. If you're thinking, "Some members are better than others," that's fine; hold that thought for a few minutes and we'll come back to it.

Now, at this point we'll generate the rest of the test case. If the set that we partitioned was an input field, we might refer to the requirements specification to understand how each subset is supposed to be handled. If the set that we partitioned was an output field, we might refer to the requirements to derive inputs
that should cause that output to occur. We might use other test techniques to design the rest of the test cases.

[Figure 4-3 shows a set first partitioned into subsets A and B, with subset A then partitioned again into subsets A1, A2, and A3; test case values are then selected from subsets B, A1, A2, and A3.]

Figure 4–3 Subpartitioning an equivalence class

Figure 4-3 shows that equivalence partitioning can be applied iteratively. In this figure, we apply a second round of equivalence partitioning to one of the subsets to generate three smaller subsets. Only at that point do we select four members—one from subset B and one each from subsets A1, A2, and A3—for test cases. Note that we don't have to select a member from subset A, since each member of subsets A1, A2, and A3 is also a member of subset A.

Avoiding Equivalence Partitioning Errors

While equivalence partitioning is fairly straightforward, people do make some common errors when applying it. Let's look at these errors so you can avoid them.

First, as shown in the top half of figure 4-4, the subsets must be disjoint. That is, no two of the subsets can have one or more members in common. The whole point of equivalence partitioning is to test whether a system handles different situations differently (and properly, of course). If it's ambiguous as to which handling is proper, then how do we define a test case around this? Try it out and see what happens? Not much of a test!
[Figure 4-4 illustrates two common errors. In the top half, a set is partitioned into subsets A and B that overlap; in the bottom half, a set is partitioned into subset A, subset B, and an empty subset.]

Figure 4–4 Common equivalence partitioning errors

Second, as shown in the bottom half of figure 4-4, none of the subsets may be empty. That is, if the equivalence partitioning operation produces a subset with no members, that's hardly very useful for testing. We can't select a member of that subset because it has no members.

Third, while not shown graphically—in part because we couldn't figure out a clear way to draw the picture—note that the equivalence partitioning process does not subtract, it divides. What we mean by this is that, in terms of mathematical set theory, the union of the subsets produced by equivalence partitioning must be the same as the original set that was equivalence partitioned. In other words, equivalence partitioning does not generate "spare" subsets that are somehow disposed of in the process—at least, not if we do it properly.

Notice that this is important because, if this is not true, then we stand the chance of failing to test some important subset of inputs, outputs, configurations, or some other factor of interest that somehow was dropped in the test design process.

Composing Test Cases with Equivalence Partitioning

When we compose test cases in situations where we've performed equivalence partitioning on more than one set, we select from each subset as shown in figure 4-5.
[Figure 4-5 shows set X partitioned into subsets X1 and X2 and set Y partitioned into subsets Y1, Y2, and Y3; the values selected from these five subsets are composed into three test cases, TC1 through TC3.]

Figure 4–5 Composing test cases with valid values

Here, we start with set X and set Y. We partition set X into two subsets, X1 and X2. We partition set Y into three subsets, Y1, Y2, and Y3. We select test case values from each of the five subsets, X1 and X2 on the one hand, Y1, Y2, and Y3 on the other. We then compose three test cases since we can combine the values from the X subsets with values from the Y subsets (assuming the values are independent and all valid).

For example, imagine you are testing a browser-based application. You are interested in two browsers: Internet Explorer and Firefox. You are interested in three connection types: dial-up, DSL, and cable modem. Since the browser and the connection types are independent, we can create three test cases. In each of the test cases, one of the connection types and one of the browser types will be represented. One of the browser types will be represented twice.

When discussing figure 4-5, we made a brief comment that we can combine values across the equivalence partitions when the values are independent and all valid. Of course, that's not always the case.

In some cases, values are not independent, in that the selection of one value from one subset constrains the choice in another subset. For example, imagine if we are trying to test combinations of applications and operating systems. We can't test an application running on an operating system if there is not a version of that application available for that operating system.

In some cases, values are not valid. For example, in figure 4-6, imagine that we are testing a project management application, something like Microsoft Project. Suppose that set X is the type of event we're dealing with, which can be
either a task (X1) or a milestone (X2). Suppose that set Y is the start date of the event, which can be in the past (Y1), today (Y2), or in the future (Y3). Suppose that set Z is the end date of the event, which can be either on or after the start date (Z1) or before the start date (Z2). Of course, Z2 is invalid, since no event can have a negative duration.

[Figure 4-6 shows sets X, Y, and Z partitioned as just described. Values from the valid subsets X1, X2, Y1, Y2, Y3, and Z1 are composed into test cases TC1 through TC3, while the invalid subset Z2 is combined with valid values from the other sets in test case TC4.]

Figure 4–6 Composing tests with invalid values

So, we test combinations of tasks and milestones with past, present, and future start dates and valid end dates in test cases TC1, TC2, and TC3. In TC4, we check that illegal end dates are correctly rejected. We try to enter a task with a start date in the past and an end date prior to the start date. If we wanted to subpartition the invalid situation, we could also test with a start date in the present and one in the future together with an end date before the start date, which would add two more test cases. One would test a start date in the present and an end date prior to the start date. The other would test a start date in the future and an end date prior to the start date.

In this particular example, we had a single subset for just one of the sets that was invalid. The more general case is that many—perhaps all—of the fields will have invalid subsets. Imagine testing an e-commerce application. On the checkout screens of the typical e-commerce application, there are multiple required fields, and usually there is at least one way for such a field to be invalid.

When there are multiple invalid values, there are two possible ways of testing them, depending on how the error handling is designed in the application.
  • 135. 114 4 Test Techniques Historically, most error handling is done by parsing through the values and immediately stopping when the first error is found. Some newer applications parse through the entire set of values first, and then compile a list of errors found. Whichever error handler is used, we can always run test cases with a single invalid value entered. That way, we can check that every invalid value is correctly rejected or otherwise handled by the system. Of course, the number of test cases required by this method will equal the number of invalid parti- tions. When we have an error handler that aggregates all of the errors in one pass, we could test it by submitting multiple invalid values and thence reduce the total number of test cases. In a safety-critical or mission-critical system, you might want to test combi- nations of invalid values even if the parser stops on the first error. Just remem- ber, anytime you start down the trail of combinatorial testing, you are taking a chance that you’ll spend a lot of time testing things that aren’t terribly impor- tant. Consider a simple example of equivalence partitioning for system configu- ration. On the Internet appliance project we’ve mentioned, there were four pos- sible configurations for the appliances. They could be configured for kids, teens, adults, or seniors. This configuration value was stored in a database on the Internet service provider’s server so that, when an Internet appliance is con- nected to the Internet, this configuration value became a property of its connec- tion. Based on this configuration value, there were two key areas of difference in the expected behavior. For one thing, for the kids and teens systems, there was a filtering function enabled. This determined the allowed and disallowed websites the system could surf to. The setting was most strict for kids and somewhat less strict for teens. Adults and seniors, of course, were to have no filtering at all and should be able to surf anywhere. For another thing, each of the four configurations had a default set of e-commerce sites they could visit called the mall. These sites were selected by the marketing team and were meant to be age appropriate. Of course, these were the expected differences. We were also aware of the fact that there could be weird unexpected differences that could arise because
  • 136. 4.2 Specification-Based 115that’s the nature of some types of bugs. For example, performance was supposedto be the same, but it’s possible that performance problems with the filteringsoftware could introduce perceptible response-time issues with the kids andteens systems. We had to watch for those kinds of misbehaviors. So, to test for the unexpected differences, we simply had at least one of eachconfiguration in the test lab at all times and spread the nonconfiguration-spe-cific tests more or less randomly across the different configurations. To testexpected differences related to filtering and e-commerce, we made sure theseconfiguration-specific tests were run on the correct configuration. The chal-lenge here was that, while the expected results were concrete for the mall—themarketing people gave us four sets of specific sites, one for each configuration—the expected results for the filtering were not concrete but rather logical. Thisled to an enormous number of disturbing defect reports during test execution aswe found creative ways to sneak around the filters and access age-inappropriatematerials on the kids and teens configurations. Equivalence Partitioning ExerciseA screen prototype for one screen of the HELLOCARMS system is shown infigure 4-7. This screen asks for three pieces of information:■ The product being applied for, which is one of the following: – Home equity loan – Home equity line of credit – Reverse mortgage■ Whether someone has an existing Globobank checking account, which is either Yes or No■ Whether someone has an existing Globobank savings account, which is either Yes or NoIf the user indicates an existing Globobank account, then the user must enterthe corresponding account number. This number is validated against the bank’scentral database upon entry. If the user indicates no such account, the user mustleave the corresponding account number field blank. If the fields are valid, including the account number fields, then thescreen will be accepted. If one or more fields are invalid, an error message isdisplayed.
[Figure 4-7 shows the screen prototype. It contains three entries: Apply for Product?, a pull-down list offering Home equity loan, Home equity line of credit, and Reverse mortgage; Existing Checking?, a Yes/No selection with an account number to be entered if Yes; and Existing Savings?, a Yes/No selection with an account number to be entered if Yes.]

Figure 4–7 HELLOCARMS system product screen prototype

The exercise consists of two parts:

1. Show the equivalence partitions for each of the three pieces of information, indicating valid and invalid members.
2. Create test cases to cover these partitions, keeping in mind the rules about combinations of valid and invalid members.

The answers to the two parts are shown on the next pages. You should review the answer to the first part (and, if necessary, revise your answer to the second part) before reviewing the answer to the second part.

Equivalence Partitioning Exercise Debrief

First, let's take a look at the equivalence partitions. For the application-product field, the equivalence partitions are as follows:

Table 4–1
#  Partition
1  Home equity loan
2  Home equity line of credit
3  Reverse mortgage
  • 138. 4.2 Specification-Based 117Note that the screen prototype shows this information as selected from a pull-down list, so there is no possibility of entering an invalid product here. Somepeople do include an invalid product test, but we have not. For each of two existing-account entries, the situation is best modeled as asingle input field, which consists of two subfields. The first subfield is the Yes/No field. This subfield determines the rule for checking the second subfield,which is the account number. If the first subfield is Yes, the second subfieldmust be a valid account number. If the first subfield is No, the second subfieldmust be blank.So, the existing checking account information partitions are as follows:Table 4–2 # Partition 1 Yes-Valid 2 Yes-Invalid 3 No-Blank 4 No-NonblankAnd, the existing savings account information partitions are as follows:Table 4–3 # Partition 1 Yes-Valid 2 Yes-Invalid 3 No-Blank 4 No-NonblankNote that, for both of these, partitions 2 and 4 are invalid partitions, while parti-tions 1 and 3 are valid partitions. Just as a side note, we could simplify the interface to eliminate some errorhandling code, testing, and possible end-user errors. This simplification couldbe done by designing the interface such that selecting No for either existingchecking or savings accounts automatically deleted any value in the respec-tive account number text field and then hid that field. In both table 4-2 andtable 4-3, this would have the effect of eliminating partition 4. The userwould not be allowed to make the mistakes, therefore the code would nothave to handle the exceptions, and therefore the tester would not need to test
  • 139. 118 4 Test Techniques them. A win-win-win situation. Often, clever design of the interface can increase the testability of an application in this way. Now, let’s create tests from these equivalence partitions. As we do so, we’re going to capture traceability information from the test case number back to the partitions. Once we have a trace from each partition to a test case, we’re done— provided that we’re careful to follow the rules about combining valid and invalid partitions! Table 4–4 Inputs 1 2 3 4 5 6 7 Product HEL LOC RM HEL LOC RM HEL Existing Checking? Yes No No Yes No No No Checking Account Valid Blank Blank Invalid Nonblank Blank Blank Existing Savings? No Yes No No No Yes No Savings Account Blank Valid Blank Blank Blank Invalid Nonblank Outputs Accept? Yes Yes Yes No No No No Error? No No No Yes Yes Yes Yes Table 4–5 # Partition Test Case 1 Home equity loan (HEL) 1 2 Home equity line of credit (LOC) 2 3 Reverse mortgage (RM) 3 Table 4–6 # Partition Test Case 1 Yes-Valid 1 2 Yes-Invalid 4 3 No-Blank 2 4 No-Nonblank 5 Table 4–7 # Partition Test Case 1 Yes-Valid 2 2 Yes-Invalid 6 3 No-Blank 1 4 No-Nonblank 7
  • 140. 4.2 Specification-Based 119 ISTQB Glossary boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, such as, for example, the minimum or maximum value of a range. boundary value analysis: A black-box test design technique in which test cases are designed based on boundary values.You should notice that these test cases do not cover all interesting possiblecombinations of factors here. For example, we don’t test to make sure a per-son with both a valid savings and valid checking account work properly. Thatcould be an interesting test because the accounts might have been establishedat different times and might have information that now conflicts in some way;e.g., in some countries it is still relatively common for a woman to take herhusband’s last name upon marriage. We also don’t test the combination ofinvalid accounts or the combination of account numbers that are valid alonebut not valid together; e.g., the two accounts belong to entirely differentpeople.4.2.2 Boundary Value AnalysisLet’s refine our equivalence partitioning test design technique with the nexttechnique, boundary value analysis. Conceptually, boundary value analysis ispredominately about testing the edges of equivalence classes. In other words,instead of selecting one member of the class, we select the largest and smallestmembers of the class and test them. We will discuss some other options forboundary analysis later. The underlying model is again either a graphical or mathematical one thatidentifies two boundary values at the boundary between one equivalence classand another. (In some techniques, the model identifies three boundary values,which we’ll discuss later.) Whether such a boundary exists for subsets wherewe’ve performed equivalence partitioning is another question that we’ll get to injust a moment. Right now, assuming the boundary does exist, notice that theboundary values are special members of the equivalence classes that happen tobe right next to each other and right next to the point where the expectedbehavior of the system changes. If the boundary values are members of a valid
  • 141. 120 4 Test Techniques equivalence class, they are valid, of course, but if members of an invalid equiva- lence class, they are invalid. Deriving tests with boundary values as the equivalence class members is much the same as for plain equivalence classes. We test valid boundary values together and then combine one invalid boundary value with other valid bound- ary values (unless the application being tested aggregates errors). We have to represent each boundary value in a test case, analogous to the equivalence partitioning situation. In other words, the coverage criterion is that every boundary value, both valid and invalid, must be represented in at least one test. The main difference is that there are at least two boundary values in each equivalence class. Therefore, we’ll have more test cases, about twice as many. More test cases? That’s not something we like unless there’s a good reason. What is the point of these extra test cases? That is revealed by the bug hypothe- sis for boundary value analysis. Since we are testing equivalence class mem- bers—every boundary value is an equivalence class member—we are testing for situations where some equivalence class is handled improperly. Again, that could mean acceptance of values that should be rejected, or vice versa, or proper acceptance or rejection but improper handling subsequently, as if the value were in another equivalence class, not the class to which it actually belongs. However, by testing boundary values, we also test whether the boundary between equiva- lence classes is defined in the right place. So, do all equivalence classes have boundary values? No, definitely not. Boundary value analysis is an extension of equivalence partitioning that applies only when the members of an equivalence class are ordered. So, what does that mean? Well, an ordered set is one where we can say that one member is greater than or less than some other member if those two mem- bers are not the same. We have to be able to say this meaningfully too. Just because some item is right above or below some other item on a pull-down menu does not mean that, within the program, the two items have a greater- than/less-than relationship. Examples of Equivalence Partitioning and Boundary Values Let’s look at two examples where a technical test analyst can (and can’t) use boundary value analysis on equivalence classes: enumerations and arrays.
  • 142. 4.2 Specification-Based 121 Many languages allow a programmer to define a set of named values thatcan be used as a substitute for constant values. These are called enumerations. As an example, assume that a database stored colors as integers to savespace. Red equals 0, blue equals 1, green equals 2. When the programmerwanted to set an object to red, he would have to use the value 0. These arbitrary values are often called magic numbers because they have a(somewhat) hidden meaning, often known only by the programmer. A pro-gramming language that allowed enumeration could allow the programmer tocreate a mapping of the magic numbers to a textual value, allowing the pro-grammer to write code that is self-documenting. Instead of writing ColorGun = 0;, the programmer can write ColorGun =Red. In figure 4-8, an example enumeration is made in the C language, definingthe five different colors. enum Colors { Red, Blue, Green, Yellow, Cyan };Figure 4–8In some languages, the programmer is given special functions to allow easiermanipulation of the enumerated values. For example, Pascal has the functionspred and succ to traverse the values of an enumeration. Of course, trying to“pred” the first or “succ” the last is going to fail, sometimes with unknownsymptoms. Hence boundaries that can be tested: both valid and invalid. The compiler usually manages the access to the values in the enumeration.We should be able to assume (we hope) that if we can access the first and lastelements of an enumeration, we should be able to access them all. If the lan-guage supports functions that walk the enumerated list, then we should testboth valid and invalid boundaries using the list. In some languages (e.g., C), the order of enumeration may be set by thecompiler but the rules used are specific to each compiler and therefore not con-sistent for testing. Knowing the rules for the programming language and com-piler being used is essential.
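A small C sketch shows why these boundaries matter; it is our own illustration rather than code from any particular project. C performs no range checking on enumerated values, so the interesting tests are the first and last enumerators (the valid boundaries) and the values just outside them (the invalid boundaries), whose handling is left entirely to the code and is therefore compiler and programmer dependent.

#include <stdio.h>

enum Colors { Red, Blue, Green, Yellow, Cyan };   /* Red == 0 ... Cyan == 4 */

static const char *color_name(enum Colors c)
{
    switch (c) {
    case Red:    return "Red";
    case Blue:   return "Blue";
    case Green:  return "Green";
    case Yellow: return "Yellow";
    case Cyan:   return "Cyan";
    default:     return "out of range";   /* reached only by invalid values */
    }
}

int main(void)
{
    /* Valid boundaries: the first and last enumerators. */
    printf("%s %s\n", color_name(Red), color_name(Cyan));

    /* Invalid boundaries: C silently accepts values just outside the
       enumeration, so any defect shows up only at run time, if at all. */
    printf("%s %s\n", color_name((enum Colors)(Red - 1)),
                      color_name((enum Colors)(Cyan + 1)));
    return 0;
}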
  • 143. 122 4 Test Techniques Arrays are data structures allowing a programmer to store homogenous data and access it via index rather than by name. The data must be homoge- nous, which means of a single type. In most languages, arrays can consist of ele- ments of programmer-defined types. A defined type may contain any number of different internal elements (including a mix of different data types) but each element in the array must be of that type, whether that type is built in or pro- grammer defined. Some programming languages allow only static arrays3 which are never allowed to change in number of elements. Other programming languages allow arrays to be dynamically created (memory for the array is allocated off the heap at runtime). These types of arrays can often be reallocated to allow resizing them on the fly. That can make testing them interesting. Even more interesting is testing what happens after an array is resized. Then, we get issues as to whether pointers to the old array are still accessible, whether data get moved correctly (or lost if the array is resized downward). For languages that do bounds checking of arrays, where an attempt to access an item before or after the array is automatically prevented, testing of boundaries may be somewhat less important. However, other issues may be problematic. For example, if the system throws an exception while resizing an array, what should happen? Many languages, however, are more permissive when it comes to array checking. They allow the programmer the freedom to make mistakes. And, very often, those mistakes often come at the boundaries. In some operating and runtime systems, an attempt to access memory beyond the bounds of the array will immediately result in a runtime failure that is immediately visible. But not all. A common problem occurs when the run- time system allows a write action to memory not owned by the same process or thread that owns the array. The data being held in that location beyond the array are overwritten. When the actual owning process subsequently tries to use the overwritten memory, the failure may then become evident (i.e., total failure or freeze up) or the memory’s contents might be used in a calculation (resulting in the wrong answer), or it might be stored in a file or database, corrupting the data store silently. None of those are attractive results; thus worthy of testing. 3. Those created and sized at compile time
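The following sketch, again our own illustration rather than production code, shows the kind of off-by-one defect described above. Writing to the index one past the end of the array is undefined behavior in C; with a typical compiler and no instrumentation, the write is accepted silently, a neighboring field is corrupted, and the failure becomes visible only later, when the corrupted value is used. A dynamic analysis tool, discussed shortly, would instead flag the very first out-of-bounds write.

#include <stdio.h>

int main(void)
{
    /* A 4-element array with an unrelated field declared next to it.
       The exact memory layout is compiler dependent, but C itself does
       no bounds checking. */
    struct {
        int scores[4];
        int unrelated_counter;
    } data = { { 0, 0, 0, 0 }, 42 };

    /* Classic off-by-one loop: index 4 is one past the last valid index
       (3). The out-of-bounds write is accepted silently and corrupts
       whatever happens to sit after the array. */
    for (int i = 0; i <= 4; i++)
        data.scores[i] = 100;

    /* The failure shows up here, far away from the defect. */
    printf("unrelated_counter = %d\n", data.unrelated_counter);
    return 0;
}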
  • 144. 4.2 Specification-Based 123There are also security issues to this array overwrite that we will discuss in alater chapter. Multidimensional arrays, where an item is accessed through two or moreindices, can be arbitrarily complex, making all of the issues mentioned aboveeven worse. Dynamic analysis tools can be useful when testing arrays; they can catch thefailure at the first touch beyond the bounds of the array rather than waiting forthe symptom of the failure to show. We will discuss dynamic analysis tools laterin this chapter and in chapter Non-functional BoundariesBoundaries are sometimes simple to find. For example, when dealing with sim-ple data types like integers, where the values are ordered, there is no need toguess. The boundaries are self-evident. Other boundaries may be somewhatmurky as you will see when we discuss integer and floating point data types. And then there are times when a boundary is only implicitly defined andthe tester must settle on ways to test it. Consider the following requirement: The maximum file size is 16 MB.Exactly how many bytes is 16 MB? How do we create a 16 MB file to test themaximum? How do we test the invalid high boundary? The answer to the firstquestion as to the number of bytes depends. What is the operating system?What is the file system under use: FAT32, NTFS? What are the sector and clus-ter sizes on the hard disk? Testing the exact valid high boundary on a 16 MB fileis very difficult, as is testing the invalid high boundary. When we are testingsuch a system—assuming it is not a critical or safety-related system—we willoften not spend time trying to ascertain the exact size but will test a file as closeto 16 MB as we can get for the valid high boundary and perhaps a 17 MB file forthe invalid high boundary. On the other end of the file size, the invalid low boundary is relatively easyto comprehend: there is none. No file can be minus one (-1) byte. But perhapsthe invalid low must be compared to the valid low size. Is it zero bytes? Somefile systems and operating systems allow zero-sized files, and therefore there isno invalid low boundary size file possible. If the file is a structured file, withdefined header information, the application that created it likely will not allow azero-byte file, and therefore an invalid low boundary is testable.
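As an illustration of how a tester might attack the 16 MB boundary, here is a short C sketch. It assumes that 16 MB means 16 * 2^20 = 16,777,216 bytes, which is exactly the kind of assumption that should be confirmed with the development team; the helper and file names are ours.

#include <stdio.h>

/* Assumption: "16 MB" is interpreted as 16 * 2^20 = 16,777,216 bytes. */
#define MAX_FILE_BYTES (16L * 1024L * 1024L)

/* Create a file of exactly the requested size, filled with a dummy byte. */
static int make_test_file(const char *path, long bytes)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    for (long i = 0; i < bytes; i++)
        fputc('x', f);
    return fclose(f);
}

int main(void)
{
    make_test_file("valid_high.dat",   MAX_FILE_BYTES);       /* valid high boundary   */
    make_test_file("invalid_high.dat", MAX_FILE_BYTES + 1);   /* invalid high boundary */
    return 0;
}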
  • 145. 124 4 Test Techniques In our experience, you should take these types of questions to the develop- ment team if they were not already defined in the design specification. Be aware, however, that often the developers have not given sufficient thought to some of these questions; their answers might be incorrect. Often, experiment- ing is required once the system arrives. As a separate example, assume the following efficiency requirement: The system shall process at least 150 transactions per second. An invalid low boundary may not exist. It would, of course, be impossible to trigger a negative number of transactions; however, if the system is designed such that a minimum number of transactions must flow within a given unit of time, there would be both valid and invalid low boundaries. Defining the mini- mum should be approached as we did the file example. Is a valid high really a boundary though? Clearly we must test 150 transac- tions per second, consistent with good performance testing—we will discuss that later in the book. What would be an invalid high? In a case like this, we would argue that there is no specific high invalid boundary to test; however, good performance testing would continue to ramp up the number of transac- tions until we see an appreciable change in performance. The point of this discussion is that boundaries are really useful for testing; sometimes you just must search a bit to determine what those boundaries are. A Closer Look at Functional Boundaries In the Foundation syllabus, we discussed a few boundary examples in a some- what simplistic way. For an integer where the valid range is 1 to 10 inclusive, the boundaries were discussed as if there were four. These are shown on the number line in figure 4-9 as 0 (invalid low), 1 (valid low), 10 (valid high), and 11 (invalid high). ?? 0 1 10 11 ?? Figure 4–9 Boundaries for integers However, the closer you look, the more complicated it actually gets. Looking at this number line, there appear to be two missing values that we did not discuss: on the far left and far right. In math, we would say that the two missing bound-
aries represent negative infinity on the left, positive infinity on the right. Clearly a computer cannot represent infinity using integers, as that would take an infinite number of bytes. That means there are going to be specific values that are both the largest and the smallest values that the computer can represent. Boundaries! In order to understand how these boundaries are actually represented—and where errors can occur—we need to look at some computer architectural details. Exactly how are numbers stored in memory?

Technical test analysts must understand how different values are represented in memory; we will look at this topic for both integer and floating point numbers.

Integers

In most computer architectures, integers are stored differently than floating point numbers (those numbers that have fractional values). The maximum size of an integer—and hence the boundaries—is (in most cases) defined by two things: the number of bits that are allocated for the number and whether it is signed or unsigned.

In these architectures, an integer is encoded using the binary number system, a combination of ones and zeroes. The width or precision of the number is the number of bits in its representation. A number with n bits can encode 2^n different numeric values. If the number is signed, the left-most bit represents the sign; this has the effect of making approximately half of the possible values positive and the other half negative. For example, assume that 8 bits are used to store the number. The range of the unsigned number is from 0 (0000 0000) to +255 (1111 1111). The range of the signed number is from -128 (1000 0000) to +127 (0111 1111). If those actual bit representations look strange, it is because in most computers, negative numbers are represented as two's complements.

A two's complement number is represented as the value obtained by subtracting the number from a large power of two. To get the two's complement of an 8-bit value, the value (in binary) is subtracted from 2^8. The full math and logic behind using two's complement encoding for negative numbers is beyond the scope of this book. The important concept is that the boundaries can be determined for use in testing.
[Figure 4-10 is a table, not reproduced here, listing several possible whole number widths and the ranges of values each can represent.]

Figure 4–10 Whole number representations

Figure 4-10 shows several possible number sizes and the values that can be represented. Many mainframes (and some databases) use a different encoding scheme called binary-coded decimals (BCDs). Each digit in a represented number takes up 4 bits. The possible maximum length of a number is proportional to the number of bytes allowed by the architecture. So, if the computer allowed 1,000 bytes for a single number, that number could be 2,000 digits long. By writing

4. Wikipedia at
  • 148. 4.2 Specification-Based 127your own handler routines, you could conceivably have numbers of any magni-tude in any programming language. The issue would then become computa-tional speed—how long does it take to perform mathematical calculations. Note that the binary representation is slightly more efficient than the BCDrepresentation as far as the relative size of the number that can be stored. A 128-bit, unsigned value (16 bytes) can hold up to 39 digits, while you would need aBCD of 20 bytes to hold the same size number. While this size differential is nothuge, the speed of processing may become paramount; BCD calculations tendto be slower than calculations for built-in data types because the calculationmust be done digit by digit. Most programming languages have a number of different width wholenumbers that can be selected by the programmer. The typical default value inte-ger in 16-bit computers was 16 bits; in most 32-bit computers the default is32 bits wide. For example, in the Delphi programming language, a short is16 bits, an int is 32 bits, and an int64 is 64 bits. A special library can be used inDelphi to support 128-bit numbers. When the programmer declares a variable,they define the length and whether the value is signed or unsigned. While a pro-grammer could conceivably use the largest possible value for each number tominimize the chance of overflow, that has always been considered a bad pro-gramming technique that wastes space and often slows down computations. In C as it was originally defined for DEC minicomputers, the language onlyguaranteed that an int was at least as long (if not longer) as a short and a longwas at least as long (if not longer) as an int. On the PDP-11, a short was 16 bits,an int 32 bits, and a long also 32 bits. Other architectures gave other lengths, andeven with modern 64-bit CPUs, you might not always know how many bitsyou’ll get with your int. This ambiguity about precision in C has created—andcontinues to create—a lot of bugs and thus is fertile ground for testing. By looking at the definitions of the data variables used, a technical testercan discern the length of the value and thence the boundaries. Since there isalways a defined maxint (the maximum size integer) and minint (the maximummagnitude negative number)—no matter which type variable is used—it is rela-tively straightforward to find and test the boundaries. Assuming an 8-bit,unsigned integer 255 (1111 1111), adding 1 to it would result in an overflow,giving the value 0 (0000 0000). 255 + 1 = 0. Probably not what the programmerintended...
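Because Python integers never overflow on their own, the following sketch (our illustration, with an assumed 8-bit width) masks results down to a fixed width to mimic what a machine integer does; tests designed around maxint, minint, zero, and the values just beyond them exercise exactly these wraparounds.

# Minimal sketch: simulating fixed-width integer wraparound to pick boundary tests.
def wrap_unsigned(value, bits=8):
    # Keep only the low n bits, as fixed-width unsigned hardware arithmetic does.
    return value & (2**bits - 1)

def wrap_signed(value, bits=8):
    # Interpret the low n bits as a two's complement signed number.
    masked = value & (2**bits - 1)
    return masked - 2**bits if masked >= 2**(bits - 1) else masked

if __name__ == "__main__":
    print(wrap_unsigned(255 + 1))    # 0    -> the 255 + 1 = 0 overflow described above
    print(wrap_signed(127 + 1))      # -128 -> signed maxint + 1 wraps to minint
    print(wrap_signed(-128 - 1))     # 127  -> signed minint - 1 wraps to maxint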
Since any value inputted to the system may be used in an expression where a value is added to it, testing with very large and very small numbers based on the representation used by the programmer should be considered a good test technique. Because division and multiplication happen too, be sure to test zero, which is always an interesting boundary value.

Floating Point Numbers

The other types of numbers we need to discuss are floating point numbers (floats). These numbers allow a fractional value: some number of digits before and some number of digits after a decimal point. As before, a technical tester needs to investigate the architecture and the selections made by programmers when testing these types of numbers. These numbers are represented by a specific number of significant digits and scaled by using an exponent. This is the computer equivalent of scientific notation. The base is usually 2 but could be 10 or 8. Like scientific notation, floats are used to represent very large or very small numbers. For example, a light year is approximately 9.460536207 × 10^15 meters. This value is a bit more than even a really large integer can represent. Likewise, an angstrom is approximately 1 × 10^-10 meters, a very tiny number. Just to get the terminology correct, a floating point number consists of a signed digit string of a given length in a given base. The signed string (as shown in the preceding paragraph, 9.460536207) is called the significand (or sometimes the mantissa). The length of the significand determines the precision that the float can represent—that is, the number of significant digits. The decimal point, sometimes called the radix point, is usually understood to be in a specific place—usually to the right of the first digit of the significand. The base is usually set by the architecture (usually 2 or 10). The exponent (also called the characteristic or scale) modifies the magnitude of the number. In the preceding paragraph, in the first example, the exponent is 15. That means that the number is 9.460536207 times 10 to the 15th power. Like integers, floats can also be represented by binary-coded decimals. These are sometimes called fixed-point decimals and are actually indistinguishable from integer BCDs. In other words, the decimal point is not stored in the value; instead, its relative location is understood by the programmer and compiler. For example, assume a 4-byte value: 12 34 56 7C. In fixed-point notation,
  • 150. 4.2 Specification-Based 129this value might represent +1,234.567 if the compiler understands the decimalpoint to be between the fourth and fifth digits. In general, non-integer numbers have a far wider range than integers whiletaking up only a few more bytes. Different architectures have a variety ofschemes for storing these values. Like integers, floats come in several differentsizes; one common naming terminology is 16 bit (halfs), 32 bit (singles), 64 bit(doubles) and 128 bit (quadruples). Over the years, a wide variety of schemeshave been used to represent these values; IEEE 754 has standardized most ofthem. This standard, first adopted in 1985, is followed by almost every com-puter manufacturer with the exception of IBM mainframes and Cray supercom-puters. The larger the representation, the larger (and smaller) the numbers thatcan be stored.Figure 4–11 IEEE 754-2008 float representationsAs shown in figure 4-11, a float contains a sign (positive or negative), an expo-nent of certain size, and a significand of certain size. The magnitude of the sig-nificand determines the number of significant figures the representation canhold (called the precision). Note that the bits precision is actually one more thanthe significand can hold. For complex reasons having to do with the way floatsare used in calculations, the leading bit of the significand is implicit—that is, notstored. That adds the extra bit of precision. Related to that is the bias of theexponent (bias is used in the engineering sense, as in offset). The value of theexponent is offset by a given value to allow it to be represented by an unsignednumber even though it actually represents both large (positive) and small (neg-ative) exponents. A key issue with these values that must be understood by technical testers isthat most floats are not exact; instead, they are approximations of the value wemay be interested in. In many cases, this is not an issue. Occasionally, however,interesting bugs can lurk in these approximate values.
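As a rough illustration, the sketch below uses Python's standard struct module to expose the sign, exponent, and significand fields of a 32-bit single (the layout summarized in figure 4-11) and to show that a common value like 0.1 is only an approximation. Note that the printed 0.1 uses Python's 64-bit doubles, so the inexactness it demonstrates is independent of the 32-bit layout shown.

# Minimal sketch: peeking at the IEEE 754 layout of a 32-bit float and
# demonstrating that many decimal values are only approximations.
import struct

def float_bits(value):
    # Pack as a single-precision (32-bit) float, then split the raw bits into
    # sign, 8-bit biased exponent, and 23-bit significand.
    raw = struct.unpack(">I", struct.pack(">f", value))[0]
    bits = format(raw, "032b")
    return bits[0], bits[1:9], bits[9:]

if __name__ == "__main__":
    print(float_bits(1.0))     # ('0', '01111111', '000...0') -> exponent biased by 127
    print(float_bits(-2.0))    # sign bit set, exponent 10000000
    print("%.20f" % 0.1)       # 0.10000000000000000555... -> 0.1 is not exact in binary
    print(0.1 + 0.2 == 0.3)    # False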
When values are calculated, the result is often an irrational number. The definition of an irrational number is any real number that cannot be expressed as the fraction a/b where a and b are integers. Since an irrational number, by definition, has an infinite number of decimal digits (i.e., is a non-terminating number), there is no way to represent it exactly with any size of floating point number. That means anytime an operation occurs with irrational values, the result will be irrational (and hence approximate). There are many irrational numbers that are very important to mathematical concepts, including pi and the square root of 2. It is not that a floating point value cannot be exact. Whole numbers may be represented exactly. The representation for 734 would be 7.34 × 10^2. However, when calculations are made using floating point values, the exactness of the result might depend on how the values were originally set. It is probably safe to hold, as a rule of thumb, that any floating point number is only close, an approximation of the mathematically correct value. To the tester, close may or may not be problematic. The longer a chain of calculations using floats, the more likely the values will drift from exactness. Likewise, rounding and truncating can cause errors. Consider the following chain of events. Two values are calculated and placed into floating point variables; one comes close to 6, the other close to 2. Very close; for all intents and purposes the values are almost exactly 6 and 2. The second value is divided into the first, resulting in a value close to 3. Perhaps something like 2.999999999999932. Now, suppose the value is placed into an integer rather than a float—the compiler will allow that. Suppose the developer decides that they do not want the fractional value and, rather than rounding, uses the truncation function. Essentially, following along this logic, we get 6 divided by 2 equals 2. Oops. If this sounds far-fetched, it is. But this particular error actually happened to software that we were testing, and we did a lot of head scratching digging it out. Each step the programmer made was sensible, even if the answer was nonsensical. Clearly, boundaries of floating point numbers—and calculations made with them—need to be tested.

Testing Floating Point Numbers

Testing of financial operations involves floating point numbers, so all of the issues just discussed apply. Notice that the question of precision has serious
real-world implications. People tend to be very sensitive to rounding errors that involve things like money, investment account balances, and the like. Furthermore, there are regulations and contracts that often apply, along with any specified requirements. So testing financial applications involves a great deal of care when these kinds of issues are involved.

[Figure content: the number of shares, price per share, and commission fields are each annotated with equivalence partitions (non-number versus number) and with boundary values for the invalid (negative), valid (zero), valid (round), valid (no round), and invalid (too large) partitions.]

Figure 4–12 A stock-purchase screen from Quicken

In figure 4-12, you can see the application of equivalence partitioning and boundary value analysis to the input of a stock purchase transaction in Quicken. In this application, when you enter a stock purchase transaction, after entering the stock, you input the number of shares, the price per share, and the commission, if any. Quicken calculates the amount of the transaction. First, for each of the three fields—number of shares, price per share, and commission—we can say that any non-numeric input is invalid. So that's common across all three. For the numeric inputs, the price per share and the number of shares are handled the same. The number must be zero or greater. Look closely at all of the boundaries that are shown in figure 4-12. Some of these boundaries are not necessarily intuitive. When boundary testing with real numbers, there are two complexities we must deal with: precision and round-off.
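The truncation trap described on the previous page is easy to reproduce. The sketch below uses made-up values of our own to show the "six divided by two equals two" failure, and then one hedged way of comparing monetary results against a small epsilon rather than testing exact equality; the half-cent epsilon is purely illustrative.

# Minimal sketch of the truncation bug described above: two "almost exact"
# values whose quotient is a hair under 3, then truncated instead of rounded.
import math

numerator = 5.999999999999999      # intended to be 6
denominator = 2.0000000000000004   # intended to be 2

quotient = numerator / denominator # about 2.999999999999999...
print(quotient)
print(math.trunc(quotient))        # 2 -> "6 divided by 2 equals 2"
print(round(quotient))             # 3 -> rounding would have hidden the drift

# For money, compare against an epsilon instead of testing exact equality.
def money_equal(a, b, epsilon=0.005):   # half a cent, chosen only for illustration
    return abs(a - b) < epsilon

print(money_equal(10.00, 10.004999))    # True
print(money_equal(10.00, 10.01))        # False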
  • 153. 132 4 Test Techniques By precision, in this case we are talking about the number of decimal digits we are willing to test. Many people refer to this as the epsilon (or smallest recog- nizable difference) to which we will test. Suppose we have a change in system behavior exactly at the value 10.00. Values below are valid, above are invalid. What is the closest possible value that you can reasonably say is the invalid boundary? 10.01? Clearly we can get many values between 10.00 and 10.01, including 10.001, 10.0001, 10.00001... Somewhere in testing we have to decide the boundary beyond which it makes no sense to test. One such line is the epsilon, and it generally represents the number of decimal digits to which we will test. In the preceding example, we might decide that we want to test to an epsilon of 2. That would make our invalid boundary 10.01, even if, in reality, it is not a physical boundary. Testers must understand the epsilon to which they will be testing. This epsilon may be decided by the programmer who wrote the code, the designer who designed it, or even the computer representation used. Likewise, rounding off is an important facet of real-world boundaries. Again, look at figure 4-12. It is possible to buy a fractional portion of a share of stock. Clearly, if we put 0 (zero) in the number of shares purchased, we will buy no stock. You might think that if we put a positive non-zero value in, we would automatically be purchasing some stock. However, some values will round down to zero, even though the absolute value is greater than zero. And therein lies our boundary. We should find out exactly where the boundary is, that is, where the behavior changes (with due respect to the epsilon). In this version of Quicken, Rex discovered through trial and error that the smallest amount of a share that can be purchased is 0.000 001 shares. This is a value where behavior changes. However, the testable boundary must allow for rounding. The value 0.000 000 5 will round up to 0.000 001. That makes it the low valid boundary. But we need to go to more decimal places to allow the round-off to occur, down to the epsilon. Hence the low invalid boundary is some value less than that which will round down to zero; in this case we select 0.000 000 499 99. How Many Boundaries? Let’s wind down our discussion of boundary value analysis by mentioning a somewhat obscure point that can nevertheless arise if you do any reading
  • 154. 4.2 Specification-Based 133about testing. This is the question of how many boundary values exist at aboundary. In the material so far, we’ve shown just two boundary values per boundary.The boundary lies between the largest member of one equivalence class and thesmallest member of the equivalence class above it. In other words, the boundaryitself doesn’t correspond to any member of any class. However, some authors, of whom Boris Beizer is probably the most notable,say there are three boundary values. Why would there be three? The user must order a quantity greater than 0 and less than 100... Mathematical inequality model (v1) 0 < x < 100 -1 0 1 99 100 101 Graphical (“number line”) model Invalid – Invalid – too low too high 0 1 x 99 100 Mathematical inequality model (v2) 1 =< x =< 99 0 1 2 98 99 100Figure 4–13 Two ways to look at boundariesThe difference arises from the use of a mathematical analysis rather than thegraphical one that we’ve used, as shown in figure 4-13. You can see that the twomathematical inequalities shown in figure 4-13 describe the same situation asthe graphical model. Applying Beizer’s methodology, you select the value itselfas the middle boundary value, then you add the smallest possible amount tothat value and subtract the smallest possible amount from that value to generatethe two other boundary values. Remember, with integers, as in this example, thesmallest possible amount is one, while for floating points we have to considerthose pesky issues of precision and round-off.
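For integer ranges with a step of one, both approaches are easy to generate mechanically; the following sketch (ours) produces the two-value boundaries of the graphical model and the three-value boundaries of Beizer's model for the 1 to 99 example.

# Minimal sketch: generating boundary values for the "greater than 0 and less
# than 100" example under the two-value and three-value approaches (integers,
# smallest increment = 1).
def two_value_boundaries(low, high):
    # Valid range is low..high inclusive; test just outside and just inside.
    return sorted({low - 1, low, high, high + 1})

def three_value_boundaries(low, high):
    # Each boundary itself plus one increment on either side of it.
    values = set()
    for b in (low, high):
        values.update({b - 1, b, b + 1})
    return sorted(values)

if __name__ == "__main__":
    print(two_value_boundaries(1, 99))     # [0, 1, 99, 100]
    print(three_value_boundaries(1, 99))   # [0, 1, 2, 98, 99, 100]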
  • 155. 134 4 Test Techniques As you can see from this example, the mathematical boundary values will always include the graphical boundary values. To us, the three-value approach is wasted effort. We have enough to do already without creating more test values to cover, especially when they don’t address any risks that haven’t already been covered by other test values. Boundary Value Exercise Loan amount {Enter a loan amount} Whole dollar amounts (no cents). Will be rounded to nearest $ 100. Property value? {Enter the property’s value} Whole dollar amounts (no cents). Will be rounded to nearest $ 100. Figure 4–14 HELLOCARMS system amount screen prototype A screen prototype for one screen of the HELLOCARMS system is shown in figure 4-14. This screen asks for two pieces of information: ■ Loan amount ■ Property value For both fields, the system allows entry of whole dollar amounts only (no cents), and it rounds to the nearest $100. Assume the following rules apply to loans: ■ The minimum loan amount is $5,000. ■ The maximum loan amount is $1,000,000. ■ The minimum property value is $25,000. ■ The maximum property value is $5,000,000. Refer also to requirements specification elements 010-010-130 and 010-010-140 in the Functional System Requirements section of the HELLOCARMS system requirements document.
  • 156. 4.2 Specification-Based 135 If the fields are valid, then the screen will be accepted. Either the TelephoneBanker will continue with the application or the application will be transferredto a Senior Telephone Banker for further handling. If one or both fields areinvalid, an error message is displayed.The exercise consists of two parts:1. Show the equivalence partitions and boundary values for each of the two fields, indicating valid and invalid members and the boundaries for those partitions.2. Create test cases to cover these partitions and boundary values, keeping in mind the rules about combinations of valid and invalid members.The answers to the two parts are shown on the following pages. You shouldreview the answer to the first part (and, if necessary, revise your answer to thesecond part) before reviewing the answer to the second part. Boundary Value Exercise DebriefFirst, let’s take a look at the equivalence partitions and boundary values, whichare shown in figure 4-15. Not Integer EP ABC 50000.01 null Loan Amount EP Invalid Invalid Invalid Valid Valid Invalid (negative) (zero) (too low) (no xfer) (xfer) (too high) Integer BVA -max -1 0 49 50 4,949 max 4,950 500,049 1,000,050 500,050 1,000,049 Not Integer EP abc 999777.01 null Property Value EP Invalid Invalid Invalid Valid Valid Invalid (negative) (zero) (too low) (no xfer) (xfer) (too high) Integer BVA -max -1 0 49 50 24,949 max 24,950 5,000,050 1,000,049 No 1,000,050 5,000,049 Property Value EP Yes EP For Loan For Value For BothFigure 4–15 Equivalence partitions (EP) and boundary values (BVA) for exercise
  • 157. 136 4 Test Techniques So, for the loan amount we can show the boundary values and equivalence par- titions as shown in table 4-8. Table 4–8 # Partition Boundary Value 1 Letter: ABC - 2 Decimal: 50,000.01 - 3 Null - 4 Invalid (negative) -max 5 Invalid (negative) -1 6 Invalid (zero) 0 7 Invalid (zero) 49 8 Invalid (too low) 50 9 Invalid (too low) 4,949 10 Valid (no transfer) 4,950 11 Valid (no transfer) 500,049 12 Valid (transfer) 500,050 13 Valid (transfer) 1,000,049 14 Invalid (too high) 1,000,050 15 Invalid (too high) max For the property value, we can show the boundary values and equivalence parti- tions as shown in table 4-9. Table 4–9 # Partition Boundary Value 1 Letter: abc - 2 Decimal: 999,777.01 - 3 Null - 4 Invalid (negative) -max 5 Invalid (negative) -1 6 Invalid (zero) 0 7 Invalid (zero) 49 8 Invalid (too low) 50 9 Invalid (too low) 24,949 10 Valid (no transfer) 24,950 11 Valid (no transfer) 1,000,049 12 Valid (transfer) 1,000,050 13 Valid (transfer) 5,000,049 14 Invalid (too high) 5,000,050 15 Invalid (too high) max
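The effect of the round-to-nearest-$100 rule on these boundaries can be checked with a small sketch like the one below; note that the half-up rounding and the $500,000 hand-off threshold are our reading of the exercise, not something stated explicitly in the screen prototype.

# Minimal sketch: why the loan-amount boundaries fall where they do, given
# that entries are rounded to the nearest $100 before the rules are applied.
# Half-up rounding and the 500,000 transfer threshold are assumptions here.
def round_to_hundred(amount):
    return ((amount + 50) // 100) * 100

def classify_loan(amount):
    rounded = round_to_hundred(amount)
    if rounded < 5_000:
        return "invalid (too low)"
    if rounded > 1_000_000:
        return "invalid (too high)"
    return "valid (transfer)" if rounded > 500_000 else "valid (no transfer)"

if __name__ == "__main__":
    for value in (4_949, 4_950, 500_049, 500_050, 1_000_049, 1_000_050):
        print(value, "->", round_to_hundred(value), classify_loan(value))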
  • 158. 4.2 Specification-Based 137Make sure you understand why these values are boundaries based on round-offrules given in the requirements. For the transfer decision, we can show theequivalence partitions for the loan amount as shown in table 4-10.Table 4–10 # Partition 1 No 2 Yes, for the loan amount 3 Yes, for the property value 4 Yes, for bothNow, let’s create tests from these equivalence partitions and boundary values.We’ll capture traceability information from the test case number back to thepartitions or boundary values, and as before, once we have a trace from eachpartition to a test case, we’re done—as long as we didn’t combine invalid values!Table 4–11 Inputs 1 2 3 4 5 6 Loan amount 4,950 500,050 500,049 1,000,049 ABC 50,000.01 Property value 24,950 1,000,049 1,000,050 5,000,049 100,000 200,000 Outputs Accept? Y Y Y Y N N Transfer? N Y (loan) Y (prop) Y (both) - - Inputs 7 8 9 10 11 12 Loan amount null 100,000 200,000 300,000 -max -1 Property value 300,000 abc 999,777.01 null 400,000 500,000 Outputs Accept? N N N N N N Transfer? - - - - - - Inputs 13 14 15 16 17 18 Loan amount 0 49 50 4,949 1,000,050 max Property value 600,000 700,000 800,000 900,000 1,000,000 1,100,000 Outputs Accept? N N N N N N Transfer? - - - - - -
  • 159. 138 4 Test Techniques Inputs 19 20 21 22 23 24 Loan amount 400,000 500,000 600,000 700,000 800,000 900,000 Property value -max -1 0 49 50 24,949 Outputs Accept? N N N N N N Transfer? - - - - - - Inputs 25 26 27 28 29 30 Loan amount 1,000,000 555,555 Property value 5,000,050 max Outputs Accept? N N Transfer? - - Table 4–12 # Partition Boundary Value Test Case 1 Letter: ABC - 5 2 Decimal: 50,000.01 - 6 3 Null - 7 4 Invalid (negative) -max 11 5 Invalid (negative) -1 12 6 Invalid (zero) 0 13 7 Invalid (zero) 49 14 8 Invalid (too low) 50 15 9 Invalid (too low) 4,949 16 10 Valid (no transfer) 4,950 1 11 Valid (no transfer) 500,049 3 12 Valid (transfer) 500,050 2 13 Valid (transfer) 1,000,049 4 14 Invalid (too high) 1,000,050 17 15 Invalid (too high) max 18
  • 160. 4.2 Specification-Based 139Table 4–13 # Partition Boundary Value Test Case 1 Letter: abc - 8 2 Decimal: 999,777.01 - 9 3 Null - 10 4 Invalid (negative) -max 19 5 Invalid (negative) -1 20 6 Invalid (zero) 0 21 7 Invalid (zero) 49 22 8 Invalid (too low) 50 23 9 Invalid (too low) 24,949 24 10 Valid (no transfer) 24,950 1 11 Valid (no transfer) 1,000,049 2 12 Valid (transfer) 1,000,050 3 13 Valid (transfer) 5,000,049 4 14 Invalid (too high) 5,000,050 25 15 Invalid (too high) max 26Table 4–14 # Partition Test Case 1 No 1 2 Yes, for the loan amount 2 3 Yes, for the property value 3 4 Yes, for both 4Notice that’s there’s another interesting combination related to the transfer deci-sion that we covered in our tests. This was when the values were rejected asinputs, in which case we should not even be able to leave the screen, not to men-tion transfer the application. We did test with both loan amounts and propertyvalues that would have triggered a transfer had the other value been valid. Wecould have shown that as a third set of equivalence classes for the transfer deci-sion.
  • 161. 140 4 Test Techniques ISTQB Glossary decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases. decision table testing: A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. 4.2.3 Decision Tables Equivalence partitioning and boundary value analysis are very useful tech- niques. They are especially useful, as you saw in the earlier parts of this section, when testing input field validation at the user interface. However, lots of testing that we do as technical test analysts involves testing the business logic that sits underneath the user interface. We can use boundary values and equivalence partitioning on business logic too, but two additional techniques, decision tables and state-based testing, will often prove handier and more powerful. Let’s start with decision tables. Conceptually, decision tables express the rules that govern handling of transac- tional situations. By their simple, concise structure, they make it easy for us to design tests for those rules, usually at least one test per rule. When we said “transactional situations,” what we meant was those situa- tions where the conditions—inputs, preconditions, etc.—that exist at a given moment in time for a single transaction are sufficient by themselves to deter- mine the actions the system should take. If the conditions on their own are not sufficient, but we must also refer to what conditions have existed in the past, then we’ll want to use state-based testing, which we’ll cover later in this chapter. The underlying model is a table. The model connects combinations of con- ditions with the action or actions that should occur when each particular com- bination of conditions arises. To create test cases from a decision table, we are going to design test inputs that fulfill the conditions given. The test outputs will correspond to the action or actions given for that combination of conditions. During test execution, we check that the actual actions taken correspond to the expected actions.
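A decision table translates naturally into a small data-driven test harness. The sketch below (ours) encodes the business rules of the credit card example developed over the next few pages as a function from conditions to expected actions and derives one test per rule; the rule logic and the condition names are our reading of that example, not an authoritative implementation.

# Minimal sketch: a decision table as data, with one test derived per rule.
from itertools import product

CONDITIONS = ["real", "active", "within_limit", "location_ok"]

def expected_actions(real, active, within_limit, location_ok):
    # Our reading of the rules: approve only when everything is okay; call the
    # cardholder on a limit or location problem with a real card; call the
    # vendor when the account is not real or not active.
    return {
        "approve": real and active and within_limit and location_ok,
        "call_cardholder": real and not (within_limit and location_ok),
        "call_vendor": not (real and active),
    }

if __name__ == "__main__":
    # Full table: every combination of the Boolean conditions, 2**4 = 16 rules,
    # each of which becomes (at least) one test case. The topmost condition
    # changes most slowly, matching the Y/N pattern described in the text.
    for rule, combo in enumerate(product([True, False], repeat=len(CONDITIONS)), start=1):
        inputs = dict(zip(CONDITIONS, combo))
        print(rule, inputs, expected_actions(**inputs))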
  • 162. 4.2 Specification-Based 141 We create enough test cases that every combination of conditions is coveredby at least one test case. Oftentimes, that coverage criterion is relaxed to ensurethat we cover those combinations of conditions that can determine the action oractions. If that’s a little confusing—which it might be, depending on how youprepared for the Foundation exam, because this isn’t always explained properlyin books and classes—the distinction we’re drawing will become clear to youwhen we talk about collapsed decision tables. With a decision table, the coverage criterion boils down to an easy-to-remember rule of at least one test per column in the table. So, what is our bug hypothesis with decision tables? What kind of bugs arewe looking for? There are several. First, under some combination of conditions,the wrong action might occur. In other words, there may be some action thatthe system is not to take under this combination of conditions, yet it does. Sec-ond, under some combination of conditions, the system might not take theright action. In other words, there is some action that the system is to takeunder this combination of conditions, yet it does not. Beyond those two possibilities, there is also a very real positive value ofusing decision tables during the analysis portion of our testing. A decision tableforces testers to consider many possibilities that we may not have consideredotherwise. By considering all of the permutations of conditions possible, even ifwe don’t test them all, we may discover some unknown unknowns. That is, weare likely to find some scenarios that the analysts and developers had not con-sidered. By considering these scenarios, we can not only capture incipientdefects but avoid later disagreements about correct behavior.Table 4–15 Conditions 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Real account? Y Y Y Y Y Y Y Y N N N N N N N N Active account? Y Y Y Y N N N N Y Y Y Y N N N N Within limit? Y Y N N Y Y N N Y Y N N Y Y N N Location okay? Y N Y N Y N Y N Y N Y N Y N Y N Actions Approve Y N N N N N N N N N N N N N N N Call N Y Y Y N Y Y Y N N N N N N N N cardholder? Call vendor? N N N N Y Y Y Y Y Y Y Y Y Y Y Y
  • 163. 142 4 Test Techniques Consider the decision table shown in table 4-15. This table shows the decisions being made at an e-commerce website about whether to accept a credit card purchase of merchandise. Once the information goes to the credit card process- ing company for validation, how can we test their decisions? We could handle that with equivalence partitioning, but there is actually a whole set of conditions that determine this processing: ■ Does the named person hold the credit card entered, and is the other information correct? ■ Is it still active or has it been cancelled? ■ Is the person within or over their limit? ■ Is the transaction coming from a normal or a suspicious location? The decision table in table 4-15 shows how these four conditions interact to determine which of the following three actions will occur: ■ Should we approve the transaction? ■ Should we call the cardholder (e.g., to warn them about a purchase from a strange place)? ■ Should we call the vendor (e.g., to ask them to seize the cancelled card)? Take a minute to study the table to see how this works. The conditions are listed at the top left of the table, and the actions at the bottom left. Each column to the right of this leftmost column contains a business rule. Each rule says, in essence, “Under this particular combination of conditions (shown at the top of the rule), carry out this particular combination of actions (shown at the bottom of the rule).” Notice that the number of columns—i.e., the number of business rules—is equal to 2 (two) raised to the power of the number of conditions. In other words, 2 raised to the 4th power (based on four conditions), which results in 16 columns. When the conditions are strictly Boolean—true or false—and we’re dealing with a full decision table (not a collapsed one), that will always be the case. If one or more of the conditions are not rendered as Boolean, then the number of columns will not follow this rule. Did you notice how we populated the conditions? The topmost condition changes most slowly. Half of the columns are Yes, then half No. The condition under the topmost changes more quickly, but it changes more slowly than all the
  • 164. 4.2 Specification-Based 143other conditions below it. The pattern is a quarter Yes, then a quarter No, then aquarter Yes, then a quarter No. Finally, for the bottommost condition, the alter-nation is Yes, No, Yes, No, Yes, etc. This pattern makes it easy to ensure that youdon’t miss anything. If you start with the topmost condition, set the left half ofthe rule columns to Yes and the right half of the rule columns to No, then fol-lowing the pattern we showed, if you get to the bottom and the Yes, No, Yes, No,Yes, etc. pattern doesn’t hold, you did something wrong. The other algorithmthat you can use, again assuming that all conditions are Boolean, is to count thecolumns in binary. The first column is YYYY, second YYYN, third YYNY,fourth YYNN, etc. If you are not familiar with binary arithmetic, forget wementioned it. Deriving test cases from this example is relatively easy: each column (busi-ness rule) of the table generates a test case. When time comes to run the tests,we’ll create the conditions that are each test’s inputs. We’ll replace the yes/noconditions with actual input values for credit card number, security code, expi-ration date, and cardholder name, either during test design or perhaps even attest execution time. We’ll verify the actions that are the test’s expected results. In some cases, we might generate more than one test case per column. We’llcover this possibility later, as we enlist our previous test techniques, equivalencepartitioning and boundary value analysis, to extend decision table testing. Notice that, in this case, some of the test cases don’t make much sense. Forexample, how can the account not be real but yet active? How can the accountnot be real but within the given limit? This kind of situation is a hint that maybe we don’t need all the columns inour decision table. Collapsing Columns in the TableWe can sometimes collapse the decision table, combining columns, to achieve amore concise—and in some cases more sensible—decision table. In any situa-tion where the value of one or more particular conditions can’t affect the actionsfor two or more combinations of conditions, we can collapse the decision table. This involves combining two or more columns where, as we said, one ormore of the conditions don’t affect the actions. As a hint, combinable columnsare often but not always next to each other. You can at least start by looking atcolumns next to each other.
  • 165. 144 4 Test Techniques To combine two or more columns, look for two or more columns that result in the same combination of actions. Note that the actions in the two columns must be the same for all of the actions in the table, not just some of them. In these columns, some of the conditions will be the same, and some will be differ- ent. The ones that are different obviously don’t affect the outcome. We can replace the conditions that are different in those columns with a dash. The dash usually means we don’t care, it doesn’t matter, or it can’t happen, given the other conditions. Now, repeat this process until the only columns that share the same combi- nation of actions for all the actions in the table are ones where you’d be combin- ing a dash with a Yes or No value and thus wiping out an important distinction for cause of action. What we mean by this will be clear in the example on the next page, if it’s not clear already. Another word of caution at this point: Be careful when dealing with a table where more than one rule can apply at one single point in time. These tables have nonexclusive rules. We’ll discuss that further later in this section. Table 4–16 Conditions 1 2 3 5 6 7 9 Real account? Y Y Y Y Y Y N Active account? Y Y Y N N N - Within limit? Y Y N Y Y N - Location okay? Y N - Y N - - Actions Approve Y N N N N N N Call card holder? N Y Y N Y Y N Call vendor? N N N Y Y Y Y Table 4-16 shows the same decision table as before, but collapsed to eliminate extraneous columns. Most notably, you can see that what were columns 9 through 16 in the original decision table collapsed into a single column. Notice that all of the columns 9 through 16 are essentially the same test: Is this for a real account? If it is not real (note that for all columns, 9 through 16, the answer is no), then it certainly cannot be active, within limit, or have an okay location. Since those conditions are impossible, we would essentially be running the same test eight times.
  • 166. 4.2 Specification-Based 145 We’ve kept the original column numbers for ease of comparison. Again,take a minute to study the table to see how we did this. Look carefully at col-umns 1, 2, and 3. Notice that we can’t collapse 2 and 3 because that would resultin “dash” for both “within limit” and “location okay.” If you study this table orthe full one, you can see that one of these conditions must not be true for thecardholder to receive a call. The collapse of rule 4 into rule 3 says that, if thecard is over limit, the cardholder will be called, regardless of location. The same logic applies to the collapse of rule 8 into rule 7. Notice that the format is unchanged. The conditions are listed at the top leftof the table, and the actions at the bottom left. Each column to the right of thisleftmost column contains a business rule. Each rule says, “Under this particularcombination of conditions (shown at the top of the rule, some of which mightnot be applicable), carry out this particular combination of actions (shown atthe bottom of the rule, all of which are fully specified).” Notice that the number of columns is no longer equal to 2 raised to thepower of the number of conditions. This makes sense, since otherwise no col-lapsing would have occurred. If you are concerned that you might miss some-thing important, you can always start with the full decision table. In a full table,because of the way you generate it, it is guaranteed to have all the combinationsof conditions. You can mathematically check if it does. Then, carefully collapsethe table to reduce the number of test cases you create. Also, notice that, when you collapse the table, that pleasant pattern of Yesand No columns present in the full table goes away. This is yet another reason tobe very careful when collapsing the columns, because you can’t count on thepattern or the mathematical formula to check your work. Combining Decision Table Testing with Other TechniquesOkay, let’s address an issue we brought up earlier, the possibility of multiple testcases per column in the decision table via the combination of equivalence parti-tioning and the decision table technique. In figure 4-16, we refer to our exampledecision table of table 4-16, specifically column 9.
  • 167. 146 4 Test Techniques Conditions 9 Number/ Two Name mismatch Real account? N EP Number/ Two Active account? - Expiry mismatch Within limit? - Three Number/ Two Location okay? - mismatch CSC mismatch Figure 4–16 Equivalence partitions and decision tables We can apply equivalence partitioning to the question, How many interesting— from a test point of view—ways are there to have an account not be real? As you can see from figure 4-16, this could happen seven potentially interesting ways: ■ Card number and cardholder mismatch ■ Card number and expiry mismatch ■ Card number and CSC (card security code) mismatch ■ Two of the above mismatches (three possibilities) ■ All three mismatches So, there could be seven tests for that column. How about boundary value analysis? Yes, that too can be applied to decision tables to find new and interesting tests. For example, How many interesting test values relate to the credit limit? Just over Zero before Normal after At limit after limit after At limit before Max after transaction transaction transaction transaction transaction transaction Conditions 1 2 3 5 6 7 Real account? Y Y Y Y Y Y EP EP Active account? Y Y Y N N N Within limit Over limit Within limit? Y Y N Y Y N BVA 0 limit limit+0.01 max Location okay? Y N - Y N - Figure 4–17 Boundary values and decision tables As you can see from figure 4-17, equivalence partitioning and boundary value analysis show us six interesting possibilities: 1. The account starts at zero balance. 2. The account would be at a normal balance after transaction.
  • 168. 4.2 Specification-Based 1473. The account would be exactly at the limit after the transaction.4. The account would be over the limit after the transaction.5. The account was at exactly the limit before the transaction (which would ensure going over if the transaction concluded).6. The account would be at the maximum overdraft value after the transaction (which might not be possible).Combining this with the decision table, we can see that would again end upwith more “over limit” tests than we have columns—one more, to be exact—sowe’d increase the number of tests just slightly. In other words, there would befour within-limit tests and three over-limit tests. That’s true unless you wantedto make sure that each within-limit equivalence class was represented in anapproved transaction, in which case column 1 would go from one test to three. Nonexclusive Rules in Decision TablesLet’s finish our discussion about decision tables by looking at the issue of non-exclusive rules we mentioned earlier.Table 4–17 Conditions 1 2 3 Foreign exchange? Y - - Balance forward? - Y - Late payment? - - Y Actions Exchange fee? Y - - Charge interest? - Y - Charge late fee? - - YSometimes more than one rule can apply to a transaction. In table 4-17, you seea table that shows the calculation of credit card fees. There are three conditions,and notice that zero, one, two, or all three of those conditions could be met in agiven month. How does this situation affect testing? It complicates the testing a bit, but we can use a methodical approach andrisk-based testing to avoid the major pitfalls. To start with, test the decision table like a normal one, one rule at a time,making sure that no conditions not related to the rule you are testing are met.This allows you to test rules in isolation—just as you are forced to do in situa-tions where the rules are exclusive.
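The sketch below (ours, using the fee rules of table 4-17 as labels only) shows that first step of isolating each rule, and then enumerates the full set of combinations, which is still small for three Boolean conditions.

# Minimal sketch: testing nonexclusive rules one at a time, then enumerating
# combinations. Only the presence of each fee is modeled, not its amount.
from itertools import product

def monthly_fees(foreign_exchange, balance_forward, late_payment):
    fees = []
    if foreign_exchange:
        fees.append("exchange fee")
    if balance_forward:
        fees.append("interest")
    if late_payment:
        fees.append("late fee")
    return fees

if __name__ == "__main__":
    # Step 1: isolate each rule, with only one condition true per test.
    for i in range(3):
        conditions = [j == i for j in range(3)]
        print("isolated", conditions, monthly_fees(*conditions))

    # Step 2: all combinations, only 2**3 = 8 here; with more factors, weight
    # the combinations by risk or use pairwise techniques instead.
    for combo in product([False, True], repeat=3):
        print("combined", combo, monthly_fees(*combo))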
  • 169. 148 4 Test Techniques Next, consider testing combinations of rules. Notice we said, “Consider,” not “test all possible combinations of rules.” You’ll want to avoid combinatorial explosions, which is what happens when testers start to test combinations of factors without consideration of the value of those tests. Now, in this case, there are only 8 possible combinations—three factors, two options for each factor, 2 times 2 times 2 is 8. However, if you have six factors with five options each, you would now have 15,625 combinations. One way to avoid combinatorial explosions is to identify the possible com- binations and then use risk to weight those combinations. Try to get to the important combinations and don’t worry about the rest. Another way to avoid combinatorial explosions is to use techniques like classification trees and pairwise testing, which are covered in Rex’s book Advanced Software Testing Vol. 1. Decision Table Exercise During development, the HELLOCARMS project team added a feature to HEL- LOCARMS. This feature allows the system to sell a life insurance policy to cover the amount of a home equity loan so that, should the borrower die, the policy will pay off the loan. The premium is calculated annually, at the beginning of each annual policy period and based on the loan balance at that time. The base annual premium will be $1 for $10,000 in loan balance. The insurance policy is not available for lines of credit or for reverse mortgages. The system will increase the base premium by a certain percentage based on some basic physical and health questions that the Telephone Banker will ask during the interview. A Yes answer to any of the following questions will trigger a 50 percent increase to the base premium: 1. Have you smoked cigarettes in the past 12 months? 2. Have you ever been diagnosed with cancer, diabetes, high cholesterol, high blood pressure, a heart disorder, or stroke? 3. Within the last 5 years, have you been hospitalized for more than 72 hours except for childbirth or broken bones? 4. Within the last 5 years, have you been completely disabled from work for a week or longer due to a single illness or injury?
The Telephone Banker will also ask about age, weight, and height. (Applicants cannot be under 18.) The weight and height are combined to calculate the body mass index (BMI). Based on that information, the Telephone Banker will apply the rules in table 4-18 to decide whether to increase the rate or even decline to issue the policy based on possible weight-related illnesses in the person's future.

Table 4–18
                 Body Mass Index (BMI)
 Age        <17        34-36     37-39     >39
 18-39      Decline    75%       100%      Decline
 40-59      Decline    50%       75%       Decline
 >59        Decline    25%       50%       Decline

The increases are cumulative. For example, if the person has normal weight, smokes cigarettes, and has high blood pressure, the annual rate is increased from $1 per $10,000 to $2.25 per $10,000 [5]. If the person is a 45-year-old male diabetic with a body mass index of 39, the annual rate is increased from $1 per $10,000 to $2.625 per $10,000 [6].

The exercise consists of three steps:
1. Create a decision table that shows the effect of the four health questions and the body mass index.
2. Show the boundary values for body mass index and age.
3. Create test cases to cover the decision table and the boundary values, keeping in mind the rules about testing nonexclusive rules.

The answers to the three parts are shown on the next pages. You should review the answer to each part (and, if necessary, revise your answer to the next parts) before reviewing the answer to the next part.

Decision Table Exercise Debrief

First, we created the decision table from the four health questions and the BMI/age table. The answer is shown in table 4-19. Note that the increases are shown in percentages.

5. Two risk factors; the calculation is $1 × 1.5 = $1.50, then $1.50 × 1.5 = $2.25.
6. One risk factor plus a BMI increase of 75%: $1 × 1.5 = $1.50, then $1.50 × 1.75 = $2.625.
Table 4–19
Conditions      1   2   3   4   5      6      7      8      9      10     11   12
Smoked?         Y   –   –   –   –      –      –      –      –      –      –    –
Diagnosed?      –   Y   –   –   –      –      –      –      –      –      –    –
Hospitalized?   –   –   Y   –   –      –      –      –      –      –      –    –
Disabled?       –   –   –   Y   –      –      –      –      –      –      –    –
BMI             –   –   –   –   34–36  34–36  34–36  37–39  37–39  37–39  <17  >39
Age             –   –   –   –   18–39  40–59  >59    18–39  40–59  >59    –    –
Actions
Increase        50  50  50  50  75     50     25     100    75     50     –    –
Decline         –   –   –   –   –      –      –      –      –      –      Y    Y

It's important to notice that rules 1 through 4 are nonexclusive, though rules 5 through 12 are exclusive. In addition, there is an implicit rule that the age must be greater than 17 or the applicant will be denied not only insurance but the loan itself. We could have put that here in the decision table, but our focus is primarily on testing business functionality, not input validation. We'll cover those tests with boundary values. Now, let's look at the boundary values for body mass index and age, shown in figure 4-18.

[Figure content: BMI partitions are too thin (0–16), no increase (17–33), smaller increase (34–36), bigger increase (37–39), and too heavy (40–max); age partitions are too young (0–17), young (18–39), mid-aged (40–59), and senior (60–max).]

Figure 4–18 BMI and age boundary values

Three important testing notes relate to the body mass index. First, the body mass index is not entered directly but rather by entering height and weight. Depending on the range and precision of these two fields, there could be dozens of ways to enter a given body mass index. Second, the maximum body mass index is achieved by entering the smallest possible height and the largest possible weight. Third, we'd need to separately understand the boundary values for these two fields and make sure those were tested properly.
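Because BMI is entered indirectly, a tester can work backward from the BMI boundary values to candidate heights and weights. The sketch below (ours) assumes metric units and the usual weight divided by height squared formula, neither of which is stated in the exercise, and maps a BMI and age to the table 4-18 multiplier, returning None for a decline; how non-integer BMI values fall into the bands is also an assumption.

# Minimal sketch: deriving BMI from height and weight (assumed metric units)
# and mapping it, with age, onto the table 4-18 premium multiplier.
def bmi(weight_kg, height_m):
    return weight_kg / (height_m ** 2)

def bmi_age_factor(bmi_value, age):
    # Multiplier from table 4-18; BMI 17-33 means no increase, None means decline.
    if age < 18 or bmi_value < 17 or bmi_value > 39:
        return None
    if bmi_value <= 33:
        return 1.00
    smaller_band = bmi_value <= 36     # 34-36 versus 37-39
    if age <= 39:
        return 1.75 if smaller_band else 2.00
    if age <= 59:
        return 1.50 if smaller_band else 1.75
    return 1.25 if smaller_band else 1.50   # age 60 and over

if __name__ == "__main__":
    print(round(bmi(75.0, 1.80), 1))                 # 23.1 -> no increase
    for value in (16, 17, 33, 34, 36, 37, 39, 40):   # BMI boundary values, age 45
        print(value, bmi_age_factor(value, 45))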
  • 172. 4.2 Specification-Based 151 An important testing note relates to the age. You can see that we omittedequivalence classes related to invalid ages, such as negative ages and non-integerinput for ages. Again, our idea is that we’d need to separately test the input fieldvalidation. Here, our focus is on testing business logic. Finally, for both fields, we omit any attempt to figure out the maxima.Either someone will give us a requirements specification that tells us that duringtest design or we’ll try to ascertain it empirically during test execution. So, for the BMI, we can show the boundary values and equivalence parti-tions as shown in table 4-20.Table 4–20 # Partition Boundary Value 1 Too thin 0 2 Too thin 16 3 No increase 17 4 No increase 33 5 Smaller increase 34 6 Smaller increase 36 7 Bigger increase 37 8 Bigger increase 39 9 Too heavy 40 10 Too heavy maxFor the age, we can show the boundary values and equivalence partitions asshown in table 4-21.Table 4–21 # Partition Boundary Value 1 Too young 0 2 Too young 17 3 Young 18 4 Young 39 5 Mid-aged 40 6 Mid-aged 59 7 Senior 60 8 Senior max
  • 173. 152 4 Test Techniques Finally, table 4-22 shows the test cases. They are much like the decision table, but note that we have shown the rate (in dollars per $10,000 of loan balance) rather than the percentage increase. Table 4–22 Test case Conditions 1 2 3 4 5 6 7 8 9 10 11 12 Smoked? Y N N N N N N N N N N N Diagnosed? N Y N N N N N N N N N N Hospitalized? N N Y N N N N N N N N N Disabled? N N N Y N N N N N N N N BMI N N N N 34 36 35 34 36 35 37 39 Age N N N N 18 39 40 59 60 max 20 30 Actions Rate 1.5 1.5 1.5 1.5 1.75 1.75 1.5 1.5 1.25 1.25 2 2 Decline N N N N N N N N N N N N Test case Conditions 13 14 15 16 17 18 19 20 21 22 23 24 Smoked? N N N N N N N N N N N N Diagnosed? N N N N N N N N N N N N Hospitalized? N N N N N N N N N N N N Disabled? N N N N N N N N N N N N BMI 38 37 39 38 16 40 0 max 17 33 20 30 Age 45 55 65 75 35 50 25 70 37 47 0 17 Actions Rate 1.75 1.75 1.50 1.50 N/A N/A N/A N/A 1 1 N/A N/A Decline N N N N Y Y Y Y N N Y Y Test case Conditions 25 26 27 28 29 Smoked? Y N N N Y Diagnosed? N Y N N Y Hospitalized? N N Y N Y Disabled? N N N Y Y BMI 35 36 34 38 37 Age 20 50 70 30 35 Actions Rate 2.625 2.25 1.875 3 10.125 Decline N N N N N
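The expected rates for the combined-rule tests 25 through 29 can be checked with simple cumulative arithmetic, as in this sketch (ours); the base rate is $1 per $10,000, each Yes answer multiplies the running rate by 1.5, and the table 4-18 factor is multiplied in as well.

# Minimal sketch: checking the expected rates for the combined-rule test cases
# 25 through 29, where several nonexclusive increases stack up.
def expected_rate(health_yes_count, bmi_factor, base=1.00):
    # Each Yes answer adds a cumulative 50%; the BMI/age factor multiplies in too.
    return base * bmi_factor * (1.5 ** health_yes_count)

if __name__ == "__main__":
    print(expected_rate(1, 1.75))   # test 25: smoker, BMI 35 at age 20 -> 2.625
    print(expected_rate(1, 1.50))   # test 26: diagnosed, BMI 36 at age 50 -> 2.25
    print(expected_rate(1, 1.25))   # test 27: hospitalized, BMI 34 at age 70 -> 1.875
    print(expected_rate(1, 2.00))   # test 28: disabled, BMI 38 at age 30 -> 3.0
    print(expected_rate(4, 2.00))   # test 29: all four plus BMI 37 at age 35 -> 10.125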