'The Pursuit Of Quality: Chasing Tornadoes Or Just Hot Air?' by Paul Gerrard
Rain is great for farmers and their crops, but terrible for tourists. Wind is essential for sailors and windmills but bad for the rest of us. Quality, like weather, is good or bad depending on who you are. Just like beauty, comfort, facility, flavour, intuitiveness, excitement and risk, quality is a concept that most people understand but few can explain. Worse still, quality is an all-encompassing, collective term for these and many other difficult concepts.

Quality is not an attribute of a system; it is a relationship between systems and stakeholders who take different views, and the model of quality that prevails has more to do with the stakeholders than with the system itself. Measurable quality attributes make techies feel good, but they don't help stakeholders if they can't be related to experience. If statistics don't inform the stakeholders' vision or model of quality, we think we do a good job while they think we waste their time and money.

Whether documented or not, testers need and use models to identify what is important and what to test. A control flow graph has meaning (and value) to a programmer but not to a user. An equivalence partition has meaning to users but not to the CEO. Control flow graphs and equivalence partitions are models with value in some, but never all, contexts.

If we want to help stakeholders to make better-informed decisions, we need test models that do more than identify tests. We need models that take account of the stakeholders' perspective and have meaning in the context of their decision-making. If we measure quality using technical models (quality attributes, test techniques), we delude both our stakeholders and ourselves into thinking we are in control of quality. We're not.

In this talk, Paul uses famous, funny and tragic examples of system failures to illustrate ways in which test models (and therefore testing) failed. He argues strongly that the pursuit of quality requires better test models, and shows how to create them, fast.



Usage Rights: © All Rights Reserved
Presentation Transcript

  • The Pursuit of Quality: Chasing Tornadoes or Just Hot Air? Gerrard Consulting Limited, PO Box 347, Maidenhead, Berkshire, SL6 2GU. Tel: +44 (0) 1628 639173. Fax: +44 (0) 1628 630398. Web: gerrardconsulting.com (Slide 1)
  • Paul Gerrard (Slide 2). Paul is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and a publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and occasionally won awards for them. In 2010 he won the Eurostar European Testing Excellence Award. Paul designed and built the Gerrard Consulting story platform on which the maelscrum.com and businessstorymanager.com products are based. Gerrard Consulting Limited hosts the UK Test Management Forum.
  • Agenda: What is Quality? Models for quality and testing. Examples of models. Models and stakeholders. Failures of systems, failures of models. Close. (Slide 3)
  • Weather: Rain is great for farmers and their crops, but terrible for tourists. Wind is essential for sailors and windmills but bad for the rest of us. Quality, like weather, can be good or bad, and that depends on who you are. (Slide 4)
  • That's Fantastic! That's Terrible! (Slide 5)
  • Quality is a relationship: Quality is not an attribute of a system. It is a relationship between systems and stakeholders who take different views. The model of quality that prevails has more to do with stakeholders than the system itself. (Slide 6)
  • The concepts of quality, risk, comfort, intuitiveness… Concepts that most people understand, but few can explain. But it's a lot worse than that: quality is an all-encompassing, collective term for these and many other difficult concepts; a term that means all things to all people. (I try and avoid the Q-word.) (Slide 7)
  • Models for Quality and Testing (Slide 8)
  • Models (Slide 9). Models are everywhere.
  • Models and reality: In our minds we build mental models of everything we experience (and also many things we don't experience). When we pick up a glass of water, we build models: the 3-dimensional location of, and relationship between, the glass, the water, the table it sits on and our body. As we reach for the glass, our brain processes the signals from our eyes, our muscles and the feelings in our fingertips. It continuously compares experience with the model and adjusts/rebuilds the model many times… just to lift a cup of water. Incredible! (Slide 10)
  • Some familiar models: The project plan is a model (the resources, activities, effort, costs, risks and future decision making). System requirements are a model: the "what and how" of the system. What: the features and functionality. How: how the system works (fast, secure, reliable). User personas (16 year old gamer, 30 year old security hacker, 50 year old Man United fan). (Slide 11)
  • Where quality comes from: Quality is the outcome of a comparison between our mental model of perfection and our experience of reality. Mental models are internal, personal and unique to us. We could share them using some kind of Vulcan mind meld, but usually we write them down or talk about them. However we communicate, there is noise and information gets corrupted/lost in translation. (Slide 12)
  • A quality model? The requirements and design describe the behaviour of a system. Functional: mapping test cases to requirements is all we need. Non-functional: all technical attributes are defined and measured. Quality, and therefore testing, assumes a model. Often undocumented, the model may not be shared, understood, complete, consistent, correct… (Slide 13)
  • Test design is based on models: Models describe the environment, system, usage, users, goals and risks. They simplify the context of the test: irrelevant or negligible details are ignored in the model. They focus attention on a particular aspect of the behaviour of the system, generate a set of unique and diverse tests (within the context of the model), and enable the testing to be estimated, planned, monitored and evaluated for its completeness (coverage). Models help us to select tests in a systematic way. (Slide 14)
  • Examples of test models: A checklist or sets of criteria (goals, risks, process paths, interfaces, message types…). Diagrams from requirements or design documents. Analyses of narrative text or tables. Some models are documented; many models are never committed to paper. They can be mental models constructed specifically to guide the tester whilst they explore the system under test and guide their next action. (Slide 15)
  • Sources of models: Test basis: we analyse the text, diagrams or information that describe required behaviour (or use past experience and knowledge). System architecture: we identify testable items in its user interface, structure or internal design. Modes of failure (product risks): we identify potential ways in which the system might fail that are of concern to stakeholders. Usage patterns: we focus on the way the system will be used, operated and interacted with in a business context, using personas. Everything looks fine, doesn't it? (Slide 16)
  • But all models (over-)simplify: Requirements are never perfect, and not all attributes can be meaningfully measured. Models incorporate implicit assumptions and are approximate representations. All test models are heuristic: useful in some situations, always incomplete and fallible. Before we adopt a model, we need to know what aspects of the behaviour, design, modes of failure or usage the model helps us to identify, and what assumptions and simplifications it includes (explicitly or implicitly). (Slide 17)
  • Formality: Formal test models are derived from analyses of requirements or code; a quantitative coverage measure can (mostly) be obtained from a formal test model. Informal test models: e.g. some models are just lists of modes of failure, risks or vulnerabilities; informal models cannot be used to define quantitative coverage measures. Ad-hoc models: some models are invented by the tester just before, or even during, testing, and can be formal or informal. (Slide 18)
  • Examples of Models (Slide 19)
  • Basic test design techniques are based on the simplest models: Equivalence partitions and boundary values presume single-input, single-output responses. All values in a partition are equivalent, but the boundaries are the most important. These techniques are useful, but they date from the 'green-screen' era. (Slide 20)
  • "Green screen" equivalence model: Single input, single output. All input is classified and partitioned with rules; one test per rule is enough! But we don't consider the state of the system, or combinations of values. (Slide 21)

    If m < 1 then "Error"
    Else if m > 12 then "Error"
    Else "OK"
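The slide's month check can be sketched in code to show how the equivalence-partition model drives test selection. This is a minimal illustration (the function name and test values are my own, not from the deck): one representative value per partition, plus the boundary values, where defects tend to cluster.

```python
def validate_month(m: int) -> str:
    """Classify a month number, exactly as in the slide's pseudocode."""
    if m < 1:
        return "Error"
    elif m > 12:
        return "Error"
    else:
        return "OK"

# Three equivalence partitions: below range, valid, above range.
# One representative value per partition...
partition_tests = {-5: "Error", 6: "OK", 99: "Error"}

# ...plus boundary values either side of each partition edge.
boundary_tests = {0: "Error", 1: "OK", 12: "OK", 13: "Error"}

for value, expected in {**partition_tests, **boundary_tests}.items():
    assert validate_month(value) == expected, value
```

Seven tests fully cover this model, which is exactly the point of the slide: the model says nothing about system state or value combinations, so "full coverage" of it can still miss real failures.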
  • State Transition Testing (Slide 22). [Diagram: a hotel-booking state machine with states Start State, Room Requested, Room Booked, On Waiting List, Overnight Stay, Booking Cancelled and Checkout. Transitions are labelled event / action, e.g. Room request / none; Room available / decrement room count; No room available / add to waiting list; Customer arrives / none; Customer pays / increment room count; Customer cancels / remove from waiting list, or increment room count.]
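A state-transition test model like the slide's booking machine can be expressed as a table of valid (state, event) pairs; tests then walk valid paths and probe invalid transitions. This is a hedged sketch: the state names are read off the diagram, but the event identifiers and the transition subset are my own rendering of it.

```python
# Valid transitions of the hotel-booking model: (state, event) -> next state.
TRANSITIONS = {
    ("RoomRequested", "room_available"): "RoomBooked",
    ("RoomRequested", "no_room_available"): "OnWaitingList",
    ("OnWaitingList", "room_available"): "RoomBooked",
    ("OnWaitingList", "customer_cancels"): "BookingCancelled",
    ("RoomBooked", "customer_arrives"): "OvernightStay",
    ("RoomBooked", "customer_cancels"): "BookingCancelled",
    ("OvernightStay", "customer_pays"): "Checkout",
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise on a transition the model forbids."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in {state}")

# A valid path through the model: request -> book -> stay -> checkout.
state = "RoomRequested"
for event in ("room_available", "customer_arrives", "customer_pays"):
    state = next_state(state, event)
assert state == "Checkout"

# Tests should also probe transitions the model says are invalid.
try:
    next_state("Checkout", "customer_arrives")
except ValueError:
    pass  # expected: the model defines no transitions out of Checkout
```

The next slide's caveat applies directly: this table has seven entries, but a real system's state may depend on variables with effectively infinite value combinations, so the model is a deliberate simplification.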
  • But the number of states is infinite! State-transition testing considers the states of the system and the valid/invalid transitions between states. Some systems have many, many states: a real-time system, e.g. a telecoms switch, may have 25,000 distinct states, and state may depend on many variables that can have infinite values in combination. How confident can we be in this model? (Slide 23)
  • End-to-end/transaction-flow tests: End-to-end tests can follow a path through a process or a user journey. The mechanics of the experience are simulated but… (Slide 24)
  • Bad experience leads to attrition (Slide 25). Typical form-filling on government sites intended to allow citizens to 'apply online':

    Page transition:     1→2   2→3   3→4   4→5   5→6   6→7
    Conversion by page:  45%   72%   48%   21%   85%   80%
    Cumulative:          45%   32%   16%    3%    3%    2%

    Every page 'works' but the user experience is so poor that only 2% finish the journey. Modelling the journey is good, but not enough… we need to model the experience too.
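The cumulative row on the slide is just the per-page conversion rates multiplied together, which is worth seeing explicitly: seven pages that each "work" can still lose 98% of users. A quick sketch of the arithmetic:

```python
# Per-page conversion rates from the slide (page 1->2, 2->3, ..., 6->7).
page_conversion = [0.45, 0.72, 0.48, 0.21, 0.85, 0.80]

# Cumulative conversion: the product of all rates up to each page.
cumulative = []
running = 1.0
for rate in page_conversion:
    running *= rate
    cumulative.append(running)

print([f"{c:.0%}" for c in cumulative])
# -> ['45%', '32%', '16%', '3%', '3%', '2%']
```

This reproduces the slide's cumulative row, and shows why the worst single page (the 21% step) dominates the end-to-end outcome.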
  • Models and Stakeholders (Slide 26)
  • Stakeholders and test models: Stakeholders may not tell testers to use specific test models; you need to explain the models to stakeholders so they understand them. The challenge(s): stakeholders may be of the opinion that the models you propose generate too few tests to be meaningful, or too many to be economic. We need to engage stakeholders. (Slide 27)
  • 'Measuring quality' feels good but… Measurable quality attributes make techies feel good, but they don't help stakeholders if they can't be related to experience. If statistics don't inform the stakeholders' vision or model of quality, we think we do a good job; they think we waste their time and money. (Slide 28)
  • Relevance: Documented or not, testers need and use models to identify what is important and what to test. A control flow graph has meaning (and value) to a programmer but not to an end-user. An equivalence partition may have meaning to users but not to the CEO of the company. Control flow graphs and equivalence partitions are models that have value in some, but never all, contexts. (Slide 29)
  • Helping stakeholders to make better decisions is the tester's goal: We need models that do more than identify tests, and that take account of the stakeholders' perspective and have meaning in the context of their decision-making. If we 'measure quality' using technical models, we delude both our stakeholders and ourselves into thinking we are in control of quality. We're not. (Slide 30)
  • Failures of Systems, Failures of Models (Slide 31)
  • F-16 bug (found in flight): One of the early problems was that you could flip the plane over and the computer would gladly let you drop a bomb or fuel tank. It would drop, dent the wing, and then roll off. http://catless.ncl.ac.uk/Risks/3.44.html#subj1.1 (Slide 32). Poor test model.
  • (Slide 33) Poor test model.
  • (Slide 34) Poor test model.
  • Scope of testing for e-commerce (Slide 35). [Diagram: 1. Application (objects) sub-system; 2. Web sub-system (web server); 3. Order processing sub-system (database server, banking system / credit card processor, legacy system(s)); 4. Full e-business system, including people, process, training and environment.]
  • Test strategy (Slide 36). Our test strategy must align with our model of quality and our risk assessment. Every focus area requires test model(s).

    Test Phase                 Focus
    Requirements, design etc.  Relevance, correctness, completeness, ambiguity etc.
    Component                  Input validation, correct behaviour, output validation, statement and branch coverage
    Integration                Correct, authorised transfer of control, exchange of data, consistency of use and reconciliations
    System                     End-to-end accuracy, consistency, security, performance and reliability
    Acceptance                 Alignment to business goals, end-to-end ease of use and experience, successful outcomes, user personas
  • Failure of testing is usually a failure in a test model: If the right models are selected, and commitment is made to cover them, the testing usually gets done. But often, no model is explicitly selected at all. Where a model fails, it is usually wrong because the model does not represent reality, the scope of the model is too narrow, or the model ignores critical aspects (context, people, process, environment or training/capability). (Slide 37)
  • Close: We need to understand what quality is before we can pursue and achieve it. Testing often fails because test models are not used or understood. Testers need models to test, but the 'standard' quality models are too simple. We need to take stakeholder views into account to create relevant testing models. Using models sounds techy, but it's completely natural; it's part of what makes us human. (Slide 38)
  • The Pursuit of Quality: Chasing Tornadoes or Just Hot Air? (Slide 39) gerrardconsulting.com | testela.com | test-axioms.com | uktmf.com
  • http://catless.ncl.ac.uk/Risks/10.10#subj6.1 (manufacturer response in RED). BA flight between New York and Fairbanks in the US. The co-pilot entered new navigational data into the system and mis-typed a PIN code. System response: "Invalid PIN number selected"; "Access violation, contact your credit institution if you believe there is an error." All the plane's controls froze and it refused to respond to commands. SOS call to the manufacturer at Aerospatiale in France… "The pasty little Englishman probably had too many meat pies and Guiness." (Slide 40)
  • The problem was traced to the ATM-6000 INS computer: a modified ATM product. "The system will automatically remove the restrictions at the start of the next banking day." Apparently, manual control could be re-set if a crewmember went to the back of the plane and operated the elevators manually. "There is nothing wrong with ze plane, that a little pinch in the rear will not cure. Just like a woman. If these English knew anything about women, they would never have had to call us." "The plane was able to safely land at Denvers Stapelton airport, where the craft was repaired and all crewmembers' credit histories reviewed." (Slide 41)