Risk and Testing
                         extended after session


           Presentation to Testing Master Class
                                   27/28 March 2003
                                 Neil Thompson

                         Thompson information Systems Consulting Limited
                         23 Oast House Crescent, Farnham, Surrey, GU9 0NP, England (UK)
                         www.TiSCL.com    NeilT@TiSCL.com
                         Phone & fax 01252 726900; direct phone 07000 NeilTh (634584); direct fax 07000 NeilTF (634583)
Some slides included with permission from … and Paul Gerrard
© Thompson information Systems Consulting Limited
Agenda

          •     1. What testers mean by risk
          •     2. “Traditional” use of risk in testing
          •     3. More recent contributions to thinking
          •     4. Risk-Based Testing: Paul Gerrard (and I)
          •     5. Next steps in RBT:
                  – end-to-end risk data model;
                  – automation
          • 6. Refinements and ideas for future
1. What testers mean by risk
• Risk that software in live use will fail:
        – software: could be Commercial Off The Shelf; packages such as
          ERP; bespoke project; integrated programmes of multiple
          systems...; industry-wide supply chains; any systems product
• Could include risk that later stages (higher levels)
  of testing will be excessively disrupted by failures
• Chain of risks:
               Error:                                 Fault:                                    Failure:
  RISK         mistake made by human          RISK    something wrong in a product    RISK      deviation of product from its
               (eg spec-writing,                      (interim eg spec,                         expected* delivery or service
               program-coding)                        final eg executable software)             (doesn’t do what it should,
                                                                                                or does what it shouldn’t)
                                        not all errors                                                    * “expected” may be as in spec,
                                                                                  not all faults
                                        result in faults                                                  or spec may be wrong
                                                                                  result in failures
                                                                                                          (verification & validation)
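
A minimal sketch (not from the slides; all numbers are hypothetical) of reading this chain as conditional probabilities, in Python:

    # Illustrative only: the error -> fault -> failure chain read as conditional
    # probabilities. All numbers are hypothetical.
    p_error = 0.9                 # chance a human mistake is made somewhere
    p_fault_given_error = 0.5     # not all errors result in faults
    p_failure_given_fault = 0.3   # not all faults result in failures

    p_failure = p_error * p_fault_given_error * p_failure_given_fault
    print(f"chance of a live failure via this chain: {p_failure:.3f}")  # ~0.135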

Extra links (4,5) in the chain of risks?
 • Steve Allott:
        – distinguish faults in specifications from faults in the resulting program code

  RISK → Link 1: mistake made by human
  RISK → Link 2: something wrong in a spec
  RISK → Link 3: something wrong in program code
  RISK → Link 4: deviation of product from its expected delivery or service
         (doesn’t do what it should, or does what it shouldn’t)




• Ed Kit (1995 book, Software Testing in the Real World):
       – “error” reserved for its scientific use (but not always very useful in software testing?)

  RISK → Mistake (replacing “error”): a human action that produces an incorrect result
         (eg in spec-writing, program-coding)
  RISK → (undefined): incorrect results in specifications (could use Defect for this?)
  RISK → Fault: an incorrect step, process or data definition in a computer program
         (ie executable software)
  RISK → Failure: an incorrect result
         Error: amount by which result is incorrect
Chain of risks could be up to 6 links?
       • Adding in a distinction by John Musa (1998 book,
             Software Reliability Engineering):
              – not all deviations are failures (but this is just the “anomaly” concept?)
              – (so the associated risks are in the testing process rather than development: that
                 an anomaly may not be noticed, or may be misinterpreted)

       • A possible hybrid of all sources:
         RISK → Mistake: a human action that produces an incorrect result
                (eg in spec-writing, program-coding)
         RISK → Defect: incorrect results in specifications (note: this fits its usage in inspections)
         RISK → Fault: an incorrect step, process or data definition in a computer program
                (ie executable software); Error: amount by which result is incorrect
                (plus a direct RISK from Mistake to Fault: the direct programming mistake)
         RISK → Failure: an incorrect result
                Anomaly: an unexpected result during testing
                RISK OF MISSING the anomaly; RISK OF MIS-INTERPRETING it
                (false alarm): or Change Request, or testware mistake
Three types of software risk

        • Project Risk: resource constraints, external interfaces, supplier relationships,
          contract restrictions. Primarily a management responsibility.
        • Process Risk: variances in planning and estimation, shortfalls in staffing, failure to
          track progress, lack of quality assurance and configuration management. Planning and
          the development process are the main issues here.
        • Product Risk: lack of requirements stability, complexity, design quality, coding
          quality, non-functional issues, test specifications. Testers are mainly concerned
          with Product Risk. Requirements risks are the most significant risks reported in
          risk assessments.
Risk management components


        [Diagram (around a N/E/S/W compass graphic): the BAD THINGS WHICH COULD HAPPEN,
         AND PROBABILITY OF EACH; and the CONSEQUENCE OF EACH BAD THING WHICH COULD HAPPEN]
Symmetric view of
                 risk probability & consequence

           risk EXPOSURE = probability x consequence

           PROBABILITY (likelihood) of bad thing occurring, against
           CONSEQUENCE (impact) if bad thing does occur:

                        3   6   9
                        2   4   6
                        1   2   3

           • This is how most people quantify risk
           • Adding gives same rank as multiplying, but
             less differentiation
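
A small sketch (Python, illustrative only) of the point in the last bullet: on the 3x3 grid above, multiplying and adding give compatible orderings, but multiplying spreads the exposure values more:

    # The 3x3 grid above: probability and consequence each scored 1-3.
    scores = [(p, c) for p in (1, 2, 3) for c in (1, 2, 3)]

    # Multiplying gives six distinct exposure values, adding only five,
    # so multiplication differentiates the cells more finely.
    print(sorted({p * c for p, c in scores}))  # [1, 2, 3, 4, 6, 9]
    print(sorted({p + c for p, c in scores}))  # [2, 3, 4, 5, 6]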
Risk management components (revisited)

        [Diagram as before: the BAD THINGS WHICH COULD HAPPEN, AND PROBABILITY OF EACH;
         the CONSEQUENCE OF EACH BAD THING WHICH COULD HAPPEN...
         any other dimensions?]
Do risks have any other dimensions?

          • In addition to probability and consequence...
          • Undetectability:
                  – difficulty of seeing a bad thing if it does happen
                  – eg insidious database corruption
          • Urgency:
                  – advisability of looking for / preventing some bad
                    things before other bad things
                  – eg lack of requirements stability
          • Any others?
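
One way (a sketch only; the field names are assumed, not from the slides) to carry these extra dimensions alongside probability and consequence:

    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        probability: int      # 1 (low) .. 3 (high)
        consequence: int      # 1 (low) .. 3 (high)
        undetectability: int  # how hard the bad thing is to see if it does happen
        urgency: int          # how soon it should be looked for / prevented

        def exposure(self) -> int:
            # exposure still uses only the first two dimensions
            return self.probability * self.consequence

    corruption = Risk("insidious database corruption", 2, 3, 3, 2)
    print(corruption.exposure())  # 6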
2. “Traditional” use of risk in testing

          • Few authors & trainers in software testing now
            miss an opportunity to link testing to “risk”
          • In recent years, almost a mandatory mantra
          • But it isn’t new (what will be new is translating
            the mantra into everyday doings!)
          • Let’s look at the major “traditional” authors:
                  –    Hetzel
                  –    Myers
                  –    Beizer
                  –    others
Who wrote this and when?
   • “Part of the art of testing is to know when to stop
     testing”:
           – some recent visionary / pragmatist?
           – Myers? His eponymous 1979 book was the first on
             testing?
           – No, No and No!
           – Fred Gruenberger in the original testing book, Program
             Test Methods 1972-3, Ed. William C. Hetzel
   • Also in this book (which is the proceedings of first-ever
     testing conference)...
Hetzel (Ed.) 1972-3
• Little / nothing explicitly on risk, but:
      – reliability as a factor in quality; inability to cope with complexity of systems
      – “the probability of being faulty is great”p255 (Jean-Claude Rault, CRL France)...
      – “how to run the test for a given probability of error... number of random input
        combinations before... considered ‘good’”p258; sampling as a principle of testing

• Interestingly:
      – “sampling as a principle should decrease in importance and be replaced by
        hierarchical organization & logical reduction”p28 (William C. Hetzel)

• Other curiosities:
      – ?source of Myers’ triangle exercisep13 (ref. Dr. Richard Hamming,
        “Computers and Society”)
      – the first “V-model”?p172 Outside-in design, inside-out testing
        (Allan L. Scherr, IBM Poughkeepsie NY / his colleagues)
        [Diagram: SYSTEM DESIGN → COMPON’T DESIGN → MODULE DESIGN →
         MODULE TEST → COMPONENT TEST → SYSTEM TEST → CUSTOMER USE]

Myers 1976: Software Reliability -
                  Principles & Practices
• Again, “risk” not explicit, but principles are there:
       – “reliability must be stated as a function of the severity of errors as well as their
         frequency”; “software reliability is the probability that the software will execute
         for a period of time without a failure, weighted by the cost to the user of each
         failure”; “probability that a user will not enter a particular set of inputs that
         leads to a failure”p7
       – “if there is reason to believe that this set of test cases had a high probability of
         uncovering all possible errors, then the tests have established some confidence
         in the program’s correctness”; “each test case used should provide a maximum
         yield on our investment... the probability that the test case will expose a
         previously undetected error”p170, 176
       – “if a reasonable estimate of [the number of remaining errors in a program] were
         available during the testing stages, it would help to determine when to stop
         testing”p329
       – hazard function as a component of reliability modelsp330
Myers 1979: The Art of Software Testing
• Risk is still not in the index, but more principles:
       – “the earlier that errors are found, the lower are the costs of correcting... and the
         higher is the probability of correcting the errors correctly”p18
       – “what subset of all possible test cases has the highest probability of detecting
         the most errors”p36
       – tries to base completion criteria for each phase of testing on an estimate of the
         number of errors originating in particular design processes, and during what
         testing phases these errors are likely to be detectedp124
       – testing adds value by increasing reliabilityp5
       – revisits / updates the reliability models outlined in his 1976 book:
               • those related to hardware reliability theory (reliability growth, Bayesian, Markov,
                 tailored per program)
               • error seeding, statistical sampling theory
               • simple intuitive (parallel independent testers, historic error data)
               • complexity-based (composite design, code properties)
Hetzel (1984)-8: The Complete Guide to
             Software Testing
• Risk appears only once in the index, but is prominent:
      – Testing principle #4p24: Testing Is Risk-Based
              • amount of testing depends on risk of failure, or of missing a defect; so...
              • use risk to decide number of cases, amount of emphasis, time & resources
• Other principles appear:
      – testing measures software quality; want maximum confidence per unit cost via
        maximum probability of finding defectsp255
      – objectives of Testing In The Large include:p123
              • are major failures unlikely?
              • what level of quality is good enough?
              • what amount of implementation risk is acceptable?
      – System Testing should end when we have enough confidence that
        Acceptance Testing is ready to startp134
Beizer 1984: Software System Testing &
          Quality Assurance
• Risk appears twice in index, but both insignificant
• However, some relevant principles are to be found:
       – smartness in software production is ability to avoid past, present & future
         bugsp2 (and bwgs?)
       – now more than a dozen models/variations in software reliability theory: but all
         far from reality; and all far from providing simple, pragmatic tools that can be
         used to measure software developmentp292-293
       – six specific criticisms: but if a theory were to overcome these then it would
         probably be too complicated to be practicalp293-294
       – a compromise may be possible in future, but instead for now, suggest go-live
         when the system is considered to be useful, or at least sufficiently useful to
         permit the risk of failurep295
       – plotting and extrapolation of S-curves to assess when this point attainedp295-304

Beizer (1983)-90:
                     Software Testing Techniques
• “Risk” word is indexed as though deliberate:
      – a couple of occurrences are insignificant, but others:
              • purpose of testing is not to prove anything but to reduce perceived risk [of
                software not working] to an acceptable value (penultimate phase of attitude)
              • testing not an act; is a mental discipline which results in low-risk software
                without much testing effort (ultimate phase of attitude) p4
               • accepting principles of statistical quality control (but perhaps not yet implementing,
                   because it is not yet obvious how to, and in the case of small products, it is dangerous) p6
              • add test cases for transactions with high risksp135
              • we risk release when confidence is high enoughp6
• Other occurrences of key principles, including:
      – probability of failure due to hibernating bwgs* low enough to acceptp26
      – importance of a bwg* depends on frequency, correction cost, [fix] installation cost
        & consequencesp27
            *bwg: ghost, spectre, bogey, hobgoblin, spirit of the night, any imaginary (?) thing that frightens a person (Welsh)
Others
• The “traditional” period could be said to cover the
  1970s and 1980s. A variety of views can be found:
       – Edward Miller 1978, in Software Testing & Validation
         Techniques (IEEE Tutorial):
               • “except under very special situations [...], it is important to recognise that program
                 testing, if performed systematically, can serve to guarantee the absence of bugs”p4
               • and/but(?) “a program is well tested when the program tester has an adequately high
                 level of confidence that there are no remaining “errors” that further testing would
                 uncover”p9 (italics by Neil Thompson!)

       – DeMillo, McCracken, Martin & Passafiume 1987:
         Software Testing & Evaluation
               • “a technologically sound approach to testing will incorporate... evaluations of
                 software status into overall assessments of risk associated with the development and
                 eventual fielding of the system”p vii
3. More recent contributions to
                        (risk use) thinking
• Traditional basis of testing on risk (although more
  perceptive than some give credit for) is less than
  satisfactory because:
        – it tends to be “lip-service”, with no follow-through / practical application
        – if there is follow-through, it involves merely using risk analysis as part of the
           Testing Strategy (then that is shelved, and it’s “heads down” from then on?)
• Contributions more recently from (for example):
        –     Ed Kit (Software Testing in the real world, 1995)
        –     Testing Maturity Model (Illinois Institute of Technology)
        –     Test Process Improvement® (Tim Koomen & Martin Pol)
        –     Testing Organisation MaturityTM questionnaire (Systeme Evolutif)
         –     Hans Schaefer’s work
        –     Zen and the art of Object-Oriented Risk Management (Neil Thompson)
Kit 1995:
            Software Testing in the real world
• Error-fault-failure chain extendedp18
• Clear statements on risk and risk managementp26:
       – test the parts of the system whose failures would have
         most serious consequencesp27
       – frequent-use areas increase chances of failure foundp27
       – focus on parts most likely to have errors in themp27
       – risk is not only basis for test management decisions, is
         basis for everyday test practitioner decisionsp28
• Risk management used in integration testp95
Testing Maturity Model
• Five levels of increasing maturity, based loosely on decades of testing evolution
  (eg in 1950s testing not even distinguished from debugging)
• Maturity goals and process areas for the five levels do not
  include risk explicitly, although emphasis moves from
  tactical to strategic (eg fault detection to prevention):
        –     in level 1, software released without adequate visibility of quality & risks
        –     in level 3, test strategy is determined using risk management techniques
        –     in level 4, software products are evaluated using quality criteria (relation to risk?)
        –     in level 5, costs & test effectiveness are continually improved (sampling quality)
• Strongly recommended are key practices & subpractices:
        – (not yet available?)
• Little explicitly visible on risk; very process-oriented
Test Process Improvement®
• Only one entry in index:
       – risks and recommendations, substantiated with metrics (as part of
         Reporting)
               • risks indicated with regard to (parts of) the tested object
               • risks can be (eg) delays, quality shortfalls

• But risks incorporated to some extent, eg:
       – Test Strategy (differentiation in test depth depending on risks)




Testing Organisation Maturity
                       (TOMTM) questionnaire


          • Risk assessment is not only at the beginning:
                  – when development slips, a risk assessment is
                    conducted and a decision to squeeze, maintain or
                    extend test time may be made
                  – [for] tests that are descoped, the associated risks
                    are identified & understood



Hans Schaefer’s work
• Squeeze on testing: prioritise based on risk
• Consider possibility of stepwise release:
        – test most important functions first
        – look for functions which can be delayed
• What is “important” in the potential release (key functions,
  worst problems?)
        – visibility (of function / characteristic)
        – frequency of use
        – possible cost of failure
• Where likely to be most problems?
        – project history (new technology, methods, tools; numerous people, dispersed)
        – product measures (areas complex, changed, needing optimising, faulty before)
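
A sketch of this prioritisation (Python; the scales, weights and function names are assumed for illustration, not from Schaefer’s work):

    functions = {
        # name: (visibility, frequency_of_use, cost_of_failure, problem_likelihood), each 1-3
        "payments":        (3, 3, 3, 2),
        "balance_reports": (2, 3, 2, 1),
        "archive_export":  (1, 1, 2, 3),
    }

    def priority(visibility, frequency, cost, likelihood):
        importance = visibility + frequency + cost   # "what is important"
        return importance * likelihood               # weighted by "where problems are likely"

    ranked = sorted(functions, key=lambda f: priority(*functions[f]), reverse=True)
    print(ranked)  # most important & problem-prone functions get tested first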
Zen and the Art of
           Object-Oriented Risk Management
• Testing continues to lag development; but pessimism / delay
  was, at the time of writing, unacceptable (deregulation, Euro, Y2K)
• Could use OO concepts to help testing:
       – encapsulate risk information with tests (for effectiveness); and/or
       – inherit tests to reuse (efficiency)
• Basic risk management:
       – relationship to V-model (outline level)
       – detail level: test specification



Risk management & the V-model

        [Diagram: right-hand leg of the V-model, from Unit testing up through Integration
         testing and System testing to Acceptance testing.
         Risks that system(s) have undetected defects in them are addressed by Unit,
         Integration and System testing; risks that system(s) and business(es) are not right
         and ready for each other are addressed by Acceptance testing.]
Where and how testing manages risks:
          first, at outline level

            Acceptance testing
              Risks: service requirements; undetected errors damage business
              How managed: specify user-wanted tests against URS; script tests around user
              guide and user & operator training materials
            System testing
              Risks: system specification; undetected errors waste user time & damage
              confidence in Acceptance testing
              How managed: use independent testers, functional & technical, to get fresh view;
              take last opportunity to do automated stress testing before environments re-used
            Integration testing
              Risks: interfaces don’t match; undetected errors too late to fix
              How managed: use skills of designers before they move away; take last
              opportunity to exercise interfaces singly
            Unit testing
              Risks: units don’t work right; undetected errors won’t be found by later tests
              How managed: use detailed knowledge of developers before they forget; take last
              opportunity to exercise every error message

Second, at detail level:
risk management during test specification

          Test specification based on total magnitude of risks for all defects imaginable
            = Σ ( estimated probability of defect occurring  x  estimated severity of defect )
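
A worked sketch of this sum (Python; the defects and their 1-3 scores are hypothetical):

    # Total risk magnitude over all defects imaginable:
    # sum of (estimated probability x estimated severity).
    imagined_defects = {
        "statement totals mis-reconciled": (3, 3),
        "date format rejected":            (2, 1),
        "timeout under peak load":         (1, 3),
    }

    total_risk = sum(p * s for p, s in imagined_defects.values())
    print(total_risk)  # 9 + 2 + 3 = 14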
          • To help decision-making during the “squeezing of testing”, it would be
            useful to have recorded explicitly as part of the specification of each
            test:
                  – the type of risk the set of tests is designed to minimise
                  – any specific risks at which a particular test or tests is aimed
          • And this was one of the inputs to...

4. Risk-Based (E-Business) Testing
• Advert!
       – Artech House, 2002
       – ISBN 1-58053-314-0
       – reviews amazon.com & co.uk
• companion website
  www.riskbasedtesting.com
       – sample chapters
       – Master Test Planning template
       – comments from readers
         (reviews, corrections)
                                                                With acknowledgements to lead author Paul Gerrard
Risk-Based E-Business Testing:
                        main themes
• Can define approximately 100 “product” risks threatening a typical e-
  business system and its implementation and maintenance
• Test objectives can be derived almost directly as “inverse” of risks
• Usable reliability models are some way off (perhaps even
  unattainable?) so better for now to work on basis of stakeholders’
  perceptions of risk
• Lists & explains techniques appropriate to each risk type
• Includes information on commercial and DIY tools
• Final chapters are on “making it happen”
• Go-live decision-making: when benefits “now” exceed risks “now” (see the sketch after this list)
• Written for e-business but principles are portable; extended to wider
  tutorial for EuroSTAR 2002; following slides summarise key points
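
A hedged sketch of that go-live rule (Python; the quantities and scales are assumed, not from the book):

    # Go live when the benefits of releasing "now" exceed the risks of
    # releasing "now" (both expressed on the same, assumed scale).
    def residual_risk(open_risks):
        # open_risks: (probability, consequence) pairs for risks still outstanding
        return sum(p * c for p, c in open_risks)

    def ready_to_go_live(benefits_now, open_risks):
        return benefits_now > residual_risk(open_risks)

    print(ready_to_go_live(benefits_now=20, open_risks=[(1, 3), (2, 4)]))  # 20 > 11 -> True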
Risks and test objectives -
                                  examples

           Risk: The web site fails to function correctly on the user’s client operating
             system and browser configuration.
           Test objective: To demonstrate that the application functions correctly on selected
             combinations of operating systems and browser version combinations.

           Risk: Bank statement details presented in the client browser do not match records
             in the back-end legacy banking systems.
           Test objective: To demonstrate that statement details presented in the client
             browser reconcile with back-end legacy systems.

           Risk: Vulnerabilities that hackers could exploit exist in the web site networking
             infrastructure.
           Test objective: To demonstrate through audit, scanning and ethical hacking that
             there are no security vulnerabilities in the web site networking infrastructure.

Generic test objectives
      Test Objective → Typical Test Stage
      Demonstrate component meets requirements → Component Testing
      Demonstrate component is ready for reuse in larger sub-system → Component Testing
      Demonstrate integrated components correctly assembled/combined and collaborate → Integration Testing
      Demonstrate system meets functional requirements → Functional System Testing
      Demonstrate system meets non-functional requirements → Non-Functional System Testing
      Demonstrate system meets industry regulation requirements → System or Acceptance Testing
      Demonstrate supplier meets contractual obligations → (Contract) Acceptance Testing
      Validate system meets business or user requirements → (User) Acceptance Testing
      Demonstrate system, processes and people meet business requirements → (User) Acceptance Testing
Master test planning




Test Plans for each testing stage
                          (example)
       eg for System Testing: a matrix of tests (columns 1, 2, 3 ... 11 ...) against:
         – risks, each with an exposure score and a matching test objective:
             Risk F1 (125) → Test objective F1;   Risk F2 (10) → Test objective F2;
             Risk F3 (60) → Test objective F3;    Risk F4 (32) → Test objective F4;
             Risk U1 (12) → Test objective U1;    Risk U2 (25) → Test objective U2;
             Risk U3 (30) → Test objective U3;    Risk S1 (100) → Test objective S1;
             Risk S2 (40) → Test objective S2
         – requirements mapped to generic test objectives:
             Requirements F1, F2, F3 → Generic test objective G4;
             Requirements N1, N2 → Generic test objective G5
         Each test is marked against the objectives it covers, and a Test importance is
         derived per test: H L M H H H M M M L L ...
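
A sketch of how the H/M/L test-importance row might be derived (the bucketing rule and the traceability data below are assumed, not from the book):

    # Exposure scores from the risk column, and which risks each test is traced to.
    risk_exposure = {"F1": 125, "F2": 10, "F3": 60, "S1": 100}
    tests_to_risks = {1: ["F1", "S1"], 2: ["F2"], 3: ["F3", "F2"]}  # hypothetical traceability

    def importance(total_exposure):
        return "H" if total_exposure >= 100 else "M" if total_exposure >= 50 else "L"

    for test, risks in sorted(tests_to_risks.items()):
        print(test, importance(sum(risk_exposure[r] for r in risks)))
    # 1 H, 2 L, 3 M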
Test Design:
       target execution schedule (example)
 [Diagram: target execution schedule. Columns are test execution days 1 to 15+; rows are
  environments, teams and testers. One partition is for functional tests (balance &
  transaction reporting; inter-account transfers; payments; direct debits; end-to-end
  customer scenarios), another for disruptive non-functional tests. Numbered tests are
  scheduled across the days, with retests & regression tests at the end, between an
  earliest completion date and a comfortable completion date.]
Plan to manage risk (and scope) during
      test specification & execution
         [Diagram: project trade-off triangles with corners Quality, Scope, Time and Cost;
          with Quality re-expressed as Risk, risk and scope are the best pair to fine-tune.]
Risk as the “inverse” of quality
        [Diagram: a Quality-Scope-Time triangle, and the same triangle redrawn with Risk in
         place of Quality: risk as the “inverse” of quality.]
Managing risk during
                                   test specification
        [Diagram: Scope-Time-Risk triangle. Initial scope is set by requirements; the target
         go-live date is set in advance; risk increases as faults are introduced, then
         decreases as faults are found.]
Verification & validation as risk
                  management methods




Risk management during
                test specification: “micro-risks”

  Test specification: for all defects imaginable...
    estimated probability  x  estimated consequence  =  RISKS TO TEST AGAINST
Risk management
                     clarifies during test execution

  Test specification: for all defects imaginable...
    estimated probability  x  estimated consequence  =  RISKS TO TEST AGAINST

  Test execution: for each defect detected...
    probability = 1  x  consequence = f (urgency, importance)
But clarity during test execution is
             only close-range: fog ahead!

  Test specification: for all defects imaginable...
    estimated probability  x  estimated consequence  =  RISKS TO TEST AGAINST

  Test execution:
    for each failure detected...               probability = 1        x  consequence = f (urgency, importance)
    for all anomalies as yet undiscovered...   estimated probability  x  estimated consequence
    (together these make up the REMAINING RISKS)



                                                                                             © Thompson
Neil Thompson: Risk and Testing   27/28 Mar    Slide 32cx of 51                                information
                                  2003                                                         Systems
                                                                                               Consulting Limited
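Continuing the same illustrative sketch: once execution starts, each detected failure has probability 1 and a consequence derived from urgency and importance, while anomalies still in the fog keep estimated scores. The slides only say consequence = f(urgency, importance); the particular f, the identifiers and the figures below are assumptions:

def consequence(urgency, importance):
    # Assumed combination; the slides do not define f(urgency, importance).
    return urgency * importance

detected_failures = [
    {"id": "F-001", "urgency": 3, "importance": 4},   # probability = 1 once detected
    {"id": "F-002", "urgency": 1, "importance": 2},
]
undiscovered_estimates = [
    {"area": "month-end batch", "probability": 0.4, "consequence": 5},
    {"area": "audit reports",   "probability": 0.2, "consequence": 3},
]

known_risk = sum(1.0 * consequence(f["urgency"], f["importance"])
                 for f in detected_failures)
foggy_risk = sum(a["probability"] * a["consequence"]
                 for a in undiscovered_estimates)

print(f"risk from known, unfixed failures: {known_risk:.1f}")
print(f"estimated risk still in the fog:   {foggy_risk:.1f}")
print(f"remaining risk to manage down:     {known_risk + foggy_risk:.1f}")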
Managing risk during test execution:
        when has risk got low enough?
[Diagram: Scope against Time (initial scope set by requirements, target go-live date set in advance); below, Risk against Time: increasing risk as faults are introduced, then decreasing risk as faults are found]
                                                                                           © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 32.x1 of 51                              information
                                  2003                                                       Systems
                                                                                             Consulting Limited
Pragmatic approximation to risk
    reduction: progress through “test+fix”
• Cumulative S-curves are good because they:
        – show several things at once
        – facilitate extrapolation of trends
        – are based on acknowledged theory and empirical data...

[Chart: cumulative # anomalies against date, split into Awaiting fix, Resolved, Deferred, Closed]
[Chart: cumulative # tests against date: Target tests run vs Actual tests run, split into Pass and Fail]
                                                                                             © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar      Slide 32.x2 of 51                              information
                                   2003                                                          Systems
                                                                                                 Consulting Limited
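A small matplotlib sketch of the two cumulative plots described above; the daily figures (and the straight-line target) are invented purely to show the S-shape:

import numpy as np
import matplotlib.pyplot as plt

days = np.arange(1, 21)
tests_run_per_day = np.array([2, 3, 4, 6, 9, 12, 14, 15, 15, 14,
                              13, 12, 10, 8, 6, 5, 4, 3, 2, 2])
anomalies_found_per_day = np.array([1, 2, 3, 4, 6, 7, 7, 6, 5, 4,
                                    3, 3, 2, 2, 1, 1, 1, 0, 0, 0])

# Accumulating the daily counts gives the characteristic S-curves,
# which can then be extrapolated against a target line.
plt.plot(days, np.cumsum(tests_run_per_day), label="Actual tests run (cumulative)")
plt.plot(days, np.cumsum(anomalies_found_per_day), label="Anomalies found (cumulative)")
plt.plot(days, np.linspace(0, tests_run_per_day.sum(), len(days)),
         "--", label="Target tests run (assumed straight-line plan)")
plt.xlabel("date (day of test execution)")
plt.legend()
plt.show()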
Cumulative S-curves: theoretical basis
[Chart: potential # failures per day (depending on operational profile) against date, software vs hardware, illustrating a software reliability growth model. Curves based on Myers 1976: Software Reliability: Principles & Practices, p10]
[Chart: cumulative # anomalies found and tests run against date, up to go-live date:
  • early tests blocked by hi-impact anomalies (much more than 1 anomaly per test, but only one visible at a time!)
  • middle tests fast and productive (more than 1 anomaly per test)
  • late tests slower because awaiting difficult and/or low-priority fixes (much less than 1 anomaly per test)]
                                                                                                                 © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar       Slide 32.x3 of 51                                               information
                                   2003                                                                            Systems
                                                                                                                   Consulting Limited
A possible variation on the
           software reliability growth model

[Chart: potential # failures per day against date, to go-live date; possibility of knock-on errors included, after Littlewood & Verrall 1973: A Bayesian Reliability Growth Model for Computer Software (IEEE Symposium); separate curves for good maintenance and bad maintenance]
[Flowcharts:
  Good testing & maintenance, convergence on stability: a cycle of TEST & RETEST, failures, CHECK AGAINST EXPECTED RESULTS, DIAGNOSE, FIX
  Bad testing & maintenance, divergence into instability: a cycle of TEST & RETEST, failures, STROKE CHIN, HAVE A THINK AND TALK TO FOLKS, HACK]

                                                                                                                   © Thompson
 Neil Thompson: Risk and Testing       27/28 Mar              Slide 32.x4 of 51                                      information
                                       2003                                                                          Systems
                                                                                                                     Consulting Limited
Cumulative S-curves: more theory
Actually there are several reliability growth models, but:
• the Rayleigh model is part of hardware reliability methodology and has been used successfully in software reliability during development and testing
• its curve produces the S-curve when accumulated

[Chart: cumulative # anomalies found and tests run against date, to go-live date]
[Chart: # per day against date, to go-live date: pattern of fault discovery (anomalies found, tests run), Dunn & Ullman 1982 p61]
                                                                                                       © Thompson
Neil Thompson: Risk and Testing   27/28 Mar    Slide 32.x5 of 51                                          information
                                  2003                                                                    Systems
                                                                                                          Consulting Limited
Reliability theory more generally
[Chart: # failures per “t” against execution time Σt: software shows exponential decay, hardware a Poisson distribution (Myers 1976)]

Both of our software curves are members of the Weibull distribution:
• it has been used for decades in hardware: Kan 1995 p179
• two single-parameter cases were applied to software by Wagoner in the 1970s: Dunn & Ullman 1982 p318
• the shape parameter “m” can take various values (only 1 & 2 shown here): Kan 1995 p180
        – Exponential (m=1)
        – Rayleigh (m=2)
• NB: models tend to use execution time rather than elapsed time (because this removes test distortion, and uses the operational profile, ie how often the software is used live)

[Chart: # failures per “t” against execution time Σt, Weibull distribution with m=1 (Exponential) and m=2 (Rayleigh)]
                                                                                                    © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar      Slide 32.x6 of 51                                   information
                                   2003                                                               Systems
                                                                                                      Consulting Limited
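A sketch of the Weibull family referred to above: the failure rate per unit of execution time for shape parameter m, with m=1 giving exponential decay and m=2 the Rayleigh curve. The scale c and total-failures K are assumed values for illustration only:

import numpy as np

def weibull_failure_rate(t, m, c=10.0, K=100.0):
    # Expected failures per unit execution time under a Weibull model:
    # K = total expected failures, c = scale, m = shape (1 = exponential, 2 = Rayleigh).
    return K * (m / c) * (t / c) ** (m - 1) * np.exp(-((t / c) ** m))

t = np.linspace(0.1, 40, 200)                  # cumulative execution time
exponential = weibull_failure_rate(t, m=1)     # monotonic decay from the start
rayleigh    = weibull_failure_rate(t, m=2)     # rises, peaks, then decays

# Accumulating the Rayleigh rate (a simple Riemann sum here) reproduces
# the S-shaped cumulative curve seen in the earlier slides.
s_curve = np.cumsum(rayleigh) * (t[1] - t[0])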
Reliability models taxonomy
         (Beizer, Dunn & Ullman, + Musa)
            future addition to slide pack




                                                                  © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 32.x7 of 51     information
                                  2003                              Systems
                                                                    Consulting Limited
Reliability: problems with theories
                        (Beizer, 1984)
• Main problem is getting theories to match reality
• Several acknowledged shortcomings of many theories, eg:
        –    don't evaluate consequence (severity) of anomalies
        –    assume testing is like live (eg relatively few special cases)
        –    don't correct properly for stress-test effects, or code enhancements
        –    don't consider interactions between faults
        –    don't allow for debugging getting harder over time
• The science is moving on (eg Wiley's journal Software Testing, Verification & Reliability), but:
        – a reliability theory that satisfied all the above would be complex
        – would project managers use it, or would they go live anyway?
• So until these are resolved, let's turn to empirical data...
                                                                                  © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar   Slide 32.x8 of 51                    information
                                   2003                                             Systems
                                                                                    Consulting Limited
Cumulative S-curves: empirical data
Observed empirically that faults plot takes characteristic shape Hetzel 1988 p210

[Chart: cumulative # anomalies found and tests run against date, to go-live date]
  Possible to use to roughly gauge test time or faults remaining: Hetzel 1988 p210
 S-curve also visible in Kit 1995: Software Testing in the Real World p135

  The Japanese “Project Bankruptcy” study: Abe, Sakamura & Aiso 1979, in Beizer 1984
  • analysed 23 projects, including application software & system software developments
  • included new code, modifications to existing code, and combinations
  • remarkable similarity across all projects for shape of test completion curve
  • anomaly detection rates not significant (eg low could mean good software or bad testing)
  • significant were (a) length of initial slow progress, and (b) shape of anomaly detection curve...

                                                                                                             © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar   Slide 32.x9 of 51                                                information
                                   2003                                                                         Systems
                                                                                                                Consulting Limited
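A hedged sketch of the kind of rough gauge Hetzel describes: fit a Rayleigh-shaped cumulative curve to the anomaly counts observed so far, then read off the estimated total and hence the faults remaining. The observed data are invented; scipy's curve_fit is just one possible fitting tool:

import numpy as np
from scipy.optimize import curve_fit

def rayleigh_cumulative(t, K, c):
    # K = estimated total anomalies, c = scale of the test period.
    return K * (1.0 - np.exp(-((t / c) ** 2)))

days = np.arange(1, 13)
cumulative_found = np.array([1, 3, 7, 13, 21, 30, 38, 45, 50, 54, 57, 59])

(K_hat, c_hat), _ = curve_fit(rayleigh_cumulative, days, cumulative_found,
                              p0=(80.0, 8.0))
print(f"estimated total anomalies ~ {K_hat:.0f}, "
      f"so roughly {K_hat - cumulative_found[-1]:.0f} still to find")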
Project bankruptcy study,
                                                summarised by Beizer

[Chart: cumulative # anomalies found against tests run, from start of integration stage to planned go-live date, with Phases I, II and III marked; derivative (anomaly detection rate) looks like a Rayleigh curve, with separate success, failure-inevitable and “bankrupt” shapes]

All the projects had 3 phases: Beizer 1984: Software System Testing & Quality Assurance, p297-300
(a) Duration of phases was the primary indicator of project success or “bankruptcy”:
        Ph I..15%................55%................97%
        Ph I+II...............................72%.......................................126%
                       success                     failure inevitable                     “bankrupt”
(b) A secondary indicator was the anomaly detection rate (the derivative of the cumulative curve)
                                                                                                                                                   © Thompson
Neil Thompson: Risk and Testing                           27/28 Mar           Slide 32.x10 of 51                                                     information
                                                          2003                                                                                       Systems
                                                                                                                                                     Consulting Limited
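For that secondary indicator, the detection rate is simply the first difference of the cumulative anomaly count; a short sketch with invented data:

import numpy as np

cumulative_anomalies = np.array([1, 3, 7, 13, 21, 30, 38, 45, 50, 54, 57, 59])

# Daily detection rate = derivative of the cumulative curve. For a healthy project
# it should rise, peak and tail off (roughly Rayleigh-shaped), rather than still be
# climbing at the planned go-live date.
detection_rate = np.diff(cumulative_anomalies, prepend=0)
print(detection_rate)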
S-curves & reliability:
                                  summary of references
  • Many of the original references are very old:
          – illustrates the points that (a) the earliest testers seemed to be
            “advanced” even at the outset, and (b) software reliability
            seems not to have penetrated mainstream testing 30 years on!
          – but: means these books & papers hard to obtain, so...
  • Recommended “recent” references:
          – Boris Beizer 1984 book: Software System Testing & Quality
            Assurance (ISBN 0-442-21306-9)
          – Stephen H. Kan 1995 book: Metrics & Models in Software
            Quality Engineering (ISBN 0-201-63339-6)
          – John Musa 1999 book: Software Reliability Engineering
            (ISBN 0-07-913271-5)
                                                                    © Thompson
Neil Thompson: Risk and Testing    27/28 Mar   Slide 32.x11 of 51     information
                                   2003                               Systems
                                                                      Consulting Limited
Progress through tests
   • We are interested in two main aspects:
            – can we manage the test execution so that it completes before the
              target date?
            – if not, can we do it at least for those tests of high (and medium?)
              importance? (a rough sketch follows below)

[Chart: # tests against date: Target tests run vs Actual tests run, split into Pass and Fail]
[Chart: # tests against date: Target tests passed vs Actual tests passed, split by High / Medium / Low importance]

                                                                                                       © Thompson
Neil Thompson: Risk and Testing    27/28 Mar      Slide 33 of 51                                         information
                                   2003                                                                  Systems
                                                                                                         Consulting Limited
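A back-of-envelope sketch of that second question: given recent throughput, will the high- (and medium-) importance tests get through before the target date? All figures here are illustrative:

remaining_tests = {"High": 120, "Medium": 200, "Low": 150}
passes_per_day = 30        # recent throughput, counting passes only
days_to_target = 12

days_needed_all = sum(remaining_tests.values()) / passes_per_day
days_needed_high_med = (remaining_tests["High"] + remaining_tests["Medium"]) / passes_per_day

print(f"all tests:        {days_needed_all:.1f} days needed vs {days_to_target} available")
print(f"high+medium only: {days_needed_high_med:.1f} days needed vs {days_to_target} available")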
Progress through
                      anomaly fixing and retesting
             • Similarly, two main aspects:
                     – can we manage the workflow to get anomalies fixed and
                       retested before target date?
                     – if not, can we do it for those of material impact?
[Chart: cumulative # anomalies against date, split into Awaiting fix, Resolved, Deferred, Closed]
[Chart: # anomalies of outstanding material impact against date]

                                                                                       © Thompson
Neil Thompson: Risk and Testing   27/28 Mar       Slide 34x of 51                        information
                                  2003                                                   Systems
                                                                                         Consulting Limited
Quantitative and qualitative risk
               reduction from tests and retests
[Diagram: progress & residual risk up the right side of the W-model, across System Testing, Large-Scale Integration Testing and Acceptance Testing:
  • progress through tests: pass/fail curves split by High / Medium / Low importance, with material impact highlighted
  • progress through incident fixing & retesting: Awaiting fix, Resolved, Deferred, Closed, with material impact highlighted]
                                                                                                      © Thompson
Neil Thompson: Risk and Testing   27/28 Mar         Slide 35 of 51                                      information
                                  2003                                                                  Systems
                                                                                                        Consulting Limited
…and also into regression testing


[Diagram: flow from Testing Strategy → Testing Plan → Test Design → Test Scripts → Execution (first run) → Retest & 2nd run → Regression Testing.
 Specification of tests is shaped by desired coverage, time & resource constraints and physical constraints (environments etc.); execution of tests is squeezed by difficulties on scripting & execution time; what's left determines the “complete” first run, the second run, and the allowance for further runs of retesting & regression testing]
                                                                                                          © Thompson
Neil Thompson: Risk and Testing      27/28 Mar      Slide 36 of 51                                            information
                                     2003                                                                     Systems
                                                                                                              Consulting Limited
Managing risk during test execution,
          against “fixed” scope & time
[Diagram: Scope against Time (initial scope set by requirements, target go-live date set in advance); below, Risk against Time: increasing risk as faults are introduced, then decreasing risk as faults are found]
                                                                                         © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 37x of 51                              information
                                  2003                                                     Systems
                                                                                           Consulting Limited
From what we can report, to
                    what we would like to report




                                                                  © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 37.1x of 51     information
                                  2003                              Systems
                                                                    Consulting Limited
Risk-based reporting
[Chart: risks against progress through the test plan, from start to planned end; all risks 'open' at the start]
                                                                © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 38a of 51     information
                                  2003                            Systems
                                                                  Consulting Limited
Risk-based reporting
[Chart: risks against progress through the test plan, from start to planned end; at “today”, the risks still open are the residual risks of releasing TODAY]
                                                                        © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 38b of 51             information
                                  2003                                    Systems
                                                                          Consulting Limited
Report not only risk, but also
                          scope, over time
[Diagram: Scope against Time (initial scope set by requirements, target go-live date set in advance), annotated “how much scope safely delivered so far?”; below, Risk against Time: increasing risk as faults are introduced, then decreasing risk as faults are found]
                                                                                           © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 38.1x of 51                              information
                                  2003                                                       Systems
                                                                                             Consulting Limited
Benefit & objectives based test reporting




[Chart: project objectives, hence benefits, available for release; benefits roll up to objectives, some blocked (Blocked func 1, Blocked func 2, Blocked NFR 1); underlying risks shown as Open or Closed]
                                                                                                                                                                              © Thompson
Neil Thompson: Risk and Testing   27/28 Mar                           Slide 39ax of 51                                                                                          information
                                  2003                                                                                                                                          Systems
                                                                                                                                                                                Consulting Limited
Benefit & objectives based test reporting




[Chart: project objectives, hence benefits, available for release; benefits roll up to objectives (no blocked items in this view); underlying risks shown as Open or Closed]
                                                                                                                                                            © Thompson
Neil Thompson: Risk and Testing   27/28 Mar             Slide 39bx of 51                                                                                      information
                                  2003                                                                                                                        Systems
                                                                                                                                                              Consulting Limited
Slippages and trade-offs: an example

        • If Test Review Boards recommend delay, management
          may demand a trade-off, “slip in a little of that
          descoped functionality”

[Chart: scope against date, showing the original target and a first slip]

                                                                                   © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 40a of 51                        information
                                  2003                                               Systems
                                                                                     Consulting Limited
Slippages and trade-offs: an example

        • If Test Review Boards recommend delay, management
          may demand a trade-off, “slip in a little of that
          descoped functionality”
        • This adds benefits but also new risks: more delay?
[Chart: scope against date, showing the original target, a first slip, and the actual go-live]

                                                                                   © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 40b of 51                        information
                                  2003                                               Systems
                                                                                     Consulting Limited
Tolerable risk-benefit balance:
                          another example
         • Even if we resist temptation to trade off slippage
           against scope, may still need to renegotiate the
           tolerable level of risk balanced against benefits
[Chart: (risk - benefits) against date, showing the original target date and the original target net risk level]
                                                                                © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar   Slide 41a of 51                    information
                                   2003                                           Systems
                                                                                  Consulting Limited
Tolerable risk-benefit balance:
                          another example
         • Even if we resist temptation to trade off slippage
           against scope, may still need to renegotiate the
           tolerable level of risk balanced against benefits
[Chart: (risk - benefits) against date, showing the original target date, the actual go-live, the original target net risk level, and a “go for it” margin]
                                                                                    © Thompson
 Neil Thompson: Risk and Testing   27/28 Mar   Slide 41b of 51                        information
                                   2003                                               Systems
                                                                                      Consulting Limited
5. Next steps in Risk-Based Testing
        • End-to-end risk data model:
                – is wanted to keep risk information linked to tests
                – is under development by Paul Gerrard
                – the main reason is of course...
        • Automation:
                – clerical methods and spreadsheets are really not enough
                – we want to keep the risk-to-test link up to date through
                  descopes, reprioritisations etc (a minimal sketch follows below)
                – Paul is working on an Access database for now
                – any vendors with risk in their data models?
                                                               With acknowledgements         © Thompson
Neil Thompson: Risk and Testing   27/28 Mar   Slide 42 of 51                                   information
                                  2003
                                                               to lead author Paul Gerrard     Systems
                                                                                               Consulting Limited
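Not Paul Gerrard's data model (which, as above, was still under development), but a minimal Python sketch of the kind of risk-to-test linkage such a model and its automation would need to maintain, so that residual open risk can be reported as tests pass; the classes, fields and figures are assumptions for illustration:

from dataclasses import dataclass, field

@dataclass
class Test:
    test_id: str
    passed: bool = False

@dataclass
class Risk:
    risk_id: str
    description: str
    exposure: float                      # probability x consequence
    tests: list = field(default_factory=list)

    def is_open(self) -> bool:
        # A risk stays open until every test linked to it has passed.
        return not (self.tests and all(t.passed for t in self.tests))

risks = [
    Risk("R1", "interest miscalculated", 4.5, [Test("T-101", True), Test("T-102")]),
    Risk("R2", "statement layout wrong", 1.0, [Test("T-201", True)]),
]

residual = sum(r.exposure for r in risks if r.is_open())
print(f"residual open risk if we released today: {residual}")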
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
Product School
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
g2nightmarescribd
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 

Risk and Testing (2003)

  • 1. Risk and Testing extended after session Presentation to Testing Master Class 27/28 March 2003 Neil Thompson www.TiSCL.com 23 Oast House Crescent NeilT@TiSCL.com Thompson Farnham, Surrey Direct phone 07000 NeilTh information GU9 0NP (634584) Systems England (UK) Direct fax 07000 NeilTF Consulting Limited Phone & fax 01252 726900 (634583) Some slides included with permission from and Paul Gerrard © Thompson information Systems Consulting Limited
  • 2. Agenda • 1. What testers mean by risk • 2. “Traditional” use of risk in testing • 3. More recent contributions to thinking • 4. Risk-Based Testing: Paul Gerrard (and I) • 5. Next steps in RBT: – end-to-end risk data model; – automation • 6. Refinements and ideas for future © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 1 of 51 information 2003 Systems Consulting Limited
  • 3. 1. What testers mean by risk • Risk that software in live use will fail: – software: could be Commercial Off The Shelf; packages such as ERP; bespoke project; integrated programmes of multiple systems...; industry-wide supply chains; any systems product • Could include risk that later stages (higher levels) of testing will be excessively disrupted by failures • Chain of risks: Error: Fault: Failure: RISK mistake made by human RISK something wrong in a product RISK deviation of product from its (eg spec-writing, (interim eg spec, expected* delivery or service program-coding) final eg executable software) (doesn’t do what it should, or does what it shouldn’t) not all errors * “expected” may be as in spec, not all faults result in faults or spec may be wrong result in failures (verification & validation) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 2 of 51 information 2003 Systems Consulting Limited
  • 4. Extra links (4, 5) in the chain of risks?
     • Steve Allott: distinguish faults in specifications from faults in the resulting program code, giving a four-link chain:
       – Link 1: mistake made by a human (RISK)
       – Link 2: something wrong in a spec (RISK)
       – Link 3: something wrong in program code (RISK)
       – Link 4: deviation of product from its expected delivery or service (doesn't do what it should, or does what it shouldn't) (RISK)
     • Ed Kit (1995 book, Software Testing in the Real World): "error" reserved for its scientific use (but not always very useful in software testing?):
       – Mistake: a human action that produces an incorrect result (eg in spec-writing, program-coding)
       – (undefined): incorrect results in specifications – could use Defect for this?
       – Fault: an incorrect step, process or data definition in a computer program (ie executable software)
       – Failure: an incorrect result
       – Error: amount by which the result is incorrect
  • 5. Chain of risks could be up to 6 links?
     • Adding in a distinction by John Musa (1998 book, Software Reliability Engineering): not all deviations are failures (but this is just the "anomaly" concept?) – so the associated risks are in the testing process rather than development: that an anomaly may not be noticed, or may be misinterpreted
     • A possible hybrid of all sources:
       – Mistake: a human action that produces an incorrect result (eg in spec-writing, program-coding) (RISK)
       – Defect: incorrect results in specifications (RISK)
       – Fault: an incorrect step, process or data definition in a computer program (ie executable software) (RISK; can also arise from a direct programming mistake)
       – Failure: an incorrect result; Error: amount by which the result is incorrect (RISK)
       – Anomaly: an unexpected result during testing – may be a false alarm, a Change Request or a testware mistake (risk of missing it, and risk of misinterpreting it); note this fits its usage in inspections
  • 6. Three types of software risk
     • Project Risk: resource constraints, external interfaces, supplier relationships, contract restrictions – primarily a management responsibility
     • Process Risk: variances in planning and estimation, shortfalls in staffing, failure to track progress, lack of quality assurance and configuration management – planning and the development process are the main issues here
     • Product Risk: lack of requirements stability, complexity, design quality, coding quality, non-functional issues, test specifications – testers are mainly concerned with Product Risk; requirements risks are the most significant risks reported in risk assessments
  • 7. Risk management components: the bad things which could happen, and the probability of each; and the consequence of each bad thing which could happen [diagram]
  • 8. Symmetric view of risk probability & consequence: risk EXPOSURE = probability (likelihood) of the bad thing occurring x consequence (impact) if the bad thing does occur. With each scored 1–3, the exposure matrix is:
       – probability 3: exposures 3, 6, 9
       – probability 2: exposures 2, 4, 6
       – probability 1: exposures 1, 2, 3
     • This is how most people quantify risk
     • Adding gives the same rank order as multiplying, but less differentiation
  • 9. Risk management components (revisited): the bad things which could happen and the probability of each; the consequence of each – any other dimensions? [diagram]
  • 10. Do risks have any other dimensions? • In addition to probability and consequence... • Undetectability: – difficulty of seeing a bad thing if it does happen – eg insidious database corruption • Urgency: – advisability of looking for / preventing some bad things before other bad things – eg lack of requirements stability • Any others? © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 6 of 51 information 2003 Systems Consulting Limited
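As a concrete illustration of the exposure arithmetic above, and of how extra dimensions such as undetectability and urgency might be folded in, here is a minimal Python sketch. The risk names, scores and the weighting scheme are invented for illustration; they are not from the presentation.

```python
# Minimal sketch of risk exposure scoring (illustrative scores only).
# Exposure = probability x consequence, each on a 1-3 scale, as on the slide;
# undetectability and urgency are shown as optional extra multipliers
# (an assumption of this sketch, not a standard formula).

RISKS = [
    # name,                           probability, consequence, undetectability, urgency
    ("Insidious database corruption",           1,           3,               3,       1),
    ("Unstable requirements",                   3,           2,               1,       3),
    ("Browser incompatibility",                 2,           2,               1,       1),
]

def exposure(prob, cons):
    """Basic exposure as on the slide: probability x consequence."""
    return prob * cons

def weighted_exposure(prob, cons, undetect, urgency):
    """One possible way to fold in the extra dimensions (illustrative only)."""
    return prob * cons * undetect * urgency

# Rank risks by basic exposure, showing the weighted figure alongside.
for name, p, c, u, g in sorted(RISKS, key=lambda r: -exposure(r[1], r[2])):
    print(f"{name:32} exposure={exposure(p, c):2}  weighted={weighted_exposure(p, c, u, g):3}")
```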
  • 11. 2. “Traditional” use of risk in testing
     • Few authors & trainers in software testing now miss an opportunity to link testing to “risk”
     • In recent years, almost a mandatory mantra
     • But it isn't new (what will be new is translating the mantra into everyday doings!)
     • Let's look at the major “traditional” authors: Hetzel, Myers, Beizer, others
  • 12. Who wrote this and when? • “Part of the art of testing is to know when to stop testing”: – some recent visionary / pragmatist? – Myers? His eponymous 1979 book was the first on testing? – No, No and No! – Fred Gruenberger in the original testing book, Program Test Methods 1972-3, Ed. William C. Hetzel • Also in this book (which is the proceedings of first-ever testing conference)... © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 8 of 51 information 2003 Systems Consulting Limited
  • 13. Hetzel (Ed.) 1972-3
     • Little / nothing explicitly on risk, but:
       – reliability as a factor in quality; inability to cope with complexity of systems
       – “the probability of being faulty is great” p255 (Jean-Claude Rault, CRL France)...
       – “how to run the test for a given probability of error... number of random input combinations before... considered 'good'” p258; sampling as a principle of testing
     • Interestingly:
       – “sampling as a principle should decrease in importance and be replaced by hierarchical organization & logical reduction” p28 (William C. Hetzel)
     • Other curiosities:
       – ?source of Myers' triangle exercise p13 (ref. Dr. Richard Hamming, “Computers and Society”)
       – the first “V-model”? p172 – outside-in design, inside-out testing: system design, component design, module design down one side; module test, component test, system test, customer use up the other (Allan L. Scherr, IBM Poughkeepsie NY / his colleagues) [diagram]
  • 14. Myers 1976: Software Reliability – Principles & Practices
     • Again, “risk” not explicit, but principles are there:
       – “reliability must be stated as a function of the severity of errors as well as their frequency”; “software reliability is the probability that the software will execute for a period of time without a failure, weighted by the cost to the user of each failure”; “probability that a user will not enter a particular set of inputs that leads to a failure” p7
       – “if there is reason to believe that this set of test cases had a high probability of uncovering all possible errors, then the tests have established some confidence in the program's correctness”; “each test case used should provide a maximum yield on our investment... the probability that the test case will expose a previously undetected error” p170, 176
       – “if a reasonable estimate of [the number of remaining errors in a program] were available during the testing stages, it would help to determine when to stop testing” p329
       – hazard function as a component of reliability models p330
  • 15. Myers 1979: The Art of Software Testing • Risk is still not in the index, but more principles: – “the earlier that errors are found, the lower are the costs of correcting... and the higher is the probability of correcting the errors correctly”p18 – “what subset of all possible test cases has the highest probability of detecting the most errors”p36 – tries to base completion criteria for each phase of testing on an estimate of the number of errors originating in particular design processes, and during what testing phases these errors are likely to be detectedp124 – testing adds value by increasing reliabilityp5 – revisits / updates the reliability models outlined in his 1976 book: • those related to hardware reliability theory (reliability growth, Bayesian, Markov, tailored per program) • error seeding, statistical sampling theory • simple intuitive (parallel independent testers, historic error data) • complexity-based (composite design, code properties) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 11 of 51 information 2003 Systems Consulting Limited
  • 16. Hetzel (1984)-8: The Complete Guide to Software Testing • Risk appears only once in the index, but is prominent: – Testing principle #4p24: Testing Is Risk-Based • amount of testing depends on risk of failure, or of missing a defect; so... • use risk to decide number of cases, amount of emphasis, time & resources • Other principles appear: – testing measures software quality; want maximum confidence per unit cost via maximum probability of finding defectsp255 – objectives of Testing In The Large include:p123 • are major failures unlikely? • what level of quality is good enough? • what amount of implementation risk is acceptable? – System Testing should end when we have enough confidence that Acceptance Testing is ready to startp134 © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 12 of 51 information 2003 Systems Consulting Limited
  • 17. Beizer 1984: Software System Testing & Quality Assurance • Risk appears twice in index, but both insignificant • However, some relevant principles are to be found: – smartness in software production is ability to avoid past, present & future bugsp2 (and bwgs?) – now more than a dozen models/variations in software reliability theory: but all far from reality; and all far from providing simple, pragmatic tools that can be used to measure software developmentp292-293 – six specific criticisms: but if a theory were to overcome these then it would probably be too complicated to be practicalp293-294 – a compromise may be possible in future, but instead for now, suggest go-live when the system is considered to be useful, or at least sufficiently useful to permit the risk of failurep295 – plotting and extrapolation of S-curves to assess when this point attainedp295-304 © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 13 of 51 information 2003 Systems Consulting Limited
  • 18. Beizer (1983)-90: Software Testing Techniques • “Risk” word is indexed as though deliberate: – a couple of occurrences are insignificant, but others: • purpose of testing is not to prove anything but to reduce perceived risk [of software not working] to an acceptable value (penultimate phase of attitude) • testing not an act; is a mental discipline which results in low-risk software without much testing effort (ultimate phase of attitude) p4 • accepting principles of statistical quality control (but perhaps not yet implementing, because is not yet obvious how to, and in the case of small products, is dangerous) p6 • add test cases for transactions with high risksp135 • we risk release when confidence is high enoughp6 • Other occurrences of key principles, including: – probability of failure due to hibernating bwgs* low enough to acceptp26 – importance of a bwg* depends on frequency, correction cost, [fix] installation cost & consequencesp27 *bwg: ghost, spectre, bogey, hobgoblin, spirit of the night,any imaginary (?) thing that frightens a person (Welsh) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 14 of 51 information 2003 Systems Consulting Limited
  • 19. Others • The “traditional” period could be said to cover the 1970s and 1980s. A variety of views can be found: – Edward Miller 1978, in Software Testing & Validation Techniques (IEEE Tutorial): • “except under very special situations [...], it is important to recognise that program testing, if performed systematically, can serve to guarantee the absence of bugs”p4 • and/but(?) “a program is well tested when the program tester has an adequately high level of confidence that there are no remaining “errors” that further testing would uncover”p9 (italics by Neil Thompson!) – DeMillo, McCracken, Martin & Passafiume 1987: Software Testing & Evaluation • “a technologically sound approach to testing will incorporate... evaluations of software status into overall assessments of risk associated with the development and eventual fielding of the system”p vii © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 15 of 51 information 2003 Systems Consulting Limited
  • 20. 3. More recent contributions to (risk use) thinking • Traditional basis of testing on risk (although more perceptive than some give credit for) is less than satisfactory because: – it tends to be “lip-service”, with no follow-through / practical application – if there is follow-through, it involves merely using risk analysis as part of the Testing Strategy (then that is shelved, and it‟s “heads down” from then on?) • Contributions more recently from (for example): – Ed Kit (Software Testing in the real world, 1995) – Testing Maturity Model (Illinois Institute of Technology) – Test Process Improvement® (Tim Koomen & Martin Pol) – Testing Organisation MaturityTM questionnaire (Systeme Evolutif) – Hans Schaefer‟s work – Zen and the art of Object-Oriented Risk Management (Neil Thompson) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 16x of 51 information 2003 Systems Consulting Limited
  • 21. Kit 1995: Software Testing in the real world • Error-fault-failure chain extendedp18 • Clear statements on risk and risk managementp26: – test the parts of the system whose failures would have most serious consequencesp27 – frequent-use areas increase chances of failure foundp27 – focus on parts most likely to have errors in themp27 – risk is not only basis for test management decisions, is basis for everyday test practitioner decisionsp28 • Risk management used in integration testp95 © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 16.x1 of 51 information 2003 Systems Consulting Limited
  • 22. Testing Maturity Model • Five levels of increasing maturity, based loosely on decades of testing evolution (eg in 1950s testing not even distinguished from debugging) • Maturity goals and process areas for the five levels do not include risk explicitly, although emphasis moves from tactical to strategic (eg fault detection to prevention): – in level 1, software released without adequate visibility of quality & risks – in level 3, test strategy is determined using risk management techniques – in level 4, software products are evaluated using quality criteria (relation to risk?) – in level 5, costs & test effectiveness are continually improved (sampling quality) • Strongly recommended are key practices & subpractices: – (not yet available?) • Little explicitly visible on risk; very process-oriented © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 17 of 51 information 2003 Systems Consulting Limited
  • 23. Test Process Improvement® • Only one entry in index: – risks and recommendations, substantiated with metrics (as part of Reporting) • risks indicated with regard to (parts of) the tested object • risks can be (eg) delays, quality shortfalls • But risks incorporated to some extent, eg: – Test Strategy (differentiation in test depth depending on risks) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 18 of 51 information 2003 Systems Consulting Limited
  • 24. Testing Organisation Maturity (TOMTM) questionnaire • Risk assessment is not only at the beginning: – when development slips, a risk assessment is conducted and a decision to squeeze, maintain or extend test time may be made – [for] tests that are descoped, the associated risks are identified & understood © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 19 of 51 information 2003 Systems Consulting Limited
  • 25. Hans Schaefer’s work
     • Squeeze on testing: prioritise based on risk
     • Consider the possibility of stepwise release:
       – test the most important functions first
       – look for functions which can be delayed
     • What is “important” in the potential release (key functions, worst problems)?
       – visibility (of function / characteristic)
       – frequency of use
       – possible cost of failure
     • Where are the most problems likely?
       – project history (new technology, methods, tools; numerous people, dispersed)
       – product measures (areas complex, changed, needing optimising, faulty before)
  • 26. Zen and the Art of Object-Oriented Risk Management • Testing continues to lag development; but pessimism / delay was currently unacceptable (deregulation, Euro, Y2k) • Could use OO concepts to help testing: – encapsulate risk information with tests (for effectiveness); and/or – inherit tests to reuse (efficiency) • Basic risk management: – relationship to V-model (outline level) – detail level: test specification © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 21 of 51 information 2003 Systems Consulting Limited
  • 27. Risk management & the V-model
     • [V-model diagram: unit testing, integration testing, system testing, acceptance testing; these levels address risks that system(s) have undetected defects in them]
  • 28. Risk management & the V-model
     • [same diagram, adding that acceptance testing addresses risks that system(s) and business(es) are not right and ready for each other, while the lower levels address risks that system(s) have undetected defects in them]
  • 29. Where and how testing manages risks: first, at outline level
     – Acceptance testing (against service requirements) – risks: undetected errors damage the business
     – System testing (against the system specification) – risks: undetected errors waste user time & damage confidence in Acceptance testing
     – Integration testing – risks: interfaces don't match; undetected errors too late to fix
     – Unit testing – risks: units don't work right; undetected errors won't be found by later tests
  • 30. Where and how testing manages risks: first, at outline level (with how each risk is managed)
     – Acceptance testing – risks: undetected errors damage the business – how managed: specify user-wanted tests against the URS; script tests around the user guide and user & operator training materials
     – System testing – risks: undetected errors waste user time & damage confidence in Acceptance testing – how managed: use independent testers, functional & technical, to get a fresh view; take the last opportunity to do automated stress testing before environments are re-used
     – Integration testing – risks: interfaces don't match; undetected errors too late to fix – how managed: use the skills of designers before they move away; take the last opportunity to exercise interfaces singly
     – Unit testing – risks: units don't work right; undetected errors won't be found by later tests – how managed: use the detailed knowledge of developers before they forget; take the last opportunity to exercise every error message
  • 31. Second, at detail level: risk management during test specification
     • Test specification is based on the total magnitude of risks, summed over all defects imaginable, of (estimated probability of the defect occurring x estimated severity of the defect)
     • To help decision-making during the “squeezing of testing”, it would be useful to have recorded explicitly as part of the specification of each test:
       – the type of risk the set of tests is designed to minimise
       – any specific risks at which a particular test or tests is aimed
     • And this was one of the inputs to...
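A minimal sketch of that "total magnitude of risks" sum, with each test recording the risks it is aimed at so that descoping decisions can see what risk would be left uncovered. All identifiers, probabilities and severities below are invented for illustration.

```python
# Sketch: total risk magnitude = sum of (estimated probability x estimated severity)
# over all imaginable defects, and the residual magnitude if some tests are cut.
# Risk IDs, probabilities and severities are invented for illustration.

risks = {
    "F1": {"probability": 0.5, "severity": 250},
    "F2": {"probability": 0.1, "severity": 100},
    "S1": {"probability": 0.4, "severity": 250},
}

tests = [
    {"id": "T01", "aimed_at": ["F1", "S1"]},
    {"id": "T02", "aimed_at": ["F2"]},
]

def magnitude(risk_ids):
    return sum(risks[r]["probability"] * risks[r]["severity"] for r in risk_ids)

print("Total magnitude of all imaginable risks:", magnitude(risks))

# If testing is squeezed and T02 is dropped, the risks it covered stay open:
kept = [t for t in tests if t["id"] != "T02"]
covered = {r for t in kept for r in t["aimed_at"]}
print("Residual (uncovered) magnitude:", magnitude(set(risks) - covered))
```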
  • 32. 4. Risk-Based (E-Business) Testing • Advert! – Artech House, 2002 – ISBN 1-58053-314-0 – reviews amazon.com & co.uk • companion website www.riskbasedtesting.com – sample chapters – Master Test Planning template – comments from readers (reviews, corrections) With acknowledgements © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 25 of 51 information 2003 to lead author Paul Gerrard Systems Consulting Limited
  • 33. Risk-Based E-Business Testing: main themes • Can define approximately 100 “product” risks threatening a typical e- business system and its implementation and maintenance • Test objectives can be derived almost directly as “inverse” of risks • Usable reliability models are some way off (perhaps even unattainable?) so better for now to work on basis of stakeholders‟ perceptions of risk • Lists & explains techniques appropriate to each risk type • Includes information on commercial and DIY tools • Final chapters are on “making it happen” • Go-live decision-making: when benefits “now” exceed risks “now” • Written for e-business but principles are portable; extended to wider tutorial for EuroSTAR 2002; following slides summarise key points With acknowledgements © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 26 of 51 information 2003 to lead author Paul Gerrard Systems Consulting Limited
  • 34. Risks and test objectives – examples
     – Risk: the web site fails to function correctly on the user's client operating system and browser version configuration. Test objective: to demonstrate that the application functions correctly on selected combinations of operating systems and browsers.
     – Risk: bank statement details presented in the client browser do not match records in the back-end legacy banking systems. Test objective: to demonstrate that statement details presented in the client browser reconcile with back-end legacy systems.
     – Risk: vulnerabilities that hackers could exploit exist in the web site networking infrastructure. Test objective: to demonstrate through audit, scanning and ethical hacking that there are no security vulnerabilities in the web site networking infrastructure.
  • 35. Generic test objectives (with typical test stage)
     – Demonstrate component meets requirements – Component Testing
     – Demonstrate component is ready for reuse in a larger sub-system – Component Testing
     – Demonstrate integrated components are correctly assembled/combined and collaborate – Integration Testing
     – Demonstrate system meets functional requirements – Functional System Testing
     – Demonstrate system meets non-functional requirements – Non-Functional System Testing
     – Demonstrate system meets industry regulation requirements – System or Acceptance Testing
     – Demonstrate supplier meets contractual obligations – (Contract) Acceptance Testing
     – Validate system meets business or user requirements – (User) Acceptance Testing
     – Demonstrate system, processes and people meet business requirements – (User) Acceptance Testing
  • 36. Master test planning © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 28.x1 of 51 information 2003 Systems Consulting Limited
  • 37. Test Plans for each testing stage (example), eg for System Testing: a matrix with tests 1, 2, 3, ... 11, ... as columns and, as rows:
     – risks with their exposure scores and the test objectives derived from them: F1 (125), F2 (10), F3 (60), F4 (32), U1 (12), U2 (25), U3 (30), S1 (100), S2 (40)
     – requirements F1, F2, F3 mapped to generic test objective G4, and requirements N1, N2 mapped to generic test objective G5
     – a resulting importance for each test: H, L, M, H, H, H, M, M, M, L, L, ...
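The plan matrix above can be kept as a simple coverage table: rows of risks, columns of tests, with a test's importance derived from the exposure scores of the risks it covers. The sketch below shows one way to compute that; the H/M/L thresholds and the coverage links are invented, not taken from the book.

```python
# Sketch of a risk-to-test coverage matrix for one test stage.
# Exposure scores are from the example slide; thresholds and coverage are assumptions.

risk_exposure = {"F1": 125, "F2": 10, "F3": 60, "S1": 100, "S2": 40}

# Which risks (via their test objectives) each test addresses:
coverage = {
    "Test 1": ["F1", "S1"],
    "Test 2": ["F2"],
    "Test 3": ["F3", "S2"],
}

def importance(test):
    """Derive H/M/L from the highest exposure the test covers (illustrative cut-offs)."""
    score = max((risk_exposure[r] for r in coverage[test]), default=0)
    if score >= 100:
        return "H"
    if score >= 40:
        return "M"
    return "L"

for test in coverage:
    print(test, importance(test))   # e.g. Test 1 H, Test 2 L, Test 3 M
```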
  • 38. Test Design: target execution schedule (example) ENVIRONMENTS TEAMS TESTERS Test execution days 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ... Balance & 1 Retests & regression tests transaction reporting Inter- 5 10 Partition for account 3 7 11 functional tests transfers 2 Payments 6 9 Direct debits End-to-end customer scenarios Partition for 4 8 disruptive non-functional tests Comfortable completion date Earliest completion date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 30 of 51 information 2003 Systems Consulting Limited
  • 39. Plan to manage risk (and scope) during test specification & execution Quality Quality Scope Cost Risk Quality best pair to fine-tune Time Time Cost Scope Scope Cost Time © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 31x of 51 information 2003 Systems Consulting Limited
  • 40. Risk as the “inverse” of quality Quality Scope Time Scope Time Risk © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 31.x1 of 51 information 2003 Systems Consulting Limited
  • 41. Managing risk during test specification initial scope set target go-live date Scope Time by requirements set in advance increasing decreasing risk of faults risk as introduced faults found Risk © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 31.x2 of 51 information 2003 Systems Consulting Limited
  • 42. Verification & validation as risk management methods © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 31.x3 of 51 information 2003 Systems Consulting Limited
  • 43. Risk management during test specification: “micro-risks”
     • Test specification: for all defects imaginable, estimated probability x estimated consequence = risks to test against
  • 44. Risk management clarifies during test execution
     • Test specification: for all defects imaginable, estimated probability x estimated consequence = risks to test against
     • Test execution: for each defect detected, probability = 1 and consequence = f(urgency, importance)
  • 45. But clarity during test execution is only close-range: fog ahead!
     • Test specification: for all defects imaginable, estimated probability x estimated consequence = risks to test against
     • Test execution: for each failure detected, probability = 1 and consequence = f(urgency, importance)
     • For all anomalies as yet undiscovered: estimated probability x estimated consequence = remaining risks
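One way to picture how the risk view "clarifies" at execution time: detected anomalies switch from an estimated probability to probability = 1 with an assessed consequence, while the undiscovered remainder stays estimated. A minimal sketch under those assumptions; the weighting function and all figures are invented.

```python
# Sketch: residual risk during execution.
# Detected anomalies: probability = 1, consequence assessed from urgency and importance.
# Undiscovered risks: still probability x consequence estimates. All numbers illustrative.

detected = [
    {"id": "A-17", "urgency": 3, "importance": 4},
    {"id": "A-23", "urgency": 1, "importance": 2},
]

still_estimated = [
    {"id": "R-F3", "probability": 0.3, "consequence": 60},
    {"id": "R-S2", "probability": 0.2, "consequence": 40},
]

def detected_consequence(anomaly, weight=10):
    # f(urgency, importance) -- this particular weighting is an assumption of the sketch
    return weight * anomaly["urgency"] * anomaly["importance"]

known_risk = sum(1.0 * detected_consequence(a) for a in detected)
foggy_risk = sum(r["probability"] * r["consequence"] for r in still_estimated)
print(f"Known (detected, unfixed) risk: {known_risk}")
print(f"Estimated remaining risk ahead: {foggy_risk}")
```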
  • 46. Managing risk during test execution: when has risk got low enough? initial scope set target go-live date Scope Time by requirements set in advance increasing decreasing risk of faults risk as introduced faults found Risk © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 32.x1 of 51 information 2003 Systems Consulting Limited
  • 47. Pragmatic approximation to risk reduction: progress through “test + fix”
     • Cumulative S-curves are good because they:
       – show several things at once
       – facilitate extrapolation of trends
       – are based on acknowledged theory and empirical data...
     • [charts: cumulative anomalies against date, split into Resolved / Awaiting fix / Deferred / Closed; cumulative tests run against date, target vs actual, split into Pass / Fail]
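The S-curves above are just cumulative counts of test-run and anomaly events by date, split by status; a few lines of Python are enough to produce the numbers to plot. Field names and the sample data are invented.

```python
# Sketch: build cumulative S-curve data from test-run and anomaly logs.
# Dates, statuses and field names are illustrative.
from collections import Counter
from itertools import accumulate

test_runs = [("2003-03-01", "pass"), ("2003-03-01", "fail"), ("2003-03-02", "pass")]
anomalies = [("2003-03-01", "awaiting fix"), ("2003-03-02", "resolved")]

def cumulative_by_day(events):
    """Return (date, cumulative count) pairs, ready for plotting as an S-curve."""
    per_day = Counter(day for day, _ in events)
    days = sorted(per_day)
    return list(zip(days, accumulate(per_day[d] for d in days)))

print("Cumulative tests run:", cumulative_by_day(test_runs))
print("Cumulative anomalies:", cumulative_by_day(anomalies))
```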
  • 48. Cumulative S-curves: theoretical basis (potential)
     • [chart: failures per day against date – software vs hardware reliability growth curves, depending on operational profile; curves based on Myers 1976: Software Reliability: Principles & Practices p10]
     • [chart: cumulative anomalies found and tests run against date, up to the go-live date]
       – early tests blocked by high-impact anomalies (much more than 1 anomaly per test, but only one visible at a time!)
       – middle tests fast and productive (more than 1 anomaly per test)
       – late tests slower because awaiting difficult and/or low-priority fixes (much less than 1 anomaly per test)
  • 49. A possible variation on the potential software reliability growth model # failures Possibility of knock-on errors included in go-live date Software Littlewood & Verrall 1973: per day A Bayesian Reliability Growth Model for Computer Software (in IEEE Symposium) Bad maintenance Good maintenance Good testing & maintenance: Bad testing & maintenance: date convergence on stability divergence into instability failures failures TEST & TEST & FIX HACK RETEST RETEST failures failures CHECK AGAINST HAVE A EXPECTED STROKE THINK AND DIAGNOSE TALK TO FOLKS RESULTS CHIN © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 32.x4 of 51 information 2003 Systems Consulting Limited
  • 50. Cumulative S-curves: more theory
     • Actually there are several reliability growth models, but:
       – the Rayleigh model is part of hardware reliability methodology and has been used successfully in software reliability during development and testing (pattern of fault discovery: Dunn & Ullman 1982 p61)
       – its per-day curve produces the S-curve when accumulated
     • [charts: cumulative anomalies found and tests run against date up to the go-live date; anomalies found per day against date]
  • 51. Reliability theory more generally
     • [chart: failures per “t” against execution time Σt – software: exponential decay (Myers 1976); hardware: Poisson distribution (Myers 1976)]
     • Both of our software curves are members of the Weibull distribution:
       – has been used for decades in hardware: Kan 1995 p179
       – two single-parameter cases applied to software by Wagoner in the 1970s: Dunn & Ullman 1982 p318 – Exponential (m=1) and Rayleigh (m=2)
     • The shape parameter “m” can take various values (only 1 & 2 shown here): Kan 1995 p180
     • NB: models tend to use execution time rather than elapsed time (because it removes test distortion, and uses the operational profile, ie how often the software is used live)
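The Weibull family mentioned above is easy to write down: the fault-discovery rate at (execution) time t can be modelled as f(t) = (m/c) * (t/c)^(m-1) * exp(-(t/c)^m), with m = 1 giving the exponential case and m = 2 the Rayleigh case. The sketch below shows the curve shapes only; the total defect count and scale parameter are arbitrary, not a calibrated model.

```python
# Sketch: Weibull-family fault-discovery curves (m=1 exponential, m=2 Rayleigh).
# Total defect count and scale are arbitrary illustrative values.
import math

def weibull_rate(t, m, scale, total_defects):
    """Expected defects found per unit time at time t (Weibull density x total)."""
    return total_defects * (m / scale) * (t / scale) ** (m - 1) * math.exp(-((t / scale) ** m))

def cumulative(t, m, scale, total_defects):
    """Expected cumulative defects found by time t (Weibull CDF x total) -- the S-curve."""
    return total_defects * (1 - math.exp(-((t / scale) ** m)))

for t in range(0, 101, 20):
    print(t,
          round(weibull_rate(t, m=2, scale=40, total_defects=500), 1),   # Rayleigh rate per day
          round(cumulative(t, m=1, scale=40, total_defects=500)),        # exponential cumulative
          round(cumulative(t, m=2, scale=40, total_defects=500)))        # Rayleigh cumulative (S-shaped)
```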
  • 52. Reliability models taxonomy (Beizer, Dunn & Ullman, + Musa) future addition to slide pack © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 32.x7 of 51 information 2003 Systems Consulting Limited
  • 53. Reliability: problems with theories (Beizer, 1984)
     • Main problem is getting theories to match reality
     • Several acknowledged shortcomings of many theories, eg:
       – don't evaluate consequence (severity) of anomalies
       – assume testing is like live use (eg relatively few special cases)
       – don't correct properly for stress-test effects, or code enhancements
       – don't consider interactions between faults
       – don't allow for debugging getting harder over time
     • The science is moving on (eg Wiley's Journal of Software Testing, Verification & Reliability) but:
       – a reliability theory that satisfied all the above would be complex
       – would project managers use it, or would they go live anyway?
     • So until these are resolved, let's turn to empirical data...
  • 54. Cumulative S-curves: empirical data Observed empirically that faults plot takes characteristic shape Hetzel 1988 p210 # Anomalies found Possible to use to roughly gauge test time or faults remaining Hetzel 1988 p210 Tests run go-live date date S-curve also visible in Kit 1995: Software Testing in the Real World p135 The Japanese “Project Bankruptcy” study: Abe, Sakamura & Aiso 1979, in Beizer 1984 • analysed 23 projects, including application software & system software developments • included new code, modifications to existing code, and combinations • remarkable similarity across all projects for shape of test completion curve • anomaly detection rates not significant (eg low could mean good software or bad testing) • significant were (a) length of initial slow progress, and (b) shape of anomaly detection curve... © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 32.x9 of 51 information 2003 Systems Consulting Limited
  • 55. Project bankruptcy study, summarised by Beizer (Beizer 1984: Software System Testing & Quality Assurance, p297-300)
     • All the projects had 3 phases (I, II, III), measured from the start of the integration stage to the planned go-live date
     • (a) Duration of the phases was the primary indicator of project success or “bankruptcy”:
       – Phase I: ~15% (success), ~55% (“bankrupt”), ~97% (failure inevitable)
       – Phases I+II: ~72% (success), up to ~126% (“bankrupt” / failure inevitable)
     • (b) A secondary indicator was the anomaly detection rate (the derivative of the cumulative curve, which looks like a Rayleigh curve)
     • [chart: cumulative anomalies found against tests run, with curves for success, “bankrupt” and failure-inevitable projects]
  • 56. S-curves & reliability: summary of references • Many of the original references are very old: – illustrates the points that (a) the earliest testers seemed to be “advanced” even at that outset, and (b) software reliability seems not to have penetrated mainstream testing 30 years on! – but: means these books & papers hard to obtain, so... • Recommended “recent” references: – Boris Beizer 1984 book: Software System Testing & Quality Assurance (ISBN 0-442-21306-9) – Stephen H. Kan 1995 book: Metrics & Models in Software Quality Engineering (ISBN 0-201-63339-6) – John Musa 1999 book: Software Reliability Engineering (ISBN 0-07-913271-5) © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 32.x11 of 51 information 2003 Systems Consulting Limited
  • 57. Progress through tests
     • We are interested in two main aspects:
       – can we manage the test execution to get complete before the target date?
       – if not, can we do it for those tests of high (and medium?) importance?
     • [charts: target vs actual tests run (pass / fail) against date; target vs actual tests passed against date, split by High / Medium / Low importance]
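The two questions on the slide above reduce to comparing actual against target counts, overall and for the high/medium-importance subset. A small sketch with invented test records and target figure:

```python
# Sketch: are we on track overall, and for the High/Medium-importance tests?
# Test records and the target figure are illustrative.

tests = [
    {"id": "T01", "importance": "H", "status": "pass"},
    {"id": "T02", "importance": "H", "status": "fail"},
    {"id": "T03", "importance": "M", "status": "pass"},
    {"id": "T04", "importance": "L", "status": "not run"},
]

def passed(subset):
    return sum(1 for t in subset if t["status"] == "pass")

target_passed_by_today = 3
print("All tests: passed", passed(tests), "against target", target_passed_by_today)

hi_med = [t for t in tests if t["importance"] in ("H", "M")]
print("H/M tests: passed", passed(hi_med), "of", len(hi_med))
```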
  • 58. Progress through anomaly fixing and retesting • Similarly, two main aspects: – can we manage the workflow to get anomalies fixed and retested before target date? – if not, can we do it for those of material impact? # anomalies # anomalies Cumulative anomalies Outstanding material impact Resolved Awaiting fix Deferred Closed date date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 34x of 51 information 2003 Systems Consulting Limited
  • 59. Quantitative and qualitative risk reduction from tests and retests PROGRESS & RESIDUAL RISK UP RIGHT SIDE OF W-MODEL System Large-Scale Acceptance Testing Integration Testing Testing PROGRESS THROUGH H M Material TESTS Fail impact L Pass PROGRESS THROUGH Awaiting fix Awaiting fix Resolved Material INCIDENT Deferred impact FIXING Closed & RETESTING © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 35 of 51 information 2003 Systems Consulting Limited
  • 60. …and also into regression testing Retesting & Specification Execution regression of tests of tests testing Difficulties, What’s left for: Desired Time & Physical coverage resource constraints squeeze on “complete” second allowance for con- (environ- scripting & first run further straints ments execution run runs etc.) time ... Testing Testing Test Test Execution: Execution: Regression Strategy Plan Design Scripts First run Retest & 2nd run Testing © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 36 of 51 information 2003 Systems Consulting Limited
  • 61. Managing risk during test execution, against “fixed” scope & time initial scope set target go-live date Scope Time by requirements set in advance increasing decreasing risk of faults risk as introduced faults found Risk © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 37x of 51 information 2003 Systems Consulting Limited
  • 62. From what we can report, to what we would like to report © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 37.1x of 51 information 2003 Systems Consulting Limited
  • 63. Risk-based reporting Planned start end all risks „open‟ at the start Progress through the test plan © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 38a of 51 information 2003 Systems Consulting Limited
  • 64. Risk-based reporting Planned start today end residual risks of releasing TODAY Progress through the test plan © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 38b of 51 information 2003 Systems Consulting Limited
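Risk-based reporting as sketched above amounts to asking, at any point in the test plan, which risks are still open (their covering tests have not all passed) and therefore what the residual exposure of releasing today would be. A minimal sketch with invented identifiers and scores:

```python
# Sketch: residual risks of releasing TODAY = risks whose covering tests
# have not all passed yet. Identifiers and exposure scores are illustrative.

risks = {"F1": 125, "F3": 60, "S1": 100}
covering_tests = {
    "F1": ["T01", "T02"],
    "F3": ["T03"],
    "S1": ["T04", "T05"],
}
passed_tests = {"T01", "T02", "T03"}

open_risks = {r: score for r, score in risks.items()
              if not set(covering_tests[r]) <= passed_tests}
print("Open risks if we released today:", open_risks)
print("Residual exposure:", sum(open_risks.values()))
```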
  • 65. Report not only risk, but also scope, over time initial scope set target go-live date Scope Time by requirements set in advance how much scope safely delivered so far? increasing decreasing risk of faults risk as introduced faults found Risk © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 38.1x of 51 information 2003 Systems Consulting Limited
  • 66. Benefit & objectives based test reporting Blocked NFR 1 Blocked func 1 Blocked func 2 Objective Objective Objective Objective Benefit Benefit Benefit Benefit Open Closed Risks Open Closed Closed Open Open Closed Project objectives, hence benefits, available for release © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 39ax of 51 information 2003 Systems Consulting Limited
  • 67. Benefit & objectives based test reporting Objective Objective Objective Objective Objective Benefit Benefit Benefit Benefit Benefit Benefit Open Closed Risks Open Closed Closed Open Open Closed Project objectives, hence benefits, available for release © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 39bx of 51 information 2003 Systems Consulting Limited
  • 68. Slippages and trade-offs: an example • If Test Review Boards recommend delay, management may demand a trade-off, “slip in a little of that descoped functionality” original first target slip scope date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 40a of 51 information 2003 Systems Consulting Limited
  • 69. Slippages and trade-offs: an example • If Test Review Boards recommend delay, management may demand a trade-off, “slip in a little of that descoped functionality” • This adds benefits but also new risks: more delay? original first actual target slip go-live scope date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 40b of 51 information 2003 Systems Consulting Limited
  • 70. Tolerable risk-benefit balance: another example • Even if we resist temptation to trade off slippage against scope, may still need to renegotiate the tolerable level of risk balanced against benefits original target (risk - date benefits) original target net risk date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 41a of 51 information 2003 Systems Consulting Limited
  • 71. Tolerable risk-benefit balance: another example • Even if we resist temptation to trade off slippage against scope, may still need to renegotiate the tolerable level of risk balanced against benefits original actual target (risk - go-live date benefits) “go for it” margin original target net risk date © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 41b of 51 information 2003 Systems Consulting Limited
  • 72. 5. Next steps in Risk-Based Testing • End-to-end risk data model: – is wanted to keep risk information linked to tests – is under development by Paul Gerrard – main reason is of course... • Automation: – clerical and spreadsheets really not enough – want to keep risk to test link up-to-date through descopes, reprioritisations etc – Paul is working on an Access database for now – any vendors with risk in their data models? With acknowledgements © Thompson Neil Thompson: Risk and Testing 27/28 Mar Slide 42 of 51 information 2003 to lead author Paul Gerrard Systems Consulting Limited
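The slide calls for risk information to be kept linked to tests end to end, surviving descopes and reprioritisations. The sketch below shows one minimal relational shape for that link, using SQLite purely for illustration; this is not Paul Gerrard's actual data model, and all table and column names are assumptions.

```python
# Sketch of a minimal end-to-end risk data model: risks, test objectives derived
# from them ("inverse" of the risks), tests, and a many-to-many coverage link.
# SQLite is used only for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE risk (
    risk_id      TEXT PRIMARY KEY,
    description  TEXT,
    probability  INTEGER,   -- e.g. 1-3
    consequence  INTEGER    -- e.g. 1-3
);
CREATE TABLE test_objective (
    objective_id TEXT PRIMARY KEY,
    risk_id      TEXT REFERENCES risk(risk_id),
    description  TEXT       -- usually the "inverse" of the risk
);
CREATE TABLE test_case (
    test_id      TEXT PRIMARY KEY,
    stage        TEXT,      -- e.g. 'System Testing'
    in_scope     INTEGER,   -- 0/1: survives descopes and reprioritisations
    status       TEXT       -- 'not run' / 'pass' / 'fail'
);
CREATE TABLE test_covers_objective (
    test_id      TEXT REFERENCES test_case(test_id),
    objective_id TEXT REFERENCES test_objective(objective_id),
    PRIMARY KEY (test_id, objective_id)
);
""")

# Reporting query: risks with no in-scope passing test are still open.
OPEN_RISKS = """
SELECT r.risk_id, r.probability * r.consequence AS exposure
FROM risk r
WHERE NOT EXISTS (
    SELECT 1 FROM test_objective o
    JOIN test_covers_objective c ON c.objective_id = o.objective_id
    JOIN test_case t ON t.test_id = c.test_id
    WHERE o.risk_id = r.risk_id AND t.in_scope = 1 AND t.status = 'pass'
);
"""
print(conn.execute(OPEN_RISKS).fetchall())
```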