Ian McDonald
Techniques for Estimating within Testing

                                           July 2010 v1



                                           January 2013 v3


Β© 2010 Ian McDonald - First created 2010
Introduction

  This is work in progress and is in a draft state.
  It does, however, contain enough information to be initially helpful.
  The prime focus is to note items that need to be included within project plans and to assist with estimating.
Scope

  This slide pack assumes the creation of an application that is then integrated onto a hardware platform.
  Testing of the hardware platform itself and the integration testing for this is not included at present within this draft pack.
Contents
    Overview
    Budgeting for Tooling
    Design and Build
    Application Security and DDA Testing
    Test Application and Integration Phase
    Working with Widgets and GUI Interfaces
    Estimating Manual Test Run Time
    Estimating GUI Automation Testing
    Methods for Managing, Prioritising Regression Testing
    Techniques for Deciding Sufficient Testing Done
    Estimating Testing for Protective Monitoring Testing
    Mobile interface testing
    Acceptance Testing (FAT, SAT, UAT, OAT)
Overview

 This collection of notes was compiled to provide guidance for new Test
 Managers and Project Managers in the art of estimating test effort, with
 particular attention to the activities that are usually forgotten and
 which lead to stresses and pressures on a test team and potentially
 risk to successful delivery.
 It needs to be understood that while there is an attempt to bring a science to
 the process, there will also be special factors to take into account which will
 require additional effort.
 The aim of these notes is to prevent underestimation and so reduce the risk
 of late delivery or poor test coverage, due to a test team being
 under-resourced.
Targeting within Test Strategy
    Within the test strategy, be clear as to what is being tested at each phase.
          Unit code tests verify the implementation of the requirements at a
           unit code level. The logic is tested as understood and implemented. Error
           conditions and error handling should also be tested. The creation of stubs and drivers
           plus the creation of test data will be necessary. Typically white box test analysis is used
           as per BS7925-2.
     At a functional level, the application integration is being tested. Again functionality
      is tested, but black box techniques are used as per BS7925-2.
    End to end functional stories are tested. However these scenarios need to be
     clearly documented with clear expectations.
    System level testing concerns the integration of the application with the main
     system, in addition to security and performance testing.
         Enterprise monitoring testing may be required.
         Performance monitoring testing may be required.
         PKI certificate testing may be required.
         Penetration testing is a security audit.
    User testing may include W3C compliance and testing against functional
     requirements.
    Site testing concerns itself with site specific targeted tests.
    Operational testing is concerned with maintenance and support of the system.
Good Enough?
    In testing we need to make a call as to what is good
     enough. This will depend upon commercial and
     product risk.
    The level of testing required will be defined within the
     project Test Policy.
    For the purposes of this presentation, we look at a
     typical large project (Β£3 million) and point out key
     things that are often forgotten in estimates.
     While for some projects this will be overkill, the
      bigger danger is to assume it is overkill, miss key points
      and end up with a project overspent, behind schedule
      or even causing company failure.
Rewards and Approach
     A common problem for testers is under-resourcing. Project
      managers typically estimate testing as a fixed fraction of
      development time. However, this is frequently incorrect.
     Other pressures are based on the delivery project manager
      getting the product out to meet a bonus payment based upon
      saving development costs. However, when the project is handed
      over to the support phase this can incur far greater costs and
      risk to the company's reputation. The key point is not to reward
      wrong behaviour.
     These slides focus on producing realistic time estimates with a
      responsible view to quality. If dealing with safety or
      highly sensitive commercial targets, these will reflect the
      minimum time scales. On the other hand, if one is producing a
      simple web project with no commercial impact should it fail, and no
      risk to commercial reputation or life, then far less testing will be
      required.
Is it not simple?
     One approach that is often used as a first finger-in-the-air estimate
      is to say that the Test Effort is between 50% and 75% of the
      development effort.
     This however does not always work because:
          Development may assume use of existing code that is
           integrated. Assuming existing code does not need
           integration testing can be dangerous (e.g. this is why the
           Ariane 5 rocket, delivered to project schedule, exploded
           shortly after lift-off).
          Many test tasks are traditionally underestimated or
           forgotten, and the broad approach dates from when systems
           were less complex.
          Some systems may need more test effort than development
           effort. This may not become obvious until the team is under
           pressure and it is too late to bring additional staff onto the
           project.
          Test versions of tools (e.g. Visual Studio) are more
           expensive than the cut-down development versions.

      You can view the Ariane lift-off at
      http://www.youtube.com/watch?v=gp_D8r-2hwk - a good example of a
      project delivered to schedule, but without full test analysis.

      Hence the answer is: No, it is not that simple, and here are some
      notes to help.
Budgeting for Tooling

    This part examines cost implications for
     test tooling
Off the Shelf Test Tooling and Hardware
     Typical project tooling to be included in a bid:
          Test management tooling for each tester.
                Access to tooling to track requirements to tests to defects.
                For a .NET project:
                       Testers will each require a Visual Studio Ultimate licence.
                       Developers will want to ensure that they have access to FxCop and ReSharper within their development
                        versions of Visual Studio.
                A large programme will want to consider using the HP Quality Center tool set.
                Small to medium sized projects will want to use cheaper tools, such as the Atlassian suite.
         If running PKI certificate licensing tests, will need to budget for additional licences that can
          be cancelled and revoked.
         Is video evidence required for testing – do we need a capture tool?
         Do we need emulators and real devices such as mobile phones for testing? There may
          also be security checks to make before testing can start.
         Load and Performance test tooling. May require additional licences for each platform and
          will need to provide a licence that allows for appropriate level of testing for stress.
          The load and performance test platform needs to be comparable and scalable. If resources
           are too low, then the results may not be scalable and it may be difficult to run tests with
           even a small number of users, preventing a realistic trend from being identified with any
           certainty over margins of error. This in turn can cause considerable problems at SAT and
           OAT and cause a project to be delivered late or with performance errors.
         Code coverage tools and other analysis tooling, such as Coverity, may be required for large
          programmes.
          Are checking tools required (e.g. for DDA / W3C compliance, usability, etc.)?
Customised Test Tooling
    There may be the need to develop internal test tooling
     for a project. This can typically include:
        Methods for generating large amounts of data.
        Methods for comparing or validating large amounts of data.
        Test Stubs and Test Drivers.
        Approaches for extracting data, including extracting data from
         test tooling.


    Budget for design, build, review, test and verification of
     the tooling and documentation, configuration control
     and support for the tooling.
Test Manager
   Budget for Test Manager (a rough load calculation sketch follows this list):
        10% of time per day per test team member. If more than 5 individuals, then need to consider
         breaking team down to include team leads.
        1 day per week dealing with other teams, project manager, Development manager, security
         testing, system architect, etc.
        0.5 day per week rising to 1 day per week in various meetings.
        1 hour per day preparing short reports and extracting data – Task can be done by new
         graduate or technician under Test Manager guidance.
        Reviewing Development and other test documentation – 1 hour per document per version.
         Assume 2 versions.
        Reviewing Requirements – 1 day per 100 requirements (assuming all single logical
         statements).
        Dealing with customer and customer issues – 1 day per week.
        Reviewing Test Analysis – 1 day per 100 requirements.
        Reviewing Test Script coverage – 1 day per 100 test cases.
        Reviewing Test Coverage – 1 day per regression run.
        Dealing with Hosting Company – 1 day per week.
        Dealing with other Stakeholders – 1 day per week during and leading up to integration, plus
          during SAT and OAT, this will increase to 1 day per stakeholder. This may need to be
          covered by additional test support (Principal Consultant level), if there are many Stakeholders.
        Writing documentation
        Who is chairing the code reviews? If the Test manager is responsible, then this needs to be
         budgeted. One hour per review, with one hour preparation and one hour for minutes and
         actions. Also need to budget in other review team member effort at one hour per meeting
         and potentially longer than one hour preparation time.
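   The weekly allowances above can be totalled to sanity-check whether one
   Test Manager is enough. Below is a minimal Python sketch; the team size,
   stakeholder count and acceptance-phase flag are illustrative assumptions,
   not project data.

       # Rough weekly Test Manager load from the allowances above (days/week).
       def tm_days_per_week(team_size, stakeholders=1, in_acceptance=False):
           days = 0.1 * team_size * 5            # 10% of time per day per team member
           days += 1.0                           # other teams, PM, dev manager, architect
           days += 1.0                           # meetings (worst case: 1 day per week)
           days += (1.0 / 8.0) * 5               # ~1 hour per day on short reports
           days += 1.0                           # customer and customer issues
           days += 1.0                           # dealing with the hosting company
           days += stakeholders if in_acceptance else 1.0   # other stakeholders
           return days

       load = tm_days_per_week(team_size=5, stakeholders=3, in_acceptance=True)
       print(f"Estimated load: {load:.1f} days/week")   # above 5.0 means over-allocated

   With 5 testers and 3 stakeholders during acceptance this already exceeds a
   5-day week, which is why team leads are suggested above for teams larger
   than 5.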
Key Test Documents
 The following is just general guidance for writing (not reviewing); time can
  increase and is less likely to decrease (Principal / Senior level staff):
        Test Strategy – 2 weeks
        Functional Test Plan – 2 weeks
        Function Test Specification (when required) – 4 weeks
        Non Functional Test Plan (if required) – 2 weeks
        Security Test Plan – 2 weeks
        Load and Performance Test Plan – 2 weeks
        Integration Test Plan – 2 weeks
        Enterprise Monitoring (EM) Test Plan for SAT – 2 weeks
        Protective Monitoring (PM) Test Plan for SAT – 4 weeks
        Site / System Acceptance Plan (in addition to EM and PM) – 2 weeks
        Operational Acceptance Test Plan – 2 weeks
        Test harness definitions (includes stubs, drivers, etc) – 2 weeks
        Documentation for other special test tooling – 2 weeks each.
        Input to support testing related work with training material – 2 weeks
        Covered Separately (Technician / Recent Graduate):
              Test Analysis
              Test Cases
              Test Scripts (Manual and Automated)
              Test data created

           Note: For large projects and programmes additional time may be required.
Design and Build Phase

    This part examines testing during the
     design and build phase.
Assumptions and Risks
 Before we even consider estimating, we need to consider the
  quality of the requirements. Defects will leak into the system at
  requirements level. Poor or poorly constructed requirements will
  mean a considerable overhead on testing. Additional resources will
  then be required to help administer the test team and, specifically,
  to help prepare reports and measure test effectiveness.
      For poorly constructed requirements, an additional person at
       consultant level will be required for the duration of the project; OR
      The requirements need to be checked and corrected.
      If requirements are poorly constructed, then it may be necessary to
       break these down into sub-requirements and link them to user stories, which
       will need matching to Test Cases, and the Test Cases to Test
       Scripts. This results in:
          A need for specialist requirements management tooling and the need to link
           this with specialist commercial test tooling.
          A need for additional resources to develop, review and manage requirement
           improvements.
Requirements
  Tests are delivered against Requirements. It is important to check that:
        There are no Requirement Gaps.
        There are no Requirement Conflicts.
        Derived engineering requirements are well understood and documented.
        There are no blatant Errors in Requirements.
        There is no Lack of Detail in terms of valid ranges and expectations of
         behaviour when error conditions arise.
        Details concerning security have been identified.
         Specifications have been checked as subsets of the requirements – do not
          assume that these will all be complete and correct.
         Where browsers are defined, are all tests to be repeated for each browser, or
          can we prioritise and distribute tests across different browsers? E.g. 100% of
          tests on Firefox, 90% on IE 6.0, 25% on others. This needs to be reflected in
          test run time effort calculations. Note that pop-up behaviour often
          differs across browsers.
         Where interfaces to other systems are present and the interface
          is unproven or poorly (or unreliably) documented, additional resource will
          be required.
Early intervention




 Resource is required to check requirements for testing. This needs to be at Principal /
 Senior staff level.
 As a general rule, 5 to 10 days' investment in requirement review by an appropriate
 test architect can typically save 2 man-months of effort later, if the advice is adopted.
Defect Cost Implications
  [Chart: Cost of Fault by Phase – cost per fault (£K) rises steeply across the
  phases Requirements, Design, Coding, Functional Testing, System Test and
  Field Use, from near zero at requirements to around £20K–£25K in field use.]

     Defects slip in at the requirements phase and grow.
     The later the detection, the greater the cost to detect, fix and retest.
     It is not a choice of being able to afford early test intervention in
      checking requirements. It is a fact that early intervention:
         Saves money
         Prevents project overrun
         Reduces development and test effort
         Improves development delivery

  Around 10% of defects are seeded in a project at the requirements phase. Late
  detection means longer time to delivery of the project and greater costs.
Test Management of Requirements
    It is not enough for requirements to have sufficient content to be testable. They
     also need to be manageable.
     Tests are mapped to individual requirements. Requirements need to be
      structured as individual single logical statements. Failure to do this will mean
      that many tests are required to sign off the many attributes of a requirement. As a
      project grows, it becomes necessary to cut tests down to a manageable set of
      regression tests based upon risk assessment. With multiple embedded
      requirements this creates difficulties and introduces the risk that a requirement
      attribute is not delivered or contains defects that are not tested, which can mean
      that critical defects go undetected.
    It is vital that requirements are structured to be a single logical statement with a
     separate reference number. Logical statements normally do not have the words
     OR / AND within the statement.
     If this structuring is not done, then additional
      effort will be required to maintain the test reporting tool. If the solution is to create User
      Stories, then these will need to be managed and reviewed, and there may be issues in
      extracting information from tools such as Visual Studio.
Early Testing Effort
   Testing applies equally to coding as to system testing.
   To cut defect leakage early on, it is vital that code is:
     Reviewed against best practice check lists.
         Checked early for security impacts
         Checked early for performance issues
      Checked with tools such as ReSharper and FxCop. This
       requires configuration and build control effort from the Development
       Team, with the necessary resources to run tests and analyse output
       early on. This will save System Testing effort and help to
       speed development. To avoid false reporting, ensure the tools are
       configured correctly (allow 5 man-days for configuration and setup).
   Resource to ensure adequate static testing, to include:
     Review of Code
     Running of static tooling
Review of Code Effort
  It is vital that code reviews are adequately resourced. Reviews need to
   be effective and so the review rate needs to be considered.
      Too fast and defects will leak through increasing the overall project cost.
       Too long and the review becomes ineffective and people become blinded by
        lines of code.
      Reviews need to be resourced, regular, guided and targeted.
      A review period of 1 hour to 2 hours max is most efficient and reviews longer
       than 2 hours need to be broken down into targeted focused chunks.
      Or give individuals specific areas to review.
      Review rate may be around 1KLOC/hour.
  Time also needs to be allocated for static review of documents and
   diagrams.
   Static test tools can help add confidence to a code review and (if set
    up and used correctly) will add value to a review, but should not be used
    to replace a code review.
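   To turn the review-rate rule of thumb into a schedule figure, a minimal
   sketch is shown below. The 1 KLOC/hour rate and the 2-hour session cap come
   from the guidance above; the module size is a hypothetical input.

       import math

       # Review effort: ~1 KLOC per hour, sessions capped at 2 hours.
       def review_effort(loc, rate_kloc_per_hour=1.0, max_session_hours=2.0):
           hours = (loc / 1000.0) / rate_kloc_per_hour
           sessions = math.ceil(hours / max_session_hours)
           return hours, sessions

       hours, sessions = review_effort(loc=7500)
       print(f"{hours:.1f} review hours across {sessions} targeted sessions")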
Code review activities
  If reviewing code in a closed meeting, comments by one reviewer will
   typically inspire comment from another reviewer. If reviewing code using
   tools to support remote reviews, then the first reviewers will miss
   comments from other reviewers. Hence it is important for parties to go
   back over comments when all comments are collected.
  Review tasks should be set for individuals. Typically these will be
   supported by project check lists and will include:
        Use of good coding practice;
        Code efficiency / Performance;
        Code security;
        Consistency with requirements;
        Consistency with interfaces and other code modules. Any module being
         interfaced with should have an assigned individual representing that module
         to check compliance.
System Architecture

     This has an impact on the testing of:
      Security
      Load and Performance



    Ensure that the security test team and
     performance tester have early input to
     the design. This review needs to be
     budgeted.
Application Security and DDA Testing

     This part details points that often need testing
      and can get missed.
Security Scenarios
    While the system will be subjected to security testing,
     do not forget to test the application as soon as
     possible. This needs to be budgeted and resourced.
     Scenarios need to be put in place for:
         Ensuring that SQL injection cannot be used: one test per
          field (see the sketch after this list).
         Ensuring that URL injection cannot be used on secure web
          pages.
         Checking the timeout of logins.
         Checking the success of logout, then trying the back button.
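   A minimal sketch of the one-test-per-field SQL injection check, in Python
   with the requests library. The URL, field names and payloads are
   hypothetical placeholders, and what counts as a safe response will depend
   on the application under test.

       import requests

       # One injection probe per input field; the application must reject the
       # input and must never leak a database error.
       PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]
       FIELDS = ["username", "search_term"]                  # placeholders
       FORM_URL = "https://test-env.example.com/login"       # placeholder

       def probe(field, payload):
           data = {f: "valid" for f in FIELDS}
           data[field] = payload
           resp = requests.post(FORM_URL, data=data, timeout=10)
           assert resp.status_code in (200, 400, 422), resp.status_code
           assert "sql" not in resp.text.lower(), f"possible leak via {field}"

       for field in FIELDS:
           for payload in PAYLOADS:
               probe(field, payload)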
Disability Discrimination Act (DDA) Testing

      While the World Wide Web Consortium (W3C) has tooling to check web sites, this
       may not be usable on sites prior to go-live. Consequently, if developing on an air-gapped
       system, testing for disability can be more involved.
      The level of DDA adherence will vary under contractual agreement. However, one
       should ensure that the following is tested as minimum good practice, and this will
       need resourcing and budget:
           Check that Blue / Green colour combinations with poor contrast are not present
            (a contrast-ratio sketch follows this list).
           Check that Red / Brown colour combinations with poor contrast are not present.
           Check that Green / Brown colour combinations with poor contrast are not present.
          Check that images and logos have alternate text for web pages.
          Check if a web page reader will actually read within a column before moving to the next
           column and not just read in turn the top line of each column before moving to the next
           line of each column.
           Check that fonts in browsers can be resized, so a page does not restrict access for
            those with poor eyesight.
          Allow time for scripting and running these extra tests.
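   Colour contrast can be checked offline with the WCAG 2.x contrast-ratio
   formula, which is useful on air-gapped systems where the online W3C
   checkers cannot be reached. A minimal Python sketch; the example colours
   are illustrative.

       # WCAG 2.x contrast ratio between two sRGB colours.
       def _linear(channel):                     # sRGB gamma expansion
           c = channel / 255.0
           return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

       def luminance(rgb):
           r, g, b = (_linear(c) for c in rgb)
           return 0.2126 * r + 0.7152 * g + 0.0722 * b

       def contrast_ratio(fg, bg):
           brighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
           return (brighter + 0.05) / (darker + 0.05)

       # WCAG AA requires at least 4.5:1 for normal text.
       ratio = contrast_ratio((0, 0, 255), (0, 128, 0))   # blue text on green
       print(f"{ratio:.2f}:1 - {'PASS' if ratio >= 4.5 else 'FAIL'}")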
Test Application and Integration Phase

    This part examines test related activity
     during the early testing of an application
     and during integration.
Test Analysis Effort
   Having had time to read the documentation and understand the design,
    test analysis will be required to identify test cases.
  There are a range of techniques such as those detailed in BS7925-2 plus
   methods like Classification Tree. The CT method comes with a tool
   Classification Tree Editor (CTE), which can help to group tests and cut
   test effort. In practice for estimation, this will help to provide a margin of
   error to avoid underestimation of testing.
         This assumes however that the system under test is not safety critical.
         If it is safety critical, the free CTE tool in a different mode will help to ensure
          that test cases are less likely to be missed.
         For large projects there is a commercial version of the CTE tool, which is
          worth consideration.
         The CTE tool also interfaces with the HP tool set.
    Allow at least 15 minutes per single logical requirement for the analysis
     phase of testing.
Manual Test Scripting Effort
    To create a test script from a test case, allow for each logical
     requirement:
         10 minutes to write test setup phase.
         5 minutes per step, which will equate to values to be entered (taken
          from the test case).
         10 minutes to write the end of the test and check the test sanity and
          ensure the test is under configuration control.

    As a general rule a manual test takes around 30 minutes to write
     per test case.
    NOTE: Test cases need to be reviewed.
         One way to check the sanity of a test is to run it the first time using
          another tester.
         HOWEVER the test case set needs to be reviewed for test coverage
          and effectiveness and this can take around 5 to 10 minutes per test.
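   The per-requirement rules of thumb above combine into a quick scripting
   estimate. A minimal Python sketch, using the 15 minutes of analysis from
   the previous slide and the 11 boundary tests per requirement derived later
   under Estimation First Principles; the requirement count and working-day
   length are hypothetical inputs.

       # Scripting effort per test (minutes), from the rules above.
       SETUP, PER_STEP, TEARDOWN = 10, 5, 10
       ANALYSIS_PER_REQ, REVIEW_PER_TEST = 15, 10     # review: 5-10 min; worst case

       def effort_days(reqs, tests_per_req=11, avg_steps=2, day_minutes=450):
           per_test = SETUP + PER_STEP * avg_steps + TEARDOWN + REVIEW_PER_TEST
           total = reqs * (ANALYSIS_PER_REQ + tests_per_req * per_test)
           return total / day_minutes

       # avg_steps=2 reproduces the ~30 minutes per manual test quoted above.
       print(f"{effort_days(100):.0f} days for 100 single logical requirements")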
Test Case and Script Traps
  There is a risk that test cases and scripts may
   miss key opportunities to test during
   intervening steps. So for each step assess
   what needs to be checked and referenced. Do
   not focus only on the final state.
  If using end to end scenarios for functional
   testing, then check that the requirements fully
   document the required actions. Failure to
   document the requirement flows fully can lead
   to inadequate testing.
  Check that the requirement authors are
   involved in reviewing test cases and scripts.
Estimation First Principles
     It is assumed that all requirements are single logical statements. If a statement
      refers to a standard or other sets of requirements, then the relevant requirements
      need to be identified as single statements.
     There are a range of test analysis techniques (e.g. Classification Tree and the
      approaches in BS7925-2). For a simple approach one would consider the
      boundary value analysis technique. This runs tests with values around boundaries
      A and B. The tests that one would use would therefore be:
         Far below Boundary A (can include negative numbers)
         Just below boundary A
         On boundary A
         Just above boundary A
         Mid point between boundary A and B. (Not always tested, but recommended)
         Just below boundary B
         On boundary B
         Just above boundary B
         Far above boundary B
         Special case of value 0
          Illegal value (e.g. alpha, special characters, etc., when expecting a numeric value).
    So for each single statement requirement there are a minimum of 11 tests. As a
     general rule this is a good starting point for estimating.
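   These 11 tests can be generated mechanically for any numeric range. A
   minimal Python sketch, assuming a numeric field with boundaries A and B;
   the step and far-out values are hypothetical choices.

       # The 11 standard boundary tests for a numeric range [a, b].
       def boundary_values(a, b, delta=1, far=1000):
           return [
               a - far,          # far below boundary A (can be negative)
               a - delta,        # just below boundary A
               a,                # on boundary A
               a + delta,        # just above boundary A
               (a + b) / 2,      # mid point between A and B (recommended)
               b - delta,        # just below boundary B
               b,                # on boundary B
               b + delta,        # just above boundary B
               b + far,          # far above boundary B
               0,                # special case of value 0
               "X!",             # illegal value (alpha / special characters)
           ]

       print(boundary_values(10, 100))   # 11 test inputs for one requirement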
Pair-wise and Orthogonal Array

     Pair-wise testing relies upon 2-variable combinations creating
      defects that a single change would not produce. So
      assume 3 inputs (factors), each having a state of 1
      or 2 (i.e. 2 levels). We would test 4 cases (runs):

                   I/P 1    I/P 2    I/P 3
         Case 1      1        1        1
         Case 2      1        2        2
         Case 3      2        2        1
         Case 4      2        1        2

      Hence, while still thorough, this reduces the test cases from the 8
      possible combinations to 4. Orthogonal Arrays take the pair-wise analysis
      further and are out of the scope of this slide set.
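   That the 4 runs really do cover every pair can be checked mechanically. A
   minimal Python sketch verifying 2-way coverage of the table above:

       from itertools import combinations

       # The 4 pair-wise runs for 3 factors, each with 2 levels.
       runs = [(1, 1, 1), (1, 2, 2), (2, 2, 1), (2, 1, 2)]

       for f1, f2 in combinations(range(3), 2):
           covered = {(run[f1], run[f2]) for run in runs}
           assert covered == {(1, 1), (1, 2), (2, 1), (2, 2)}, (f1, f2)

       print(f"All 2-way combinations covered by {len(runs)} runs (vs 8 exhaustive)")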
End to End Tests

  End to End Tests are used to check an
   application and system from a full user
   perspective.
   The end to end business rules will be defined
    within the requirements. As a general rule,
    allow 30 minutes' scripting per rule, which
    needs to include both positive, successful end
    to end cases and cases that exercise error
    handling. Both sets need to be identified in
    the count for estimation.
Working with Widgets and GUI Interfaces


      This part examines estimating test
       scripting effort for GUI interfaces, i.e.
       where requirements are structured in a
       User Experience Document.
Estimating scripting effort for a GUI interface


       As with normal requirements, a User Experience
        Document needs to be reviewed and single logical
        features identified.
      Error conditions and legal ranges need to be identified.
      Business rules need to identify the end to end
       processes.
      As a rough estimate of the amount of scripting time:
      For each widget (GUI interface), the test scripting
       effort = Number of widget features x 11 x 30 minutes,
       where 11 represents the standard minimum number of
       boundary tests required.
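   As a worked example of the formula above, a minimal Python sketch; the
   per-widget feature counts are hypothetical.

       # Scripting effort = widget features x 11 boundary tests x 30 minutes.
       def gui_scripting_hours(features_per_widget):
           minutes = sum(f * 11 * 30 for f in features_per_widget)
           return minutes / 60.0

       # A hypothetical screen: three widgets with 4, 2 and 6 testable features.
       print(f"{gui_scripting_hours([4, 2, 6]):.0f} hours of scripting effort")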
Estimating Manual Test Run Time

    This part examines estimating test run
     time for Manual Test Scripts.
Estimating Manual Test Run Time For First Pass


      To estimate manual test script run time, for each run allow:
          5 minutes to set up each test script.
          3 minutes per step in the script (not including the first set up and final
           end steps).
          3 minutes for the end step BUT add time for defect handling. Or
           count last step as 5 minutes.


     One can expect that around 10% of scripts will flag a problem and
      so will need a defect report raised. So 15 minutes per defect x
      10% of scripts to be run.
     Any additional time to set up (or re-set) the test environment will
      need to be added.
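   A minimal sketch combining the per-script timings, the 10% defect rate and
   environment set-up time; the script count, average step count and
   environment time are hypothetical inputs.

       # First-pass manual run time (minutes), from the rules above.
       def first_pass_minutes(scripts, avg_steps=5, defect_rate=0.10, env_setup=0):
           per_script = 5 + 3 * avg_steps + 5    # set-up + steps + end step
           defects = scripts * defect_rate * 15  # 15 min per defect report
           return scripts * per_script + defects + env_setup

       total = first_pass_minutes(scripts=200, avg_steps=5, env_setup=60)
       print(f"{total / 60:.0f} hours for the first pass")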
Estimating Manual Regression Runs

     For each set of scripts run:
          10% will need to be run again to verify defect fixes.
          Repeat runs will be required for regression runs. This will either be:
               All scripts and initially one would want to re-run at least 3 times.
                Run all scripts once; then, for a non-critical or low-risk system, on each
                 regression run where new functionality is being added, gradually reduce
                 the module testing as end to end and automated tests are added,
                 reducing the manual module tests by 10% per pass. The choice of
                 reduction is based upon risk and this is covered later.
               IF critical or high risk, then all module tests will be required to be tested.
                However these can be either:
                     Gradually automated as code stabilises
                      Automate all tests from the start; however, this has a very high overhead on
                       test effort, and minor changes in the code can mean a considerable need to
                       re-write tests, depending upon the test framework in place. This needs to be
                       resourced.
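   The 10% per-pass reduction gives a quick view of manual test volume over
   time. A minimal sketch; the initial script count and number of passes are
   hypothetical.

       # Manual module tests remaining per regression pass (low-risk system).
       def manual_tests_per_pass(initial, passes, reduction=0.10):
           counts, n = [], float(initial)
           for _ in range(passes):
               counts.append(round(n))
               n *= (1.0 - reduction)
           return counts

       print(manual_tests_per_pass(300, passes=6))
       # -> [300, 270, 243, 219, 197, 177]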
Estimating GUI Automation Testing

    This looks at the approach for estimating
     GUI Automation test effort
GUI Test Automation
    Estimation of GUI automation effort will depend upon choice of tool, the presence
     of an automated test framework and the stability of the code.
    If aiming to automate then allow for:
         Familiarisation of the tooling.
         Setting up of an automated test framework – could be 2 weeks minimum for a developer.
          Scripting, running and proving the first tests will take longer; allow at least one week for the
           first tests.
     If using a tool like Selenium within a framework, then allow for scripting:
          5 to 10 minutes for low complexity, based upon experience.
          For highly complex scripts, a single step can take 1 hour to write.
          Hence an estimation and banding of the risk and complexity of the test target needs to
           be done.
          Note: if using record and playback, scripting takes the same time as a test run, plus 15 minutes
           for administration.
         NOTE: If code is unstable, then the overhead on managing and updating scripts can be
          high. It may be decided to target automation at regression end to end scripts for stable
          code.
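   For scale, a low-complexity automated script of the kind covered by the 5
   to 10 minute estimate might look like the sketch below, using Selenium
   WebDriver for Python. The URL and element IDs are hypothetical
   placeholders.

       from selenium import webdriver
       from selenium.webdriver.common.by import By

       # Load a page, submit a query and verify that results appear.
       driver = webdriver.Firefox()
       try:
           driver.get("https://test-env.example.com/search")   # placeholder URL
           driver.find_element(By.ID, "query").send_keys("widget")
           driver.find_element(By.ID, "submit").click()
           results = driver.find_elements(By.CSS_SELECTOR, ".result-row")
           assert results, "expected at least one search result"
       finally:
           driver.quit()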
High Use of Automation
     Automation of unit code tests has the advantage that code coverage
      can be measured simply, and should be encouraged.
    Usually automation is used gradually to replace manual functional
     scripts for code that is stabilised and has low risk of causing the
     need to re-write automated scripts.
     If all functional scripts are to be automated early on, then there
      can be a high level of maintenance. In many instances, a manual
      test script that takes an hour to write may require a day to write
      and prove as an automated version (depending upon the tool and
      framework).
    A manual test script may only take 5 minutes to change and may
     even be tolerant of change to code. However an automated
     script may require completely re-writing. So the maintenance
     level of scripts needs considerable thought. However there are
     ways around this.
Automation
  Ideally you need a low maintenance approach.
   Where possible, use common scripts where
    the data and expected results can be pulled
    from a table (see the sketch below).
   This means that only the data needs
    manipulating and updating, which in turn
    reduces test maintenance effort.
  Always look for the smart approach to tooling
   and do not rely upon record and playback as
   this can be expensive.
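   A minimal sketch of the table-driven approach in Python. The inline table
   and the stand-in for the system under test are hypothetical; in practice
   the table would live in a spreadsheet or CSV file under configuration
   control.

       import csv, io

       # One common script; data and expected results are pulled from a table.
       ROWS = "input,expected\n10,valid\n9,rejected\n101,rejected\n"

       def system_under_test(value):      # hypothetical stand-in for the real call
           return "valid" if 10 <= value <= 100 else "rejected"

       for row in csv.DictReader(io.StringIO(ROWS)):
           actual = system_under_test(int(row["input"]))
           assert actual == row["expected"], row

       print("all table-driven cases passed")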
Methods for Managing, Prioritising Regression Testing



       There are a number of methods for
        prioritising regression tests to target
        Risk. This section looks at these.
Managing Regression Pack
  A regression pack will grow as functionality is added.
   If manual scripts are being used for the core regression pack, then once
    the code becomes stable, it will be possible to automate scripts gradually.
  Set priorities for automation based upon:
         targeting scripts that are more successful at finding defects,
         targeting scripts that test critical or more risk related functionality.
     When running a manual regression pack within a time-limited period,
      choose the tests as a subset of the full test pack, customising the
      subset for each run. The choice will be based upon:
         High risk functionality.
         Areas of code that have been changed or have interfaces that are impacted by
          change.
         Areas of code that have an existing record of being susceptible to defects.
    For a final regression run, one will want to run a full set of tests.
End of Part 1 of 2

    See slide pack part 2 of 2.

Β 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
Β 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Β 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Β 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Β 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
Β 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
Β 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2
Β 

Estimating test effort part 1 of 2

  • 7. Good Enough?
    - In testing we need to make a call as to what is good enough. This will depend upon commercial and product risk.
    - The level of testing required will be defined within the project Test Policy.
    - For the purposes of this presentation, we look at a typical large project (£3 million) and point out key things that are often forgotten in estimates.
    - While for some projects this will be overkill, the bigger danger is to assume overkill, miss key points and end up with a project overspent, behind schedule and even causing company failure.
  • 8. Rewards and Approach
    - A common problem for testers is under-resourcing. Project managers typically estimate testing as a fraction of development time; however, this is frequently incorrect.
    - Other pressures come from a delivery project manager pushing the product out to meet a bonus payment based upon saving development costs. When the project is handed over to the support phase, this can incur far greater costs and risk to the company reputation. The key point is not to reward wrong behaviour.
    - These slides focus on producing realistic time estimates with a responsible view to quality. If dealing with safety or highly sensitive commercial targets, these reflect the minimum timescales. On the other hand, if producing a simple web project with no commercial impact should it fail, and no risk to commercial reputation or life, then far less testing will be required.
  • 9. Is it not simple?
    - One approach that is often used as a first finger in the air is to say that the Test Effort is between 50% and 75% of the development effort.
    - This however does not always work, because:
      - Development may assume use of existing code that is integrated. Assuming existing code does not need integration testing can be dangerous (e.g. this is why the Ariane rocket, delivered to project schedule, exploded shortly after lift-off).
      - Many test tasks are traditionally underestimated or forgotten, and the broad approach dates from when systems were less complex.
      - Some systems may need more test effort than development effort. This may not become obvious until the team are under pressure and it is too late to bring additional staff onto the project.
      - Testing versions of tools (e.g. Visual Studio) are more expensive than the cut-down development versions.
    - Hence the answer is: no, it is not that simple, and here are some notes to help.
    - (Side note: you can view the Ariane lift-off at http://www.youtube.com/watch?v=gp_D8r-2hwk; a good example of a project delivered to schedule, but not performing full test analysis.)
  • 10. Budgeting for Tooling
    - This part examines cost implications for test tooling.
  • 11. Off the Shelf Test Tooling and Hardware
    - Typical project tooling to be included in a bid:
      - Test management tooling for each tester.
      - Access to tooling to track requirements to tests to defects.
      - If a .NET project: each tester will require a Visual Studio Ultimate licence, and developers will want to ensure that they have access to FxCop and ReSharper within their development versions of Visual Studio.
      - A large programme will want to consider the HP Quality Center tool set; small to medium-sized projects will want to use cheaper tools such as Atlassian.
      - If running PKI certificate licensing tests, budget for additional licences that can be cancelled and revoked.
      - Is video evidence required for testing? If so, a capture tool is needed.
      - Are emulators and real devices, such as mobile phones, needed for testing? There may also be security checks to make before testing can start.
      - Load and performance test tooling may require additional licences for each platform, and the licence must allow an appropriate level of stress testing.
      - The load and performance test platform needs to be comparable and scalable. If resources are too low, the results may not scale, and it may be difficult to run tests with even a small number of users, so no realistic trend can be identified with any certainty over margins of error. This in turn can cause considerable problems at SAT and OAT and cause a project to be delivered late or with performance errors.
      - Code coverage tools and other analysis tooling, such as Coverity, may be required for large programmes.
      - Are checking tools required (e.g. for DDA / W3C, usability, etc.)?
  • 12. Customised Test Tooling
    - There may be a need to develop internal test tooling for a project. This can typically include:
      - Methods for generating large amounts of data.
      - Methods for comparing or validating large amounts of data.
      - Test stubs and test drivers.
      - Approaches for extracting data, including extracting data from test tooling.
    - Budget for design, build, review, test and verification of the tooling, plus documentation, configuration control and support for the tooling. A sketch of a simple data generator follows.
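The data-generation tooling mentioned above is often a simple script. Below is a minimal sketch assuming a CSV target; the field names (id, surname, balance, active) and the row volume are hypothetical and would be tailored to the system under test.

```python
# Minimal sketch of a bulk test-data generator; columns and ranges
# here are hypothetical and should match the system under test.
import csv
import random
import string

def random_record(row_id: int) -> dict:
    """Build one synthetic record (all fields hypothetical)."""
    surname = "".join(random.choices(string.ascii_uppercase, k=8))
    return {
        "id": row_id,
        "surname": surname,
        "balance": round(random.uniform(-500.0, 10_000.0), 2),
        "active": random.choice(["Y", "N"]),
    }

def generate(path: str, rows: int) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "surname", "balance", "active"])
        writer.writeheader()
        for i in range(rows):
            writer.writerow(random_record(i))

if __name__ == "__main__":
    generate("test_data.csv", rows=100_000)  # volume to suit the load profile
```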
  • 13. Test Manager
    - Budget for the Test Manager:
      - 10% of time per day per test team member. If more than 5 individuals, consider breaking the team down and adding team leads.
      - 1 day per week dealing with other teams, the project manager, development manager, security testing, system architect, etc.
      - 0.5 day per week, rising to 1 day per week, in various meetings.
      - 1 hour per day preparing short reports and extracting data. This task can be done by a new graduate or technician under Test Manager guidance.
      - Reviewing development and other test documentation: 1 hour per document per version. Assume 2 versions.
      - Reviewing requirements: 1 day per 100 requirements (assuming all are single logical statements).
      - Dealing with the customer and customer issues: 1 day per week.
      - Reviewing test analysis: 1 day per 100 requirements.
      - Reviewing test script coverage: 1 day per 100 test cases.
      - Reviewing test coverage: 1 day per regression run.
      - Dealing with the hosting company: 1 day per week.
      - Dealing with other stakeholders: 1 day per week during and leading up to integration; during SAT and OAT this increases to 1 day per stakeholder. This may need to be covered by additional test support (Principal Consultant level) if there are many stakeholders.
      - Writing documentation.
      - Who is chairing the code reviews? If the Test Manager is responsible, this needs to be budgeted: one hour per review, with one hour preparation and one hour for minutes and actions. Also budget other review team members' effort at one hour per meeting and potentially longer than one hour preparation time.
    - A rough calculator for the recurring weekly load follows.
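To see how quickly these rules of thumb consume a week, here is a rough calculator under the slide's assumptions; the 8-hour day and the choice of which items recur weekly are my assumptions, and peak-period loads (e.g. per-stakeholder days at SAT/OAT) are excluded.

```python
# Rough weekly-load calculator for a Test Manager, using the slide's
# rules of thumb; tune the inputs per project.
def test_manager_days_per_week(team_size: int) -> float:
    supervision = 0.10 * team_size * 5  # 10% of each member's time, 5 days/week
    liaison = 1.0                       # other teams, PM, architects, etc.
    meetings = 0.5                      # rising to 1.0 at peak
    reporting = 5 / 8                   # ~1 hour/day, assuming an 8-hour day
    customer = 1.0
    hosting = 1.0
    return supervision + liaison + meetings + reporting + customer + hosting

if __name__ == "__main__":
    days = test_manager_days_per_week(team_size=4)
    print(f"Committed days/week: {days:.1f} of 5")  # >5 signals the need for team leads
```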
  • 14. Key Test Documents
    - The following is general guidance for writing (not reviewing); times can increase and are less likely to decrease (Principal / Senior level staff):
      - Test Strategy – 2 weeks
      - Functional Test Plan – 2 weeks
      - Functional Test Specification (when required) – 4 weeks
      - Non-Functional Test Plan (if required) – 2 weeks
      - Security Test Plan – 2 weeks
      - Load and Performance Test Plan – 2 weeks
      - Integration Test Plan – 2 weeks
      - Enterprise Monitoring (EM) Test Plan for SAT – 2 weeks
      - Protective Monitoring (PM) Test Plan for SAT – 4 weeks
      - Site / System Acceptance Plan (in addition to EM and PM) – 2 weeks
      - Operational Acceptance Test Plan – 2 weeks
      - Test harness definitions (includes stubs, drivers, etc.) – 2 weeks
      - Documentation for other special test tooling – 2 weeks each
      - Input to support testing-related work with training material – 2 weeks
    - Covered separately (technician / recent graduate level): test analysis, test cases, test scripts (manual and automated), and test data creation.
    - Note: for large projects and programmes, additional time may be required.
  • 15. Design and Build Phase
    - This part examines testing during the design and build phase.
  • 16. Assumptions and Risks
    - Before we even consider estimating, we need to consider the quality of the requirements. Defects will leak into the system at requirements level. Poor or poorly constructed requirements will mean considerable overhead on testing, so additional resources will be required to help administer the test team and specifically to help in preparing reports and measuring test effectiveness.
    - For poorly constructed requirements, either an additional person at consultant level will be required for the duration of the project, or the requirements need to be checked and improved.
    - If requirements are poorly constructed, it may be necessary to break them down into sub-requirements linked to user stories, which will need matching to test cases, and the test cases to test scripts. This results in:
      - A need for specialist requirements management tooling, linked to specialist commercial test tooling.
      - A need for additional resources to develop, review and manage requirement improvements.
  • 17. Requirements
    - Tests are delivered against requirements. It is important to check that:
      - There are no requirement gaps.
      - There are no requirement conflicts.
      - Derived engineering requirements are well understood and documented.
      - There are no blatant errors in requirements.
      - There is no lack of detail in terms of valid ranges and expected behaviour when error conditions arise.
      - Details concerning security have been identified.
      - Specifications have been checked as if they were subsets of requirements; do not assume that these will all be complete and correct.
    - Where browsers are defined, decide whether all tests are to be repeated for each browser, or whether tests can be prioritised and distributed across browsers (e.g. 100% of tests on Firefox, 90% on IE 6.0, 25% on others). This needs to be reflected in test run-time effort calculations. Note that pop-up behaviour often differs across browsers.
    - Where interfaces to other systems are present and the interface requirement is unproven, or poorly (or unreliably) documented, additional resource will be required.
  • 18. Early intervention
    - Resource is required to check requirements for testing, at Principal / Senior staff level.
    - As a general rule, 5 to 10 days' investment in a requirement review by an appropriate test architect can typically save 2 man-months of effort later, if the advice is adopted.
  • 19. Defect Cost Implications
    - Defects slip in at the requirements phase and grow.
    - The later the detection, the greater the cost to detect, fix and retest.
    - It is not a choice of being able to afford early test intervention in checking requirements. It is a fact that early intervention:
      - Saves money.
      - Prevents project overrun.
      - Reduces development and test effort.
      - Improves development delivery.
    - Around 10% of defects are seeded in a project at the requirements phase. Late detection means a longer time to delivery of the project and greater costs.
    - [Chart: Cost of Fault by Phase. Cost per fault (£K) rises from near £0 at Requirements, through Design, Code, Unit Test, Functional Test and System Test, to around £25K in Field Use.]
  • 20. Test Management of Requirements
    - It is not enough for requirements to have sufficient content to be testable. They also need to be manageable.
    - Tests are mapped to individual requirements, so requirements need to be structured as individual single logical statements. Failure to do this will mean that many tests are required to sign off the many attributes of a requirement. As a project grows, it becomes necessary to cut tests down to a manageable set of regression tests based upon risk assessment. With multiple embedded requirements this creates difficulties and introduces the risk that a requirement attribute is not delivered, or contains defects that are not tested, and can mean that critical defects go undetected.
    - It is vital that requirements are structured as single logical statements, each with a separate reference number. Logical statements normally do not contain the words OR / AND.
    - If this structuring is not done, additional effort will be required to maintain the test reporting tool. If the solution is to create user stories, these will need to be managed and reviewed, and there may be issues extracting information from tools such as Visual Studio.
  • 21. Early Testing Effort
    - Testing applies equally to coding as well as system testing.
    - To cut defect leakage early on, it is vital that code is:
      - Reviewed against best-practice checklists.
      - Checked early for security impacts.
      - Checked early for performance issues.
      - Checked with tools such as ReSharper and FxCop. This requires configuration and build-control effort from the development team, with the necessary resources to run tests and analyse output early on. It will save system testing effort and help to speed development. To avoid false reporting, ensure the tools are configured correctly (allow 5 man-days for configuration and setup).
    - Resource to ensure adequate static testing, to include:
      - Review of code.
      - Running of static tooling.
  • 22. Review of Code Effort
    - It is vital that code reviews are adequately resourced. Reviews need to be effective, so the review rate needs to be considered:
      - Too fast, and defects will leak through, increasing the overall project cost.
      - Too long, and the review becomes ineffective as people become blinded by lines of code.
    - Reviews need to be resourced, regular, guided and targeted.
      - A review period of 1 to 2 hours maximum is most efficient; reviews longer than 2 hours need to be broken down into targeted, focused chunks, or individuals given specific areas to review.
      - The review rate may be around 1 KLOC/hour.
    - Time also needs to be allocated for static review of documents and diagrams.
    - Static test tools can add confidence to a code review and (if set up and used correctly) will add value, but should not be used to replace a code review.
  • 23. Code review activities
    - If reviewing code in a closed meeting, comments by one reviewer will typically inspire comments from another. If reviewing code using tools that support remote reviews, the first reviewers will miss comments from later reviewers, so it is important for all parties to go back over comments once all are collected.
    - Review tasks should be set for individuals. Typically these will be supported by project checklists and will include:
      - Use of good coding practice;
      - Code efficiency / performance;
      - Code security;
      - Consistency with requirements;
      - Consistency with interfaces and other code modules. Any module being interfaced with should have an assigned individual representing that module to check compliance.
  • 24. System Architecture
    - The system architecture has impact for testing of:
      - Security.
      - Load and performance.
    - Ensure that the security test team and performance tester have early input to the design. This review needs to be budgeted.
  • 25. Application Security and DDA Testing
    - This part details points that often need testing and can get missed.
  • 26. Security Scenarios
    - While the system will be subjected to security testing, do not forget to test the application itself as soon as possible. This needs to be budgeted and resourced.
    - Scenarios need to be put in place for:
      - Ensuring that SQL injection cannot be used (one test per field).
      - Ensuring that URL injection cannot be used on secure web pages.
      - Checking timeout of logins.
      - Checking success of logout, then trying the back button.
    - A sketch of a field-level injection check follows.
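As an illustration of the "one test per field" SQL injection scenario, here is a minimal sketch using Python's requests library. The endpoint, field names and the error signature checked for are all hypothetical placeholders; a real project would align its payload list with the security test plan.

```python
# Illustrative injection smoke test against a hypothetical login form.
import requests

BASE_URL = "https://test-env.example.com/login"  # hypothetical test endpoint
PAYLOADS = [
    "' OR '1'='1",            # classic SQL injection
    "'; DROP TABLE users;--",
    "admin'--",
]

def probe_field(field: str) -> None:
    for payload in PAYLOADS:
        data = {"username": "tester", "password": "secret", field: payload}
        resp = requests.post(BASE_URL, data=data, timeout=10)
        # A 5xx response or a database error string in the body suggests
        # unsanitised input reaching the database layer.
        assert resp.status_code < 500, f"{field}: server error on {payload!r}"
        assert "SQL syntax" not in resp.text, f"{field}: DB error leaked for {payload!r}"

for field in ("username", "password"):
    probe_field(field)
```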
  • 27. Disability Discrimination Act (DDA) Testing
    - While the World Wide Web Consortium (W3C) has tooling to check web sites, this may not be usable on sites prior to go-live. Consequently, if developing on an air-gapped system, testing for disability can be more involved.
    - The level of DDA adherence will vary under contractual agreement. However, the following should be tested as minimum good practice, and this will need resourcing and budget:
      - Check that blue / green colour contrast is not relied upon.
      - Check that red / brown colour contrast is not relied upon.
      - Check that green / brown colour contrast is not relied upon.
      - Check that images and logos have alternate text for web pages.
      - Check whether a web page reader will actually read within a column before moving to the next column, and not just read the top line of each column in turn before moving to the next line of each column.
      - Check that fonts in browsers can be resized, so a page does not restrict access for those with poor eyesight.
    - Allow time for scripting and running these extra tests; a sketch of one automatable check follows.
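Some of these checks can be scripted even before W3C online tooling is usable. Below is a minimal sketch of the alternate-text check using requests and BeautifulSoup (both assumed to be installed locally); the URL is a placeholder, and the colour-contrast and reading-order checks above still need specialist tooling or manual review.

```python
# Minimal alt-text check for DDA testing of a locally served page.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of every <img> without non-empty alternate text."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [
        img.get("src", "<no src>")
        for img in soup.find_all("img")
        if not (img.get("alt") or "").strip()
    ]

if __name__ == "__main__":
    offenders = images_missing_alt("http://localhost:8000/index.html")  # hypothetical page
    for src in offenders:
        print(f"Missing alt text: {src}")
```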
  • 28. Test Application and Integration Phase
    - This part examines test-related activity during the early testing of an application and during integration.
  • 29. Test Analysis Effort
    - Once time has been taken to read the documentation and understand the design, test analysis will be required to identify test cases.
    - There are a range of techniques, such as those detailed in BS7925-2, plus methods like Classification Tree. The CT method comes with a tool, the Classification Tree Editor (CTE), which can help to group tests and cut test effort. In practice for estimation, this helps to provide a margin of error to avoid underestimating testing.
      - This assumes that the system under test is not safety critical. If it is safety critical, the free CTE tool, used in a different mode, will help to ensure that test cases are less likely to be missed.
      - For large projects there is a commercial version of the CTE tool, which is worth considering. The CTE tool also interfaces with the HP tool set.
    - Allow at least 15 minutes per single logical requirement for the analysis phase of testing.
  • 30. Manual Test Scripting Effort
    - To create a test script from a test case, allow for each logical requirement:
      - 10 minutes to write the test setup phase.
      - 5 minutes per step, which will equate to values to be entered (taken from the test case).
      - 10 minutes to write the end of the test, check the test's sanity and ensure the test is under configuration control.
    - As a general rule, a manual test takes around 30 minutes to write per test case; a worked version of this arithmetic follows.
    - NOTE: test cases need to be reviewed. One way to check the sanity of a test is to have another tester run it the first time. HOWEVER, the test case set also needs to be reviewed for coverage and effectiveness, which can take around 5 to 10 minutes per test.
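A worked version of the scripting arithmetic above, using the slide's 10-minute setup, 5-minutes-per-step and 10-minute close-out rule plus the 5–10 minute review:

```python
# Per-script writing estimate from this slide.
def script_writing_minutes(steps: int, review_minutes: int = 10) -> int:
    return 10 + 5 * steps + 10 + review_minutes

# e.g. a 2-step script: 10 + 10 + 10 = 30 minutes to write,
# 40 minutes including a 10-minute coverage review.
print(script_writing_minutes(steps=2))
```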
  • 31. Test Case and Script Traps
    - There is a risk that test cases and scripts may miss key opportunities to test during intervening steps. So for each step, assess what needs to be checked and referenced; do not focus only on the final state.
    - If using end-to-end scenarios for functional testing, check that the requirements fully document the required actions. Failure to document the requirement flows fully can lead to inadequate testing.
    - Check that the requirement authors are involved in reviewing test cases and scripts.
  • 32. Estimation First Principles
    - It is assumed that all requirements are single logical statements. If a statement refers to a standard or another set of requirements, the relevant requirements need to be identified as single statements.
    - There are a range of test analysis techniques (e.g. Classification Tree and the approaches in BS7925-2). For a simple approach, consider the boundary analysis technique, which runs tests with values between boundaries A and B. The tests one would use are:
      - Far below boundary A (can include negative numbers)
      - Just below boundary A
      - On boundary A
      - Just above boundary A
      - Mid point between boundaries A and B (not always tested, but recommended)
      - Just below boundary B
      - On boundary B
      - Just above boundary B
      - Far above boundary B
      - Special case of value 0
      - Illegal value (e.g. alpha, special characters, etc., when expecting a numeric value)
    - So for each single-statement requirement there are a minimum of 11 tests. As a general rule this is a good starting point for estimating; a sketch generating the set follows.
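A sketch that generates the 11-value set for a numeric range [A, B]; the "far" and "just" offsets and the illegal value are illustrative assumptions to be tuned per requirement.

```python
# Generate the 11 boundary-analysis test values for a range [a, b].
def boundary_values(a: float, b: float, step: float = 1.0):
    mid = (a + b) / 2
    numeric = [
        a - 100 * step,  # far below boundary A (may be negative)
        a - step,        # just below A
        a,               # on A
        a + step,        # just above A
        mid,             # mid point (recommended)
        b - step,        # just below B
        b,               # on B
        b + step,        # just above B
        b + 100 * step,  # far above B
        0,               # special case of value 0
    ]
    return numeric + ["X!"]  # illegal value (alpha/special characters)

assert len(boundary_values(10, 20)) == 11  # the minimum 11 tests per requirement
```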
  • 33. Pair-wise and Orthogonal Array
    - Pair-wise testing relies upon 2-variable combinations creating defects that a single change would not produce. Assume 3 inputs (factors), each having a state of 1 or 2 (i.e. 2 levels). We would test 4 cases (runs):

                I/P 1   I/P 2   I/P 3
        Case 1    1       1       1
        Case 2    1       2       2
        Case 3    2       2       1
        Case 4    2       1       2

    - Hence, while thorough, this reduces the test cases from the 8 possible combinations. Orthogonal Arrays take the pair-wise analysis further and are out of scope for this slide set. A sketch verifying the pair coverage follows.
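This short check, using nothing beyond the table above, confirms that the 4 runs cover every 2-way combination of the 3 two-level factors that the full 8-case set would.

```python
# Verify that 4 pairwise cases cover all 2-way combinations of 3 factors.
from itertools import combinations, product

cases = [(1, 1, 1), (1, 2, 2), (2, 2, 1), (2, 1, 2)]

for f1, f2 in combinations(range(3), 2):            # each pair of factors
    covered = {(c[f1], c[f2]) for c in cases}
    assert covered == set(product([1, 2], repeat=2)), (f1, f2)
print("All 2-way combinations covered by 4 of 8 cases")
```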
  • 34. End to End Tests
    - End-to-end tests are used to check an application and system from a full user perspective.
    - The end-to-end business rules will be defined within the requirements. As a general rule, allow 30 minutes' scripting per rule, which needs to include both positive, successful end-to-end cases and cases where the process leads to testing error handling. Both sets need to be identified in the count for estimation.
  • 35. Working with Widgets and GUI Interfaces
    - This part examines estimating test scripting effort for GUI interfaces, i.e. where requirements are structured in a User Experience Document.
  • 36. Estimating scripting effort for a GUI interface
    - As with normal requirements, a User Experience Document needs to be reviewed and single logical features identified.
    - Error conditions and legal ranges need to be identified.
    - Business rules need to identify the end-to-end processes.
    - As a rough estimate of scripting time: for each widget (GUI interface), test scripting effort = number of widget features × 11 × 30 minutes, where 11 represents the standard minimum number of boundary tests required. A worked version follows.
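A worked version of the widget formula above; the feature count is a hypothetical example.

```python
# Rough GUI scripting estimate: features × 11 boundary tests × 30 minutes.
def widget_scripting_hours(feature_count: int) -> float:
    return feature_count * 11 * 30 / 60

# e.g. a widget with 4 single logical features ≈ 22 hours of scripting
print(widget_scripting_hours(4))
```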
  • 37. Estimating Manual Test Run Time
    - This part examines estimating test run time for manual test scripts.
  • 38. Estimating Manual Test Run Time For First Pass
    - To estimate manual test script run time, for each run allow:
      - 5 minutes to set up each test script.
      - 3 minutes per step in the script (not including the first setup and final end steps).
      - 3 minutes for the end step, BUT add time for defect handling; or count the last step as 5 minutes.
    - Expect around 10% of scripts to flag a problem needing a defect report, so add 15 minutes per defect × 10% of the scripts to be run.
    - Any additional time to set up (or reset) the test environment will need to be added. A worked version of this arithmetic follows.
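A worked version of the first-pass rules above; the script count and average step count are hypothetical inputs, and the 10% defect rate and 15-minute report time are the slide's rules of thumb.

```python
# First-pass manual run-time estimate from this slide.
def first_pass_minutes(scripts: int, avg_steps: int) -> float:
    per_script = 5 + 3 * avg_steps + 5     # setup + steps + end (end counted as 5)
    defect_overhead = 0.10 * scripts * 15  # 10% of scripts raise a defect report
    return scripts * per_script + defect_overhead

# 100 scripts of ~6 intermediate steps ≈ 2950 minutes,
# excluding environment set-up time.
print(first_pass_minutes(scripts=100, avg_steps=6))
```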
  • 39. Estimating Manual Regression Runs
    - For each set of scripts run, 10% will need to be run again to verify defect fixes.
    - Repeat runs will be required for regression. This will be either:
      - All scripts; initially one would want to re-run at least 3 times. Or:
      - Run all scripts once; then, for a non-critical or low-risk system, on each regression run where new functionality is being added, gradually reduce the module testing as end-to-end and automated tests are added, cutting the manual module tests by 10% per pass. The choice of reduction is based upon risk and is covered later.
    - If the system is critical or high risk, all module tests will be required. However these can be either:
      - Gradually automated as the code stabilises; or
      - Automated from the start. This has a very high overhead on test effort, and minor changes in the code can mean considerable re-writing of tests, depending upon the test framework in place. This needs to be resourced.
    - A sketch of the declining-pass arithmetic follows.
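A sketch of the declining manual regression load for a low-risk system, using the 10% per-pass reduction above; the starting pack size is hypothetical.

```python
# Manual module-test counts per regression pass, trimmed 10% each pass.
def regression_script_counts(module_tests: int, passes: int, cut: float = 0.10):
    counts = []
    remaining = float(module_tests)
    for _ in range(passes):
        counts.append(round(remaining))
        remaining *= (1 - cut)
    return counts

print(regression_script_counts(module_tests=200, passes=5))
# -> [200, 180, 162, 146, 131]
```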
  • 40. Estimating GUI Automation Testing
    - This part looks at the approach for estimating GUI automation test effort.
  • 41. GUI Test Automation
    - Estimation of GUI automation effort will depend upon the choice of tool, the presence of an automated test framework and the stability of the code.
    - If aiming to automate, allow for:
      - Familiarisation with the tooling.
      - Setting up an automated test framework: could be 2 weeks minimum for a developer.
      - Scripting, running and proving the first tests will take longer; allow at least one week for the first tests.
    - If using a tool like Selenium within a framework, allow for scripting:
      - 5 to 10 minutes for low complexity, based upon experience.
      - For highly complex scripts, a single step can take 1 hour to write.
      - Hence the risk and complexity of the test target need to be estimated and banded.
    - Note: if using record and playback, scripting time is the same as a test run, but add 15 minutes for administration.
    - NOTE: if code is unstable, the overhead of managing and updating scripts can be high. It may be decided to target automation at regression end-to-end scripts for stable code. A minimal scripted example follows.
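For scale, here is a minimal Selenium (Python bindings) script of the sort being estimated; the URL, element IDs and expected title are hypothetical placeholders, and a locally available chromedriver is assumed.

```python
# Minimal Selenium login check against a hypothetical test page.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local chromedriver
try:
    driver.get("https://test-env.example.com/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title, "login did not reach the expected page"
finally:
    driver.quit()
```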
  • 42. High Use of Automation
    - Automation of unit code tests has the advantage of making code coverage simple to measure, and should be encouraged.
    - Usually automation gradually replaces manual functional scripts for code that has stabilised and has a low risk of forcing automated scripts to be re-written.
    - If all functional scripts are automated early on, there can be a high level of maintenance. In many instances, a manual test script that takes an hour to write may require a day to write and prove as an automated version (depending upon tool and framework).
    - A manual test script may take only 5 minutes to change and may even be tolerant of code change, whereas an automated script may require complete re-writing. So the maintenance level of scripts needs considerable thought. However, there are ways around this.
  • 43. Automation
    - Ideally you need a low-maintenance approach.
    - Where possible, use common scripts where the data and expected results can be pulled from a table.
    - This means that only the data needs manipulating and updating, which in turn can reduce test maintenance effort.
    - Always look for the smart approach to tooling and do not rely upon record and playback, as this can be expensive. A sketch of the table-driven style follows.
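A minimal sketch of the table-driven style using pytest's parametrize; apply_discount and the tier table are hypothetical stand-ins for a common script fed from a data table (which could equally be loaded from a CSV or spreadsheet, so only data changes between runs).

```python
# Table-driven test: one common script, many data rows.
import pytest

CASES = [  # (tier, input price, expected price) - could be loaded from CSV
    ("standard", 100.0, 100.0),
    ("silver",   100.0,  95.0),
    ("gold",     100.0,  90.0),
]

def apply_discount(tier: str, price: float) -> float:
    """Hypothetical function under test."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return price * (1 - rates[tier])

@pytest.mark.parametrize("tier,price,expected", CASES)
def test_discount(tier, price, expected):
    assert apply_discount(tier, price) == pytest.approx(expected)
```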
  • 44. Methods for Managing, Prioritising Regression Testing
    - There are a number of methods for prioritising regression tests to target risk. This section looks at these.
  • 45. Managing Regression Pack
    - A regression pack will grow as functionality is added.
    - If manual scripts are being used for the core regression pack, then once the code becomes stable it will be possible to automate scripts gradually. Set priorities for automation based upon:
      - Targeting scripts that are more successful at finding defects;
      - Targeting scripts that test critical or more risk-related functionality.
    - When running a manual regression pack within a time-limited period, choose a customised subset of the full test pack for each run. The choice will be based upon:
      - High-risk functionality.
      - Areas of code that have been changed or have interfaces impacted by change.
      - Areas of code with an existing record of being susceptible to defects.
    - For a final regression run, one will want to run a full set of tests. A sketch of a risk-scored selection follows.
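One way to mechanise this selection is to score each script on the three criteria above and run the highest-scoring subset that fits the time box; the weights and example scripts below are illustrative assumptions, not a prescribed scheme.

```python
# Risk-scored selection of a regression subset for a time-limited run.
from dataclasses import dataclass

@dataclass
class Script:
    name: str
    risk: int             # 1 (low) .. 5 (high) functional risk
    touches_change: bool  # exercises code changed in this release
    past_defects: int     # defects this script has found before

def score(s: Script) -> float:
    # Illustrative weights: tune to the project's risk assessment.
    return 2.0 * s.risk + (3.0 if s.touches_change else 0.0) + 1.5 * s.past_defects

def select(pack: list[Script], budget: int) -> list[Script]:
    """Return the `budget` highest-scoring scripts."""
    return sorted(pack, key=score, reverse=True)[:budget]

pack = [
    Script("login_e2e", 5, True, 4),
    Script("report_export", 2, False, 0),
    Script("payment_flow", 5, True, 2),
]
for s in select(pack, budget=2):
    print(s.name)
```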
  • 46. End of Part 1 of 2
    - See slide pack part 2 of 2.