AJRA Test Strategy Discussion

1,962 views
This presentation discusses the test strategy recommendations by A.J. Rhem & Associates for a typical organization.


Uploaded as Microsoft PowerPoint

© All Rights Reserved
  • Speaker reads the slide and says… A requirement can be considered ‘Testable’ if the following characteristics are met: the state of the system and the data elements that are inputs (e.g., customer number, product number); the condition or action associated with the requirement (e.g., the user enters data, the order is validated, the check amount is deducted); and the expected or specified result described in terms of data elements (e.g., customer number must be 8-digit numeric, product quantity must be greater than zero). Give the example of building your house.
  • Speaker reads the slide and says… Explain the above characteristics and walk through the examples.
  • Speaker reads point 1. Point 1 – The ability to identify where the greatest risk lies enables Test Managers to assign test resources appropriately. In addition, if testing time frames are shortened, the Test Manager can focus on the areas of highest risk. Point 2 – Early identification of non-testable requirements prevents design-type defects from surfacing in later test phases. Point 3 – Gets the business, developers, and testers communicating using a standard deliverable. Since we know there is a greater possibility of a defect occurring in a particular piece of functionality, it should be no surprise if one actually does occur. Point 4 – If there is a Change Control, this process (risk assessment) allows us to make a fair estimate of the level of risk and helps us understand what people resources could be needed to mitigate it.

AJRA Test Strategy Discussion – Presentation Transcript

  • Test Strategy Discussion
    • Test Management Mission:
    • Verify that the Project Business Vision is effectively carried through the iterative development process to customer deliverables.
    • Test Management Goals:
    • Ensure that the application performs as intended
    • Detect and document as many defects (errors or application deficiencies) as early as possible in the development life cycle, in order to reduce risks and increase the probability of achieving project goals.
    • Ensure that code changes and/or fixes work as expected and do not cause any new defects to the operation and function of any unchanged components of the application
    • Ensure that testing activities are executed quickly, effectively, and within program timelines
  • Test Strategy Discussion
    • Testing within iterations is intentionally focused on a short horizon
      • Plan to test only requirements that can be tested
      • Focus on what needs to be verified to move to next step
    • Scope Box or Time Box the Iterations?
      • Scope Boxing implies that Testing must conclude with an acceptable type and number of defects on a given scope for the iteration to be called complete
      • Time Boxing implies that Testing must conclude by an acceptable date, and all untested requirements are then moved to the next iteration
    Scope Box? Time Box?
  • Test Strategy Discussion Testing everything?
    • Testability
    • Test Scope
    • Test Effort
    • Test Timing
    • Test Coverage
    • Test Acceptance
  • Test Strategy Discussion Testability
    • Testable Requirement
    • This is a type of requirement that has been broken down to a level where it is precise, unambiguous, and decomposed into manageable units
    • Non-Testable Requirement
    • This is a type of requirement for which we can't specify a definitive, unambiguous test for a requested attribute to be delivered in the solution product. Either this attribute is unachievable, or we don't understand it well enough to be able to deliver it
    • Characteristics of Testable Requirement:
    • Completeness
    • Conciseness
    • Correctness
    • Feasible
    • Unambiguous
  • Testability Test Strategy Discussion
    • Completeness
      • The requirement should be complete by all means with the necessary and related information
      • Example:
        • Testable Requirement: User should be authenticated based on their user ID, Password and Department
        • Non-Testable Requirement: User should be authenticated
    • Conciseness
      • The requirement should be concise and self-contained; it should not be clubbed together with other requirements
      • Example:
        • Testable Requirement: The user should be prompted for user name & password authentication
        • Non-Testable Requirement: The user should be prompted for authentication
  • Testability Test Strategy Discussion
    • Correctness
      • The correctness in expressing the requirement
      • Example:
        • Testable Requirement: Access to 'Secured' page should be restricted based on the User Privileges
        • Non-Testable Requirement: Access to 'Secured' page should NOT be restricted
    • Feasible
      • Whether it is feasible to test the requirement given infrastructure, data, time, resource, and other constraints
      • Example:
        • Testable Requirement: The batch with 1000 records should be executed in 10 Sec
        • Non-Testable Requirement: System should run 24/7
  • Testability Test Strategy Discussion
    • Unambiguous
      • The requirement is explained clearly, without any ambiguity: a requirement that has only one interpretation even if it is read by many readers.
      • Example:
        • Testable Requirement: The System Admin can view all the information, the general users can only view the information and the editors can edit (update and create)
        • Non-Testable Requirement: Every one should not access all the information
  • Test Planning Considerations
    • Requirements for Test = Target of Test
      • The requirements for test may be derived from many sources, including use cases, use-case models, supplemental specifications, design requirements, business cases, interviews with end-users, and the software architecture document. All of these should be reviewed to gather information that is used to identify the requirements for test
      • The requirement for test must be an observable, measurable behavior. If the requirement for test cannot be observed or measured, it cannot be assessed to determine whether the requirement has been satisfied
      • There is not a one-to-one relationship between each use case or supplemental requirement of a system and a requirement for test
        • Use cases will often have more than one requirement for test
        • Some supplemental requirements will derive one or more requirements for test and others will derive none (such as marketing or packaging requirements)
    Test Scope
  • Test Planning Considerations
    • Targets for Functional Tests
      • Functional test requirements for test are derived from descriptions of the target-of-test's functional behaviors
      • At a minimum, each use case should derive at least one requirement for test
      • A more detailed list of requirements for test would include at least one requirement for test for each use case flow of events
    • Targets for Other Tests
      • Requirements for other tests are derived from several sources, including Supplementary Specifications, User-Interface Guidelines, Design Guidelines, and Programming Guidelines. These include…
      • Functionality not covered in Use Cases
      • Usability
      • Reliability
      • Performance
      • Scalability
      • Others…
    Test Scope
  • Test Planning Considerations Test Scope
  • Test Planning Considerations Test Scope [Slide table: maps targets of test — Screen, Interface (Unit/String), Application, Integrated Subsystem/Incremented Application, Code, Service, Report — to test scopes ranging from a single screen, pop-up, class, routine, service contract, or report up to the full set of integrated subsystems by vision]
  • Test Planning Considerations Test Scope [Slide table: for each test target, the tests performed, by whom, and when — Unit and String Tests (Developer, upon development); User Interface and Progression Tests (Test Team, every iteration or upon completion); Regression, User Acceptance (Business), Parallel, and Certification Tests — covering form, function, content, navigation, stress, load (volume), and security]
  • Test Planning Considerations Test Effort [Slide chart: Test Effort over Time across each Iteration (D, C, T phases), split between IT and Business]
  • Test Planning Considerations Test Timing [Slide diagram: (1) Unit and String Tests (Developer) during coding; (2) Progression Test (Test Team, upon completion) and Regression Test (Test Team) — Functional, Stress, Load, and Security Tests, plus others as needed; (3) User Acceptance Test (Business)]
  • Test Planning Considerations Test Timing [Slide diagram: the same sequence with automation — the Regression Test runs the automated tests, and the Progression Test adds the delta and runs the automated tests (Test Team)]
  • Test Planning Considerations
    • How much coverage?
      • Functional test cases for functional testing are derived from the project's use cases
      • Test cases should be developed for each use-case scenario
      • The use-case scenarios are identified by describing the paths through the use case that traverse the basic flow and alternate flows from start to finish
    1. Ref: Rational Unified Process® Version 2003.06.13 © Copyright IBM Corp. 1987, 2004 Test Coverage
  • Test Planning Considerations
    • Potentially an infinite number of tests could be conducted.
      • E.g. Use Case “Do Something” has the following flows:
        • Main Flow M, Alternative Flows A1, A2, and A3
      • Test Scenarios that could be executed:
    M, M A1, M A2, M A3, M A1 A2, M A2 A1, M A2 A3, M A3 A2, M A1 A3, M A3 A1, M A1 A2 A3, M A2 A1 A3, M A2 A3 A1, M A3 A2 A1, M A1 A3 A2, M A3 A1 A2. To each, add at least 2 Test Data Conditions: Good Data, Bad Data… that’s a lot, and it quickly tends toward infinity! Test Coverage
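  The enumeration above can be sketched in a few lines. This is a minimal illustration, not part of the original deck: it generates the main flow M followed by every ordering of every subset of the alternate flows A1–A3, then multiplies by the two data conditions the slide mentions.

    ```python
    from itertools import permutations

    # Scenarios: the main flow M followed by any ordering of any
    # subset of the alternate flows (as enumerated on the slide).
    alts = ["A1", "A2", "A3"]
    scenarios = []
    for r in range(len(alts) + 1):            # choose 0..3 alternate flows
        for combo in permutations(alts, r):   # ...in every possible order
            scenarios.append(["M", *combo])

    print(len(scenarios))                     # 1 + 3 + 6 + 6 = 16 scenarios

    # Each scenario is run with at least two data conditions.
    data_conditions = ["Good Data", "Bad Data"]
    print(len(scenarios) * len(data_conditions))
    ```

  With only three alternate flows this already yields 16 scenarios and 32 test cases; real use cases with more flows and more data conditions grow far faster, which is the point the slide makes.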
  • Test Planning Considerations
    • Optimization using Risk and Relevance Based Testing
      • The mathematically large permutations need to be optimized using considerations of Risk and Relevance.
      • This is as important and as much an Art & Science process, as Analysis & Design!
      • The test effort should be prioritized so that the most important, significant, or riskiest use cases or components are tested first
      • A Risk Assessment and Relevance measure are used as the basis for establishing the test priority. Three steps to assessing risk and establishing the test priorities are:
        • Assess Risk
        • Determine Relevance
        • Establish Test Priority
    Test Coverage
  • Test Planning Considerations
    • Step 1: Assess Risk
      • Risk Based Testing focuses the project’s test activities on the most important tests that will mitigate the highest amount of risk
      • Provides vehicle for business, development and test teams to align on assumptions that drive the test approach and planning process
      • Potentially results in fewer tests, thus reducing cost while maintaining or increasing test effectiveness
    • Step 2: Determine Relevance
      • Relevance refers to how much the test target is expected to be used, by whom, and in what way
    • Step 3: Establish Test Priority
      • Test Prioritization refers to:
        • Prioritizing what will be tested and in what order tests should be performed
      • This is the result of Assess Risk and Determine Relevance above
    Test Coverage
  • Test Planning Considerations
    • Step 1: Assess Risk
      • Risk Severity
        • H - high risk, not tolerable. Severe external exposure. The company will suffer great financial losses, liability, or un-recoverable loss of reputation
        • M - medium risk, tolerable, but not desirable. Minimal external exposure, the company may suffer financially, but there is limited liability or loss of reputation
        • L - low risk, tolerable. Little or no external exposure, company has little or no financial loss or liability. Company's reputation unaffected
        • Risk Assessment Perspectives
          • There are two perspectives that can be used for assessing Risk Severity
            • Cause - an undesirable outcome caused by the failure of a use case
            • Effect - the impact or consequence of a specified use case (requirement, etc.) failing
    Test Coverage
  • Test Planning Considerations
    • Step 1: Assess Risk (contd.)
      • Risk Chance
        • Assessing risk by Chance is to determine the probability that a use case (or component implementing a use case) will fail
        • The probability is usually based on external factors such as:
          • Failure rate(s) and / or density already observed
          • Rate of change
          • Complexity
          • Origination / Originator
    Test Coverage
  • Test Planning Considerations
    • Step 2: Determine Relevance
      • The next step in assessing risk and establishing a test priority is to determine the target-of-test's Relevance
      • Begin by identifying and describing the operational profile magnitude indicators that will be used, such as
        • H - used quite frequently, many times per period or by many actors or use cases
        • M - used frequently, several times per period or by several actors or use cases
        • L - infrequently used or used by very few actors or use cases
      • The operational profile indicator you select should be based upon the frequency a use case or component is executed, including:
        • The number of times ONE actor (or use case) executes the use case (or component) in a given period of time, or
        • The number of ACTORS (or use cases) that execute the use case (or component)
    Test Coverage
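  The operational-profile indicator above could be derived mechanically from usage counts. A minimal sketch — the numeric thresholds are illustrative assumptions, since the slides define H/M/L only qualitatively (many / several / few uses or actors):

    ```python
    def relevance(executions_per_period: int, actor_count: int) -> str:
        """Map usage frequency to an operational-profile magnitude (H/M/L).

        Thresholds are assumptions for illustration; the slides say only
        "many times per period or by many actors", "several", "few".
        """
        score = max(executions_per_period, actor_count)
        if score >= 100:
            return "H"   # used quite frequently, or by many actors/use cases
        if score >= 10:
            return "M"   # used frequently, or by several actors/use cases
        return "L"       # infrequently used, or by very few actors/use cases

    print(relevance(500, 2))   # a heavily used use case
    print(relevance(2, 3))     # a rarely used one
    ```

  Taking the maximum of the two counts reflects the slide's "or" between frequency-per-actor and actor count; a team could just as reasonably score them separately.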
  • Test Plan
    • Step 3: Establish Test Priority
      • Test Prioritization refers to:
        • Prioritizing what will be tested and in what order tests should be performed
          • To ensure the test efforts are focused on the most appropriate requirements for test
          • To ensure the most critical, significant, or riskiest requirements for test are addressed as early as possible
          • To ensure that any dependencies (sequence, data, etc.) are accounted for in the testing
    Test Coverage
  • Test Plan
    • Step 3: Establish Test Priority (contd.)
      • Begin by identifying and describing the test priority magnitude indicators that will be used, such as:
        • H - must be tested
        • M - should be tested, will test only after all H items are tested
        • L - might be tested, but not until all H and M items have been tested
      • Consider the following when determining the test priority indicators for each item:
        • The Risk magnitude indicator value you identified earlier
        • The Relevance magnitude value you identified earlier
        • Also, the actor descriptions (are the actors experienced? tolerant of work-arounds? etc.)
        • Any contractual obligations (will the target-of-test be acceptable if a use case or component is not delivered?)
    Test Coverage
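  The three steps above can be sketched as a small priority calculation. The combination rule here — take the higher of the Risk and Relevance magnitudes — is an assumption for illustration (the slides leave the exact rule to the team), and the item names are hypothetical:

    ```python
    # Magnitude indicators from the slides: H (high), M (medium), L (low).
    RANK = {"L": 0, "M": 1, "H": 2}

    def test_priority(risk: str, relevance: str) -> str:
        """Combine Risk and Relevance indicators into a test priority.

        Assumption: priority is the higher of the two magnitudes.
        """
        return max(risk, relevance, key=RANK.__getitem__)

    # Hypothetical requirements for test: (name, risk, relevance).
    items = [
        ("Authenticate User", "H", "H"),
        ("Generate Monthly Report", "M", "L"),
        ("Edit Profile Picture", "L", "M"),
    ]

    # H items must be tested first, then M, then L (per the slides).
    for name, risk, rel in sorted(items, key=lambda i: -RANK[test_priority(i[1], i[2])]):
        print(test_priority(risk, rel), name)
    ```

  This makes the slide's ordering rule concrete: everything rated H is scheduled before any M item, and M before any L item, with risk and relevance each able to pull an item's priority upward.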
  • Test Strategy Discussion
    • Thank You!
    Questions