Test Planning
Presented by Aliaa Monier

HOW TO PLAN FOR TESTING
Define test items

Why identify test items:
• To identify what is being tested.
• To determine the overall test effort.
• To serve as the basis for test coverage.
Items to be tested should be verifiable: they have an observable, measurable outcome.

Example:
• Not verifiable: "The home page needs to load fast."
• Verifiable: "Home page loading time will take a maximum of 10 seconds once the Home page link is clicked."
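As a sketch of how such a verifiable item could be checked automatically (Python with the third-party requests library; the URL is hypothetical, and fetch time is used as a simplification for full page-render time):

```python
import time
import requests

HOME_URL = "https://example.com/home"  # hypothetical URL for illustration
MAX_LOAD_SECONDS = 10.0                # threshold taken from the test item

def test_home_page_load_time():
    # Measure the time to fetch the home page (a real test would measure
    # full render time in a browser).
    start = time.monotonic()
    response = requests.get(HOME_URL, timeout=MAX_LOAD_SECONDS + 5)
    elapsed = time.monotonic() - start
    assert response.status_code == 200, "home page failed to load"
    assert elapsed <= MAX_LOAD_SECONDS, (
        f"home page took {elapsed:.1f}s, over the {MAX_LOAD_SECONDS}s limit"
    )
```

The point is that a verifiable item translates directly into an assertion; "loads fast" does not.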
Output:
A hierarchy of features to be tested, which can be grouped by:
 Use case
 Business case
 Type of test (functional, performance, etc.)

Note: each use case should derive at least one test item.
Example: online public access catalog
Patrons of the library can search the library catalog online to locate various resources - books, periodicals, audio and visual materials, or other items under the control of the library. Patrons may reserve or renew items, provide feedback, and manage their accounts.
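As an illustration, the catalog example could yield a test-item hierarchy like the following minimal Python sketch (items drawn from this example and from the editor's notes at the end of this deck; the grouping by test type is an assumption):

```python
# One possible way to record the test-item hierarchy as data,
# grouped by type of test, then by use case.
test_items = {
    "functional": {
        "Search":         ["search by book name", "search by editor",
                           "search by publisher", "search by year"],
        "Manage account": ["create account", "delete account"],
    },
    "performance": {
        "Home page": ["loads within 10 seconds of clicking the link"],
    },
}

for test_type, use_cases in test_items.items():
    for use_case, items in use_cases.items():
        for item in items:
            print(f"{test_type} / {use_case} / {item}")
```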
Why some features may be excluded from testing:
• Not included in this release of the software.
• Low risk: the feature has been used before and is considered stable.
• Out-of-the-box (OOB) component.
• Will be tested by the client.
Risk assessment and establishing test priority

What is risk?
Risk is a future, uncertain event with a probability of occurrence and a potential for loss.
Why assess risk:
• To ensure the most critical, significant, or riskiest requirements for test are addressed as early as possible
• To ensure the test effort is focused on the most appropriate requirements for test
• To ensure that any dependencies (sequence, data, etc.) are accounted for in the testing
Risk types

Schedule risk:
• Wrong time estimation
• Resources are not tracked properly
• Failure to identify complex functionalities

Budget risk:
• Wrong budget estimation
• Cost overruns
• Project scope expansion

Operational risks:
• Failure to resolve responsibilities
• No proper subject training
• No communication in the team

Technical risks:
• Continuously changing requirements
• Difficult integration of project modules

Programmatic risks:
• Running out of funds
• Market development
• Changing customer product strategy and priority
Three steps to assessing risk and establishing the test priorities:
1. Assess risk
2. Determine the operational profile
3. Establish test priority
Assess risk

a - Identify and describe the risk magnitude indicators that will be used, such as:
• H - high risk
• M - medium risk
• L - low risk

b - For each item in your test items list, define the expected risks, select a risk magnitude indicator, and justify (in a brief statement) the value you selected.
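A minimal sketch of what one row of such a risk list could look like in Python (the entry paraphrases the install example later in this section):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    test_item: str        # item from the test items list
    magnitude: str        # "H", "M", or "L"
    justification: str    # brief statement for the chosen value

# One entry, based on the disk-space install example below:
register = [
    RiskEntry("Software installation", "H",
              "Install is the user's first impression; a partial install "
              "leaves the system in an unstable state."),
]
```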
There are three perspectives that can be used for assessing risk:
• Effect - the consequences if a specified test item fails.
• Cause - an undesirable outcome traced back to the failure of a test item.
• Likelihood - the probability that a test item will fail.
Effect
To assess risk by Effect, identify a condition, event, or action and try to determine its impact. Ask the question:
"What would happen if ___________?"
For example:
• "What would happen if, while installing the new software, the system runs out of disk space?"
Example:
Description: Insufficient disk space during install
Risk magnitude factor: H
Justification: Installing the software provides the user with the first impression of the product. Undesirable outcomes such as those listed below would degrade the user's system and the installed software, and communicate a negative impression to the user:
• the software is partially installed (some files, some registry entries), which leaves the installed software in an unstable condition, or
• the installation halts, leaving the system in an unstable state.
Cause
Assessing risk by Cause is the opposite of assessing by Effect. Begin by stating an undesirable event or condition, and identify the set of events that could have permitted the condition to exist. Ask a question such as:
"How could ___________ happen?"
For example:
• "How could an order be replicated?"
Example:
Description: Replicated orders
Risk magnitude factor: H
Justification: Replicated orders increase the company's overhead and diminish profits via the costs associated with shipping, handling, and restocking. Possible causes include:
• the transaction that writes the order to the database is replicated due to user intervention (the user enters the order twice - no confirmation of entry), or
• the transaction that writes the order to the database is replicated due to non-user intervention (recovery from a lost Internet connection, restore of the database).
Likelihood
Assessing risk by Likelihood means determining the probability that a test item will fail. The probability is usually based on external factors such as:
• Failure rate(s)
• Rate of change
• Complexity
• Origination / originator

Example:
"Historically we've found many defects in the components used to implement use cases 1, 10, and 12, and our customers requested many changes in use cases 14 and 19."
Example:
Description: High failure discovery rates / defect densities in use cases 1, 10, and 12.
Risk magnitude factor: H
Justification: Due to the previous high failure discovery rates and defect density, use cases 1, 10, and 12 are considered high risk.

Description: Change requests in use cases 14 and 19.
Risk magnitude factor: H
Justification: A high number of changes to these use cases increases the probability of injecting defects into the code.
Determine operational profile

a - Identify and describe the operational profile magnitude indicators that will be used, such as:
• H - quite frequently used
• M - frequently used
• L - infrequently used

b - For each item in your test items list, select an operational profile magnitude indicator and state your justification for the indicator value.
Examples:
• Ordering items from the on-line catalog
• Customers inquiring about their order on-line after the order is placed
• Item selection dialog

Description: Ordering items from the catalog
Operational profile factor: H
Justification: This is the most common use case executed by users.
Establish test priority

a - Identify and describe the test priority magnitude indicators that will be used, such as:
• H - must be tested
• M - should be tested; tested only after all H items are tested
• L - might be tested, but not until all H and M items have been tested

b - For each item in your test items list, select a test priority indicator and state your justification.
Consider the following:
• the risk magnitude indicator value you identified earlier
• the operational profile magnitude value you identified earlier
• contractual obligations (will the target-of-test be acceptable if a use case or component is not delivered?)

Strategies for establishing a test priority include (see the sketch after the example below):
• Use the highest assessed factor.
• Identify one assessed factor as the most significant and use that factor's value as the priority.
• Use a combination of assessed factors to identify the priority.
• Use a weighting schema where individual factors are weighted and the priority is calculated from the weighted values.
Examples:
• Ordering items from the on-line catalog
• Customers inquiring about their order on-line after the order is placed
• Item Selection Dialog

Priority when the highest assessed value is used to determine priority:

Item | Risk | Operational Profile | Contract | Priority
Ordering items from catalog | H | H | H | H
Customer Inquiries | L | L | L | L
Item Selection Dialog | L | H | L | H
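A minimal Python sketch of two of the strategies listed above: the "highest assessed factor" rule, which reproduces the Priority column of this table, and an illustrative weighting schema (the weights and thresholds are assumptions, not from the source):

```python
RANK = {"L": 0, "M": 1, "H": 2}

def priority_highest(factors):
    # "Use the highest assessed factor" strategy.
    return max(factors, key=RANK.__getitem__)

rows = {
    "Ordering items from catalog": ("H", "H", "H"),
    "Customer Inquiries":          ("L", "L", "L"),
    "Item Selection Dialog":       ("L", "H", "L"),
}
for item, factors in rows.items():
    print(f"{item}: {priority_highest(factors)}")   # H, L, H

def priority_weighted(factors, weights=(0.5, 0.3, 0.2)):
    # Weighting-schema strategy; weights are illustrative assumptions.
    score = sum(w * RANK[f] for w, f in zip(weights, factors))
    return "H" if score >= 1.5 else "M" if score >= 0.75 else "L"
```

Note that the strategies can disagree: the weighted variant rates the Item Selection Dialog lower than the highest-factor rule does, which is why the chosen strategy should be stated in the plan.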
Examples of common risks:
• Delivery of a third-party product
• New version of interfacing software
• Ability to use and understand a new package/tool, etc.
• Extremely complex functions
• Modifications to components with a past history of failure
• Poorly documented modules or change requests
• Misunderstanding of the original requirements
Test Strategy
• Define the types of testing that will be used and their objectives
• Define which testing techniques will be used
• Define entrance criteria
• Define suspension criteria and resumption requirements
• Define exit criteria
• Define testing stages
Define the types of testing that will be used and their objectives:
• Functional testing
• Integration testing
• System testing
• Regression testing
• Performance testing
• Security testing
• Acceptance testing
Define testing stages

Clearly state the stage in which each test will be executed. The application stages are:
1. Individual components are implemented
2. Individual components are integrated
3. All system components are integrated
4. The system is delivered to the client

[The original slide shows a matrix marking with "Y" which testing types (functional, integration, system, regression, performance, security, acceptance) run at which of these stages, as the way to decide which testing types will be used at each stage.]
Define which testing techniques will be used

For each testing type and each test item, specify:
• how the test will be implemented
• who will execute it
• which method(s) will be used to evaluate the results
Example - testing type: functional testing; test item: registration form

• How the test will be implemented:
 There will be a set of test cases, each representing the actions taken by the actor when the test item is executed.
 A minimum of two test cases will be created for each test item: one to reflect the positive condition and one to reflect the negative (unacceptable) condition.
• Who will execute it:
 QE / automated tool (for security and performance testing: tool / SME; for acceptance testing: QE / client)
• Which method(s) will be used to evaluate the results:
 Test case execution: the function was executed successfully and as desired.
 Window existence or object data verification (UI / data): windows/data were displayed during test execution.
 Database reflection testing: the database will be examined before the test and again after the test to verify that the changes executed during the test are accurately reflected in the data.
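A minimal sketch of database reflection testing in Python, using the built-in sqlite3 module with an in-memory database and a hypothetical orders table: snapshot the data before the tested action, apply the action, and verify that only the expected change appears:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

def snapshot():
    # Capture the full table contents as a comparable set of rows.
    return set(db.execute("SELECT id, item FROM orders").fetchall())

before = snapshot()
db.execute("INSERT INTO orders (item) VALUES ('book')")   # action under test
after = snapshot()

added = after - before
assert added == {(1, "book")}, f"unexpected database change: {added}"
print("database reflects exactly the expected change")
```

The same before/after comparison would also catch the replicated-order risk discussed earlier, since a duplicate insert would show up as an extra row.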
Define entrance criteria
• Components are developed and unit tested
• The test environment is ready
• Testing tools are available
• Testing resources are available
• All bugs are fixed (for regression testing)
Define suspension criteria and resumption requirements

Example:
If the number or type of defects reaches a point where follow-on testing has no value, it makes no sense to continue the test; you are just wasting resources. Testing after a truly fatal error will generate conditions that may be identified as defects but are in fact ghost errors caused by the earlier defects that were ignored.
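One way such a criterion could be made concrete, as a Python sketch (the "blocker" severity label and the threshold of three open blockers are assumptions for illustration):

```python
MAX_OPEN_BLOCKERS = 3   # assumed threshold; set it in the test plan

def should_suspend(open_defects):
    """open_defects: list of (severity, status) tuples from the tracker."""
    blockers = [d for d in open_defects
                if d[0] == "blocker" and d[1] == "open"]
    return len(blockers) >= MAX_OPEN_BLOCKERS

print(should_suspend([("blocker", "open")] * 3))   # True: suspend testing
```

Resumption would then require the blocking defects to be fixed and the affected tests to be re-executed.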
Define exit criteria

Why define exit criteria?
 To identify acceptable product quality
 To identify when the test effort has been successfully completed

A clear statement of exit criteria should include the following items:
 What is being tested (the specific test item)
 How the measurement is being made
 What criteria are being used to evaluate the measurement

Example:
 All planned test cases have been executed
 All identified defects have been addressed to an agreed-upon resolution
 All planned test cases have been re-executed, all known defects have been addressed as agreed upon, and no new defects have been discovered
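A sketch of how the example exit criteria could be checked mechanically (Python; the counter names are assumptions for illustration):

```python
def exit_criteria_met(planned, executed, open_defects, new_defects_in_rerun):
    # One flag per example criterion above.
    all_executed = executed >= planned
    defects_resolved = open_defects == 0
    rerun_clean = new_defects_in_rerun == 0
    return all_executed and defects_resolved and rerun_clean

print(exit_criteria_met(planned=120, executed=120,
                        open_defects=0, new_defects_in_rerun=0))   # True
```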
IDENTIFY THE RESOURCES NECESSARY FOR TESTING
• Human resources (skills, knowledge, availability, training)
• Test environment (hardware and software requirements)
• Tools
• Data
Identify human resources who can:
• Manage and plan the testing
• Design the tests and data
• Implement the tests and data (test case creation and test data preparation)
• Execute testing and evaluate the results
• Manage and maintain the test systems (support team)
Define responsibilities: who is in charge?

[The original slide shows a responsibility matrix marking with "Yes" which roles (team leader, development project manager, development team, testing team, client) are in charge of each activity: user acceptance test, system/integration testing, unit testing, system design reviews, test case creation, test case review, screen prototype reviews, and regression testing.]
Identify non-human resource needs (test environment)

Two different physical environments are recommended:
• Implementation environment
• Execution environment
Software needed

Minimum software needed:
• The application under test
• The client OS
• The server OS
• Internet browser

Additional software needed:
• Bug tracking system
• Test case repository tool
• Database management tool
• Microsoft Office, Outlook
Tools
• What software tools will be used,
• by whom,
• and what information or benefit will be gained by the use of each tool.

Data
Data can be used as:
 Input (creating or supporting a test condition)
 Output (to be compared to an expected result)
Test deliverables:
• Test plan document
• Test cases
• Test data
• Traceability matrix
• Build status report
• Release notes
• Test design specifications
• Output of testing tools (performance, security, automation reports)
CREATING A SCHEDULE

Why create a test schedule?
To identify and communicate test effort, schedule, and milestones.

Creating a schedule includes:
• Estimating the test effort
• Generating the test schedule
Why estimate?
To avoid exceeding timescales and overshooting budgets.

Software testing estimation methods:
 Percentage of development effort
 Experience based
 Work breakdown structure (WBS)
 Delphi technique
 Three-point estimation
Estimates should include estimates for:
 Reading, analyzing, and reviewing requirements
 Test design (test case creation, test data preparation, ...)
 Test implementation (recording test cases)
 Test execution
 Re-testing (issues)
 Regression testing
 Integration testing
 User acceptance testing
 Performance / security testing
 Compatibility testing (across different browsers, OSs, ...)
 Language testing
Generate test schedule

A test project schedule can be built from the work estimates and resource assignments. It is always best to tie all test dates directly to their related development activity dates.
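A minimal sketch of tying test dates to development activity dates (Python; all milestone names, dates, and durations are illustrative assumptions):

```python
from datetime import date, timedelta

# Hypothetical development milestone dates and test-effort estimates.
dev_dates = {
    "components implemented": date(2025, 3, 1),
    "system integrated":      date(2025, 4, 1),
}
estimates_days = {"functional testing": 10, "system testing": 15}

# Each test's start date is anchored to its related development date,
# so a slip in development automatically moves the test dates.
schedule = {
    "functional testing": dev_dates["components implemented"],
    "system testing":     dev_dates["system integrated"],
}
for test, start in schedule.items():
    end = start + timedelta(days=estimates_days[test])
    print(f"{test}: {start} to {end}")
```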
GENERATE TEST PLAN

The test plan organizes and communicates the test-planning information to others. Prior to generating the test plan, review all existing project information to ensure the plan contains the most current and accurate information.

The test plan should be distributed to at least the following:
• all test roles
• a developer representative
• the project leader
• a client representative
APPROVAL

Identify who can approve the process as complete and allow the project to proceed to the next level.
Types of test plan

Project information and software information feed into creating the master test plan and the detailed test plans.

The master test plan is broken down into detailed plans:
• Unit test plan
• Integration test plan
• System test plan
• Acceptance test plan
Editor's Notes

  • #12 Manage account: create account, delete account. Search: search by book name, search by editor, search by publisher, search by year. Reserve item:
  • #20 Notes: Select one perspective, identify a risk magnitude indicator, and justify your selection. It is not necessary to identify an indicator for each risk perspective. It is recommended that, if a low indicator was identified, you evaluate the item from a different risk perspective to ensure the item really is low risk.
  • #25 Failure discovery rate and/or density: the probability of a failure increases as the failure discovery rate or density increases. Defects tend to congregate; therefore, as the rate of discovery or the number of defects (density) increases in a use case or component, the probability of finding another defect also increases. Discovery rates and densities from previous releases should also be considered when assessing risk using this factor, as previous high discovery rates or densities indicate a high probability of additional failures. Rate of change: the probability of a failure increases as the rate of change to the use case or component increases; every time a change is made to the code, there is a risk of "injecting" another defect. Complexity: the probability of a failure increases as the measured complexity of the use case or component increases. Origination/originator: knowledge and experience of where the code originated, and by whom, can increase or decrease the probability of a failure. The use of third-party components typically decreases the probability of failure, but only if the third-party component has been certified (meets your requirements, either through formal test or experience). The probability of failure typically decreases with the increased knowledge and skills of the implementer; however, factors such as the use of new tools or technologies, or acting in multiple roles, may increase the probability of a failure even for the best team members.
  • #27 The operational profile indicator you select should be based upon the frequency with which a use case or component is executed, including: the number of times ONE actor (or use case) executes the use case (or component) in a given period of time, or the number of ACTORS (or use cases) that execute the use case (or component). Typically, the greater the number of times a use case or component is used, the higher the operational profile indicator.
  • #43 Example 2: All planned test cases have been executed. All identified defects have been addressed to an agreed-upon resolution. All severity 1 or 2 defects have been resolved (status = verified or postponed). All high-priority test cases have been re-executed, all known defects have been addressed as agreed upon, and no new defects have been discovered. Example 3: All high-priority test cases have been executed. All identified defects have been addressed to an agreed-upon resolution. All severity 1 or 2 defects have been resolved (status = fixed or postponed). All high-priority test cases have been re-executed, all known defects have been addressed as agreed upon, and no new defects have been discovered.
  • #44 Human resources (such as availability, or the need for non-test resources - SMEs - to support or participate in testing); constraints (such as equipment limitations or availability, or the need for or lack of special equipment - e.g. barcode testing, labs); special requirements, such as test scheduling or access to systems (a server that can't be accessed, or that is down at specific times). Examples: Testing databases will require the support of a database designer/administrator to create, update, and refresh test data (like the BI/Oracle team in Syngenta), so we need to raise this issue in the test plan in case the database is locked. System performance testing will use the servers on the existing network (which carries non-test traffic), so testing will need to be scheduled after hours to ensure no non-test traffic is on the network.
  • #51 Development deliverables: infrastructure design document (network architecture), database design document, front-end/middle-tier design document.
  • #52 Considerations when estimating time for a project: the productivity and skill/knowledge level of the human resources working on the project.
  • #53 Software testing estimation methods. Percentage of development effort method: the test effort is taken as a percentage of the estimated development effort. Experience based: metrics collected from previous tests; you have already tested a similar application in a previous project; inputs are taken from subject matter experts who know the application (as well as testing) very well. Work breakdown structure (WBS): break the test project into small pieces - modules are divided into sub-modules, sub-modules into functionalities, and functionalities into sub-functionalities (tasks) - then estimate the duration of each task. Delphi technique: same as WBS, but each task is allocated to a team member, each member gives an estimate to complete the task, and the average estimate is taken. Three-point estimation: same as WBS, but three estimates are made for each task - an optimistic estimate a (best-case scenario in which nothing goes wrong and all conditions are optimal), a most likely estimate m (most likely duration; some problems may occur but most things will go right), and a pessimistic estimate b (worst-case scenario in which everything goes wrong). The estimate is then E = (a + 4m + b) / 6.
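A quick worked version of the three-point (PERT) formula above, as a Python one-liner (the sample values are illustrative):

```python
def three_point_estimate(a, m, b):
    """PERT estimate: a = optimistic, m = most likely, b = pessimistic."""
    return (a + 4 * m + b) / 6

print(three_point_estimate(a=4, m=6, b=11))   # 6.5 (days; values illustrative)
```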