SOFTWARE TESTING
Team Members
o Mohamed Ibrahem Mahsoup
o Mohamed Hosni
o Nadia Saleh
o Hanan Emad Eldien
o Sara Adel
AGENDA
• Introduction: What is software testing
• Objectives
• Error, Bug, Fault & Failure
• The Software Testing Life Cycle: Project Initiation, Test Plan, Design Test Cases, Execute Test Cases (manual/automated), Report Defects
• Testing Types: Functional, Non-Functional
• Testing Levels: Unit Testing, Integration Testing, System Testing
INTRODUCTION:
Software Testing:
It is the process used to identify the correctness, completeness, and quality of developed computer software.
It is the process of executing a program/application under positive and negative conditions, by manual or automated means. It checks for:
 Specification
 Functionality
 Performance
OBJECTIVES
Uncover as many errors (or bugs) as possible in a given product.
Demonstrate that a given software product matches its requirement specifications.
Validate the quality of the software testing using minimum cost and effort.
Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports.
ERROR, BUG, FAULT & FAILURE
Error: A human action that produces an incorrect result, which in turn produces a fault.
Bug: The presence of an error at the time of execution of the software.
Fault: The state of the software caused by an error.
Failure: The deviation of the software from its expected result. It is an event.
SDLC (SOFTWARE DEVELOPMENT LIFE CYCLE)
 A standard model used worldwide to develop software.
 A framework that describes the activities performed at each stage of a software development project.
 Necessary to ensure the quality of the software.
 The logical steps taken to develop a software product.
Classical Waterfall Model
[Diagram: sequential phases including Coding & Unit Test, Integration, and System Test]
Testing is the process of evaluating a software product with the intent of finding
errors in it and improving its quality. Done manually or by using tools with the
predefined objectives.
 Detect defects.
 Determine that the specified requirements are
met.
 Test the performance
TESTING OBJECTIVES
THE SOFTWARE TESTING LIFE CYCLE
Project Initiation → System Study → Test Plan → Design Test Cases → Execute Test Cases (manual/automated) → Report Defects → Regression Test / Analysis → Summary Reports
TEST PLANNING
It is a systematic approach to testing a system (i.e., the software). The plan typically contains a detailed understanding of what the eventual testing workflow will be.
Testing objectives are defined by:
Identifying the resources and schedules for testing.
Identifying the features to be tested.
Defining the exit criteria.
TEST ANALYSIS AND DESIGN
Converting test objectives into different test cases.
This phase also includes identifying necessary data required for
testing, designing the test environment setup, and identifying
required infrastructure and tools.
TEST CASE
A test case is a specific procedure for testing a particular requirement.
It will include:
Identification of the specific requirement tested
Test case success/failure criteria
Specific steps to execute the test
Test data
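The elements listed above can be captured directly in code. Below is a minimal sketch using Python's unittest, with a hypothetical login requirement (REQ-001) and a stand-in `login()` function as the system under test; both are illustrative, not from the original deck.

```python
import unittest

def login(username, password):
    # Hypothetical system under test: accepts one known credential pair.
    return username == "admin" and password == "s3cret"

class TestLoginRequirement(unittest.TestCase):
    """Test case for hypothetical requirement REQ-001: valid users can log in."""

    def test_valid_credentials_accepted(self):
        # Step: submit known-good test data (positive condition).
        self.assertTrue(login("admin", "s3cret"))

    def test_invalid_credentials_rejected(self):
        # Step: submit known-bad test data (negative condition).
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main(argv=["test"], exit=False)
```

The class name identifies the requirement tested, each method holds the steps and test data, and the assertions encode the success/failure criteria.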
TEST IMPLEMENTATION AND EXECUTION
Actual testing is performed in this phase:
 Performing testing using test cases.
 Creating a log of the outcome of test execution.
 Comparing the result of test execution with the expected results.
 Repeating test activities to ensure that defects identified previously are fixed and no new defects are introduced.
EVALUATING EXIT CRITERIA AND REPORTING
Test execution is evaluated against the exit criteria defined during planning. A test summary report is created for stakeholders to review the progress of testing.
TEST CLOSURE ACTIVITIES
Data from completed test activities is archived in this phase for future reference.
VERIFICATION AND VALIDATION
Verification
• Verification is the process of determining whether the software is developed according to the correct requirements and specifications.
Validation
• Validation is the process of determining whether the developed software meets those requirements and specifications.
MANUAL TESTING
• The process of testing software without the use of automated tools.
• For effective manual testing, it is very important to design tests with detailed steps that cover all software requirements.
AUTOMATED TESTING
• Automated testing is the process of performing activities in the testing process with the use of automated tools.
• Helps in reducing testing time.
• Provides accuracy in test execution.
• Various types of testing, such as functionality testing and performance testing, can be done at different test levels using automated tools.
• Automated testing can only be done with the help of previously written scripts.
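A previously written script of the kind described above can be as simple as a loop that runs each check, records pass/fail, and measures elapsed time. This is a minimal sketch; `run_suite` and the two sample checks are hypothetical names, not part of any real tool.

```python
import time

def run_suite(tests):
    """Run each test function, recording pass/fail and elapsed time."""
    results = []
    for test in tests:
        start = time.perf_counter()
        try:
            test()
            outcome = "PASS"
        except AssertionError:
            outcome = "FAIL"
        results.append((test.__name__, outcome, time.perf_counter() - start))
    return results

# Hypothetical checks standing in for real scripted test cases.
def test_addition():
    assert 2 + 2 == 4

def test_string_upper():
    assert "abc".upper() == "ABC"

for name, outcome, elapsed in run_suite([test_addition, test_string_upper]):
    print(f"{name}: {outcome} ({elapsed:.6f}s)")
```

Because the script does the execution and logging itself, it can be rerun on every new build, which is what yields the time savings and accuracy mentioned above.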
TEST TYPES
 Functional Testing
Testing of the functionality of software as documented in requirement specifications.
 Nonfunctional Testing
Testing of behavioral characteristics of software such as reliability , usability , efficiency,
maintainability, and portability .
Confirmation Testing
After the defect is reported and fixed in the software, a new build is released to the testing
team. Confirmation testing , also known as retesting , is done on the new build to verify that the
reported defect is fixed in the new build
TEST TYPES
 Regression Testing
Regression testing ensures that changes made to the software do not break existing functionality.
 Static Testing
Performed on the component or on the system without running the code or the software. It involves checking the syntax of the code, either by reading it manually or by using tools to find errors; software developers also review code by inspecting code files or by discussing the code in a group.
 Dynamic Testing
Performed on the component or on the system by compiling and running the code and the software. It involves executing code to validate what the software does and how it works, by providing input to the system and checking that the output is correct.
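The static/dynamic distinction can be illustrated in a few lines of Python: a static check parses the source without ever executing it, while a dynamic check runs the code and validates its output. The `static_syntax_check` helper and the sample snippets are illustrative assumptions.

```python
import ast

source = "def add(a, b):\n    return a + b\n"
broken = "def add(a, b)\n    return a + b\n"  # missing colon

def static_syntax_check(code):
    """Static check: parse the code to find syntax errors without executing it."""
    try:
        ast.parse(code)
        return "ok"
    except SyntaxError as exc:
        return f"syntax error: line {exc.lineno}"

print(static_syntax_check(source))
print(static_syntax_check(broken))

# Dynamic check: actually execute the code, then validate its behavior.
namespace = {}
exec(source, namespace)
assert namespace["add"](2, 3) == 5
```

The static check catches the missing colon without running anything; only the dynamic check confirms that `add` actually produces correct output.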
TEST TYPES
Black box testing
 No knowledge of internal program design or code required.
 Tests are based on requirements and functionality.
White box testing
 Knowledge of the internal program design and code required.
 Tests are based on coverage of code statements, branches, paths, conditions.
BLACK BOX TESTING
WHITE BOX TESTING
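The contrast between the two approaches can be sketched with one function under test. The `classify_triangle` example below is hypothetical: the black-box tests are derived only from the stated requirement, while the white-box tests are chosen to cover every branch in the implementation.

```python
def classify_triangle(a, b, c):
    # Hypothetical function under test: classify a triangle by side lengths.
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: based on requirements and functionality only,
# with no reference to the branches inside classify_triangle().
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box tests: chosen to cover each code branch, including all
# three orderings of the degenerate-triangle condition.
assert classify_triangle(1, 2, 3) == "not a triangle"   # a + b <= c
assert classify_triangle(1, 3, 2) == "not a triangle"   # a + c <= b
assert classify_triangle(3, 1, 1) == "not a triangle"   # b + c <= a
assert classify_triangle(3, 3, 5) == "isosceles"
```

The black-box tests would remain valid if the function were reimplemented; the white-box tests are tied to this particular code and its branch structure.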
TESTING LEVELS
A test level is a group of testing activities that are performed to test
individual components, integration of components, and the complete
system .
UNIT TESTING
Tests each module individually.
Follows a white-box testing approach (tests the logic of the program).
Done by developers.
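Testing a module individually usually means replacing its collaborators with stubs so that only its own logic is exercised. This is a minimal sketch, assuming a hypothetical interest-calculation module whose database lookup is injected and can therefore be stubbed out.

```python
import unittest
from unittest import mock

def calc_simple_interest(get_principal, rate, years):
    """Hypothetical module under test: simple interest = principal * rate * years.
    The principal lookup is injected so the module can be tested in isolation."""
    return get_principal() * rate * years

class TestCalcSimpleInterest(unittest.TestCase):
    def test_simple_interest(self):
        # The database lookup is replaced with a stub, so only this
        # module's logic (white-box, developer-level) is exercised.
        stub = mock.Mock(return_value=1000)
        self.assertEqual(calc_simple_interest(stub, 0.05, 2), 100.0)
        stub.assert_called_once()

if __name__ == "__main__":
    unittest.main(argv=["test"], exit=False)
```

Because the stub fixes the principal at a known value, a failure here can only come from the module's own calculation, which is exactly what the unit level is meant to isolate.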
INTEGRATION TESTING
Once all the modules have been unit tested, integration testing is performed.
It is a systematic testing technique.
Testing is done by the developers or the testing teams.
Tests are produced to identify errors associated with interfacing.
Types:
Big Bang Integration Testing
Top-Down Integration Testing
Bottom-Up Integration Testing
Mixed Integration Testing
OBJECTIVES OF INTEGRATION TESTING
The objectives are to detect faults due to interface errors or invalid assumptions about interfaces.
Interface types:
 Parameter interfaces: data passed from one method or procedure to another.
 Shared memory interfaces: a block of memory is shared between procedures or functions.
 Procedural interfaces: a sub-system encapsulates a set of procedures to be called by other sub-systems.
 Message passing interfaces: sub-systems request services from other sub-systems.
EXAMPLE:
 calc_interest(): has to communicate with check_principle() and other functions to get the principal amount.
 check_principle(): retrieves the principal amount from the database, while calc_interest() calculates the simple interest and presents the data on the graphical user interface (GUI).
 calc_interest() therefore communicates with the check_principle() component, the database, and the GUI.
Every component may have been tested at the component level (unit test), but there may still be defects at the integration level.
DEFECTS ON INTEGRATION LEVEL TESTING
For instance:
o Communication failure between the database and the check_principle() component.
o If updating the principal amount in the database and calculating the simple interest are not properly synchronized, the database may present wrong values for the principal amount.
o calc_interest() takes values in hundreds while check_principle() provides values in thousands.
o None of these defects may lie in an individual component, but they will display a wrong interest value to the user. Such defects can be detected at the integration level by testing the integrated subsystem with test cases that exercise all possible communications within the subsystem.
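An integration-level test for this scenario exercises the real communication path between the two components instead of stubbing it. The sketch below is illustrative: the in-memory dict stands in for the database, and the function bodies are assumptions consistent with the description above.

```python
# Stand-in "database"; principal stored in whole currency units.
database = {"account-1": 1000}

def check_principle(account_id):
    # Component 1: retrieve the principal amount from the "database".
    return database[account_id]

def calc_interest(principal, rate, years):
    # Component 2: compute simple interest from the retrieved principal.
    return principal * rate * years

def test_interest_through_both_components():
    # Integration test: the value flows from check_principle() into
    # calc_interest(), so unit mismatches (hundreds vs. thousands) or
    # stale database values would surface here, not at the unit level.
    principal = check_principle("account-1")
    assert calc_interest(principal, 0.05, 2) == 100.0

test_interest_through_both_components()
print("integration test passed")
```

Had calc_interest() expected hundreds while check_principle() returned thousands, this test would fail even though each component passes its own unit tests.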
SYSTEM TESTING
 The system as a whole is tested to uncover requirement errors.
 Performed by the testing team in an environment similar to the production environment.
 Verifies that all system elements work properly and that overall system function and performance have been achieved.
Types:
Alpha Testing
Beta Testing
Acceptance Testing
Performance Testing
Alpha Testing
It is carried out by the test team within the developing
organization .
Beta Testing
It is performed by a selected group of friendly
customers.
Acceptance Testing
It is performed by the customer to determine whether
to accept or reject the delivery of the system.
Performance Testing
It is carried out to check whether the system meets the
nonfunctional requirements identified in the SRS
document.
Thanks
