4. INTRODUCTION
• A variety of terms is used to describe the fact
that software is flawed.
• The seemingly arbitrary, even synonymous
usage of terms such as bug, defect, error,
failure, fault, mistake, or problem is
confusing.
• Defects may also be called faults.
5. DEFINITION OF DEFECTS
According to Cleff, there are two definitions of defects:
• The property-based definition addresses the absence of a
guaranteed property of the software. A defect in this sense is
characterized by a failure to fulfill a specified behavior. A
property-based defect is statically bound to the program
code.
• The risk-based definition can be derived from product-liability
judicature. It addresses risks that arise from
defective software: defects could potentially cause harm,
for example through unauthorized or inappropriate usage of the
software.
6. TERMS OF DEFECT
• In some cases, the word defect is
overloaded to refer to different
types of anomalies.
• Different engineering cultures and
standards may use somewhat
different meanings for these terms.
7. TERMS OF DEFECT (CONT…)
• Computational Error: “the difference between a computed, observed, or
measured value or condition and the true, specified, or theoretically correct value
or condition.”
• Error: “A human action that produces an incorrect result.” A slip or mistake that a
person makes. Also called human error.
• Defect: An “imperfection or deficiency in a work product where that work
product does not meet its requirements or specifications and needs to be either
repaired or replaced.” A defect is caused by a person committing an error.
• Fault: A defect in source code. An “incorrect step, process, or data definition in
a computer program.” The encoding of a human error in source code. Fault is the
formal name of a bug.
• Failure: An “event in which a system or system component does not perform a
required function within specified limits.” A failure is produced when a fault is
encountered by the processor under specified conditions.
8. ERROR, DEFECT AND FAILURE
• An error is a mistake made in the life cycle of software
that leads to an incorrect result. Errors are caused by
mistakes humans make or by their misunderstanding of a program’s
specification, the programming language used, or other
vital parts of the development process.
• If the consequence is identified when executing a
program, a failure has been found; this failure hints at
the existence of a defect.
• Failures can only be identified when executing a program
and are often bound to the input given to it.
9. ILLUSTRATION
• A simple program expects two parameters a and b and prints the sum
of a and two times b. Consider the main statement of the program to
be: print a + b * 2;
• If the programmer typed too fast or did not work carefully, they could
type a wrong number. In other words: an error occurred! The actual
statement would then, e.g., be: print a + b * 3;
This is the defect! If the program is tested, a failure might be observed
because the result will not be as expected.
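The slide’s illustration can be sketched in Python; the function name `compute` is hypothetical, standing in for the one-line program above:

```python
def compute(a, b):
    """Intended behavior: return a + b * 2."""
    return a + b * 3  # defect: the human error ('3' instead of '2') is encoded here

# The fault stays hidden until a test input exposes it:
print(compute(1, 0))  # b == 0 masks the defect; the output (1) looks correct
print(compute(1, 2))  # expected 5, actual 7 -> a failure is observed
```

Note how the failure is bound to the input: for `b == 0` the defective statement still produces the correct result, which is why testing with varied inputs matters.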
12. TESTING
According to the IEEE, testing can be defined
as follows:
1. “The process of operating a system or
component under specified conditions,
observing or recording the results, and making
an evaluation of some aspect of the system or
component.”
2. “The process of analyzing a software item to
detect the differences between existing and
required conditions (that is, bugs) and to
evaluate the features of the software items.”
13. TESTING GOALS
1. Testing is the process of executing a
program with the intention of finding
errors
2. A good test case is a test case that has a
high probability of finding errors that have
never been found before
3. A successful test is a test that uncovers an
error that has never been found
previously.
14. TESTING AIMS
1. Decreasing the cost of removing defects,
2. Decreasing the costs defects cause (due to
system failures),
3. Assessing software quality,
4. Meeting norms and laws,
5. Avoiding lawsuits and customer-related
problems,
6. Increasing trust in the software
developed or in a company that develops
software products, and
7. Supporting debugging as well as possible.
15. THINGS RELATED TO TESTING
• Verification : Checking software entities for
compliance and consistency by evaluating results
against predetermined requirements
• Validation : Checking the correctness of the
system whether the process written in the
specification is what the user actually wants or
needs
• Error Detection : Determining whether things
are happening when they should not, or are
not happening when they should
16. BASIC PRINCIPLES OF TESTING
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to software testing: 80% of
all errors found during testing are likely to be traceable
to 20% of all program modules.
4. Testing should begin "in the small" and progress toward
testing "in the large."
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an
independent third party.
18. DEFINITION
1. Software testability is simply how easily a
computer program can be tested.
2. Because testing is so difficult, it pays
to know what can be done to make it easier.
3. Sometimes programmers are willing to do
things that will help the testing process,
and a checklist of possible design points,
features, and so on can be useful in negotiating with them.
20. OPERABILITY
• “The better it works, the more efficiently it
can be tested”
• The system has few bugs (bugs add analysis and
reporting costs to the testing process).
• No bugs block the execution of tests.
• The product evolves in functional stages (allows
simultaneous development and testing).
21. OBSERVABILITY
• “What you see is what you test”
• Distinct output is generated for each input.
• System states and variables can be viewed or
queried during execution.
• Past system states and variables can be viewed or
queried (e.g., transaction logs).
• All factors affecting the output are visible.
• Internal errors are automatically detected
through self-testing mechanisms.
• Internal errors are automatically reported.
• Source code is accessible.
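Several of these points can be combined in one small sketch; the `Counter` class, its invariant, and its history log are hypothetical illustrations, not from the source:

```python
import logging

logging.basicConfig(level=logging.ERROR)

class Counter:
    """A counter whose value must never go negative (internal invariant)."""
    def __init__(self):
        self.value = 0
        self.history = []   # past states can be queried (observability)

    def _selfcheck(self):
        # Internal errors are detected automatically via self-testing...
        if self.value < 0:
            # ...and reported automatically
            logging.error("invariant violated: value=%d", self.value)
            raise AssertionError("counter went negative")

    def add(self, n):
        self.value += n
        self.history.append(self.value)
        self._selfcheck()

c = Counter()
c.add(3)
c.add(-1)
print(c.value, c.history)   # current and past states are visible to the tester
```

A call such as `c.add(-5)` would trip the self-check immediately, instead of letting the bad state propagate silently.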
22. CONTROLLABILITY
• “The better we can control the software, the
more the testing can be automated and optimized”
• All possible outputs can be generated through some
combination of inputs.
• All code is executable through some combination
of inputs.
• Software and hardware states and variables can be
controlled directly by the test engineer.
• Input and output formats are consistent and structured.
• Tests can be conveniently specified, automated, and reproduced.
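Controllability is what makes table-driven test automation possible: when inputs fully determine outputs, every branch can be driven from a reproducible table. A minimal sketch, with a hypothetical `classify` function as the software under test:

```python
def classify(n):
    """Software under test: label an integer's sign."""
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"

# Every branch is reachable through some input, so the whole
# behavior can be specified, automated, and reproduced as a table:
cases = [(5, "positive"), (-3, "negative"), (0, "zero")]
for given, expected in cases:
    actual = classify(given)
    assert actual == expected, f"classify({given}) -> {actual}, expected {expected}"
print("all cases passed")
```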
23. DECOMPOSABILITY
• “By controlling the scope of testing, we can
more quickly isolate problems and perform
smarter retesting”
• The software system is built from independent
modules.
• Software modules can be tested
independently.
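Independent modules can be exercised in isolation by substituting a stub for their collaborators. A minimal sketch, assuming a hypothetical report module that normally depends on a database:

```python
def build_report(fetch):
    """Module under test: formats totals from any data source."""
    rows = fetch()
    total = sum(amount for _, amount in rows)
    return f"{len(rows)} rows, total {total}"

# A stub replaces the real database module, so build_report
# can be tested on its own, with no database installed:
def fake_fetch():
    return [("a", 10), ("b", 32)]

print(build_report(fake_fetch))   # -> "2 rows, total 42"
```

Because the module takes its data source as a parameter, a defect found here is isolated to the reporting logic, not the database code.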
24. SIMPLICITY
• “The less there is to test, the more quickly we can test it”
• Functional simplicity (e.g., the feature set is the
minimum necessary to meet requirements).
• Structural simplicity (e.g., the architecture is
modularized to limit the propagation of faults).
• Code simplicity (e.g., a coding standard is adopted
for ease of inspection and maintenance).
25. STABILITY
• “The fewer the changes, the fewer the disruptions to testing”
• Changes to the software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
26. UNDERSTANDABILITY
• “The more information we have, the
smarter we will test”
• The design is well understood.
• Dependencies among internal, external, and
shared components are well understood.
• Changes to the design are communicated.
• Technical documentation is quickly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.
• Technical documentation is accurate.
27. “GOOD” TEST ATTRIBUTES
1. A good test has a high probability of
finding an error.
2. A good test is not redundant.
3. A good test should be "best of breed."
4. A good test should be neither too simple
nor too complex.
29. ORGANIZATION OF TESTING
• Testing is based on technical methods but it
also has an organizational dimension.
• Successful testing is bound to preconditions
that have to be met in companies that
conduct software tests.
• Test organization comprises seeing
testing as a process and dividing it into
phases, documenting tests, test
management, and certifications for
testers.
30. PHASES OF TESTING
1. Component test (also known as module test or unit test)
• Developers test the modules they coded, conducting glass-box tests and probably also
gray-box tests.
2. Central component test
• Components are tested on a system which is similar to the target system for the finished
software product.
3. Integration test
• Components developed by multiple developers are integrated and their interplay is checked.
4. Performance test (#1)
• It aims at getting a first idea of the system’s performance and at finding possible performance
bottlenecks.
5. System test
• Testing commonly is done by dedicated testers who do not know the source code.
31. PHASES OF TESTING (CONT…)
6. Performance test (#2)
• Whereas the first performance test could reveal algorithmic limitations and general
misconceptions, the second one measures the program's performance under realistic
conditions.
7. Acceptance test
• The system is checked for compliance with the specification.
8. Pilot test
• The almost-finished product is installed on a number of systems. Pilot tests can be
conducted in a closed community or in public.
9. Productive usage
• Software is put into regular operation.
10. Maintenance
• Done if development is a periodic process that leads to new releases of the program
34. TEST PLAN
• A test plan is a document that sets out the scope,
approach, and schedule of intended testing
activities. The test plan may also list the resources
the software tester needs to function effectively.
• In other words, a test plan can be referred to as a
plan or scenario for testing that will be carried out
either by experts or general users.
35. THE TASK
• The process of preparing a test plan is a useful
way to think through the efforts needed to
validate the acceptability of a software
product.
• The task of test planning consists of the
following:
1. Prioritizing quality goals for the release
2. Defining the testing activities to achieve those
goals
3. Evaluating how well the activities support the goals
4. Planning the actions needed to carry out the
activities
36. THE PURPOSE
• The purpose of making a test plan is, in
general:
• To make it easier for developers to carry out
testing, so that the testing performed is clear
and its results are more useful and efficient.
• To produce documentation that describes how
the tester will verify that the system works as
intended. The document should describe what
needs to be tested, how it will be tested, and
who’s responsible for doing so.
37. “GOOD” TEST PLAN
• Concise : Your test plan should be no longer than one
page with bullet points.
• Organized : Make sure all the information is logically
grouped.
• Readable : The document should be easy to read,
avoiding technical language where possible.
• Flexible : Your test plan should be adaptable, not
set in stone. You want to create documentation that
won't hold you back if new information comes along or
changes need to be made.
• Accurate : Make sure all the information you've included
is accurate.
38. WRITE A TEST PLAN
1. Learn about the software
2. Define the scope of testing
3. Create test cases
4. Develop a test strategy
5. Define the test objective
6. Choose testing tools
7. Find bugs early
8. Define your test criteria
9. Resource planning
10. Plan your test environment
11. Plan test team logistics
12. Schedule & estimation
13. Test deliverables
14. Test automation
40. [Diagram] The software under test receives test data (inputs that have
been developed to test the system) and produces output, which is evaluated:
is the result as expected or not? Test cases are inputs to the system and the
predicted outputs from operating the system on these inputs, if the system
performs to its specification.
41. TEST CASE
• A test case is a set of test inputs, execution conditions, and expected
results developed for a particular objective.
• There should be at least one test case for each scenario.
• For each invalid test case, there should be only one invalid input.
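The three parts of a test case (inputs, execution conditions, expected result) can be written down directly. A minimal sketch, assuming a hypothetical login validator as the object under test:

```python
def validate_login(username, password):
    """Object under test: accept only non-empty usernames and passwords of 8-64 chars."""
    return bool(username) and 8 <= len(password) <= 64

# One test case: objective, inputs, and expected result in one record.
test_case = {
    "id": "TC-01",
    "objective": "Valid credentials are accepted",
    "inputs": {"username": "alice", "password": "s3cretpass"},
    "expected": True,
}

actual = validate_login(**test_case["inputs"])
assert actual == test_case["expected"], test_case["id"]

# An invalid test case varies exactly one invalid input,
# so a failure points to a single cause:
assert validate_login("alice", "short") is False   # only the password is invalid
print("test cases passed")
```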
42. “HOW TO MAKE A GOOD TEST CASE?”
A. Based on requirements
B. Based on use cases
43. SOFTWARE REQUIREMENTS AS THE BASIS OF TESTING
• Software testing depends on good requirements. So, it is
important to understand some of the key elements of quality
requirements, such as:
• Understandable
• Necessary
• Modifiable
• Nonredundant
• Terse
• Testable
• Traceable
• Within Scope
44. PROCESS FOR CREATING TEST CASES
1. Review the Requirements : the requirements need to be reviewed to
ensure that they reflect the requirements’ quality factors.
2. Write a Test Plan : A software test plan is a document that describes
the objectives, scope, approach, and focus of a software testing effort.
3. Identify the Test Suite : A test suite, also known as a validation suite, is
a collection of test cases that are intended to be used as input to a
software program to show that it has some specified set of behaviors.
4. Name the Test Cases : Having an organized system test suite makes it
easier to list test cases because the task is broken down into many small,
specific subtasks.
45. PROCESS FOR CREATING TEST CASES (CONT…)
5. Write Test Case Descriptions and Objectives : The description
should provide enough information so that you could come back to it
after several weeks and recall the same ad hoc testing steps that you
have in mind now.
6. Create the Test Cases : Write the test case steps and specify test data.
This is where the testing techniques can help you define the test data
and conditions.
7. Review the Test Cases : If there is a requirement that is not tested by
any system test case, then you are not assured that the requirement has
been satisfied.
46. TRANSFORMING USE CASES TO TEST CASES
The use case is a scenario that describes the use of a system by an
actor to accomplish work. The steps that the tester can follow to create
effective test cases from use cases are :
1. Draw a Use Case Diagram
2. Write the Detailed Use Case Text
3. Identify Use Case Scenarios
4. Generate the Test Cases
5. Generate Test Data
47. REFERENCES
1. Lewis, W. E. (2009). Software Testing and Continuous Quality
Improvement, 3rd ed. Auerbach Publications.
2. Majchrzak, T. A. (2012). Improving Software Testing: Technical and
Organizational Developments. Springer Science & Business Media.
3. Myers, G. J., Sandler, C., & Badgett, T. (2012). The Art of Software
Testing. John Wiley & Sons.
4. Pressman, R. S., & Maxim, B. R. (2019). Software Engineering: A
Practitioner’s Approach, 9th ed. McGraw-Hill Education.