The presentation discusses software quality assurance and testing. It covers topics such as the importance of software quality, types of software quality (functional and non-functional), and software testing principles and processes. The testing process involves test planning, analysis and design, implementation and execution, evaluating results, and closure activities. The presentation emphasizes that testing is a critical part of the software development process to improve quality and find defects.
2. Software-Quality-Testing
• The economic importance of software
- The functioning of machines and equipment depends largely on software
- We cannot imagine large systems in telecommunications, finance (banks), or traffic control (airlines) without software
• Software Quality
- More and more, the quality of software has become the determining factor for the success of technical or commercial systems and products.
• Testing for quality improvement
- Testing ensures the improvement of the quality of software products as well as the quality of the software development process itself
3. What is Software?
• Computer software, or just software, is a collection of computer programs and related data that provides the instructions telling a computer what to do and how to do it (to perform a specific job).
• Definition of Software (as per IEEE 610):
Computer programs, procedures and possibly associated documentation and data pertaining to the operation of a computer system.
• Types of Software:
- System Software
- Application Software
* Example: ATM machine
4. What is Software Quality?
• Software Quality (as per ISO/IEC 9126):
The totality of functionality and features of a software product that contribute to its ability to satisfy stated or implied needs.
• Software Quality (as per IEEE Std 610):
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
IEEE: Institute of Electrical and Electronics Engineers
IEC: International Electrotechnical Commission
ISO: International Organization for Standardization
5. What is Testing?
• Testing measures the:
• Quality,
• Performance,
• Strengths,
• Capability and
• Reliability
of someone or something before putting it into widespread use or practice.
6. Software Testing?
• Software testing measures the:
• Quality,
• Performance,
• Strengths,
• Capability and
• Reliability
of a software product before putting it into widespread use.
7. Software Testing
• Software Testing is the process of executing a program or system with the intent of finding errors.
• Or: it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
8. Failure Example 01
• Flight Ariane 5
(one of the most expensive computer bugs in history)
On June 4, 1996, the Ariane 5 rocket tore itself apart 37 seconds after launch because of a malfunction in the control software, making this fault one of the most expensive computer bugs in history.
- mission-critical issue
9. Failure Example 02
• Lethal X-Rays
The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited in the 1980s. Several patients died because of massive overdoses of radiation, caused by software bugs.
- safety-critical issue
10. Causes of failures:
- Human error:
coding, DB design, system configuration…
Causes: lack of knowledge, time pressure, complexity…
- Environmental conditions:
change of environment…
Causes: radiation, electromagnetic fields, pollution, sun spots, power failure…
11. Cost of Defects
[Figure: relative cost of error correction rising across the phases Specification, Design, Coding, Test, Acceptance]
- The cost of fixing a defect increases with the time it remains in the system.
- Detecting errors at an early stage allows for error correction at reduced cost.
12. Software Quality:
According to ISO/IEC 9126, software quality consists of:
- Functionality
- Reliability
- Usability
- Efficiency
- Maintainability
- Portability
13. Types of QA:
Constructive QA
activities to prevent defects,
e.g. through appropriate methods of software
engineering
Analytical QA
activities for finding defects,
e.g. through testing
16. Analytical QA
Motto: Defects should be detected as early as possible in the process through testing
- Static: examination without executing the program
- Dynamic: includes executing the program
  • White box
  • Black box
  • Experience-based techniques
17. Figure of Analytical QA
Static:
- Reviews / walkthroughs
- Control flow analysis
- Data flow analysis
- Compiler metrics / analysis
Dynamic (white box):
- Statement coverage
- Branch coverage
- Condition coverage
- Path coverage
Dynamic (black box):
- Equivalence partitioning
- Boundary value analysis
- State transition testing
- Decision tables
- Use case based testing
Dynamic (experience-based techniques)
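The dynamic black-box techniques in the figure can be sketched with a short example. This is a hypothetical illustration, not from the slides: the `accept_age` function and its valid range of 18 to 65 are invented.

```python
# Hypothetical test object: an input check with a valid range of 18..65.
def accept_age(age: int) -> bool:
    """Accepts ages from 18 to 65 inclusive (illustrative requirement)."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition,
# because all values in a partition should behave the same way.
partition_cases = [
    (10, False),  # partition: below the valid range
    (40, True),   # partition: inside the valid range
    (80, False),  # partition: above the valid range
]

# Boundary value analysis: test exactly at and just outside each boundary,
# because defects tend to cluster at the edges of ranges.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

def run_black_box_tests() -> bool:
    """Returns True if the test object passes all designed test cases."""
    return all(accept_age(value) is expected
               for value, expected in partition_cases + boundary_cases)
```

Seven test cases cover what exhaustive testing of all integers could not do economically; this is the kind of focused test design the deck returns to under Principle 2.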
18. Software Quality:
According to ISO/IEC 9126, software quality consists of:
- Functionality (functional quality)
- Reliability, Usability, Efficiency, Maintainability, Portability (non-functional quality)
19. Functional Q-attributes:
Functional means correctness & completeness
- Correctness: the functionality meets the required attributes / capabilities
- Completeness: the functionality meets all (functional) requirements
According to ISO/IEC 9126, Functionality includes:
- Suitability
- Accuracy
- Compliance
- Interoperability
- Security
20. Non-Functional Q-attributes:
- Reliability
maturity, fault tolerance, recovery after failure
- Usability
learnability, understandability, attractiveness
- Efficiency
minimal use of resources
- Maintainability
verifiability, changeability
- Portability
transferability, easy to install…
21. Non-Functional Q-attributes
Reliability
- maturity, fault tolerance, recovery after failure
- Characteristic: under given conditions, a software system will keep its capabilities/functionality over a period of time.
- reliability = quality / time
Usability
- learnability, understandability, attractiveness
- Characteristics: easy to learn, compliance with guidelines, intuitive handling
22. Efficiency
- System behavior: functionality and time behavior
- Characteristics: the system requires a minimal use of resources (e.g. CPU time) for executing the given task
Maintainability
- verifiability, stability, analyzability, changeability
- Characteristics: amount of effort needed to introduce changes in system components
Portability
- replaceability, compliance, installability
- Ability to transfer the software to a new environment (software, hardware, organization)
- Characteristics: easy to install and uninstall, parameters
23. How Much Testing is Enough?
- Exit criteria
Not finding (any more) defects is not an appropriate criterion to stop testing activities.
- Risk-based testing
- Time and budget
24. Test case description according to IEEE 829:
- Distinct identification: an id or key in order to link, for example, an error report to the test case where it appeared
- Preconditions: situation prior to test execution, or characteristics of the test object before conducting the test case
- Input values: description of the input data for the test object
- Expected result: output data that the test object is expected to produce
- Postconditions: characteristics of the test object after test execution, description of its situation after the test
- Dependencies: order of execution of test cases, reason for dependencies
- Requirements: characteristics of the test object that the test case will examine
- How to execute the test and check results (optional)
- Priority (optional)
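As a sketch, the IEEE 829 fields listed above could be captured in a small record type. The class layout and all concrete values below are invented for illustration; IEEE 829 prescribes the fields, not this representation.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case carrying the IEEE 829 description fields listed above."""
    test_id: str              # distinct identification, e.g. for error reports
    preconditions: list       # situation before test execution
    input_values: dict        # input data for the test object
    expected_result: str      # output the test object is expected to produce
    postconditions: list      # state of the test object after execution
    dependencies: list        # ids of test cases that must run first
    requirements: list        # characteristics this test case examines
    priority: str = "normal"  # optional

# Invented example values:
tc = TestCase(
    test_id="TC-042",
    preconditions=["user account exists", "user is logged out"],
    input_values={"username": "alice", "password": "secret"},
    expected_result="login succeeds and the dashboard is shown",
    postconditions=["a user session is active"],
    dependencies=["TC-001"],
    requirements=["REQ-LOGIN-01"],
)
```

Keeping the fields explicit like this makes it easy to link an error report back to `test_id` and to derive the execution order from `dependencies`.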
26. Testing and Debugging?
- Test and re-test are test activities
- Testing shows system failures
- Re-testing proves that the defect has been corrected
- Debugging and correcting defects are developer activities
- Through debugging, developers can reproduce failures, investigate the state of programs and find the corresponding defect in order to correct it.
[Figure: cycle of Test -> Debugging -> Correct defects -> Re-test]
27. Error, Defect, Failure
- Error (IEEE 610):
a human action that produces an incorrect result, e.g. a programming error
- Defect:
a flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition.
- Failure:
the physical or functional manifestation of a defect. A defect, if encountered during execution, may cause a failure.
- Deviation of the component or system from its expected delivery, service or result. (After Fenton)
Defects cause failures.
28. Defects and Failure
A human being can make an error (mistake), which produces a defect (fault, bug) in the program code or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure.
Debugging vs. Testing
Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyses and removes the cause of the failure.
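The error/defect/failure chain can be traced through a minimal invented example: the programmer's mistake (error) leaves an off-by-one comparison in the code (defect), which only manifests as a failure when the boundary value is executed.

```python
def is_adult(age: int) -> bool:
    # Requirement: ages 18 and older count as adult.
    # Defect: the programmer's error (> instead of >=) excludes exactly 18.
    return age > 18

# The defect stays invisible for most inputs -- no failure is observed:
assert is_adult(30) is True
assert is_adult(10) is False

# Only when the defective comparison is executed with the boundary value
# does the failure appear: the system deviates from its expected result.
assert is_adult(18) is False   # observed failure: the requirement expects True
```

This is also why boundary value analysis (slide 17) is effective: the failure here is invisible everywhere except at the boundary.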
29. Seven Principles of Testing:
Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.
Principle 1 - Testing shows the presence of defects
Testing can prove the presence of defects, but cannot prove the absence of defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
30. Principle 2 - Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible. Instead of exhaustive testing, risk analysis, time & cost and priorities should be used to focus testing efforts.
Principle 3 - Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.
31. Principle 4 - Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during prerelease testing, or is responsible for most of the operational failures.
Principle 5 - Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.
32. Principle 6 - Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 - Absence-of-errors fallacy
The absence of errors does not prove quality. Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
33. Depending on the approach chosen, testing will take place at different
points within the development process
- Testing is a process itself
- The testing process is determined by the following phases
- Test planning
- Test analysis and test design
- Test implementation and test execution
- Evaluating Exit Criteria and reporting
- Test closure activities
- Test Controlling (at all phases)
Test phases may overlap
Testing as a process within the SW development process
34. Testing Process
[Figure: Test Controlling accompanies all phases: Test Plan -> Test Analysis and Test Design -> Test Implementation and Test Execution -> Evaluating Exit Criteria and Reporting -> Test Closure Activities]
35. - Testing is more than test execution!
- Includes overlapping and backtracking
- Each phase of the testing process takes place concurrently with the corresponding phase of the software development process
36. Test Planning - main tasks
- Determining the scope and risk
- Identifying the objectives of testing and the exit criteria
- Determining the approach: test techniques, test coverage, testing teams
- Implementing the testing method/test strategy, planning the time span for the activities that follow
- Acquiring and scheduling test resources: people, test environment, test budget
37. Test Analysis and Design - main tasks /1
- Reviewing the test basis (requirements, system architecture, design, interfaces).
* Analyze system architecture and system design, including interfaces among test objects
- Identifying specific test conditions and required test data.
* Evaluate the availability of test data and/or the feasibility of generating test data.
- Designing the tests/test cases.
* Create and prioritize logical test cases (test cases without specific values for test data)
- Selecting test tools
38. Test Implementation & Execution
- Developing and prioritizing test cases
• creating test data, writing test procedures
• creating test sequences
- Creating test automation scripts, if necessary
- Configuring the test environment
- Executing tests (manually or automatically)
• following the test sequence stated in the test plan (test suites, order of test cases)
- Recording and analyzing test results
- Retesting (after defect correction)
- Regression testing
• ensuring that changes (after installing a new release, or after error fixing) did not uncover existing defects or introduce new ones.
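Retesting and regression testing from the list above can be sketched as follows. The `discount` function, its threshold, and the "fixed" defect are invented for illustration.

```python
def discount(total: float) -> float:
    """10% discount for orders of 100 or more.
    Assume the defect 'total > 100' was just corrected to 'total >= 100'."""
    return total * 0.9 if total >= 100 else total

def retest() -> None:
    # Retest: repeat the previously failing test to confirm
    # that the original defect has been removed.
    assert discount(100) == 90.0

def regression_test() -> None:
    # Regression test: confirm the change did not break unchanged behavior.
    assert discount(50) == 50       # below the threshold: unchanged
    assert discount(200) == 180.0   # well above the threshold: unchanged

retest()
regression_test()
```

The split mirrors the deck's glossary: the retest targets exactly the corrected defect, while the regression test sweeps the unchanged areas around it.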
39. Evaluating Exit Criteria - main tasks
- Assessing test execution against the defined objectives (e.g. tests and criteria)
- Evaluating test logs (summary of test activities, test results; communicating exit criteria)
- Providing information to support the decision whether more tests should take place
40. Test Control
Test control is an ongoing activity influencing test planning. The test plan may be modified according to the information acquired from test controlling.
- The status of the test process is determined by comparing the progress achieved against the last plan. Necessary activities will be started accordingly.
- Measure and analyze results
- The test progress, test coverage and the exit criteria are monitored and documented
- Start corrective measures
- Prepare and make decisions
41. Test Closure Activities - main tasks
- Collecting data from completed test activities to consolidate experience, facts and numbers
- Closing incident reports or raising change requests for any remaining open points
- Documenting the acceptance of the system
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse; handing over to operations
- Analyzing "lessons learned" for future projects
42. • Test suite / test sequence
- a set of several test cases for a component or system, where the postcondition of one test is used as the precondition for the next one
• Test procedure specification (test scenario)
- a document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. (After IEEE 829)
• Test execution
- the process of running a test, producing actual results.
• Test log (test protocol, test report)
- a chronological record of relevant details about the execution of tests: when the test was done, what result was produced.
• Regression tests
- testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
• Confirmation testing, retest
- repeating a test after a defect has been fixed, in order to confirm that the original defect has been successfully removed
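The glossary's notion of a test suite, where the postcondition of one test case is the precondition of the next, can be sketched with a small invented `Account` example; the returned list plays the role of a chronological test log.

```python
class Account:
    """Invented test object: user registration and login."""
    def __init__(self):
        self.registered = set()
        self.logged_in = set()

    def register(self, user: str) -> None:
        self.registered.add(user)

    def login(self, user: str) -> None:
        if user not in self.registered:
            raise ValueError("unknown user")
        self.logged_in.add(user)

def run_suite():
    """Runs the test sequence and returns a chronological test log."""
    log = []
    acc = Account()
    # Test case 1: its postcondition is that "alice" is registered.
    acc.register("alice")
    log.append(("register", "alice" in acc.registered))
    # Test case 2: its precondition is test case 1's postcondition.
    acc.login("alice")
    log.append(("login", "alice" in acc.logged_in))
    return log
```

Because the cases are chained, their order of execution is a dependency in the IEEE 829 sense: running the login test alone would fail its precondition.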
43. • Roles and Responsibilities
Common perception: developers are constructive, testers are destructive. Wrong!
Testing is a constructive activity as well: it aims at eliminating defects from a product!
Developer role:
- Implements requirements
- Develops structures
- Designs and programs the software
- Creating a product is his success
- "Developers are constructive!"
Tester role:
- Plans testing activities
- Designs test cases
- Is concerned only with finding defects
- Finding an error made by a developer is his success
- "Testers are destructive!"
44. Personal attributes of a good tester /1
Curious, perceptive, attentive to detail
- To comprehend the practical scenarios of the customer
- To be able to analyze the structure of the test object
- To discover details where failures might show
Skepticism and a critical eye
- Test objects contain defects - you just have to find them
- Do not believe everything you are told by the developers
- One must not get frightened by the fact that serious defects may often be found which will have an impact on the course of the project.
45. Personal attributes of a good tester /2
Good communication skills
- To bring bad news to the developers
- To overcome frustrated states of mind
- Both technical issues and issues of the practical use of the system must be understood and communicated
- Positive communication can help to avoid or to ease difficult situations.
- To quickly establish a working relationship with the developers
Experience
- Personal factors influence error occurrence
- Experience helps in identifying where errors might accumulate
46. Differences: to design - to develop - to test
- Testing requires a different mindset from designing and developing new computer systems
• Common goal: to provide good software
• Designer's mission: help the customer to supply the right requirements
• Developer's mission: convert the requirements into functions
• Tester's mission: examine the correct implementation of the customer's requirements
- In principle, one person can be given all three roles to work at.
• Differences in goal and role models must be taken into account
• This is difficult but possible
• Other solutions (independent testers) are often easier and produce better results
47. Independent testing
- The separation of testing responsibilities supports the independent evaluation of test results.
- The diagram below shows the degree of independence as a bar chart.
48. Types of test organization /1
Developer tests
- The developer will never examine his "creation" unbiased (emotional attachment)
• He, however, knows the test object better than anybody else
• Extra costs result from orienting other persons to the test object
- Human beings tend to overlook their own faults.
• The developer runs the risk of not recognizing even self-evident defects.
- Errors made because of misinterpretation of the requirements will remain undetected.
• Setting up test teams where developers test each other's products helps to avoid, or at least lessen, this shortcoming.
49. Types of test organization /2
• Teams of developers
- Developers speak the same language
- Costs for orientation in the test object are kept moderate, especially when the teams exchange test objects.
- Danger of generating conflicts among developing teams
• A developer who looks for and finds a defect will not be the other developer's best friend
- Mingling development and test activities
• Frequent switching between ways of thinking
• Makes it difficult to control the project budget
50. Types of test organization /3
• Test teams
- Creating test teams covering different project areas enhances the quality of testing.
- It is important that test teams of different areas in the project work independently
51. Types of test organization /4
• Outsourcing tests
- The separation of testing activities from development activities offers the best independence between test object and tester.
- Outsourced test activities are performed by persons having relatively little knowledge about the test object and the project background
• The learning curve brings high costs; therefore unbiased external experts should be involved at the early stages of the project
- External experts have a high level of testing know-how:
• An appropriate test design is ensured
• Optimal methods and tools are found
• Designing test cases automatically
- Computer-aided generation of test cases, e.g. based on formal specification documents, is also independent
52. Difficulties /1
• Unable to understand each other
- Developers should have basic knowledge of testing
- Testers should have basic knowledge of software development
• Especially in stressful situations, discovering errors that someone has made often leads to conflicts.
- The way defects are documented and described will decide how the situation develops.
- Persons should not be criticized; the defects must be stated factually
- Defect descriptions should help the developers find the error
- Common objectives must always be the main issue.
53. Difficulties /2
• Communication between testers and developers missing or insufficient. This can make it impossible to work together.
- Testers seen as "only the messenger of bad news"
- Improvement: try to see yourself in the other person's role. Did my message come through? Did the answer reach me?
• A solid test requires an appropriate distance to the test object
- An independent and non-biased position is acquired through distance from the development
- However, too large a distance between the test object and the development team will lead to more effort and time for testing.
54. Summary
• People make mistakes; every implementation has defects.
• Human nature makes it difficult to stand in front of one's own defects (error blindness)
• Developer and tester: two different worlds meet each other.
- Developing is constructive: something is created that was not there before
- Testing seems destructive at first glance: defects will be found
- Together, development and testing are constructive in their objective to ensure software with the least defects possible.
• Independent testing enhances the quality of testing: instead of developers, use tester teams and teams with external personnel for testing.