1. UNIT V - TESTING
Object Oriented Methodologies - Software Quality Assurance -
Impact of object orientation on Testing - Develop Test Cases
and Test Plans.
OBJECT ORIENTED ANALYSIS AND DESIGN
CS8592
2. Object Oriented Methodologies
• Object-Oriented Methodology (OOM) is a system development approach that encourages and facilitates the reuse of software components.
• Object-Oriented Methodology is a set of methods, models and rules for developing systems.
• With this methodology, a computer system can be developed on a component basis, which enables the effective reuse of existing components and facilitates the sharing of its components by other systems.
• Adopting OOM brings higher productivity, lower maintenance cost and better quality.
3. Objective of OOM
• The ultimate objective of OOM is application assembly: the construction of new business solutions from existing components.
• The components are combined in different ways to meet the new requirements specified by the user community.
• Only completely new functionality has to be built to complete the solution.

Software components can be assembled to form applications.
4. Object
[Figure: an object packages data and behaviour together]
• OOM applies a single object model that evolves from the analysis and design stage and carries all the way down to the programming level.
• An object contains both the data and the functions that operate upon that data.
• An object can only be accessed via the functions it makes publicly available, so all details of its implementation are hidden from other objects.
• This strong encapsulation provides the basis for the improvements in traceability, quality, maintainability and extensibility that are key features of well-designed object-oriented systems.
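As a minimal sketch of this idea in Java, assuming a hypothetical Account class: the balance field is hidden, and other objects can reach it only through the methods the class makes public.

```java
// Minimal encapsulation sketch: data and the operations on it live together,
// and callers reach the data only through the public interface.
public class Account {
    private double balance;          // hidden state: invisible to other objects

    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balance += amount;           // only Account's own methods touch the field
    }

    public double getBalance() {     // part of the published interface
        return balance;
    }
}
```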
5. History of OOM
• The use of OOM for analyzing and designing systems began to mature around 1990 with the launch of methodologies from three industry-leading methodologists: Ivar Jacobson, Grady Booch and James Rumbaugh.
• In 1989, the Object Management Group (OMG) was founded. The mission of OMG is to establish industry guidelines, detailed object management specifications and common frameworks for application development.
• One of the best-known specifications maintained by OMG is the Unified Modeling Language (UML). The UML is a language for specifying, visualizing, constructing, and documenting the deliverables of software systems, as well as for business modelling and other non-software systems.
6. Benefits of OOM
• Improve productivity – Application development is facilitated by the reuse of existing components, which can greatly improve productivity and enable rapid delivery.
• Deliver high-quality systems – The quality of the system improves because it is built from existing components that are well tested and well proven.
• Lower maintenance cost – The traceability inherent in OOM helps ensure that the impact of a change is localized and that the problem area can be easily traced. As a result, the maintenance cost is reduced.
• Facilitate reuse – With this approach, a computer system can be developed on a component basis that enables the effective reuse of existing components. Reuse opportunities are created by accumulating and properly managing an inventory of reusable components, whether developed internally or acquired externally.
• Manage complexity – OOM eases the management of complexity. By breaking a complex solution down into components, each encapsulated (i.e., treated as a black box) from the others, complex development can be better managed.
10. Methodologies
• Booch (1986) – object-oriented design concept, the Booch method
• Sally Shlaer & Steve Mellor (1989–91) – OO Systems Analysis + Object Lifecycles
• Peter Coad & Ed Yourdon (1991) – OOA & OOD, a prototype-oriented approach
• Wirfs-Brock (1990) – Class-Responsibility-Collaboration (CRC) methodology
• Booch/Rational work on Ada (1994–95) – OO Analysis and Design with Applications
• Rumbaugh (GE) – Object Modeling Technique (OMT), 1991
• Jim Odell & James Martin (1994–96) – IS applications
• Jacobson (Ericsson) – OO Software Engineering (1994–95)
• Rumbaugh joins Rational to work with Booch (1994)
• Rational bought Objectory & Jacobson (1995)
• Through OMG & Rational, UML was born.
• UML unifies the methods of James Rumbaugh, Grady Booch, and Ivar Jacobson.
12. OOM
• The Rumbaugh et al. method is well suited for describing the object model (the static structure of the system) and the dynamic model.
• The Jacobson et al. method is good for producing user-driven analysis models.
• The Booch method produces detailed object-oriented design models.
13. Rumbaugh's Object Modeling Technique (OMT)
• A method for analysis, design and implementation using an object-oriented technique.
• A fast and intuitive approach for identifying and modeling all the objects making up a system.
• Class attributes, methods, inheritance and associations can be expressed easily.
• The dynamic behavior of objects can be described using the OMT dynamic model.
• Detailed specifications of state transitions and their descriptions within a system can be expressed.
14. Four phases of OMT
1. Analysis: object, dynamic and functional models.
2. System Design: the basic architecture and high-level strategy of the system.
3. Object Design: static, dynamic and functional models of objects.
4. Implementation: reusable, extendible and robust code.
15. Three different parts of OMT modeling
1. Object model – represented by the object model and the data dictionary (class diagrams).
2. Dynamic model – represented by state transition diagrams and event flow diagrams.
3. Functional model – represented by data flow and constraints (DFDs).
16. Object Model
• The structure of the objects in a system.
• Identity, relationships to other objects, attributes and operations.
• Object diagram
  • Classes interconnected by association lines
  • Classes – a set of individual objects
  • Association lines – relationships among classes (i.e., objects of one class to objects of another class)
21. OMT Functional Model
• DFD – Data Flow Diagram
• Shows the flow of data between the different processes in a business.
• A simple and intuitive method for describing business processes without focusing on the details of computer systems.
• Four primary symbols:
1. Process – any function being performed
2. Data Flow – the direction of data element movement
3. Data Store – a location where data is stored
4. External Entity – a source or destination of a data element
24. Booch Methodology
• A widely used OO method
• Uses the object paradigm
• Covers the design and analysis phases of an OO system
• Criticized for its large set of symbols
• The Booch method consists of:
1. Class diagrams – describe the roles and responsibilities of objects
2. Object diagrams – describe the desired behavior of the system in terms of scenarios
3. State transition diagrams – the state of a class based on a stimulus
4. Module diagrams – map out where each class and object should be declared
5. Interaction diagrams – describe the behavior of the system in terms of scenarios
6. Process diagrams – determine to which processor to allocate a process
30. Booch Methodology - Macro development process
• The Booch methodology consists of a macro and a micro development process.
• Macro development process:
It serves as a controlling framework for the micro process and consists of:
• conceptualization
• analysis and development of the model
• design, or creation of the system architecture
• evolution, or implementation
• maintenance
31. Micro development process
• Each macro development process has its own micro development processes.
• The micro process consists of day-to-day activities.
• The micro development process consists of:
• identify classes and objects
• identify class and object semantics
• identify class and object relationships
• identify class and object interfaces and implementation
33. JACOBSON Methodologies
• This methodology covers the entire life cycle and stresses traceability between the different phases, both forward and backward.
• It consists of:
• Use Cases
• OOBE (Object-Oriented Business Engineering)
• OOSE (Object-Oriented Software Engineering), also called
• Objectory (Object Factory for Software Development)
34. Use Cases
• Understanding system requirements
• Interaction between users and systems
• The use case description must contain:
• how and when the use case begins and ends;
• the interaction between the use case and its actors, including when the interaction occurs and what is exchanged;
• how and when the use case will need data stored in the system;
• exceptions to the flow of events;
• how and when concepts of the problem domain are handled.
35. OOSE - Object Oriented Software Engineering
• OOSE, also called Objectory, is a method of OO development with the specific aim of fitting the development of large, real-time systems.
• The development process is called the use-case driven process and is used across analysis, design, validation and testing. Its models are:
• Use-case model
• Domain object model – the objects of the real world are mapped into the domain object model.
• Analysis object model – presents how the source code (implementation) should be carried out and written.
• Implementation model
• Test model – includes the test plans, specifications, and reports.
36. OOSE - Objectory
Objectory is built around several different models:
• Use case model – the use-case model defines the outside (actors) and inside (use cases) of the system's behavior.
• Domain object model – the objects of the "real" world are mapped into the domain object model.
• Analysis object model – the analysis object model presents how the source code (implementation) should be carried out and written.
• Implementation model – the implementation model represents the implementation of the system.
• Test model – the test model constitutes the test plans, specifications, and reports.
37. OOSE – Use-case Model
The use-case model is considered in every model and phase.
38. OOBE - Object Oriented Business Engineering
• OOBE provides the framework that businesses use to articulate and communicate business process improvements, business definitions and rules.
• It provides the crucial link missing from traditional approaches to systems development and business process engineering: a clear path from business concepts to reusable information systems components.
• OOBE is object modeling at the enterprise level (use cases are central here too).
• OOBE consists of three phases:
1. Analysis phase
2. Design and implementation phases
3. Testing phase: unit, integration and system testing.
39. Software Quality Assurance
• Software Quality Assurance (SQA) is a means of monitoring the software engineering processes and methods used to ensure proper quality. This is accomplished by many and varied approaches, and may include ensuring conformance to one or more standards, such as ISO 9000, the CMMI model, or ISO 15504.
• It is a set of activities for ensuring quality in software engineering processes that ultimately results in, or at least gives confidence in, the quality of software products.
40. SQA
SQA encompasses the entire software development process:
• Software requirements
• Software design
• Coding
• Source code control
• Code reviews
• Change management
• Configuration management
• Release management
41. Software Quality Assurance
Software Quality
• Software quality is "the fitness for use of the total software product".
• Good-quality software does exactly what it is supposed to do; quality is interpreted in terms of satisfaction of the requirement specification laid down by the user.
Quality Assurance
• Software quality assurance is a methodology that determines the extent to which a software product is fit for use.
• The activities included in determining software quality are:
• Auditing
• Development of standards and guidelines
• Production of reports
• Review of the quality system
42. Software Quality Assurance
Quality Factors
• Correctness – whether the software requirements are appropriately met.
• Usability – whether the software can be used by different categories of users (beginners, non-technical users, and experts).
• Portability – whether the software can operate on different platforms with different hardware devices.
• Maintainability – the ease with which errors can be corrected and modules can be updated.
• Reusability – whether the modules and classes can be reused for developing other software products.
43. SQA Activities & Processes
SQA includes the following activities:
• Process definition
• Process training
• Process implementation
• Process audit
SQA includes the following processes:
• Project management
• Project estimation
• Configuration management
• Requirements management
• Software design
• Software development [refer to SDLC]
• Software testing [refer to STLC]
• Software deployment
• Software maintenance, etc.
44. Object Oriented Metrics
1. Project Metrics
• Project metrics enable a software project manager to assess the status and performance of an ongoing project.
• Number of scenario scripts
• Number of key classes
• Number of support classes
• Number of subsystems
2. Product Metrics
• Product metrics measure the characteristics of the software product that has been developed.
• Methods per class
• Inheritance structure
• Coupling and cohesion
• Response for a class
45. Object Oriented Metrics
3. Process Metrics
• Process metrics help in measuring how a process is performing. They are collected across all projects over long periods of time and are used as indicators for long-term software process improvement.
• Number of KLOC (kilo lines of code)
• Defect removal efficiency
• Average number of failures detected during testing
• Number of latent defects per KLOC
46. Quality Costs
• Prevention costs
• quality planning, formal technical reviews, test equipment, training
• Appraisal costs
• in-process and inter-process inspection, equipment calibration and maintenance, testing
• Internal failure costs
• rework, repair, failure mode analysis
• External failure costs
• complaint resolution, product return and replacement, help line support, warranty work
47. SQA Group Activities
• Prepare the SQA plan for the project.
• Participate in the development of the project's software process description.
• Review software engineering activities to verify compliance with the defined software process.
• Audit designated software work products to verify compliance with those defined as part of the software process.
• Ensure that any deviations in software or work products are documented and handled according to a documented procedure.
• Record any evidence of noncompliance and report it to management.
48. Software Reviews
• The purpose is to find defects (errors) before they are passed on to another software engineering activity or released to the customer.
• Software engineers (and others) conduct formal technical reviews (FTRs) for software engineers.
• Using formal technical reviews (walkthroughs or inspections) is an effective means of improving software quality.
49. Review Roles
• Presenter (designer/producer)
• Coordinator (not the person who hires/fires)
• Recorder
• records the events of the meeting
• builds the paper trail
• Reviewers
• maintenance oracle
• standards bearer
• user representative
• others
50. Formal Technical Reviews
• Involves 3 to 5 people (including reviewers).
• Advance preparation (no more than 2 hours per person) is required.
• The duration of the review meeting should be less than 2 hours.
• The focus of the review is a discrete work product.
• The review leader organizes the review meeting at the producer's request.
• Reviewers ask questions that enable the producer to discover his or her own errors (the product is under review, not the producer).
• The producer of the work product walks the reviewers through the product.
• The recorder writes down any significant issues raised during the review.
• The reviewers decide whether to accept or reject the work product and whether to require additional reviews of it.
51. Why do peer reviews?
• To improve quality.
• Catches 80% of all errors if done properly.
• Catches both coding errors and design errors.
• Enforces the spirit of the organization's standards.
• Training and insurance.
52. Review Guidelines
• Keep it short (< 30 minutes).
• Don't schedule two in a row.
• Don't review product fragments.
• Use standards to avoid style disagreements.
• Let the coordinator run the meeting and maintain order.
54. Software Quality Assurance Plan
1. Purpose
2. Reference documents
3. Management
4. Documentation
5. Standards, practices, conventions, and metrics
6. Software Reviews
7. Tests
8. Problem reporting and corrective actions
9. Tools, techniques, and methodologies
10. Media control
11. Supplier control
12. Records collection, maintenance, and retention
13. Training
14. Risk management
15. Glossary
16. SQAP change procedure and history
Underlined sections will be included in the project's SQAP.
55. SQA Plan
• Management section
• describes the place of SQA in the structure of the organization
• Documentation section
• describes each work product produced as part of the software process
• Standards, practices, and conventions section
• lists all applicable standards/practices applied during the software process and any metrics to be collected as part of the software engineering work
• Reviews and audits section
• provides an overview of the approach used in the reviews and audits to be conducted during the project
• Test section
• references the test plan and procedure document and defines test record-keeping requirements
• Problem reporting and corrective action section
• defines procedures for reporting, tracking, and resolving errors or defects; identifies organizational responsibilities for these activities
• Other
• tools, SQA methods, change control, record keeping, training, and risk management
57. Impact of object orientation on Testing
What is Testing?
• Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.
• Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.
• Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects) and verifying that the software product is fit for use.
58. Properties of Testing
The following properties indicate the extent to which the component or system under test:
• meets the requirements that guided its design and development,
• responds correctly to all kinds of inputs,
• performs its functions within an acceptable time,
• is sufficiently usable,
• can be installed and run in its intended environments, and
• achieves the general result its stakeholders desire.
59. Static, dynamic and passive testing
• Static testing is often implicit, like proofreading; it also happens when programming tools/text editors check source code structure or when compilers (pre-compilers) check syntax and data flow, as static program analysis.
• Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete, in order to test particular sections of code; it is applied to discrete functions or modules.
• Static testing involves verification, whereas dynamic testing also involves validation.
• Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions. This is related to offline runtime verification and log analysis.
60. Testing approach
• Exploratory approach
• Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.
• The "box" approach
• This approach describes the point of view the tester takes when designing test cases:
1. Black-box testing
2. White-box testing
3. Grey-box testing
61. Black-box testing
• Black-box testing (also known as functional testing) treats the software as a "black box," examining functionality without any knowledge of the internal implementation and without seeing the source code.
• The testers are only aware of what the software is supposed to do, not how it does it.
• Black-box testing methods include:
• equivalence partitioning
• boundary value analysis
• all-pairs testing
• state transition tables
• decision table testing
• fuzz testing
• model-based testing
• use case testing
• exploratory testing
• specification-based testing
A small sketch of the first two methods follows.
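A minimal JUnit 5 sketch of equivalence partitioning and boundary value analysis, under an assumed specification for a hypothetical fare classifier: ages 0–120 are valid, under 12 is a child, 65 and over is a senior. The tests are derived from the specification alone, picking one representative per partition plus the values on and around each boundary.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class FareClassifierTest {

    // Assumed unit under test; in a real project this would live elsewhere.
    static String classify(int age) {
        if (age < 0 || age > 120) throw new IllegalArgumentException("age out of range");
        if (age < 12) return "child";
        if (age < 65) return "adult";
        return "senior";
    }

    @Test
    void oneRepresentativePerEquivalencePartition() {
        assertEquals("child", classify(6));
        assertEquals("adult", classify(30));
        assertEquals("senior", classify(80));
    }

    @Test
    void valuesOnAndAroundEachBoundary() {
        assertEquals("child", classify(11));
        assertEquals("adult", classify(12));    // child/adult boundary
        assertEquals("adult", classify(64));
        assertEquals("senior", classify(65));   // adult/senior boundary
        assertThrows(IllegalArgumentException.class, () -> classify(-1));
        assertThrows(IllegalArgumentException.class, () -> classify(121));
    }
}
```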
62. White-box testing
• White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end user.
• In white-box testing, an internal perspective of the system (the source code), as well as programming skills, is used to design test cases.
• The tester chooses inputs to exercise paths through the code and determine the appropriate outputs.
• While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.
• It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test.
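A white-box sketch under the same JUnit 5 assumptions: here the tests are derived by reading the code of a hypothetical shippingCost function, choosing inputs so that every combination of its two decisions (four paths) is executed.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ShippingCostTest {

    // Assumed unit under test: two decisions, hence four execution paths.
    static double shippingCost(double weightKg, boolean express) {
        double cost = (weightKg <= 1.0) ? 5.0 : 5.0 + 2.0 * (weightKg - 1.0);
        if (express) cost *= 2.0;
        return cost;
    }

    // One test per path through the code.
    @Test void lightStandard() { assertEquals(5.0,  shippingCost(0.5, false)); }
    @Test void lightExpress()  { assertEquals(10.0, shippingCost(0.5, true));  }
    @Test void heavyStandard() { assertEquals(9.0,  shippingCost(3.0, false)); }
    @Test void heavyExpress()  { assertEquals(18.0, shippingCost(3.0, true));  }
}
```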
63. Grey-box testing
• Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing tests, while executing those tests at the user, or black-box, level.
• The tester will often have access to both the source code and the executable binary.
• Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages.
• Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test.
• This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
64. Testing levels
• Unit testing
• Individual components are tested for correctness.
• Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
• Integration testing
• Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang").
• System testing
• System testing tests a completely integrated system to verify that the system meets its requirements.
• Operational acceptance testing
• Operational acceptance testing is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects.
66. Object-Oriented Testing
▪ Testing is performed unit by unit.
▪ Object-oriented testing is a collection of testing techniques to verify and validate object-oriented software.
▪ When should testing begin?
▪ Analysis and design
▪ Programming
▪ To complete the OOT cycle, the following kinds of testing are required:
▪ Requirement testing
▪ Analysis and design testing
▪ Code testing
▪ Integration tests
▪ System tests
▪ User testing
70. Testing OOA and OOD Models
• Reviewing OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code levels.
• Analysis and design models cannot be tested in the conventional sense, because they cannot be executed.
• Formal technical review can be used to examine the correctness and consistency of both analysis and design models.
• Correctness:
• Syntax: each model is reviewed to ensure that proper modeling conventions have been maintained.
• Semantics: must be judged based on the model's conformance to the real-world problem domain, by domain experts.
• Consistency:
• may be judged by considering the relationships among entities in the model;
• each class and its connections to other classes should be examined;
• the class-responsibility-collaboration (CRC) model can be used.
• Completeness
71. Issues in OO Testing
▪ Implications of composition and encapsulation
▪ Implications of inheritance
▪ Implications of polymorphism
72. Levels of OO-Testing
▪ Operation or method testing
▪ Class testing
▪ Integration testing
▪ System testing
73. Class (Unit) Testing
▪ The smallest testable unit is the encapsulated class.
▪ Test each operation as part of a class hierarchy, because its class hierarchy defines its context of use.
▪ Approach:
– test each method (and constructor) within a class;
– test the state behavior (attributes) of the class between methods.
▪ How is class testing different from conventional testing?
▪ Conventional testing focuses on input-process-output, whereas class testing focuses on each method, then on designing sequences of methods to exercise the states of a class.
▪ But white-box testing can still be applied.
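A sketch of this approach in JUnit 5, with a minimal hypothetical Account class defined inline: the test drives a sequence of methods and checks the object's state between them, rather than a single input/output pair.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AccountClassTest {

    // Minimal hypothetical class under test.
    static class Account {
        private double balance;
        void deposit(double a)  { balance += a; }
        void withdraw(double a) { balance -= a; }
        double getBalance()     { return balance; }
    }

    @Test
    void stateIsConsistentAcrossAMethodSequence() {
        Account acct = new Account();           // constructor establishes initial state
        assertEquals(0.0, acct.getBalance());

        acct.deposit(100.0);                    // state after the first method
        assertEquals(100.0, acct.getBalance());

        acct.withdraw(40.0);                    // second method builds on the prior state
        assertEquals(60.0, acct.getBalance());
    }
}
```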
75. Class Test Case Design
1. Identify each test case uniquely.
– Associate the test case explicitly with the class and/or method to be tested.
2. State the purpose of the test.
3. Each test case should contain:
a. a list of messages and operations that will be exercised as a consequence of the test;
b. a list of exceptions that may occur as the object is tested;
c. a list of external conditions for setup (i.e., changes in the environment external to the software that must exist in order to properly conduct the test);
d. supplementary information that will aid in understanding or implementing the test.
– Automated unit testing tools facilitate these requirements. One such test case is sketched below.
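A sketch of a single class test case carrying the elements above as comments; the Account class, its exception, and the ACC-WD-001 identifier are all hypothetical.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AccountWithdrawTest {

    static class InsufficientFundsException extends RuntimeException {}

    // Minimal hypothetical class under test.
    static class Account {
        private double balance;
        void deposit(double a) { balance += a; }
        void withdraw(double a) {
            if (a > balance) throw new InsufficientFundsException();
            balance -= a;
        }
    }

    // Test case ID:        ACC-WD-001 (class Account, method withdraw)
    // Purpose:             an overdraft attempt is rejected with an exception
    // Messages exercised:  deposit(), withdraw()
    // Exceptions expected: InsufficientFundsException
    // External setup:      none (no environment outside the object is needed)
    @Test
    void withdrawalBeyondBalanceIsRejected() {
        Account acct = new Account();
        acct.deposit(50.0);
        assertThrows(InsufficientFundsException.class, () -> acct.withdraw(80.0));
    }
}
```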
76. Challenges of Class Testing
▪ Encapsulation:
– it is difficult to obtain a snapshot of a class without building extra methods that display the class's state.
▪ Inheritance and polymorphism:
– each new context of use (subclass) requires re-testing, because a method may be implemented differently (polymorphism);
– other, unaltered methods within the subclass may use the redefined method and also need to be tested.
▪ White-box tests:
– basis path, condition, data flow and loop tests can all apply to individual methods, but they don't test interactions between methods.
77. Random Class Testing
1. Identify the methods applicable to a class.
2. Define constraints on their use – e.g., the class must always be initialized first.
3. Identify a minimum test sequence – an operation sequence that defines the minimum life history of the class.
4. Generate a variety of random (but valid) test sequences – this exercises more complex class instance life histories.
Example:
1. An account class in a banking application has open, setup, deposit, withdraw, balance, summarize and close methods.
2. The account must be opened first and closed on completion.
3. open → setup → deposit → withdraw → close
4. open → setup → deposit → [deposit | withdraw | balance | summarize]* → withdraw → close. Generate random test sequences using this template, as in the sketch below.
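A minimal sketch of step 4, assuming the account example above: it builds one random but valid operation sequence from the template. A real test driver would then invoke these methods on an Account object and check the state after each call.

```java
import java.util.List;
import java.util.Random;

public class RandomSequenceGenerator {
    public static void main(String[] args) {
        // Operations allowed in the middle of the template.
        List<String> middle = List.of("deposit", "withdraw", "balance", "summarize");
        Random rng = new Random();

        // Constraint: every sequence starts open -> setup -> deposit.
        StringBuilder sequence = new StringBuilder("open -> setup -> deposit");

        int extraOps = rng.nextInt(5);            // 0..4 random middle operations
        for (int i = 0; i < extraOps; i++) {
            sequence.append(" -> ").append(middle.get(rng.nextInt(middle.size())));
        }

        // Constraint: every sequence ends withdraw -> close.
        sequence.append(" -> withdraw -> close");
        System.out.println(sequence);
    }
}
```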
78. Integration Testing
▪ OO software does not have a hierarchical control structure, so conventional top-down and bottom-up integration tests have little meaning.
▪ Integration is applied using three different incremental strategies:
– thread-based testing: integrates the classes required to respond to one input or event;
– use-based testing: integrates the classes required by one use case;
– cluster testing: integrates the classes required to demonstrate one collaboration.
79. Random Integration Testing
UML support for integration testing:
• interaction diagrams
• collaboration diagram (communication diagram)
• sequence diagram
▪ Multiple class random testing:
1. For each client class, use the list of class methods to generate a series of random test sequences. The methods will send messages to other server classes.
2. For each message that is generated, determine the collaborating class and the corresponding method in the server object.
3. For each method in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.
4. For each of those messages, determine the next level of methods that are invoked and incorporate these into the test sequence.
80. MM-Paths
▪ Method/message path
▪ A sequence of method executions linked by messages.
▪ Starts with a method and ends when it reaches a method that does not issue any message of its own. A minimal sketch follows.
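A tiny sketch of one MM-path across three hypothetical classes: the path starts in ATM, is linked by messages to Account and then Ledger, and ends at Ledger.record(), which issues no message of its own.

```java
class Ledger {
    void record(double amount) {            // issues no messages: the MM-path ends here
        System.out.println("recorded " + amount);
    }
}

class Account {
    private final Ledger ledger = new Ledger();
    void withdraw(double amount) {          // link 2: message to Ledger
        ledger.record(-amount);
    }
}

public class ATM {
    public static void main(String[] args) {
        new Account().withdraw(100.0);      // link 1: message to Account starts the MM-path
    }
}
```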
81. GUI Testing
▪ A special case of event-driven system testing.
▪ Data flow testing
– based on data flow analysis and test data (test cases);
– in conventional program testing, data flow analysis is a mechanism for selecting execution paths.
82. OO System Testing
▪ It is independent of the system implementation.
▪ The system has been defined and refined with the UML.
▪ System-level thread test cases are found from the UML models.
83. System Testing
Software may be part of a larger system. This often leads to "finger pointing" by other system development teams.
Finger-pointing defence:
1. Design error-handling paths that test external information.
2. Conduct a series of tests that simulate bad data.
3. Record the results of tests to use as evidence.
Types of system testing:
– Recovery testing: how well and how quickly does the system recover from faults?
– Security testing: verify that the protection mechanisms built into the system will protect it from unauthorized access (hackers, disgruntled employees, fraudsters).
– Stress testing: place an abnormal load on the system.
– Performance testing: investigate the run-time performance within the context of an integrated system.
84. System Testing
▪ System functions
▪ Presentation layer
▪ High-level use cases
▪ Essential use cases
▪ Detailed GUI definition
▪ Expanded essential use cases
▪ Real use cases
85. Develop Test Cases and Test Plans
What is a Test Case?
• A test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
• A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
• There are two types of test case:
1. formal test cases
2. informal test cases
86. Why do we write test cases?
• Test case creation can have two broad goals:
1. Test cases are part of the deliverable to the customer. Credibility is the goal in this case, typically at the UAT (acceptance) level.
2. Test cases are for the team's internal use only, typically at the system testing level. Testing efficiency should be the goal here: the idea is to write test cases based on the design while the code is incomplete, so that the product can be tested quickly once the code is ready.
• In agile development, goal number one is not applicable: test cases are used internally, but the goal is credibility, not efficiency. It also means that test cases are dramatically reworked during test execution.
87. Test case examples
Test cases for an ATM:
• TC 1: successful card insertion.
• TC 2: unsuccessful operation due to the card being inserted at a wrong angle.
• TC 3: unsuccessful operation due to an invalid account card.
• TC 4: successful entry of the PIN.
• TC 5: unsuccessful operation due to a wrong PIN being entered 3 times.
• TC 6: successful selection of language.
• TC 7: successful selection of account type.
• TC 8: unsuccessful operation due to a wrong account type selected with respect to the inserted card.
• TC 9: successful selection of the withdrawal option.
• TC 10: successful selection of amount.
• TC 11: unsuccessful operation due to wrong denominations.
• TC 12: successful withdrawal operation.
• TC 13: unsuccessful withdrawal operation due to an amount greater than the available balance.
• TC 14: unsuccessful due to lack of cash in the ATM.
• TC 15: unsuccessful due to an amount greater than the daily limit.
• TC 16: unsuccessful due to the server being down.
• TC 17: unsuccessful due to clicking cancel after inserting the card.
• TC 18: unsuccessful due to clicking cancel after inserting the card and entering the PIN.
• TC 19: unsuccessful due to clicking cancel after language selection, account type selection, withdrawal selection and amount entry.
88. Test case examples
Test cases for a web page login:
• Test without entering any username and password.
• Test with only a username.
• Test with only a password.
• Username with a wrong password.
• Password with a wrong username.
• Right username and right password.
• Cancel after entering username and password.
• Enter a long username and password that exceed the set character limits.
• Try copy/paste in the password text box.
• After a successful sign-out, try the "Back" option in your browser. Check whether it takes you to the "signed-in" page.
Several of these combinations are expressed as one parameterized test in the sketch below.
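A JUnit 5 sketch expressing several of the combinations above as one parameterized test. validate() is a hypothetical stand-in for the page's real login logic, and user1/abc123 are assumed valid credentials.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.*;

class LoginTest {

    // Hypothetical login rule standing in for the system under test.
    static boolean validate(String user, String pass) {
        return "user1".equals(user) && "abc123".equals(pass);
    }

    @ParameterizedTest
    @CsvSource({
        "'',    '',     false",   // no username, no password
        "user1, '',     false",   // username only
        "'',    abc123, false",   // password only
        "user1, wrong,  false",   // right user, wrong password
        "other, abc123, false",   // wrong user, right password
        "user1, abc123, true",    // both correct
    })
    void loginCombinations(String user, String pass, boolean expected) {
        assertEquals(expected, validate(user, pass));
    }
}
```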
89. Writing better test cases
1. Requirement ID(s) covered by the test case.
2. Test condition(s) and expected result(s) exercised in the test case.
3. Initial setup required for executing the test script. This could be environment, data or configuration setup to be done before running the test case.
4. Post-execution activities, e.g., delete the application user "WebAdmin" after test execution is completed.
5. Priority (High, Medium or Low) of the test case. Priority helps the tester decide which test cases have to be run earlier than others.
6. Complexity of the test case. It helps to identify and filter test cases based on complexity, which in turn helps in assigning test cases to testers before test execution.
7. Approximate time required for executing the test case. This entry is required from a project management perspective, to track productivity and to ensure the test execution deadlines can still be met.
8. Test steps. These contain instructions on what actions to perform and what test data to use.
90. Writing better test cases
9. Expected results. Each test step has a corresponding expected-result field that specifies the expected response.
10. Actual result. Each test step has a corresponding actual-result field; the tester enters the details of the response seen after executing the test step.
11. Test step result. Typically this field contains values such as Not Applicable, No Run, Passed, Failed or In Progress.
12. Test case version number.
13. Test case creation timestamp.
14. Revision history: when and by whom the test case was written or modified.
15. Test case status (Draft, Completed, Reviewed, Not Valid, etc.).
16. Test case execution timestamps.
17. Associated defects. This field helps to identify the existing defect(s) associated with the test case.
18. Project name.
19. Application name.
A sketch of these fields gathered into one data type follows.
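As a sketch, the fields above could be captured as a single Java record, roughly the shape a simple test-management tool might store per test case; all field names here are illustrative, not a prescribed schema.

```java
import java.time.Instant;
import java.util.List;

// Illustrative data type for one test case record.
public record TestCase(
        String id,
        List<String> requirementIds,   // requirements covered
        String purpose,
        String priority,               // High / Medium / Low
        String complexity,
        String initialSetup,           // environment/data/config before the run
        String postExecution,          // clean-up, e.g. delete temporary users
        List<String> steps,            // each step: action + test data
        List<String> expectedResults,  // one per step
        List<String> actualResults,    // filled in by the tester per step
        String status,                 // Draft, Reviewed, Passed, Failed, ...
        int version,
        Instant createdAt,
        List<String> associatedDefects
) {}
```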
91. How to write test cases?
• Get the test cases reviewed
• Detail each test case
• Write test cases in plain English
• Requirement traceability matrix (RTM)
• Provide test data within the test cases
• Test case version control
• Prioritize test cases
• Pre-requisites and clean-up sections
92. Test case review!
• Get your test cases reviewed by business analysts and business users.
• It is always good to get input from subject matter experts.
• It is better to get review comments from business analysts and business users before testing starts than to have them point out deficiencies in your test cases during acceptance testing or post-implementation.
93. Detailed Test Case
• Test case steps should be as detailed as possible and should not be written at a high level.
• Writing detailed test steps is very important, considering that the person who wrote the test cases may not always be the one who executes them.
• If the test cases are not very detailed, a person executing a test case for the first time will not be able to validate the system thoroughly, as he or she might not have gone through the requirements and will test only as per the test case steps.
94. Example – Detailed Test Case
Example of a test case written at a high level:
• Step No: 1
Step Description: Log in to the test application with a valid user id/password.
Expected Result: Home page is displayed.
• Step No: 2
Step Description: Click the "Logout" link on the home page.
Expected Result: Login page is displayed.

Example of a test case written in detail:
• Step No: 1
Step Description: Open the URL https://www.example.com/login.asp
Expected Result: Login page is displayed and contains the fields below:
a) "User Name" text field
b) "Password" text field
c) "Submit" button
• Step No: 2
Step Description: Once the login page is displayed, enter a valid userid/password:
a) enter "user1" in the "User Name" text field
b) enter "abc123" in the "Password" text field
Expected Result:
a) Verify "User Name" is populated with the text "user1".
b) Verify the text entered in the "Password" field is masked and is not readable.
• Step No: 3
Step Description: Click the "Submit" button.
Expected Result: Verify the application "Home" page is displayed.
a) Verify the "Home" page displays a "Welcome user1" message on top of the left navigation menu.
b) Verify the left navigation menu contains the links "Directory", "Submission", "Latest Links", "Approve Links", "Logout".
• Step No: 4
Step Description: Click the "Logout" link on the left menu.
Expected Result: Verify the user is successfully logged out of the application and the login page https://www.example.com/login.asp is displayed.
95. Develop Test Cases and Test Plans
What is a Test Plan?
• A test plan is a document detailing the objectives, resources, and processes for a specific test of a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow.
• A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements.
• A test plan is usually prepared by, or with significant input from, test engineers.
96. Test plan strategy
• Design verification or compliance test – performed during the development or approval stages of the product, typically on a small sample of units.
• Manufacturing or production test – performed during preparation or assembly of the product, in an ongoing manner, for purposes of performance verification and quality control.
• Acceptance or commissioning test – performed at the time of delivery or installation of the product.
• Service and repair test – performed as required over the service life of the product.
• Regression test – performed on an existing operational product, to verify that existing functionality was not negatively affected when other aspects of the environment were changed (e.g., upgrading the platform on which an existing application runs).
97. Why use a test plan?
• Test plans often help testers find more bugs than exploratory testing alone, because they cover all sections and all use cases.
• Exploratory testing does generate good and valuable bugs and generally implies "out of the box" thinking.
• But for quantity and coverage, nothing beats a good test plan.
98. Test plan Types
• Master test plan
• a test plan that typically addresses multiple test levels.
• Phase test plan
• a test plan that typically addresses one test phase.
• Testing-level-specific test plans
• plans for each level of testing:
• unit test plan
• integration test plan
• system test plan
• acceptance test plan
• Testing-type-specific test plans
• plans for major types of testing, such as a performance test plan and a security test plan.
99. Test Plan Template
• The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented.
• Nevertheless, the following outline, based on the IEEE standard for software test documentation, summarizes what a test plan can/should contain:
• Test Plan Identifier
• Introduction
• References
• Test Items
• Features to be Tested
• Features Not to Be Tested
• Approach
• Item Pass/Fail Criteria
• Suspension Criteria and Resumption Requirements
• Test Deliverables
• Test Environment
• Estimate
• Schedule
• Staffing and Training Needs
• Responsibilities
• Risks
• Assumptions and Dependencies
• Approvals
100. Sample Test Plan Template
TITLE PAGE NO.
1 INTRODUCTION 3
2 BUSINESS BACKGROUND 3
3 TEST OBJECTIVES 3
4 SCOPE 3
5 TEST TYPES IDENTIFIED 3
6 PROBLEMS PERCEIVED 3
7 ARCHITECTURE 3
8 ENVIRONMENT 3
9 ASSUMPTIONS 3
10 FUNCTIONALITY 3
11 SECURITY 4
12 PERFORMANCE 4
13 USABILITY 5
14 TEST TEAM ORGANIZATION 6
15 SCHEDULE 6
16 DEFECTS CLASSIFICATION MECHANISM 6
17 CONFIGURATION MANAGEMENT 6
18 RELEASE CRITERIA 6
101. How to plan for testing?
• Analyze the product
• Design the test strategy
• Define the test objectives
• Define test criteria
• Resource planning
• Plan the test environment
• Schedule & estimation
• Determine test deliverables
102. References
• Text Book 1: Craig Larman, Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, Third Edition, Pearson Education, 2005.
• Text Book 2: Ali Bahrami, Object Oriented Systems Development, McGraw Hill International Edition, 1999.
• http://softwaretestingfundamentals.com/
• https://www.guru99.com/