CS8494 – Software Engineering
III Year / V Semester
UNIT IV
TESTING AND MAINTENANCE
Software testing fundamentals - Internal and external views of Testing - White box testing - Basis path testing - Control structure testing - Black box testing - Regression Testing - Unit Testing - Integration Testing - Validation Testing - System Testing and Debugging - Software Implementation Techniques: Coding practices - Refactoring - Maintenance and Reengineering - BPR model - Reengineering process model - Reverse and Forward Engineering
Testing
 Testing is the process of exercising a program
with the specific intent of finding errors prior
to delivery to the end user.
 It is a critical element of software quality
assurance and represents the ultimate review of
specification, design, and coding.
Testing
 Objectives - Glen Myers states:
 Testing is a process of executing a program with the
intent of finding an error.
A good test case is one that has a high probability of
finding an as-yet undiscovered error.
A successful test is one that uncovers an as-yet-
undiscovered error.
Testing
 Principles:
 All tests should be traceable to customer
requirements
 Tests should be planned long before testing
begins
 The Pareto principle applies to software testing.
Testing
 Errors: A mistake, misconception, or
misunderstanding on the part of a software
developer.
 Faults: A fault (defect) is introduced into the
software as the result of an error. It is an
anomaly in the software that may cause it to
behave incorrectly, and not according to its
specification.
Testing
 Failures: A failure is the inability of a software
system or component to perform its required
functions within specified performance
requirements.
Testing
Characteristics of Good Tester:
 The tester must be able to understand the software
and should be in a position to find high-probability
errors.
 The tester must be able to write simple test cases.
 The tester should conduct the tests that have the
highest likelihood of uncovering errors.
Internal and External View of
Testing
 Black Box Testing:
 It is tested whether input is accepted properly
and output is produced correctly.
 The major focus is on functions, operations,
external interfaces, and information.
Internal and External View of
Testing
 White Box Testing:
 The procedural details are closely examined.
 The internals of software are tested to make sure
that they operate according to specifications and
designs.
 The focus is on internal structures, logic paths,
control flows, data flows, and loops.
White Box Testing
 It is also called structural testing.
 Test cases are derived from the program
structure.
The goal is to exercise all program statements.
White Box Testing
 Condition Testing
 Condition testing is used to test the logical
conditions in a program module.
 This condition can be a Boolean or a relational
expression.
White Box Testing
 Condition Testing
 It focuses on testing each condition in the program.
 Branch testing is a condition testing strategy
in which, for a compound condition, every
true and false branch is tested.
White Box Testing
 Loop Testing:
 Loop testing is a white box testing technique
performed to validate loops. There are four
kinds of loops, as listed below:
Simple Loops
Nested Loops
Concatenated Loops
Unstructured Loops
White Box Testing
 Why do Loop Testing?
 Loop testing is done for the following reasons:
Testing can fix loop repetition issues
Loop testing can reveal performance/capacity
bottlenecks
By testing loops, uninitialized variables in the loop
can be detected
It helps to identify loop initialization problems.
White Box Testing
Loop Testing - Simple loop:
 A simple loop is tested in the following way:
 Skip the entire loop (zero passes)
Make 1 pass through the loop
Make 2 passes through the loop
Make m passes through the loop, where m < n and n is
the maximum number of allowable passes through the loop
Make n-1, n, and n+1 passes through the loop
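A minimal sketch of these pass counts as concrete test inputs (the sumPositive method and the maximum n = 10 are assumptions, for illustration): each input array length determines the number of loop passes.

class SimpleLoopTest {
    // Hypothetical unit under test: executes the loop body once per element.
    static int sumPositive(int[] values) {
        int sum = 0;
        for (int v : values) {
            if (v > 0) sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 10; // assumed maximum number of allowable passes
        int[] passCounts = {0, 1, 2, 5, n - 1, n, n + 1}; // skip, 1, 2, m < n, n-1, n, n+1
        for (int count : passCounts) {
            int[] input = new int[count];
            java.util.Arrays.fill(input, 1);
            System.out.println(count + " passes -> sum = " + sumPositive(input));
        }
    }
}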
White Box Testing
Loop Testing - Nested loop:
 Set all the other loops to minimum value and start at
the innermost loop
For the innermost loop, perform a simple loop test and
hold the outer loops at their minimum iteration
parameter value
Perform tests for the next loop and work outward.
Continue until the outermost loop has been tested.
White Box Testing
Loop Testing - Concatenated loop:
 If two loops are independent of each other then
they are tested using simple loops or else test them
as nested loops.
White Box Testing
Loop Testing - Unstructured Loops:
 It requires restructuring of the design to reflect the
use of the structured programming constructs.
White Box Testing
Basis Path Testing:
 Test cases derived to exercise the basis set are
guaranteed to execute every statement in the
program at least one time during testing.
White Box Testing
Basis Path Testing:
 Steps:
 Design the flow graph for the program or a
component.
 Calculate the cyclomatic complexity
 Select a basis set of paths
 Generate test cases for these paths.
White Box Testing
Basis Path Testing – Steps: Design the flow
graph for the program or a component.
 Flow graph is a graphical representation of logical
control flow of the program.
 Circle – Flow graph node
 Arrow – Edges or links
 Regions – The areas bounded by nodes and edges.
White Box Testing
Basis Path Testing – Steps: Design the flow
graph for the program or a component.
White Box Testing
Basis Path Testing – Steps: Calculate the
cyclomatic complexity:
 Cyclomatic complexity is a software metric that
provides a quantitative measure of the logical
complexity of a program.
 It defines the number of independent paths in the basis
set of a program and provides an upper bound for the
number of tests that must be conducted to ensure that
all statements have been executed at least once.
White Box Testing
Basis Path Testing – Steps: Calculate the
cyclomatic complexity – Methods:
 The number of regions of the flow graph: V(G) = 4
 V(G) = E – N + 2, where E is the number of edges and
N is the number of nodes: V(G) = 11 – 9 + 2 = 4
 V(G) = P + 1, where P is the number of predicate
(decision-making) nodes: V(G) = 3 + 1 = 4
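The two formulas can be checked mechanically. A minimal sketch, plugging in the counts from the flow graph above (11 edges, 9 nodes, 3 predicate nodes); the method names are ours, for illustration:

class CyclomaticComplexity {
    // V(G) from edges and nodes: V(G) = E - N + 2
    static int fromEdgesAndNodes(int edges, int nodes) {
        return edges - nodes + 2;
    }

    // V(G) from predicate (decision) nodes: V(G) = P + 1
    static int fromPredicates(int predicates) {
        return predicates + 1;
    }

    public static void main(String[] args) {
        System.out.println(fromEdgesAndNodes(11, 9)); // 4
        System.out.println(fromPredicates(3));        // 4
    }
}

Both methods agree, which is a useful consistency check when the flow graph has been drawn by hand.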
White Box Testing
Basis Path Testing – Steps: Select a basis set of
paths
 Independent Program Paths:
 Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
White Box Testing
Basis Path Testing – Steps: Generate test cases
for these paths.
A test case table is prepared for these paths, with the
following columns: Test id, Test Case Name, Test Case
Description, Step, Expected, Actual, Test case status
(Pass / Fail), Test priority, Defect Severity.
Black Box Testing
 It focuses on the functional requirements of the
software.
 Test sets are derived that fully exercise all
functional requirements.
 It uncovers different classes of error than
white box testing.
Black Box Testing
 It uncovers the following types of errors:
 Incorrect or missing functions
 Interface errors
 Errors in data structures
 Performance errors
 Initialization or termination errors
Black Box Testing
 Equivalence Partitioning:
 It divides the input domain of a program into
classes of data from which test cases can be
derived.
 An ideal test case single-handedly uncovers a
class of errors (e.g., incorrect processing of all
character data) that might otherwise require many
test cases to be executed before the general error is
observed.
Black Box Testing
 Equivalence Partitioning:
 Test-case design for equivalence partitioning is
based on an evaluation of equivalence classes for
an input condition
 An equivalence class represents a set of valid or
invalid states for input conditions.
Black Box Testing
 Equivalence Partitioning – Guidelines:
 If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.
If an input condition requires a specific value, one valid
and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined.
If an input condition is Boolean, one valid and one invalid
class are defined.
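As a small illustration, suppose an input condition specifies the range 18 to 60 (the range, the isEligible method, and the values chosen are assumptions, not from the source). One representative is drawn from the valid class and from each of the two invalid classes:

class EquivalencePartitioning {
    // Hypothetical unit under test: accepts ages in the valid range 18..60.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // One valid class (inside the range) and two invalid classes
        // (below the range, above the range), one representative each.
        int belowRange = 10;  // invalid class: age < 18
        int inRange = 35;     // valid class: 18 <= age <= 60
        int aboveRange = 75;  // invalid class: age > 60

        System.out.println(isEligible(belowRange)); // expected: false
        System.out.println(isEligible(inRange));    // expected: true
        System.out.println(isEligible(aboveRange)); // expected: false
    }
}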
Black Box Testing
 Boundary Value Analysis
 To check boundary conditions
 The elements at the edge of the domain are
selected and tested.
 Boundary value analysis leads to a selection of
test cases that exercise bounding values.
Black Box Testing
 Boundary Value Analysis – Guidelines:
 If an input condition specifies a range bounded by
values a and b, test cases should be designed with
values a and b and just above and just below a and b.
If an input condition specifies a number of values, test
cases should be developed that exercise the minimum
and maximum numbers. Values just above and below
minimum and maximum are also tested.
Black Box Testing
 Boundary Value Analysis – Guidelines:
 If internal program data structures have prescribed
boundaries (e.g., a table has a defined limit of 100
entries), be certain to design a test case to exercise
the data structure at its boundary.
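A minimal sketch of boundary value analysis, reusing the assumed 18-to-60 range from the equivalence partitioning sketch: the bounds a and b are exercised together with the values just below and just above each bound.

class BoundaryValueAnalysis {
    static boolean isEligible(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // For a range bounded by a = 18 and b = 60, test a and b plus
        // the values just below and just above each bound.
        int[] boundaryValues = {17, 18, 19, 59, 60, 61};
        for (int age : boundaryValues) {
            System.out.println(age + " -> " + isEligible(age));
        }
    }
}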
Black Box Testing Vs White Box
Testing
Black Box Testing | White Box Testing
It is called behavioral testing. | It is called glass box testing.
It examines some fundamental aspect of the system with little regard for the internal logical structure of the software. | The procedural details, all the logical paths, and all the internal data structures are closely examined.
The program cannot be tested 100 percent. | It leads to testing the program thoroughly.
This type of testing is suitable for large projects. | It is suitable for small projects.
Testing Strategy
A testing strategy begins with ‘testing-in-the-small’
and moves toward ‘testing-in-the-large’.
Testing Strategy
The testing levels correspond to the development activities:
Code generation – Unit test
Design modeling – Integration test
Analysis modeling – Validation test
System engineering – System test
Testing Strategy
 Unit Testing:
 Unit testing focuses verification effort on the
smallest unit of software design—the software
component or module.
 The unit test focuses on the internal processing
logic and data structures within the boundaries of a
component.
Testing Strategy
 Unit Testing:
 The module interface is tested to ensure that
information properly flows into and out of the
program unit under test.
 Local data structures are examined to ensure that
data stored temporarily maintains its integrity
during all steps in an algorithm’s execution
Testing Strategy
 Unit Testing:
 All independent paths through the control
structure are exercised to ensure that all statements
in a module have been executed at least once.
 Boundary conditions are tested to ensure that the
module operates properly at boundaries established
to limit or restrict processing.
 And finally, all error-handling paths are tested.
Testing Strategy
 Unit Testing:
 Unit testing is normally considered an adjunct
to the coding step.
 Each test case should be coupled with a set of
expected results.
Testing Strategy
 Unit Testing:
 Driver and/or stub software must often be developed
for each unit test.
 A driver is nothing more than a “main program” that
accepts test case data, passes such data to the
component (to be tested), and prints relevant results.
 Stubs serve to replace modules that are subordinate
(invoked by) the component to be tested.
Testing Strategy
 Unit Testing:
 A stub or “dummy subprogram” uses the
subordinate module’s interface, may do minimal
data manipulation, prints verification of entry, and
returns control to the module undergoing testing.
 Unit testing is simplified when a component with
high cohesion is designed.
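A minimal sketch of a driver and a stub (the component, interface, and values are hypothetical): the driver is a small main program that feeds test data to the component under test and prints results, while the stub replaces a subordinate module, prints verification of entry, and returns a dummy value.

class UnitTestDriver {
    // Subordinate module's interface, as seen by the component under test.
    interface TaxService { double taxRate(); }

    // Component under test: depends on a subordinate module for tax lookup.
    static double netPrice(double gross, TaxService taxService) {
        return gross + gross * taxService.taxRate();
    }

    // Stub: replaces the real subordinate module, does minimal data
    // manipulation, prints verification of entry, and returns control.
    static class TaxServiceStub implements TaxService {
        public double taxRate() {
            System.out.println("stub: taxRate() called");
            return 0.10; // fixed dummy value
        }
    }

    // Driver: a "main program" that accepts test case data, passes it to
    // the component under test, and prints relevant results.
    public static void main(String[] args) {
        double[] testInputs = {0.0, 100.0, 250.0};
        TaxService stub = new TaxServiceStub();
        for (double gross : testInputs) {
            System.out.println(gross + " -> " + netPrice(gross, stub));
        }
    }
}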
Testing Strategy
 Integration Testing:
 A group of dependent components is tested
together to ensure the quality of their
integration.
 The objective is to take unit-tested components
and build a program structure that has been
dictated by design.
Testing Strategy
 Integration Testing – Focus:
 Design and construction of software architecture
 Integrated functions or operations at subsystem level
 Interfaces and interactions between them.
 Resource integration and / or environment integration
Testing Strategy
 Integration Testing – Approach:
 Big bang Approach:
 All components are combined in advance
 The entire program is tested as a whole
 Correction is difficult because isolation of causes is
complicated by the vast expanse of the entire program.
Testing Strategy
 Integration Testing – Approach:
 Big bang Approach:
 Once these errors are corrected, new ones appear
and the process continues in a seemingly endless
loop.
Testing Strategy
 Integration Testing – Strategy:
 Top down integration
 Bottom up integration
Testing Strategy
 Integration Testing – Strategy:
 Top down integration:
 Top-down integration testing is an incremental
approach to construction of the software architecture.
 Modules are integrated by moving downward through
the control hierarchy, beginning with the main control
module.
Testing Strategy
 Integration Testing – Strategy:
 Top down integration:
 Modules subordinate (and ultimately subordinate)
to the main control module are incorporated into
the structure in either a depth-first or breadth-first
manner.
Testing Strategy
 Integration Testing – Strategy:
 Top down integration – Steps:
 The main control module is used as a test driver and stubs
are substituted for all components directly subordinate to
the main control module.
 Depending on the integration approach selected (i.e., depth
or breadth first), subordinate stubs are replaced one at a
time with actual components
Testing Strategy
 Integration Testing – Strategy:
 Top down integration – Steps:
 Tests are conducted as each component is integrated
 On completion of each set of tests, another stub is
replaced with the real component
 Regression testing may be conducted to ensure that
new errors have not been introduced
Testing Strategy
 Integration Testing – Strategy:
 Bottom Up integration:
 The modules at the lowest levels are integrated at
first, then integration is done by moving upward
through the control structure.
Testing Strategy
 Integration Testing – Strategy:
 Bottom Up integration – Steps:
 Low-level components are combined into clusters (sometimes
called builds) that perform a specific software subfunction.
 A driver (a control program for testing) is written to coordinate
test case input and output.
 The cluster is tested
 Drivers are removed and clusters are combined moving upward
in the program structure
Testing Strategy
 Integration Testing:
 Regression Testing:
“The reexecution of some subset of tests that have
already been conducted to ensure that changes
have not propagated unintended side effects”
Testing Strategy
 Integration Testing:
 Regression Testing – Classes of Test cases:
 A representative sample of tests that will exercise all
software functions
 Additional tests that focus on software functions that are
likely to be affected by the change
 Tests that focus on the software components that have been
changed
Testing Strategy
 Integration Testing:
 Smoke Testing:
 It is designed as a pacing mechanism for time-
critical projects, allowing the software team to
assess the project on a frequent basis.
Testing Strategy
 Integration Testing:
 Smoke Testing – Activities:
 Software components that have been translated into
code are integrated into a build. A build includes all
data files, libraries, reusable modules, and engineered
components that are required to implement one or
more product functions
Testing Strategy
 Integration Testing:
 Smoke Testing – Activities:
 A series of tests is designed to expose errors that will
keep the build from properly performing its function
 The build is integrated with other builds, and the
entire product (in its current form) is smoke tested
daily
Testing Strategy
 Integration Testing:
 Smoke Testing – Benefits:
 Integration risk is minimized
 The quality of the end product is improved
 Error diagnosis and correction are simplified
 Progress is easier to assess.
Testing Strategy
 Validation Testing:
 Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
“ Validation succeeds when software functions in a
manner that can be reasonably expected by the
customer.”
Testing Strategy
 Validation Testing – Criteria:
 Software validation is achieved through a series of
tests that demonstrate conformity with requirements.
 A test plan outlines the classes of tests to be
conducted, covering functional requirements,
behavioral characteristics, content, documentation,
and other requirements.
Testing Strategy
 Validation Testing – Conditions:
 The function or performance characteristic conforms
to specification and is accepted
 A deviation from specification is uncovered and a
deficiency list is created. It is often necessary to
negotiate with the customer to establish a method for
resolving deficiencies.
Testing Strategy
 Validation Testing – Configuration Review:
“The intent of the review is to ensure that all elements
of the software configuration have been properly
developed, are cataloged, and have the necessary
detail to encourage the support activities.”
Testing Strategy
 Validation Testing – Acceptance Testing:
 When custom software is built for one customer, a series
of acceptance tests are conducted to enable the customer to
validate all requirements.
 An acceptance test can range from an informal “test drive”
to a planned and systematically executed series of tests
Testing Strategy
 Validation Testing – Acceptance Testing:
 Alpha Testing:
 It is conducted at the developer’s site by a representative
group of end users.
 The software is used in a natural setting with the developer
“looking over the shoulder” of the users and recording
errors and usage problems.
 Alpha tests are conducted in a controlled environment.
Testing Strategy
 Validation Testing – Acceptance Testing:
 Beta Testing:
 It is conducted at one or more end-user sites.
 The beta test is a “live” application of the software in an
environment that cannot be controlled by the developer.
 The customer records all problems (real or imagined) that
are encountered during beta testing and reports these to the
developer at regular intervals.
Testing Strategy
 Validation Testing – Acceptance Testing:
 Beta Testing:
 As a result of problems reported during beta tests,
you make modifications and then prepare for
release of the software product to the entire
customer base.
Testing Strategy
 System Testing:
 It is actually a series of different tests whose
primary purpose is to fully exercise the computer-
based system.
 Although each test has a different purpose, all
work to verify that system elements have been
properly integrated and perform allocated
functions.
Testing Strategy
 System Testing – Focus:
 System functions and performance
 System reliability and recoverability
 System installation
 System behavior in the special conditions
 System user operations
 Integration of external software and the system.
Testing Strategy
 System Testing – Recovery Testing:
 Many computer-based systems must recover from
faults and resume processing with little or no
downtime.
 Recovery testing is a system test that forces the
software to fail in a variety of ways and verifies that
recovery is properly performed
Testing Strategy
 System Testing – Security Testing:
 Security testing attempts to verify that protection
mechanisms built into a system will, in fact, protect it
from improper penetration.
 It also verifies that protection mechanisms built into
the system prevent intrusion such as unauthorized
internal or external access or willful damage.
Testing Strategy
 System Testing – Stress Testing:
 Stress testing executes a system in a manner that
demands resources in abnormal quantity, frequency, or
volume.
 Sensitivity testing attempts to uncover data
combinations within valid input classes that may cause
instability or improper processing
Testing Strategy
 System Testing – Performance Testing:
 Performance testing is designed to test the run-time
performance of software within the context of an
integrated system.
 Resource utilization such as CPU load, throughput,
memory usage can be measured.
 Beta testing is useful for performance testing
Debugging
 Debugging occurs as a consequence of
successful testing. That is, when a test case
uncovers an error, debugging is the process
that results in the removal of the error
 The actual test results are compared with the
expected results.
 The suspected causes are identified and corrected.
Debugging
 Approaches:
 Brute Force - Using a “let the computer find
the error” philosophy, memory dumps are
taken, run-time traces are invoked, and the
program is loaded with output statements.
Debugging
 Approaches:
 Backtracking - Beginning at the site where a
symptom has been uncovered, the source code
is traced backward (manually) until the cause
is found
Debugging
 Approaches:
 Cause Elimination – It uses binary
partitioning to reduce the number of locations
where errors can exist.
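A minimal sketch of the binary-partitioning idea behind cause elimination (the change numbers and the failing change are invented): each probe halves the set of locations where the error can exist, until the cause is isolated.

class CauseElimination {
    // Hypothetical: the build fails once change #7 (0-based) is applied.
    static boolean failsWithChanges(int lastChangeApplied) {
        return lastChangeApplied >= 7;
    }

    public static void main(String[] args) {
        int lo = 0, hi = 15; // search among changes 0..15
        // Binary partitioning: each probe halves the remaining suspects.
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (failsWithChanges(mid)) {
                hi = mid;     // cause lies in the first half
            } else {
                lo = mid + 1; // cause lies in the second half
            }
        }
        System.out.println("first failing change: " + lo); // 7
    }
}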
Software Implementation
Techniques
 The goal of coding is to implement the design in
the best possible manner.
 Objectives:
 Programs developed in coding should be readable.
 They should execute efficiently
 The program should use a minimal amount of
memory.
Software Implementation
Techniques
 Coding Practices:
 Control construct
 The single entry and single exit constructs need to
be used.
 Uses of gotos
 Avoid use of goto statements
Software Implementation
Techniques
 Coding Practices:
 Information hiding
 Only access functions to the data structures
should be made visible; the information within
them must be hidden.
Software Implementation
Techniques
 Coding Practices:
 Nesting:
 It means defining one structure inside another.
 If the nesting is too deep, it is hard to
understand the code.
 Avoid deep nesting of the code, as in the sketch below.
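A small before/after illustration (the classification logic is invented): the deeply nested version is harder to follow than the flattened version that uses early returns.

class NestingExample {
    // Deeply nested version: harder to read.
    static String classifyNested(int age) {
        if (age >= 0) {
            if (age < 18) {
                return "minor";
            } else {
                if (age < 60) {
                    return "adult";
                } else {
                    return "senior";
                }
            }
        } else {
            return "invalid";
        }
    }

    // Flattened version: early returns avoid deep nesting.
    static String classifyFlat(int age) {
        if (age < 0) return "invalid";
        if (age < 18) return "minor";
        if (age < 60) return "adult";
        return "senior";
    }

    public static void main(String[] args) {
        System.out.println(classifyNested(35)); // adult
        System.out.println(classifyFlat(35));   // adult
    }
}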
Software Implementation
Techniques
 Coding Practices:
 User defined data types:
 Modern programming languages allow user-defined
data types such as enumerated types.
 They enhance the readability of the code.
Software Implementation
Techniques
 Coding Practices:
 Module size:
 A module that is too large is unlikely to be
functionally cohesive.
 Module Interface:
 A module whose interface has more than five
parameters should be broken into multiple modules
with simpler interfaces.
Software Implementation
Techniques
 Coding Practices:
 Side Effects:
 If some part of the code is changed, it may cause
unexpected side effects elsewhere.
 Robustness:
 A program is said to be robust if it does something
reasonable even when an exceptional condition occurs.
Software Implementation
Techniques
 Coding Practices:
 Switch case with defaults:
 The value passed to a switch statement may be
unpredictable; a default case allows the switch
statement to handle such values without any
problem, as in the sketch below.
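A minimal sketch (the day-of-week mapping is an invented example): the default case handles any unpredictable value instead of failing silently.

class SwitchDefault {
    static String dayType(int dayOfWeek) {
        switch (dayOfWeek) {
            case 1: case 2: case 3: case 4: case 5:
                return "weekday";
            case 6: case 7:
                return "weekend";
            default:
                // Handles unpredictable values explicitly.
                return "invalid day: " + dayOfWeek;
        }
    }

    public static void main(String[] args) {
        System.out.println(dayType(3)); // weekday
        System.out.println(dayType(9)); // invalid day: 9
    }
}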
Software Implementation
Techniques
 Coding Practices:
 Empty catch block:
 Whenever an exception is caught, take some
default action, even if it is just writing a print
statement, as in the sketch after this list.
 Empty if and while statements:
 Useless checks should be avoided.
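A minimal sketch of a non-empty catch block (the parse example is invented): even a single print statement records that the exception occurred instead of silently swallowing it.

class CatchBlockExample {
    public static void main(String[] args) {
        try {
            int value = Integer.parseInt("not a number");
            System.out.println(value);
        } catch (NumberFormatException e) {
            // Never leave this block empty: at minimum, record that the
            // exception occurred so the failure is not silently lost.
            System.err.println("parse failed: " + e.getMessage());
        }
    }
}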
Software Implementation
Techniques
 Coding Practices:
 Check for read return:
 The return value of a read operation must always
be checked, as in the sketch after this list.
 Return from finally Block:
 Avoid returning from within a finally block, as it
can swallow exceptions.
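A minimal sketch of checking the return of a read operation (an in-memory stream is used so the example is self-contained): read() may return fewer bytes than requested, or -1 at end of stream.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReadReturnCheck {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("abc".getBytes());
        byte[] buffer = new byte[8];
        int bytesRead = in.read(buffer);
        // Always check the return value: read() may return fewer bytes
        // than requested, or -1 at end of stream.
        if (bytesRead == -1) {
            System.out.println("end of stream");
        } else {
            System.out.println("read " + bytesRead + " bytes");
        }
    }
}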
Software Implementation
Techniques
 Coding Practices:
 Trusted Data sources:
 Counter check should be made before accessing
the input data.
 Correlated Parameters:
 Check the correlation between related data items
before performing any operation on them.
Software Implementation
Techniques
 Coding Practices:
 Exceptions Handling:
 In order to make the software more reliable, it is
necessary to write code for exception handling.
Software Implementation
Techniques
 Coding Standards:
 Any good software development approach
suggests adhering to well-defined standards
or rules for coding.
Software Implementation
Techniques
 Coding Standards:
 Naming Conventions:
 Package name and variable names should be in
lower case.
 Variable names must not begin with numbers.
 The type name should be a noun and should start
with a capital letter, as illustrated below.
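A small illustration of these conventions (all names are invented):

// Package names are in lower case (hypothetical package):
// package com.example.billing;

class NamingConventions {            // type name: a noun, capitalized
    int itemCount;                   // variable name: lower case
    // int 2ndItem;                  // illegal: begins with a number

    public static void main(String[] args) {
        NamingConventions example = new NamingConventions();
        example.itemCount = 3;
        System.out.println(example.itemCount);
    }
}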
Software Implementation
Techniques
 Coding Standards:
Files:
 Reader must get an idea about the purpose of the
file by its name.
 The file extension must be .java
 The name of the file and the class defined in the
file must have the same name.
Software Implementation
Techniques
 Coding Standards:
Commenting / Layout:
 Comments are the non-executable part of the code.
 They enhance the readability of the code.
 Their purpose is to explain the logic of the
program.
Software Implementation
Techniques
 Coding Standards:
Commenting / Layout:
 Single-line comments are given with //
 Comments must be given for variable names to
explain their purpose.
Software Implementation
Techniques
 Coding Standards:
 Statements:
 Declare some related variables on same line and
unrelated variables on another line.
 Class variable should never be declared public.
 Make use of only the loop control variable within the for loop.
 Avoid use of do..while loops and complex conditional
expressions.
Software Implementation
Techniques
 Refactoring:
 A change made to the internal structure of
software to make it easier to understand and
cheaper to modify, without changing the
system’s observable behavior.
 It improves the design of the system.
Software Implementation
Techniques
 Refactoring:
 Coupling may get reduced.
 Cohesion may get increased.
 The open-closed principle may be followed
more strictly.
Software Implementation
Techniques
 Refactoring:
 Refactoring carries the risk of breaking existing
behavior. In order to mitigate or avoid this risk, the
following two rules must be followed:
 Refactor in small steps.
 Make use of test scripts to test the existing
functionality, as in the sketch after this slide.
 Due to refactoring, the cost of testing and
debugging gets reduced.
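A minimal before/after sketch of a small-step refactoring (the discount calculation is invented): the duplicated expression is extracted into one method, and existing tests confirm the behavior is unchanged.

class RefactoringExample {
    // Before: duplicated discount logic.
    static double memberPrice(double base)   { return base - base * 0.10; }
    static double seasonalPrice(double base) { return base - base * 0.25; }

    // After: the duplicated expression is extracted into one method.
    static double discounted(double base, double rate) {
        return base - base * rate;
    }

    public static void main(String[] args) {
        // Existing tests guard the behavior: outputs must not change.
        System.out.println(memberPrice(100.0)   == discounted(100.0, 0.10)); // true
        System.out.println(seasonalPrice(100.0) == discounted(100.0, 0.25)); // true
    }
}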
Maintenance and Reengineering
 Software maintenance is an activity in which
program is modified after it has been put into use.
 A process in which changes are implemented by
either modifying the existing system’s
architecture or by adding new components to the
system.
Maintenance and Reengineering
 Need for Maintenance:
 Usually the system requirements are changing and
to meet these requirements some changes are
incorporated in the system.
 When a system is installed in an environment,
it changes that environment, and this ultimately
changes the system requirements.
Maintenance and Reengineering
 Types of Software Maintenance:
 Corrective Maintenance – Correcting the software faults
 Adaptive Maintenance – Adapting the change in
environment
 Perfective Maintenance – Changing the system to meet the
new requirements
 Preventive Maintenance – To improve future
maintainability
Maintenance and Reengineering
Software Maintenance Process:
 Initially the request for change is made
 Change Management: The status of all change
requests is identified and described.
 Impact Analysis:
 Identify all systems and system products affected by a
change request.
 Make an estimate of the resources needed to effect the change.
Maintenance and Reengineering
Software Maintenance Process:
 System release planning: The schedule and
contents of the software release are planned.
 Change implementation: It is done by first
designing the changes, then coding them, and
finally testing them. Preferably, regression testing
is performed while implementing the changes.
Maintenance and Reengineering
Software Maintenance Process:
 System release: The following should be described:
 Documentation
 Software
 Training
 Hardware changes
 Data conversion
Maintenance and Reengineering
 Issues in Software Maintenance:
 Technical: Limited understanding of system,
testing, impact analysis, maintainability
 Management: Staffing problem, process issue,
organizational structure and outsourcing.
Maintenance and Reengineering
 Issues in Software Maintenance:
 Cost estimation: It is based on the cost and
experience of past projects.
 Measurement: Size, schedule, quality,
resource utilization, design complexity and
reliability.
Business Process Reengineering
“The implementation of radical change in the
business process to achieve breakthrough
results.”
Business Process Reengineering
 BPR Model
 Business Process Definition:
 The processes are defined based on business
goals.
 Factors:
 Cost reduction
 Quality improvement
Business Process Reengineering
 BPR Model
 Process identification:
 The critical processes are identified
 They are prioritized according to their need for
change
Business Process Reengineering
 BPR Model
 Process Evaluation:
 The existing processes are analyzed
 The time and cost required by these tasks are
measured
 The quality performance issues are identified and
isolated.
Business Process Reengineering
 BPR Model
 Process specification and Design:
 Use cases are prepared for each process that
needs to be redesigned in BPR.
 Each use case captures a scenario that delivers
some outcome to the customer.
 On analyzing the use cases, new tasks for the
process are designed.
Business Process Reengineering
 BPR Model
 Prototyping: The redesigned process is
prototyped for testing purposes.
 Refinement and Instantiation:
 The feedback on each prototype in the BPR model
is collected.
 The processes are refined based on the feedback.
Business Process Reengineering
 Reengineering Process Model:
 Software reengineering is a process of modifying
the system for maintenance purposes.
 Inventory Analysis:
 The software organization should possess an
inventory of all of its applications.
 This inventory should be revisited periodically.
Business Process Reengineering
 Reengineering Process Model:
 Document Restructuring:
 Creating full documentation is time consuming;
if a system works, one may choose to live with
its weak documentation.
 Update poor documentation when it is needed.
 For critical systems, rewrite the documentation if
needed.
Business Process Reengineering
 Reengineering Process Model:
 Reverse Engineering:
 The process of design recovery
 In reverse engineering the data, architectural and
procedural information is extracted from a source
code.
 It creates a representation of the program at a
higher level of abstraction than source code.
Business Process Reengineering
 Reengineering Process Model:
 Code Restructuring:
 The code is rewritten in a modern programming
language.
 The resultant restructured code is tested and
reviewed.
Business Process Reengineering
 Reengineering Process Model:
 Data Restructuring:
 The changes in data demand for changes in
architecture or code.
 Forward Engineering:
 A process in which design information is
recovered from the existing software and used to
reconstitute the existing system in an effort to
improve its overall quality.
Business Process Reengineering
Software Engineering | Reverse Engineering
A discipline in which theories, methods, and tools are applied to develop a professional software product. | The dirty or unstructured code is taken, processed, and restructured.
User requirements are available for the software engineering process. | A dirty or unstructured code is available initially.
It is a simple and straightforward approach. | It is complex, because cleaning the dirty or unstructured code requires more effort.
Documentation or specification of the product is useful to the end user. | Documentation or specification of the product is useful to the developer.
Business Process Reengineering
Reverse Engineering | Re-Engineering
A process of finding out how a product works from an already created software system. | The software system is observed and built again for better use.
The source code is re-created from the compiled code. | A new piece of code with similar or better functionality than the existing one is created.
It is carried out to understand the inner working of an artifact without the availability of any documents. | It is carried out to design something again, many times from scratch.

CS8494 SOFTWARE ENGINEERING Unit-4

  • 1.
    CS8494 – SoftwareEngineering III Year / V Semester
  • 2.
    UNIT IV TESTING ANDMAINTENANCE Software testing fundamentals-Internal and external views of Testing-white box testing - basis path testing-control structure testing-black box testing- Regression Testing – Unit Testing – Integration Testing – Validation Testing – System Testing And Debugging –Software Implementation Techniques: Coding practices-Refactoring-Maintenance and Reengineering-BPR model-Reengineering process model- Reverse and Forward Engineering
  • 3.
    Testing  Testing isthe process of exercising a program with the specific intent of finding errors prior to delivery to the end user.  A critical element of software quality assurance and represents the ultimate review of specification, design and coding.
  • 4.
    Testing  Objectives -Glen Myers states:  Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet undiscovered error. A successful test is one that uncovers an as-yet- undiscovered error.
  • 5.
    Testing  Principles:  Alltests should be traceable to customer requirements  Tests should be planned long before testing begins  The Pareto principle applies to software
  • 6.
    Testing Errors: A mistake,misconception, or misunderstanding on the part of a software developer.  Faults: A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its
  • 7.
    Testing Failures: A failureis the inability of a software system or component to perform its required functions within specified performance requirements.
  • 8.
    Testing Characteristics of GoodTester:  The tester must able to understand the software. He should be in position to find out high probability errors.  The tester must be able to write simple test cases.  The tester should conduct the tests which should have highest likelihood of uncovering errors.
  • 9.
    Internal and ExternalView of Testing  Black Box Testing:  It is tested whether the input is accepted properly and output is correctly produced.  The major focus on functions, operations,external interfaces and information.
  • 10.
    Internal and ExternalView of Testing  White Box Testing:  The procedural details are closely examined.  The internals of software are tested to make sure that they operate according to specifications and designs.  Focus on internal structures, logic paths, Control flows, data flows and loops
  • 11.
    White Box Testing It is also called as structural testing  Derivation of test cases is according to program structure. To exercise all program statements.
  • 12.
    White Box Testing Condition Testing  To test the logical conditions in the program module the condition testing is used.  This condition can be a Boolean or a relational expression.
  • 13.
    White Box Testing Condition Testing  It focus on each testing condition in the program  The branch testing is a condition testing strategy in which for a compound condition each and every true or false branches are tested.
  • 14.
    White Box Testing Loop Testing:  Loop testing a white box testing technique performed to validate the loops. There are four kinds of loops as mentioned below: Simple Loops Nested Loops Concatenated Loops Unstructured Loops
  • 15.
    White Box Testing Why do Loop Testing?  Loop Testing is done for the following reasons Testing can fix the loop repetition issues Loops testing can reveal performance/capacity bottlenecks By testing loops, the uninitialized variables in the loop can be determined It helps to identify loops initialization problems.
  • 16.
    White Box Testing LoopTesting - Simple loop:  A simple loop is tested in the following way:  Skip the entire loop Make 1 passes through the loop Make 2 passes through the loop Make a passes through the loop where a<b, n is the maximum number of passes through the loop Make b, b-1; b+1 passes through the loop where
  • 17.
    White Box Testing LoopTesting - Nested loop:
  • 18.
    White Box Testing LoopTesting - Nested loop:  Set all the other loops to minimum value and start at the innermost loop For the innermost loop, perform a simple loop test and hold the outer loops at their minimum iteration parameter value Perform test for the next loop and work outward. Continue until the outermost loop has been tested.
  • 19.
    White Box Testing LoopTesting - Concatenated loop:  If two loops are independent of each other then they are tested using simple loops or else test them as nested loops.
  • 20.
    White Box Testing LoopTesting - Unstructured Loops:  It requires restructuring of the design to reflect the use of the structured programming constructs.
  • 21.
    White Box Testing BasisPath Testing:  Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
  • 22.
    White Box Testing BasisPath Testing:  Steps:  Design the flow graph for the program or a component.  Calculate the cyclomatic complexity  Select a basis set of path  Generate test cases for these paths.
  • 23.
    White Box Testing BasisPath Testing – Steps: Design the flow graph for the program or a component.  Flow graph is a graphical representation of logical control flow of the program.  Circle – Flow graph node  Arrow – Edges or links  Regions – The areas bounded by nodes and edges.
  • 24.
    White Box Testing BasisPath Testing – Steps: Design the flow graph for the program or a component.
  • 25.
  • 26.
    White Box Testing BasisPath Testing – Steps: Calculate the cyclomatic complexity:  a software metric that provides a quantitative measure of the logical complexity of a program.  It defines the number of independent paths in the basis set of a program and with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
  • 27.
    White Box Testing BasisPath Testing – Steps: Calculate the cyclomatic complexity – Methods:  The number of regions of the flow graph  V(G) = E – N + 2, V (G) = 11 – 9 + 2 = 4  V(G) = P + 1, P - the number of predicate nodes ( Decision Making Nodes)  V (G) = 3 + 1 = 4
  • 28.
    White Box Testing BasisPath Testing – Steps: Select a basis set of path  Independent Program Paths:  Path 1: 1-11 Path 2: 1-2-3-4-5-10-1-11 Path 3: 1-2-3-6-8-9-10-1-11 Path 4: 1-2-3-6-7-9-10-1-11
  • 29.
    White Box Testing BasisPath Testing – Steps: Generate test cases for these paths. Test id Test Case Name Test case Descrip tion Step Expected Actual Test case status ( Pass / Fail) Test priority Defect Severity
  • 30.
    Black Box Testing It focus on the functional requirements of the software.  Test sets are derived that fully exercise all functional requirements.  It uncovers different classes of error than white box testing.
  • 31.
    Black Box Testing It uncovers following types of errors.  Incorrect or missing functions  Interface errors  Errors in data structures  Performance errors  Initialization or termination errors
  • 32.
    Black Box Testing Equivalence Partitioning:  It divides the input domain of a program into classes of data from which test cases can be derived.  An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many test cases to be executed before the general error is
  • 33.
    Black Box Testing Equivalence Partitioning:  Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition  An equivalence class represents a set of valid or invalid states for input conditions.
  • 34.
    Black Box Testing Equivalence Partitioning – Guidelines:  If an input condition specifies a range, one valid and two invalid equivalence classes are defined. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. If an input condition is Boolean, one valid and one invalid class are defined.
  • 35.
    Black Box Testing Boundary Value Analysis  To check boundary conditions  The elements at the edge of the domain are selected and tested.  Boundary value analysis leads to a selection of test cases that exercise bounding values.
  • 36.
    Black Box Testing Boundary Value Analysis – Guidelines:  If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below minimum and maximum are also tested.
  • 37.
    Black Box Testing Boundary Value Analysis – Guidelines:  If internal program data structures have prescribed boundaries (e.g., a table has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
  • 38.
    Black Box TestingVs White Box Testing Black Box Testing White Box Testing It is called behavioral testing It is called glass box testing It examines some fundamental aspect of the system with little regard internal logical structure of the software The procedural details, all the logical paths, all the internal data structures are closely examined. Program cannot be tested 100 percent It leads to test the program thoroughly This type of testing suitable for large projects It is suitable for small projects.
  • 39.
    Testing Strategy ‘testing-in-the-small’ andmove toward ‘testing-in-the-large’
  • 40.
    Testing Strategy System engineering Analysismodeling Design modeling Code generation Unit test Integration test Validation test System test
  • 41.
    Testing Strategy  UnitTesting:  Unit testing focuses verification effort on the smallest unit of software design—the software component or module.  The unit test focuses on the internal processing logic and data structures within the boundaries of a component.
  • 42.
  • 43.
    Testing Strategy  UnitTesting:  The module interface is tested to ensure that information properly flows into and out of the program unit under test.  Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution
  • 44.
    Testing Strategy  UnitTesting:  All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once.  Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.  And finally, all error-handling paths are tested.
  • 45.
    Testing Strategy  UnitTesting:  Unit testing is normally considered as an adjunct to the coding step  Each test case should be coupled with a set of expected results.
  • 46.
    Testing Strategy  UnitTesting:  Driver and/or stub software must often be developed for each unit test.  A driver is nothing more than a “main program” that accepts test case data, passes such data to the component (to be tested), and prints relevant results.  Stubs serve to replace modules that are subordinate (invoked by) the component to be tested.
  • 47.
  • 48.
    Testing Strategy  UnitTesting:  A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.  Unit testing is simplified when a component with high cohesion is designed.
  • 49.
    Testing Strategy  IntegrationTesting:  A group of dependent components are tested together to ensure their quality of their integration unit.  The objective is to take unit-tested components and build a program structure that has been dictated by design.
  • 50.
    Testing Strategy  IntegrationTesting – Focus:  Design and construction of software architecture  Integrated functions or operations at subsystem level  Interfaces and interactions between them.  Resource integration and / or environment integration
  • 51.
    Testing Strategy  IntegrationTesting – Approach:  Big bang Approach:  All components are combined in advance  The entire program is tested as a whole  Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program.
  • 52.
    Testing Strategy  IntegrationTesting – Approach:  Big bang Approach:  Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
  • 53.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration  Bottom up integration
  • 54.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration:  Top-down integration testing is an incremental approach to construction of the software architecture.  Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.
  • 55.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration:
  • 56.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration:  Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
  • 57.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration – Steps:  The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.  Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components
  • 58.
    Testing Strategy  IntegrationTesting – Strategy:  Top down integration – Steps:  Tests are conducted as each component is integrated  On completion of each set of tests, another stub is replaced with the real component  Regression testing may be conducted to ensure that new errors have not been introduced
  • 59.
    Testing Strategy  IntegrationTesting – Strategy:  Bottom Up integration:  The modules at the lowest levels are integrated at first, then integration is done by moving upward through the control structure.
  • 60.
    Testing Strategy  IntegrationTesting – Strategy:  Bottom Up integration – Steps:  Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunctions.  A driver (a control program for testing) is written to coordinate test case input and output.  The cluster is tested  Drivers are removed and clusters are combined moving upward in the program structure
  • 61.
    Testing Strategy  IntegrationTesting – Strategy:  Bottom Up integration:
  • 62.
    Testing Strategy  IntegrationTesting:  Regression Testing: “The reexecution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects”
  • 63.
    Testing Strategy  IntegrationTesting:  Regression Testing – Classes of Test cases:  A representative sample of tests that will exercise all software functions  Additional tests that focus on software functions that are likely to be affected by the change  Tests that focus on the software components that have been changed
  • 64.
    Testing Strategy  IntegrationTesting:  Smoke Testing:  It is designed as a pacing mechanism for time- critical projects, allowing the software team to assess the project on a frequent basis.
  • 65.
    Testing Strategy  IntegrationTesting:  Smoke Testing – Activities:  Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions
  • 66.
    Testing Strategy  IntegrationTesting:  Smoke Testing – Activities:  A series of tests is designed to expose errors that will keep the build from properly performing its function  The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily
  • 67.
    Testing Strategy  IntegrationTesting:  Smoke Testing – Benefits:  Integration risk is minimized  The quality of the end product is improved  Error diagnosis and correction are simplified  Progress is easier to assess.
  • 68.
    Testing Strategy  ValidationTesting:  Verification: "Are we building the product right?" Validation: "Are we building the right product?" “ Validation succeeds when software functions in a manner that can be reasonably expected by the customer.”
  • 69.
    Testing Strategy  ValidationTesting – Criteria:  Software validation is achieved through a series of tests that demonstrate conformity with requirements.  A test plan outlines - functional requirements, all behavioral characteristics, Content, documentation and other requirements.
  • 70.
    Testing Strategy  ValidationTesting – Conditions:  The function or performance characteristic conforms to specification and is accepted  A deviation from specification is uncovered and a deficiency list is created. It is often necessary to negotiate with the customer to establish a method for resolving deficiencies.
  • 71.
    Testing Strategy  ValidationTesting – Configuration Review: “The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to encourage the support activities.”
  • 72.
    Testing Strategy  ValidationTesting – Acceptance Testing:  When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements.  An acceptance test can range from an informal “test drive” to a planned and systematically executed series of tests
  • 73.
    Testing Strategy  ValidationTesting – Acceptance Testing:  Alpha Testing:  It is conducted at the developer’s site by a representative group of end users.  The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording errors and usage problems.  Alpha tests are conducted in a controlled environment.
  • 74.
    Testing Strategy  ValidationTesting – Acceptance Testing:  Beta Testing:  It conducted at one or more end-user sites.  The beta test is a “live” application of the software in an environment that cannot be controlled by the developer The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals.
  • 75.
    Testing Strategy  ValidationTesting – Acceptance Testing:  Beta Testing:  As a result of problems reported during beta tests, you make modifications and then prepare for release of the software product to the entire customer base.
  • 76.
    Testing Strategy  SystemTesting:  It is actually a series of different tests whose primary purpose is to fully exercise the computer- based system.  Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated
  • 77.
    Testing Strategy  SystemTesting – Focus:  System functions and performance  System reliability and recoverability  System installation  System behavior in the special conditions  System user operations  Integration of external software and the system.
  • 78.
    Testing Strategy  SystemTesting – Recovery Testing:  Many computer-based systems must recover from faults and resume processing with little or no downtime.  Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed
  • 79.
    Testing Strategy  SystemTesting – Security Testing:  Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.  It also verifies that protection mechanisms built into the system prevent intrusion such as unauthorized internal or external access or willful damage.
  • 80.
    Testing Strategy  SystemTesting – Stress Testing:  Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.  Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing
  • 81.
    Testing Strategy  SystemTesting – Performance Testing:  Performance testing is designed to test the run-time performance of software within the context of an integrated system.  Resource utilization such as CPU load, throughput, memory usage can be measured.  Beta testing is useful for performance testing
  • 82.
    Debugging  Debugging occursas a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error  The actual test results are compared with the expected results.  The suspected causes are identified and
  • 83.
    Debugging  Approaches:  BruteForce - Using a “let the computer find the error” philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements.
  • 84.
    Debugging  Approaches:  Backtracking- Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the cause is found
  • 85.
    Debugging  Approaches:  CauseElimination – It uses binary partitioning to reduce the number of locations where errors can exist.
  • 86.
    Software Implementation Techniques  Thegoal coding is to implement the design in the best possible manner.  Objectives:  Programs developed in coding should be readable.  They should execute efficiently  The program should utilize less amount of memory
  • 87.
    Software Implementation Techniques  CodingPractices:  Control construct  The single entry and single exit constructs need to be used.  Uses of gotos  Avoid use of goto statements
  • 88.
    Software Implementation Techniques  CodingPractices:  Information hiding  In that case only access functions to the data structures must be made visible and the information present in it must be hidden.
  • 89.
    Software Implementation Techniques  CodingPractices:  Nesting:  It means defining one structure inside another.  If the nesting is too deep then it hard to understand the code.  Avoid deep nesting of the code
  • 90.
    Software Implementation Techniques  CodingPractices:  User defined data types:  Modern programming languages allow the user to use defined data type as the enumerated types.  It enhances the readability of the code.
  • 91.
    Software Implementation Techniques  CodingPractices:  Module size:  The large size of the module will not be functionally cohesive.  Module Interface:  The module interface with more than five parameters must be broken into multiple modules
  • 92.
    Software Implementation Techniques  CodingPractices:  Side Effects:  If some part of the code is changed randomly then it will cause some side effect.  Robustness:  The program is said to robust if it does something even though some unexceptional condition occurs.
  • 93.
    Software Implementation Techniques  CodingPractices:  Switch case with defaults:  The choice being passes to the switch case statement may have some unpredictable value, and then the default case will help to execute the switch case statement without any problem.
  • 94.
    Software Implementation Techniques  CodingPractices:  Empty catch block:  Take some default action even if it is just writing some print statement, whenever the exception is caught.  Empty if and while statements:  Useless checks should be avoided.
  • 95.
    Software Implementation Techniques  CodingPractices:  Check for read return:  The return value must be checked for the read operation.  Return from finally Block:  The return value must come from finally block whenever it is possible.
  • 96.
    Software Implementation Techniques  CodingPractices:  Trusted Data sources:  Counter check should be made before accessing the input data.  Correlated Parameters:  To check the co-relation before performing any operation on those data items.
  • 97.
    Software Implementation Techniques  CodingPractices:  Exceptions Handling:  In order to make the software more reliable, it necessary to write the code for execution.
  • 98.
    Software Implementation Techniques  CodingStandards:  Any good software development approach suggests to adhere to some well defined standards or rules for coding.
  • 99.
    Software Implementation Techniques  CodingStandards:  Naming Conventions:  Package name and variable names should be in lower case.  Variable names must not begin with numbers.  The type name should be noun and it should start with capital letter.
  • 100.
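A short, hypothetical Java file illustrating these naming rules (the inventory package and StockItem names are invented for the example): the package is lower case, the type name is a capitalized noun, and variable names are lower case and never start with a digit.

    // Hypothetical file inventory/StockItem.java
    package inventory;               // package name in lower case

    public class StockItem {         // type name: capitalized noun
        private String itemName;     // lower-case variable name
        private int quantityOnHand;  // not "2ndQuantity": no leading digit

        public StockItem(String itemName, int quantityOnHand) {
            this.itemName = itemName;
            this.quantityOnHand = quantityOnHand;
        }
    }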
    Software Implementation Techniques  CodingStandards: Files:  Reader must get an idea about the purpose of the file by its name.  The file extension must be java  The name of the file and the class defined in the file must have the same name.
  • 101.
    Software Implementation Techniques  CodingStandards: Commenting / Layout:  They are non executable part of the code.  It enhances the readability of the code.  The purpose of the code is to explain the logic of the program.
  • 102.
    Software Implementation Techniques  CodingStandards: Commenting / Layout:  Single line comments must be given by //  For the names of the variables comments must be given.
  • 103.
    Software Implementation Techniques  CodingStandards:  Statements:  Declare some related variables on same line and unrelated variables on another line.  Class variable should never be declared public.  Make use of only loop control within the for loop.  Avoid use of do.. While, complex conditional
  • 104.
    Software Implementation Techniques  Refactoring: A change made to the internal structure of software for better understanding and performing cheaper to modifications without changing system’s behavior.  To improve the design of the system
  • 105.
    Software Implementation Techniques  Refactoring: The coupling may get reduced  Cohesion may get increased.  The open-closed principle may get followed strictly.
  • 106.
    Software Implementation Techniques  Refactoring: In order to mitigate or avoid this risk following two rules must be followed.  Re-factor in small steps  For testing the existing functionalities make use of the test scripts.  Due to refactoring, the cost of testing and debugging gets reduced.
  • 107.
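A minimal sketch of one small refactoring step in Java (the invoice example and its method names are hypothetical): an extract-method change that leaves the observable behavior unchanged while making each piece easier to understand and test.

    public class InvoiceExample {
        // Before: one method mixes calculation and formatting.
        static String printOwedBefore(double amount, double taxRate) {
            double tax = amount * taxRate;
            double total = amount + tax;
            return "Total owed: " + String.format("%.2f", total);
        }

        // After: the calculation is extracted into its own method.
        // Same output as before, but the logic is now reusable and
        // independently testable.
        static String printOwed(double amount, double taxRate) {
            return "Total owed: " + String.format("%.2f", totalWithTax(amount, taxRate));
        }

        static double totalWithTax(double amount, double taxRate) {
            return amount + amount * taxRate;
        }
    }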
    Maintenance and Reengineering Software maintenance is an activity in which program is modified after it has been put into use.  A process in which changes are implemented by either modifying the existing system’s architecture or by adding new components to the system.
  • 108.
    Maintenance and Reengineering Need for Maintenance:  Usually the system requirements are changing and to meet these requirements some changes are incorporated in the system.  When a system a system is installed in an environment, it changes that environment. This ultimately changes the system requirements.
  • 109.
    Maintenance and Reengineering Types of Software Maintenance:  Corrective Maintenance – Correcting the software faults  Adaptive Maintenance – Adapting the change in environment  Perfective Maintenance – Changing the system to meet the new requirements  Preventive Maintenance – To improve future maintainability
  • 110.
    Maintenance and Reengineering SoftwareMaintenance Process:  Initially the request for change is made  Change Management: The status of all the change requests is identified, described.  Impact Analysis:  Identify all systems and system products affected by a change request.  Make an estimate of the resources needed to effect the
  • 111.
    Maintenance and Reengineering SoftwareMaintenance Process:  System release planning: The schedule and contents of software release is planned.  Change implementation: It can be done by first designing the changes, then coding for these changes and finally testing the changes. Preferably the regression testing must be performed while
  • 112.
    Maintenance and Reengineering SoftwareMaintenance Process:  System release:  Documentation  Software  Training  Hardware changes  Data conversion should be described
  • 113.
    Maintenance and Reengineering Issues in Software Maintenance:  Technical: Limited understanding of system, testing, impact analysis, maintainability  Management: Staffing problem, process issue, organizational structure and outsourcing.
  • 114.
    Maintenance and Reengineering Issues in Software Maintenance:  Cost estimation: It is based on cost, experience of projects.  Measurement: Size, schedule, quality, resource utilization, design complexity and reliability.
  • 115.
Business Process Reengineering
 "The implementation of radical change in the business process to achieve breakthrough results."
    Business Process Reengineering BPR Model  Business Process Definition:  The processes are defined based on business goals.  Factors:  Cost reduction  Quality improvement
  • 117.
    Business Process Reengineering BPR Model  Process identification:  The critical processes are identified  They are prioritized according to their need for change
  • 118.
    Business Process Reengineering BPR Model  Process Evaluation:  The existing processes are analyzed  The time and cost required by these tasks are measured  The quality performance issues are identified and isolated.
  • 119.
    Business Process Reengineering BPR Model  Process specification and Design:  The use cases are prepared for each process that need to be redesigned in BPR  Each use case captures the scenario that give out some outcome to the customer.  On analyzing the use cases, new tasks are
  • 120.
    Business Process Reengineering BPR Model  Prototyping: The redesigned process is prototyped for testing purpose  Refinement and Instantiation:  The feedback for each prototype in BPR models collected.  The processes are refined based on the feedback.
  • 121.
    Business Process Reengineering Reengineering Process Model:  Software reengineering is a process of modifying the system for maintenance purpose  Inventory Analysis:  The software organization possess the inventory of all the required applications.  This inventory should be revisited periodically.
  • 122.
    Business Process Reengineering Reengineering Process Model:  Document Restructuring:  Instead of having time consuming documentation, remain with weak documentation.  Update poor documentation if needed.  For the critical systems, rewrite the documents if needed.
  • 123.
    Business Process Reengineering Reengineering Process Model:  Reverse Engineering:  The process of design recovery  In reverse engineering the data, architectural and procedural information is extracted from a source code.  It creates a representation of the program at a
  • 124.
    Business Process Reengineering Reengineering Process Model:  Code Restructuring:  The code is rewritten in modern programming language.  The resultant restructured code is tested and reviewed.
  • 125.
    Business Process Reengineering Reengineering Process Model:  Data Restructuring:  The changes in data demand for changes in architecture or code.  Forward Engineering:  A process in which the design information is recovered from the existing software and the
  • 126.
Business Process Reengineering
 Software Engineering vs Reverse Engineering:
 Software Engineering: A discipline in which theories, methods and tools are applied to develop a professional software product.
   Reverse Engineering: Dirty or unstructured code is taken, processed and restructured.
 Software Engineering: User requirements are available for the software engineering process.
   Reverse Engineering: Only dirty or unstructured code is available initially.
 Software Engineering: It is a simple and straightforward approach.
   Reverse Engineering: It is complex, because cleaning dirty or unstructured code requires more effort.
 Software Engineering: Documentation or specification of the product is useful to the end user.
   Reverse Engineering: Documentation or specification of the product is useful to the developer.
Business Process Reengineering
 Reverse Engineering vs Re-Engineering:
 Reverse Engineering: A process of finding out how a product works from an already created software system.
   Re-Engineering: Observing the software system and building it again for better use.
 Reverse Engineering: The source code is re-created from the compiled code.
   Re-Engineering: A new piece of code with similar or better functionality than the existing one is created.
 Reverse Engineering: Carried out to understand the inner working of an artifact when no documents are available.
   Re-Engineering: Carried out to design something again, many times from scratch.