2. Objectives
• To understand the phases in a software project
• To understand fundamental concepts of requirements engineering and analysis modeling
• To understand the basics of object-oriented concepts
• To understand the major considerations for enterprise integration and deployment
• To learn various testing and project management techniques
3. COURSE OUTCOMES
At the end of the course students will be able to:
CO 1: Compare different process models. (Bloom's level C4)
CO 2: Formulate concepts of requirements engineering and analysis modeling. (C4)
CO 3: Understand the fundamentals of object-oriented concepts. (C3)
CO 4: Apply a systematic procedure for software design. (C4)
CO 5: Find errors with various testing techniques. (C4)
CO 6: Evaluate project schedules and estimate the project cost and effort required. (C5)
4. Syllabus
UNIT-I Software Process and Agile Development 9
Introduction to Software Engineering, Software Process, Perspective and Specialized Process Models – Introduction to Agility – Agile Process – Extreme Programming – XP Process – Estimation: FP, LOC, and COCOMO I and II – Risk Management – Project Scheduling.
UNIT-II Requirements Analysis and Specification 9
Software Requirements: Functional and Non-Functional, User Requirements, Software Requirements Document – Requirement Engineering Process: Feasibility Studies, Requirements Elicitation and Analysis, Requirements Validation, Requirements Management – Classical Analysis: Structured System Analysis, Petri Nets.
UNIT-III Object Oriented Concepts 9
Introduction to OO concepts, UML: Use Case Diagram, Class Diagram, Object Diagram, Component Diagram, Sequence and Collaboration Diagrams, Deployment Diagram, Activity Diagram, Package Diagram.
5. Syllabus
UNIT-IV Software Design 9
Design Concepts – Design Heuristics – Architectural Design: Architectural Styles, Architectural Design, Architectural Mapping using Data Flow – User Interface Design: Interface Analysis, Interface Design – Component-Level Design: Designing Class-Based Components.
UNIT-V Testing and Management 9
Software Testing Fundamentals – White Box Testing – Basis Path Testing – Control Structure Testing – Black Box Testing – Regression Testing – Unit Testing – Integration Testing – Validation Testing – System Testing and Debugging – Reengineering Process Model – Reverse and Forward Engineering.
Total: 45 Periods
6. Syllabus
LEARNING RESOURCES:
TEXT BOOKS
1. Roger S. Pressman, “Software Engineering – A Practitioner’s Approach”, Eighth Edition, McGraw-Hill International Edition, 2019. (Units I, II, IV, V)
2. Ian Sommerville, “Software Engineering”, Global Edition, Pearson Education Asia, 2015. (Units I, II, III)
3. Bernd Bruegge & Allen H. Dutoit, “Object-Oriented Software Engineering Using UML, Patterns, and Java”, Prentice Hall, 3rd Edition, 2010. (Unit III)
REFERENCES
1. Titus Winters, Tom Manshreck & Hyrum Wright, “Software Engineering at Google: Lessons Learned from Programming Over Time”, O’Reilly Media, 2020. (Units I, II, IV, V)
2. Rajib Mall, “Fundamentals of Software Engineering”, Third Edition, PHI Learning Private Limited, 2009. (Units I, II, IV, V)
3. Pankaj Jalote, “Software Engineering, A Precise Approach”, Wiley India, 2010. (Units I, II, IV, V)
ONLINE LINKS
1.http://www.nptelvideos.in/2012/11/software-engineering.html
2. https://nptel.ac.in/courses/106/101/106101061/
11. Who Tests the Software?
The developer understands the system, but will test "gently" and is driven by "delivery". The independent tester must learn about the system, but will attempt to break it and is driven by quality.
13. Testing Strategy
• We begin by ‘testing-in-the-small’ and move
toward ‘testing-in-the-large’
• For conventional software
– The module (component) is our initial focus
– Integration of modules follows
• For OO software
– our focus when “testing in the small” changes from an
individual module (the conventional view) to an OO
class that encompasses attributes and operations and
implies communication and collaboration
14. Strategic Issues
• State testing objectives explicitly.
• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes “rapid cycle testing.”
• Build “robust” software that is designed to test itself
• Use effective formal technical reviews as a filter prior to testing
• Conduct formal technical reviews to assess the test strategy and test cases
themselves.
• Develop a continuous improvement approach for the testing process.
17. Unit Test Environment
(Figure: a driver invokes the module under test, with stubs standing in for subordinate modules; test cases are fed in and results are collected.) Unit testing exercises the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths.
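The environment above can be sketched directly in code. A minimal Python illustration (the module, stub, and driver below are hypothetical examples, not taken from the slides):

```python
def lookup_customer_tier(customer_id):
    """STUB: replaces a subordinate module that is not yet integrated.
    It returns a canned answer instead of doing real work."""
    return "gold"

def compute_discount(customer_id, amount):
    """Module under test: depends on the (stubbed) subordinate module."""
    if amount < 0:
        raise ValueError("amount must be non-negative")  # error-handling path
    tier = lookup_customer_tier(customer_id)
    return amount * 0.10 if tier == "gold" else 0.0

def driver():
    """DRIVER: applies test cases to the module's interface and checks
    boundary conditions and error-handling paths."""
    assert compute_discount(42, 100.0) == 10.0   # normal independent path
    assert compute_discount(42, 0.0) == 0.0      # boundary condition
    try:
        compute_discount(42, -1.0)               # error-handling path
    except ValueError:
        pass
    return "unit tests passed"

print(driver())
```

In top-down integration the stub is later replaced by the real subordinate module; in bottom-up integration the driver is discarded once a real caller exists.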
19. Top Down Integration
(Figure: a module hierarchy with A at the top and subordinates B through G.) The top module is tested with stubs. Stubs are replaced one at a time, "depth first". As new modules are integrated, some subset of tests is re-run.
20. Bottom-Up Integration
(Figure: the same hierarchy, with low-level modules such as D and E forming a cluster.) Worker modules are grouped into builds and integrated from the bottom up; drivers are replaced one at a time, "depth first".
21. Sandwich Testing
(Figure: the same hierarchy.) Top modules are tested with stubs, while worker modules are grouped into builds (clusters) and integrated from the bottom up.
22. Object-Oriented Testing
• begins by evaluating the correctness and
consistency of the OOA and OOD models
• testing strategy changes
– the concept of the ‘unit’ broadens due to
encapsulation
– integration focuses on classes and their execution
across a ‘thread’ or in the context of a usage scenario
– validation uses conventional black box methods
• test case design draws on conventional methods,
but also encompasses special features
23. Broadening the View of “Testing”
It can be argued that the review of OO analysis and
design models is especially useful because the same
semantic constructs (e.g., classes, attributes, operations,
messages) appear at the analysis, design, and code level.
Therefore, a problem in the definition of class attributes
that is uncovered during analysis will circumvent side
effects that might occur if the problem were not
discovered until design or code (or even the next
iteration of analysis).
24. High Order Testing
• Validation testing
– Focus is on software requirements
• System testing
– Focus is on system integration
• Alpha/Beta testing
– Focus is on customer usage
• Recovery testing
– forces the software to fail in a variety of ways and verifies that recovery is properly
performed
• Security testing
– verifies that protection mechanisms built into a system will, in fact, protect it from
improper penetration
• Stress testing
– executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
• Performance Testing
– test the run-time performance of software within the context of an integrated system
27. The Debugging Process
(Figure: test cases produce results; debugging traces symptoms back to suspected causes, narrows them to identified causes, and applies corrections, which in turn trigger regression tests and new test cases.)
28. Debugging Effort
(Figure: debugging effort divides into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.)
29. Testability
• Operability—it operates cleanly
• Observability—the results of each test case are readily observed
• Controllability—the degree to which testing can be automated
and optimized
• Decomposability—testing can be targeted
• Simplicity—reduce complex architecture and logic to simplify
tests
• Stability—few changes are requested during testing
• Understandability—of the design
30. What is a “Good” Test?
• A good test has a high probability of finding an
error
• A good test is not redundant.
• A good test should be “best of breed”
• A good test should be neither too simple nor
too complex
31. Internal and External Views
• Any engineered product (and most other things)
can be tested in one of two ways:
– Knowing the specified function that a product has
been designed to perform, tests can be conducted
that demonstrate each function is fully operational
while at the same time searching for errors in each
function;
– Knowing the internal workings of a product, tests can
be conducted to ensure that "all gears mesh," that is,
internal operations are performed according to
specifications and all internal components have been
adequately exercised.
32. Test Case Design
"Bugs lurk in corners and congregate at boundaries ..." (Boris Beizer)
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
33. Exhaustive Testing
(Figure: a small flowchart containing a loop that may execute up to 20 times.) There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
36. White-Box Testing
... our goal is to ensure that all
statements and conditions have
been executed at least once ...
37. Why Cover?
• logic errors and incorrect assumptions are inversely proportional to a path's execution probability
• we often believe that a path is not likely to be executed; in fact, reality is often counterintuitive
• typographical errors are random; it's likely that untested paths will contain some
38. Basis Path Testing
First, we compute the cyclomatic
complexity:
number of simple decisions + 1
or
number of enclosed areas + 1
In this case, V(G) = 4
39. Basis Path Testing
Next, we derive the
independent paths:
Since V(G) = 4,
there are four paths
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...7,8
Finally, we derive test
cases to exercise these
paths.
(Figure: the flow graph, with nodes numbered 1 through 8.)
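The value V(G) = 4 can also be checked with the edge formula V(G) = E − N + 2. The sketch below uses an edge list inferred from the four paths listed above (the back edge 7→2 comes from Path 4); treating that edge list as the full graph is an assumption about the figure.

```python
# Edges of the flow graph, inferred from the listed basis paths.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 2), (7, 8)]

# Collect the distinct nodes appearing in any edge.
nodes = {n for edge in edges for n in edge}

# Cyclomatic complexity: V(G) = E - N + 2.
v_of_g = len(edges) - len(nodes) + 2
print(v_of_g)  # → 4, so four independent paths are needed
```

The same answer falls out of the slide's other rules: 3 simple decisions + 1, or 3 enclosed areas + 1.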
40. Control Structure Testing
• Condition testing — a test case design method
that exercises the logical conditions contained
in a program module
• Data flow testing — selects test paths of a
program according to the locations of
definitions and uses of variables in the
program
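A small condition-testing sketch (the function and its compound condition are illustrative, not from the slides): the test cases force each simple condition inside the compound expression to evaluate both true and false.

```python
def eligible(age, is_member):
    """Illustrative module logic containing a compound condition."""
    return age >= 18 and is_member

# Each simple condition is exercised in both of its outcomes.
cases = [
    ((18, True),  True),   # both simple conditions true
    ((17, True),  False),  # first condition (age >= 18) false
    ((18, False), False),  # second condition (is_member) false
]
for args, expected in cases:
    assert eligible(*args) == expected
print("condition tests passed")
```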
42. Loop Testing: Simple Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number
of allowable passes
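The minimum conditions above can be applied mechanically. A sketch assuming a hypothetical loop whose maximum number of allowable passes is n = 5:

```python
def sum_first(values, limit=5):
    """Illustrative simple loop: sums at most `limit` leading values."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:   # `limit` = n, the maximum number of passes
            break
        total += v
    return total

n = 5
# Skip the loop, one pass, two passes, m < n passes, then n-1, n, n+1 passes.
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    data = list(range(passes))
    assert sum_first(data, n) == sum(data[:n])
print("simple-loop tests passed")
```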
43. Loop Testing: Nested Loops
Nested loops:
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1, and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated loops: if the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops (for example, when the final loop-counter value of loop 1 is used to initialize loop 2).
45. Black-Box Testing
• How is functional validity tested?
• How is system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system
operation?
46. Graph-Based Methods
The goal is to understand the objects that are modeled in software and the relationships that connect these objects. In this context, we consider the term "objects" in the broadest possible context: it encompasses data objects, traditional components (modules), and object-oriented elements of computer software.
(Figure (a): the notation — nodes with node weights (values), directed links with link weights, undirected links, and parallel links. Figure (b): an example — a "new file" menu select generates a document window, with a generation time of at most 1.0 sec; the window allows editing of the document text and contains attributes such as background color: white and text color: default color or preferences.)
48. Sample Equivalence Classes
Valid data:
• user-supplied commands
• responses to system prompts
• file names
• computational data
• physical parameters
• bounding values
• initiation values
• output data formatting
• responses to error messages
• graphical data (e.g., mouse picks)
Invalid data:
• data outside bounds of the program
• physically impossible data
• proper value supplied in wrong place
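As a concrete sketch, consider a hypothetical input field that accepts integers from 1 to 100 (the field and its range are illustrative, not from the slides). One representative is drawn from each equivalence class, plus the bounding values:

```python
def accept(raw):
    """Illustrative input check: integers in the range 1..100 are valid."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False          # invalid class: non-numeric data
    return 1 <= value <= 100  # valid class: in-range integers

assert accept("50")                            # valid class representative
assert accept("1") and accept("100")           # bounding values
assert not accept("0") and not accept("101")   # data outside bounds
assert not accept("abc")                       # physically impossible data
print("equivalence-class tests passed")
```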
50. Comparison Testing
• Used only in situations in which the reliability of
software is absolutely critical (e.g., human-rated
systems)
– Separate software engineering teams develop
independent versions of an application using the same
specification
– Each version can be tested with the same test data to
ensure that all provide identical output
– Then all versions are executed in parallel with real-
time comparison of results to ensure consistency
51. Orthogonal Array Testing
• Used when the number of input parameters is small
and the values that each of the parameters may take
are clearly bounded
(Figure: "one input item at a time" coverage contrasted with an L9 orthogonal array over three parameters X, Y, and Z.)
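For three parameters X, Y, and Z with three levels each, a standard L9 assignment needs only 9 of the 27 exhaustive combinations while still covering every pairwise combination of levels, a property the sketch below verifies:

```python
from itertools import combinations

# Standard L9 orthogonal-array assignment for three 3-level parameters
# (each row is one test case: levels for X, Y, Z).
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

# Every pair of parameter columns sees all 3 x 3 level combinations.
for c1, c2 in combinations(range(3), 2):
    seen = {(row[c1], row[c2]) for row in L9}
    assert seen == {(a, b) for a in (1, 2, 3) for b in (1, 2, 3)}

print(len(L9), "cases give full pairwise coverage")
```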
53. Business Process Reengineering
• Business definition. Business goals are identified within the context of four key
drivers: cost reduction, time reduction, quality improvement, and personnel
development and empowerment.
• Process identification. Processes that are critical to achieving the goals defined
in the business definition are identified.
• Process evaluation. The existing process is thoroughly analyzed and measured.
• Process specification and design. Based on information obtained during the
first three BPR activities, use-cases are prepared for each process that is to be
redesigned.
• Prototyping. A redesigned business process must be prototyped before it is
fully integrated into the business.
• Refinement and instantiation. Based on feedback from the prototype, the
business process is refined and then instantiated within a business system.
55. BPR Principles
• Organize around outcomes, not tasks.
• Have those who use the output of the process perform the process.
• Incorporate information processing work into the real work that
produces the raw information.
• Treat geographically dispersed resources as though they were
centralized.
• Link parallel activities instead of integrating their results.
• Put the decision point where the work is performed, and build
control into the process.
• Capture data once, at its source.
57. Inventory Analysis
• build a table that contains all applications
• establish a list of criteria, e.g.,
– name of the application
– year it was originally created
– number of substantive changes made to it
– total effort applied to make these changes
– date of last substantive change
– effort applied to make the last change
– system(s) in which it resides
– applications to which it interfaces, ...
• analyze and prioritize to select candidates for
reengineering
58. Document Restructuring
• Weak documentation is the trademark of many legacy systems.
• But what do we do about it? What are our options?
• Options …
– Creating documentation is far too time consuming. If the system works, we’ll
live with what we have. In some cases, this is the correct approach.
– Documentation must be updated, but we have limited resources. We’ll use a
“document when touched” approach. It may not be necessary to fully
redocument an application.
– The system is business critical and must be fully redocumented. Even in this case,
an intelligent approach is to pare documentation to an essential minimum.
60. Code Restructuring
• Source code is analyzed using a restructuring tool.
• Poorly designed code segments are redesigned
• Violations of structured programming constructs are
noted and code is then restructured (this can be done
automatically)
• The resultant restructured code is reviewed and tested
to ensure that no anomalies have been introduced
• Internal code documentation is updated.
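A tiny before/after sketch of what a restructuring pass does (the function is illustrative): deeply nested logic is flattened into structured guard clauses, and the review/test step checks that no anomalies were introduced.

```python
def classify_before(n):
    """Poorly structured original: deeply nested conditionals."""
    if n is not None:
        if n >= 0:
            if n % 2 == 0:
                return "even"
            else:
                return "odd"
        else:
            return "negative"
    else:
        return "missing"

def classify_after(n):
    """Restructured version: guard clauses, identical behavior."""
    if n is None:
        return "missing"
    if n < 0:
        return "negative"
    return "even" if n % 2 == 0 else "odd"

# Review/test step: behavior is unchanged across representative inputs.
for x in (None, -3, 0, 7):
    assert classify_before(x) == classify_after(x)
print("restructured code preserves behavior")
```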
61. Data Restructuring
• Unlike code restructuring, which occurs at a relatively low level of
abstraction, data restructuring is a full-scale reengineering activity
• In most cases, data restructuring begins with a reverse engineering activity.
– Current data architecture is dissected and necessary data models are defined
(Chapter 9).
– Data objects and attributes are identified, and existing data structures are
reviewed for quality.
– When data structure is weak (e.g., flat files are currently implemented, when a
relational approach would greatly simplify processing), the data are reengineered.
• Because data architecture has a strong influence on program architecture and
the algorithms that populate it, changes to the data will invariably result in
either architectural or code-level changes.
62. Forward Engineering
1. The cost to maintain one line of source code may be 20 to 40 times the
cost of initial development of that line.
2. Redesign of the software architecture (program and/or data structure),
using modern design concepts, can greatly facilitate future maintenance.
3. Because a prototype of the software already exists, development
productivity should be much higher than average.
4. The user now has experience with the software. Therefore, new
requirements and the direction of change can be ascertained with greater
ease.
5. CASE tools for reengineering will automate some parts of the job.
6. A complete software configuration (documents, programs and data) will
exist upon completion of preventive maintenance.
63. Economics of Reengineering-I
• A cost/benefit analysis model for reengineering has
been proposed by Sneed [Sne95]. Nine parameters are
defined:
• P1 = current annual maintenance cost for an application.
• P2 = current annual operation cost for an application.
• P3 = current annual business value of an application.
• P4 = predicted annual maintenance cost after reengineering.
• P5 = predicted annual operations cost after reengineering.
• P6 = predicted annual business value after reengineering.
• P7 = estimated reengineering costs.
• P8 = estimated reengineering calendar time.
• P9 = reengineering risk factor (P9 = 1.0 is nominal).
• L = expected life of the system.
64. Economics of Reengineering-II
• The cost associated with continuing maintenance of a candidate application
(i.e., reengineering is not performed) can be defined as
Cmaint = [P3 - (P1 + P2)] x L
• The costs associated with reengineering are defined using the following
relationship:
Creeng = [P6 - (P4 + P5)] x (L - P8) - (P7 x P9)
• Using the costs presented in equations above, the overall benefit of
reengineering can be computed as
cost benefit = Creeng - Cmaint
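Plugging illustrative numbers into Sneed's model (all parameter values below are hypothetical, chosen only to show how the comparison works):

```python
# Hypothetical parameter values for Sneed's cost/benefit model.
P1, P2, P3 = 50_000, 20_000, 100_000   # current maintenance, operation, value
P4, P5, P6 = 20_000, 15_000, 120_000   # predicted values after reengineering
P7, P8, P9 = 60_000, 1, 1.0            # reengineering cost, time (yrs), risk
L = 5                                  # expected life of the system (years)

C_maint = (P3 - (P1 + P2)) * L                     # continue maintaining
C_reeng = (P6 - (P4 + P5)) * (L - P8) - (P7 * P9)  # reengineer instead
benefit = C_reeng - C_maint                        # overall benefit
print(C_maint, C_reeng, benefit)
```

A positive cost benefit argues for reengineering the candidate application; with these hypothetical numbers the benefit over the system's remaining life is 130,000.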