Managing Software Project
1. 1
Part 4
Managing Software Project
Software Engineering: A Practitioner's Approach, 6th edition
by Roger S. Pressman
2. 2
Chapter 21
Project Management Concepts
Software Engineering: A Practitioner's Approach, 6th edition
by Roger S. Pressman
3. 3
The 4 P's
• People – the most important element of a successful project
• Product – the software to be built
• Process – the set of framework activities and software engineering tasks to get the job done
• Project – all work required to make the product a reality
4. 4
Stakeholders
• Senior managers who define the business issues that often have significant influence on the project.
• Project (technical) managers who must plan, motivate, organize, and control the practitioners who do software work.
• Practitioners who deliver the technical skills that are necessary to engineer a product or application.
• Customers who specify the requirements for the software to be engineered, and other stakeholders who have a peripheral interest in the outcome.
• End-users who interact with the software once it is released for production use.
5. 5
Software Teams
How to lead?
How to organize?
How to motivate?
How to collaborate?
How to create good ideas?
6. 6
Team Leader
• The MOI Model:
  - Motivation. The ability to encourage (by "push or pull") technical people to produce to their best ability.
  - Organization. The ability to mold existing processes (or invent new ones) that will enable the initial concept to be translated into a final product.
  - Ideas or innovation. The ability to encourage people to create and feel creative even when they must work within bounds established for a particular software product or application.
7. 7
Software Teams
The following factors must be considered when selecting a software project team structure:
• the difficulty of the problem to be solved
• the size of the resultant program(s) in lines of code or function points
• the time that the team will stay together (team lifetime)
• the degree to which the problem can be modularized
• the required quality and reliability of the system to be built
• the rigidity of the delivery date
• the degree of sociability (communication) required for the project
8. 8
Organizational Paradigms
• Closed paradigm – structures a team along a traditional hierarchy of authority
• Random paradigm – structures a team loosely and depends on individual initiative of the team members
• Open paradigm – attempts to structure a team in a manner that achieves some of the controls associated with the closed paradigm but also much of the innovation that occurs when using the random paradigm
• Synchronous paradigm – relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves
suggested by Constantine [CON93]
9. 9
Avoid Team "Toxicity"
• A frenzied work atmosphere in which team members waste energy and lose focus on the objectives of the work to be performed.
• High frustration caused by personal, business, or technological factors that cause friction among team members.
• "Fragmented or poorly coordinated procedures," or a poorly defined or improperly chosen process model that becomes a roadblock to accomplishment.
• Unclear definition of roles, resulting in a lack of accountability and resultant finger-pointing.
• "Continuous and repeated exposure to failure" that leads to a loss of confidence and a lowering of morale.
10. 10
Agile Teams
• Team members must have trust in one another.
• The distribution of skills must be appropriate to the problem.
• Mavericks may have to be excluded from the team if team cohesiveness is to be maintained.
• The team is "self-organizing":
  - an adaptive team structure
  - uses elements of Constantine's random, open, and synchronous paradigms
  - significant autonomy
11. 11
Team Coordination & Communication
• Formal, impersonal approaches include software engineering documents and work products (including source code), technical memos, project milestones, schedules, and project control tools (Chapter 23), change requests and related documentation, error tracking reports, and repository data (see Chapter 26).
• Formal, interpersonal procedures focus on quality assurance activities (Chapter 25) applied to software engineering work products. These include status review meetings and design and code inspections.
• Informal, interpersonal procedures include group meetings for information dissemination and problem solving, and "collocation of requirements and development staff."
• Electronic communication encompasses electronic mail, electronic bulletin boards, and by extension, video-based conferencing systems.
• Interpersonal networking includes informal discussions with team members and those outside the project who may have experience or insight that can assist team members.
12. 12
The Product Scope
• Scope:
  - Context. How does the software to be built fit into a larger system, product, or business context, and what constraints are imposed as a result of the context?
  - Information objectives. What customer-visible data objects (Chapter 8) are produced as output from the software? What data objects are required for input?
  - Function and performance. What function does the software perform to transform input data into output? Are any special performance characteristics to be addressed?
• Software project scope must be unambiguous and understandable at the management and technical levels.
13. 13
Problem Decomposition
• Sometimes called partitioning or problem elaboration
• Once scope is defined …
  - it is decomposed into constituent functions,
  - it is decomposed into user-visible data objects, or
  - it is decomposed into a set of problem classes
• The decomposition process continues until all functions or problem classes have been defined
14. 14
The Process
• Once a process framework has been established:
  - consider project characteristics
  - determine the degree of rigor required
  - define a task set for each software engineering activity
• Task set =
  - software engineering tasks
  - work products
  - quality assurance points
  - milestones
16. 16
The Project
• Projects get into trouble when …
  - Software people don't understand their customer's needs.
  - The product scope is poorly defined.
  - Changes are managed poorly.
  - The chosen technology changes.
  - Business needs change [or are ill-defined].
  - Deadlines are unrealistic.
  - Users are resistant.
  - Sponsorship is lost [or was never properly obtained].
  - The project team lacks people with appropriate skills.
  - Managers [and practitioners] avoid best practices and lessons learned.
17. 17
Common-Sense Approach to Projects
• Start on the right foot. This is accomplished by working hard (very hard) to understand the problem that is to be solved and then setting realistic objectives and expectations.
• Maintain momentum. The project manager must provide incentives to keep turnover of personnel to an absolute minimum, the team should emphasize quality in every task it performs, and senior management should do everything possible to stay out of the team's way.
• Track progress. For a software project, progress is tracked as work products (e.g., models, source code, sets of test cases) are produced and approved (using formal technical reviews) as part of a quality assurance activity.
• Make smart decisions. In essence, the decisions of the project manager and the software team should be to "keep it simple."
• Conduct a postmortem analysis. Establish a consistent mechanism for extracting lessons learned for each project.
18. 18
To Get to the Essence of a Project
• Why is the system being developed?
• What will be done?
• When will it be accomplished?
• Who is responsible?
• Where are they organizationally located?
• How will the job be done technically and managerially?
• How much of each resource (e.g., people, software, tools, database) will be needed?
– Barry Boehm
19. 19
Critical Practices
• Formal risk management
• Empirical cost and schedule estimation
• Metrics-based project management
• Earned value tracking
• Defect tracking against quality targets
• People-aware project management
20. 20
Chapter 22
Process and Project Metrics
Software Engineering: A Practitioner's Approach, 6th edition
by Roger S. Pressman
21. 21
A Good Manager Measures
[Diagram: measurement is applied to the process (yielding process metrics), to the project (yielding project metrics), and to the product (yielding product metrics). What do we use as a basis? size? function?]
22. 22
Why Do We Measure?
• to assess the status of an ongoing project
• to track potential risks
• to uncover problem areas before they go "critical"
• to adjust work flow or tasks
• to evaluate the project team's ability to control the quality of software work products
23. 23
Process Measurement
• We measure the efficacy of a software process indirectly.
  - That is, we derive a set of metrics based on the outcomes that can be derived from the process.
  - Outcomes include:
    - measures of errors uncovered before release of the software
    - defects delivered to and reported by end-users
    - work products delivered (productivity)
    - human effort expended
    - calendar time expended
    - schedule conformance
    - other measures
• We also derive process metrics by measuring the characteristics of specific software engineering tasks.
24. 24
Process Metrics Guidelines
• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.
25. 25
Software Process Improvement
[Diagram: the SPI cycle connects the process model, improvement goals, process metrics, and process improvement recommendations.]
26. 26
Process Metrics
• Quality-related
  - focus on the quality of work products and deliverables
• Productivity-related
  - production of work products relative to the effort expended
• Statistical SQA data
  - error categorization and analysis
• Defect removal efficiency
  - propagation of errors from process activity to activity
• Reuse data
  - the number of components produced and their degree of reusability
27. 27
Project Metrics
• used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks
• used to assess product quality on an ongoing basis and, when necessary, to modify the technical approach to improve quality
• every project should measure:
  - inputs – measures of the resources (e.g., people, tools) required to do the work
  - outputs – measures of the deliverables or work products created during the software engineering process
  - results – measures that indicate the effectiveness of the deliverables
28. 28
Typical Project Metrics
• effort/time per software engineering task
• errors uncovered per review hour
• scheduled vs. actual milestone dates
• changes (number) and their characteristics
• distribution of effort across software engineering tasks
29. 29
Metrics Guidelines
• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.
30. 30
Typical Size-Oriented Metrics
• errors per KLOC (thousand lines of code)
• defects per KLOC
• $ per LOC
• pages of documentation per KLOC
• errors per person-month
• errors per review hour
• LOC per person-month
• $ per page of documentation
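The metrics above are simple ratios over raw project measures. A minimal sketch, with all project numbers purely illustrative:

```python
# Hypothetical raw measures for one completed project (illustrative values only):
loc = 12_100            # delivered lines of code
errors = 134            # errors found before release
defects = 29            # defects reported after release
cost = 168_000          # total project cost, dollars
doc_pages = 365         # pages of documentation
person_months = 24.0    # total human effort
review_hours = 110      # total formal review time

kloc = loc / 1000
print(f"errors per KLOC:        {errors / kloc:.2f}")
print(f"defects per KLOC:       {defects / kloc:.2f}")
print(f"$ per LOC:              {cost / loc:.2f}")
print(f"doc pages per KLOC:     {doc_pages / kloc:.1f}")
print(f"errors per person-month:{errors / person_months:7.2f}")
print(f"errors per review hour: {errors / review_hours:.2f}")
print(f"LOC per person-month:   {loc / person_months:.0f}")
```

Such ratios only become meaningful when compared against a historical baseline of similar projects.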
31. 31
Typical Function-Oriented Metrics
• errors per FP (function point)
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month
32. 32
Comparing LOC and FP

LOC per function point (representative values developed by QSM):

Language        avg.   median   low    high
Ada              154      -      104    205
Assembler        337     315      91    694
C                162     109      33    704
C++               66      53      29    178
COBOL             77      77      14    400
Java              63      53      77      -
JavaScript        58      63      42     75
Perl              60      -        -      -
PL/1              78      67      22    263
Powerbuilder      32      31      11    105
SAS               40      41      33     49
Smalltalk         26      19      10     55
SQL               40      37       7    110
Visual Basic      47      42      16    158
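A table like this lets a planner translate an FP estimate into an approximate LOC estimate for a chosen language. A minimal sketch using a few of the average values above (the 320-FP system is a made-up example):

```python
# Representative average LOC per function point (from the table above):
LOC_PER_FP = {"Ada": 154, "Assembler": 337, "C": 162, "C++": 66,
              "COBOL": 77, "Java": 63, "Smalltalk": 26, "SQL": 40}

def estimated_loc(function_points: int, language: str) -> int:
    """Translate a function-point estimate into an approximate LOC estimate."""
    return function_points * LOC_PER_FP[language]

# The same hypothetical 320-FP system implies very different code sizes:
print(estimated_loc(320, "C"))     # -> 51840
print(estimated_loc(320, "Java"))  # -> 20160
```

This is exactly why FP-based measures are language independent while LOC-based measures are not.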
33. 33
Why Opt for FP?
• programming-language independent
• uses readily countable characteristics that are determined early in the software process
• does not "penalize" inventive (short) implementations that use fewer LOC than other, clumsier versions
• makes it easier to measure the impact of reusable components
34. 34
Object-Oriented Metrics
• number of scenario scripts (use-cases)
• number of support classes (required to implement the system but not immediately related to the problem domain)
• average number of support classes per key class (analysis class)
• number of subsystems (an aggregation of classes that support a function that is visible to the end-user of a system)
35. 35
WebE Project Metrics
• number of static Web pages (the end-user has no control over the content displayed on the page)
• number of dynamic Web pages (end-user actions result in customized content displayed on the page)
• number of internal page links (pointers that provide a hyperlink to some other Web page within the WebApp)
• number of persistent data objects
• number of external systems interfaced
• number of static content objects
• number of dynamic content objects
• number of executable functions
36. 36
Measuring Quality
• Correctness – the degree to which a program operates according to specification
• Maintainability – the degree to which a program is amenable to change
• Integrity – the degree to which a program is impervious to outside attack
• Usability – the degree to which a program is easy to use
37. 37
Defect Removal Efficiency

DRE = E / (E + D)

where E is the number of errors found before delivery of the software to the end-user, and D is the number of defects found after delivery.
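The formula is a one-liner; the counts below (120 errors caught before delivery, 8 defects found after) are illustrative:

```python
def dre(errors_before: int, defects_after: int) -> float:
    """Defect removal efficiency: DRE = E / (E + D)."""
    return errors_before / (errors_before + defects_after)

# e.g. 120 errors caught in reviews and testing, 8 defects escaped to the field:
print(f"{dre(120, 8):.2%}")  # -> 93.75%
```

A DRE approaching 1.0 means filtering activities (reviews, inspections, testing) are catching nearly all errors before release.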
38. 38
Metrics for Small Organizations
• time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue
• effort (person-hours) to perform the evaluation, W_eval
• time (hours or days) elapsed from completion of the evaluation to assignment of the change order to personnel, t_eval
• effort (person-hours) required to make the change, W_change
• time (hours or days) required to make the change, t_change
• errors uncovered during the work to make the change, E_change
• defects uncovered after the change is released to the customer base, D_change
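Even a small organization can combine these measures into simple derived indicators. A minimal sketch for one hypothetical change request (all numbers illustrative):

```python
# One hypothetical change request, recorded with the measures listed above:
change = {
    "t_queue":  3.0,   # days: request made -> evaluation complete
    "W_eval":   4.0,   # person-hours to perform the evaluation
    "t_eval":   2.0,   # days: evaluation complete -> change order assigned
    "W_change": 22.0,  # person-hours required to make the change
    "t_change": 5.0,   # days required to make the change
    "E_change": 3,     # errors uncovered during the change work
    "D_change": 1,     # defects uncovered after release to the customer base
}

elapsed_days = change["t_queue"] + change["t_eval"] + change["t_change"]
effort_hours = change["W_eval"] + change["W_change"]
change_dre = change["E_change"] / (change["E_change"] + change["D_change"])
print(elapsed_days, effort_hours, round(change_dre, 2))  # -> 10.0 26.0 0.75
```

Averaged over many change requests, these indicators show where time is lost (e.g., in the queue versus in the work itself) and how well errors are caught before release.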
39. 39
Establishing a Metrics Program
• Identify your business goals.
• Identify what you want to know or learn.
• Identify your subgoals.
• Identify the entities and attributes related to your subgoals.
• Formalize your measurement goals.
• Identify quantifiable questions and the related indicators that you will use to help you achieve your measurement goals.
• Identify the data elements that you will collect to construct the indicators that help answer your questions.
• Define the measures to be used, and make these definitions operational.
• Identify the actions that you will take to implement the measures.
• Prepare a plan for implementing the measures.
40. 40
Chapter 23
Estimation for Software Projects
Software Engineering: A Practitioner's Approach, 6th edition
by Roger S. Pressman
41. 41
Software Project Planning
The overall goal of project planning is to establish a pragmatic strategy for controlling, tracking, and monitoring a complex technical project.
Why? So the end result gets done on time, with quality!
43. 43
Project Planning Task Set - II
• Estimate cost and effort
  - decompose the problem
  - develop two or more estimates using size, function points, process tasks, or use-cases
  - reconcile the estimates
• Develop a project schedule
  - establish a meaningful task set
  - define a task network
  - use scheduling tools to develop a timeline chart
  - define schedule tracking mechanisms
44. 44
Estimation
• Estimation of resources, cost, and schedule for a software engineering effort requires:
  - experience
  - access to good historical information (metrics)
  - the courage to commit to quantitative predictions when qualitative information is all that exists
• Estimation carries inherent risk, and this risk leads to uncertainty.
46. 46
To Understand Scope ...
• understand the customer's needs
• understand the business context
• understand the project boundaries
• understand the customer's motivation
• understand the likely paths for change
• understand that ...
Even when you understand, nothing is guaranteed!
47. 47
What is Scope?
• Software scope describes:
  - the functions and features that are to be delivered to end-users
  - the data that are input and output
  - the "content" that is presented to users as a consequence of using the software
  - the performance, constraints, interfaces, and reliability that bound the system
• Scope is defined using one of two techniques:
  - a narrative description of software scope is developed after communication with all stakeholders
  - a set of use-cases is developed by end-users
49. 49
Project Estimation
• Project scope must be understood.
• Elaboration (decomposition) is necessary.
• Historical metrics are very helpful.
• At least two different techniques should be used.
• Uncertainty is inherent in the process.
50. 50
Estimation Techniques
• past (similar) project experience
• conventional estimation techniques
  - task breakdown and effort estimates
  - size (e.g., FP) estimates
• empirical models
• automated tools
51. 51
Estimation Accuracy
Predicated on …
• the degree to which the planner has properly estimated the size of the product to be built
• the ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects)
• the degree to which the project plan reflects the abilities of the software team
• the stability of product requirements and of the environment that supports the software engineering effort
53. 53
Conventional Methods: LOC/FP Approach
• compute LOC/FP using estimates of information domain values
• use historical data to build estimates for the project
54. 54
Process-Based Estimation
[Diagram: estimates are obtained from the "process framework": the framework activities are crossed with the application functions, and the quantity estimated is the effort required to accomplish each framework activity for each application function.]
55. 55
Process-Based Estimation Example

Effort (person-months) per framework activity and application function:

Function   CC     Planning  Risk      Engineering         Construction        CE     Totals
                            Analysis  analysis  design    code      test
UICF       n/a    n/a       n/a       0.50      2.50      0.40      5.00      n/a     8.40
2DGA       n/a    n/a       n/a       0.75      4.00      0.60      2.00      n/a     7.35
3DGA       n/a    n/a       n/a       0.50      4.00      1.00      3.00      n/a     8.50
DSM        n/a    n/a       n/a       0.50      3.00      1.00      1.50      n/a     6.00
PCF        n/a    n/a       n/a       0.50      3.00      0.75      1.50      n/a     5.75
CGDF       n/a    n/a       n/a       0.25      2.00      0.50      1.50      n/a     4.25
DAM        n/a    n/a       n/a       0.50      2.00      0.50      2.00      n/a     5.00
Totals     0.25   0.25      0.25      3.50      20.50     4.50      16.50            46.00
% effort   1%     1%        1%        8%        45%       10%       36%

CC = customer communication   CE = customer evaluation

Based on an average burdened labor rate of $8,000 per month, the total estimated project cost is $368,000 and the estimated effort is 46 person-months.
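The bottom line of the example can be recomputed directly from the per-function rows of the table (a minimal sketch; the function names and effort values are those shown above):

```python
# Per-function effort in person-months, as (analysis, design, code, test):
effort = {
    "UICF": (0.50, 2.50, 0.40, 5.00),
    "2DGA": (0.75, 4.00, 0.60, 2.00),
    "3DGA": (0.50, 4.00, 1.00, 3.00),
    "DSM":  (0.50, 3.00, 1.00, 1.50),
    "PCF":  (0.50, 3.00, 0.75, 1.50),
    "CGDF": (0.25, 2.00, 0.50, 1.50),
    "DAM":  (0.50, 2.00, 0.50, 2.00),
}
# Customer communication, planning, and risk analysis add 0.25 pm each:
overhead_pm = 3 * 0.25
labor_rate = 8_000  # average burdened labor rate, $/person-month

total_pm = sum(sum(row) for row in effort.values()) + overhead_pm
print(round(total_pm, 2))            # -> 46.0 person-months
print(round(total_pm * labor_rate))  # -> 368000 dollars
```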
56. 56
Tool-Based Estimation
[Diagram: an estimation tool combines project characteristics, calibration factors, and LOC/FP data.]
57. 57
Estimation with Use-Cases

Subsystem                        use cases  scenarios  pages  scenarios  pages   LOC   LOC estimate
User interface subsystem             6          10       6       12        5      560      3,366
Engineering subsystem group         10          20       8       16        8     3100     31,233
Infrastructure subsystem group       5           6       5       10        6     1650      7,970
Total LOC estimate                                                                        42,568
Using 620 LOC/pm as the average productivity for systems of this type and a burdened labor rate of $8,000 per month, the cost per line of code is approximately $13. Based on the use-case estimate and the historical productivity data, the total estimated project cost is $552,000 and the estimated effort is 68 person-months.
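The arithmetic behind that summary is a pair of divisions (a minimal check; the slide's $552,000 is consistent with rounding the effort up to 69 person-months before costing):

```python
total_loc_estimate = 42_568   # total LOC estimate from the table above
productivity = 620            # LOC per person-month for systems of this type
labor_rate = 8_000            # burdened labor rate, $/person-month

cost_per_loc = labor_rate / productivity       # ~ $12.90, "approximately $13"
effort_pm = total_loc_estimate / productivity  # ~ 68.7 person-months
total_cost = effort_pm * labor_rate            # ~ $549,000 unrounded;
                                               # 69 pm * $8,000 = $552,000
print(round(cost_per_loc, 2), round(effort_pm, 1), round(total_cost))
```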
58. 58
Empirical Estimation Models

General form:

  effort = tuning coefficient × size^exponent

where
• effort is usually derived as person-months of effort required
• the tuning coefficient is empirically derived
• size is usually LOC but may also be function points
• the exponent is either a constant or a number derived based on the complexity of the project
59. 59
COCOMO-II
• COCOMO II is actually a hierarchy of estimation models that address the following areas:
  - Application composition model. Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
  - Early design stage model. Used once requirements have been stabilized and basic software architecture has been established.
  - Post-architecture-stage model. Used during the construction of the software.
60. 60
The Software Equation

A dynamic multivariable model:

  E = [LOC × B^0.333 / P]^3 × (1 / t^4)

where
  E = effort in person-months or person-years
  t = project duration in months or years
  B = "special skills factor"
  P = "productivity parameter"
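The equation translates directly into code. A minimal sketch; the 33,200-LOC project with B = 0.28, P = 12,000, and t = 1.3 is an illustrative input, not a value from the slides:

```python
def software_equation_effort(loc: float, b: float, p: float, t: float) -> float:
    """The software equation: E = [LOC * B**(1/3) / P]**3 * (1 / t**4)."""
    return (loc * b ** (1 / 3) / p) ** 3 / t ** 4

# Sanity check: with B = t = 1, effort reduces to (LOC / P)**3.
print(software_equation_effort(10_000, 1.0, 1_000, 1.0))  # -> 1000.0

# A hypothetical project:
e = software_equation_effort(33_200, 0.28, 12_000, 1.3)
```

Note the strong 1/t^4 term: stretching the schedule even slightly reduces the effort the model predicts, which is why the equation is often used to explore schedule/effort trade-offs.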
61. 61
Estimation for OO Projects - I
• Develop estimates using effort decomposition, FP analysis, and any other method that is applicable for conventional applications.
• Using object-oriented analysis modeling (Chapter 8), develop use-cases and determine a count.
• From the analysis model, determine the number of key classes (called analysis classes in Chapter 8).
• Categorize the type of interface for the application and develop a multiplier for support classes:

  Interface type              Multiplier
  No GUI                      2.0
  Text-based user interface   2.25
  GUI                         2.5
  Complex GUI                 3.0
62. 62
Estimation for OO Projects - II
• Multiply the number of key classes (step 3) by the multiplier to obtain an estimate for the number of support classes.
• Multiply the total number of classes (key + support) by the average number of work-units per class. Lorenz and Kidd suggest 15 to 20 person-days per class.
• Cross-check the class-based estimate by multiplying the number of use-cases by the average number of work-units per use-case.
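The class-based steps can be sketched as a short function. The interface multipliers come from the previous slide; the 16 key classes and the 18 work-units-per-class figure (within Lorenz and Kidd's 15-20 range) are illustrative:

```python
# Interface-type multipliers for support classes (from the previous slide):
INTERFACE_MULTIPLIER = {"no GUI": 2.0, "text-based UI": 2.25,
                        "GUI": 2.5, "complex GUI": 3.0}

def oo_effort_person_days(key_classes: int, interface: str,
                          work_units_per_class: float = 18.0) -> float:
    """Estimate effort: support classes = key classes * interface multiplier,
    then (key + support) classes * average work-units per class."""
    support_classes = key_classes * INTERFACE_MULTIPLIER[interface]
    total_classes = key_classes + support_classes
    return total_classes * work_units_per_class

# e.g. 16 key classes behind a conventional GUI:
print(oo_effort_person_days(16, "GUI"))  # 16 key + 40 support classes -> 1008.0
```

The use-case cross-check would follow the same pattern: use-case count times average work-units per use-case, compared against this class-based figure.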
63. 63
Estimation for Agile Projects
• Each user scenario (a mini use-case) is considered separately for estimation purposes.
• The scenario is decomposed into the set of software engineering tasks that will be required to develop it.
• Each task is estimated separately. Note: estimation can be based on historical data, an empirical model, or "experience."
  - Alternatively, the "volume" of the scenario can be estimated in LOC, FP, or some other volume-oriented measure (e.g., use-case count).
• Estimates for each task are summed to create an estimate for the scenario.
  - Alternatively, the volume estimate for the scenario is translated into effort using historical data.
• The effort estimates for all scenarios that are to be implemented for a given software increment are summed to develop the effort estimate for the increment.
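The roll-up described above is just nested summation. A minimal sketch with hypothetical scenarios and task estimates in person-days (all names and numbers illustrative):

```python
# Tasks per scenario, estimated in person-days (illustrative values):
increment = {
    "user logs in":      {"design form": 0.5, "validate credentials": 1.0,
                          "write tests": 0.5},
    "user views report": {"query data": 1.5, "render report": 2.0,
                          "write tests": 1.0},
}

# Sum task estimates -> scenario estimates; sum those -> increment estimate.
scenario_estimates = {name: sum(tasks.values())
                      for name, tasks in increment.items()}
increment_estimate = sum(scenario_estimates.values())
print(scenario_estimates, increment_estimate)  # -> ... 6.5
```

Because increments are short, each such estimate is soon checked against reality, and the historical data feeding the next increment's estimates improves continuously.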