Models for Competing on Schedule, Cost, and Quality

Transcript

  • 1. Competing on Schedule, Cost, and Quality: The Role of Software Models. Barry Boehm, USC. COCOMO/SCM Forum #16, October 24, 2001. (boehm@sunset.usc.edu) (http://sunset.usc.edu)
  • 2. Outline
    • Traditional and e-Services Development
    • - USC research model perspectives
    • Software Schedule-Cost-Quality Tradeoffs
    • - Risk exposure
    • - Development and ownership costs
    • The SAIV/CAIV/SCQAIV Process Models
    • Conclusions and References
  • 3. Traditional and e-Services Development
    • Traditional Development
    • Standalone systems
    • Stable requirements
    • Rqts. determine capabilities
    • Control over evolution
    • Enough time to keep stable
    • Repeatability-oriented process, maturity models
    • e-Services Development
    • Everything connected--maybe
    • Rapid requirements change
    • COTS capabilities determine rqts.
    • No control over COTS evolution
    • Ever-decreasing cycle times
    • Adaptive process models
  • 4. USC Model Integration Research: MBASE, CeBASE. [Diagram: four interacting model categories.]
    • Success Models: Win-Win, Business Case Analysis, Results Chains, Risk, Software Warranties, Correctness, RAD, Six Sigma, Stories, Award Fees, Agility, JAD, QFD, Golden Rule
    • Process Models: Waterfall, Spiral, RUP, XP, SAIV, CAIV, SCQAIV, Risk Management, Business Process Reengineering, CMMs, Peopleware, IPTs, Agile Development, Groupware, Easy WinWin, Experience Factory, GQM
    • Product Models: UML, XML, CORBA, COM, Architectures, Product Lines, OO Analysis & Design, Requirements, Operational Concepts, Domain Ontologies, COTS, GOTS
    • Property Models: COCOMO II, COCOTS, CORADMO, System Dynamics, Metrics, -ilities, COQUALMO, Simulation and Modeling
  • 5. Outline
    • Traditional and e-Services Development
    • - USC research model perspectives
    • Software Schedule-Cost-Quality Tradeoffs
    • - Risk exposure
    • - Development and ownership costs
    • The SAIV/CAIV/SCQAIV Process Models
    • Conclusions and References
  • 6. Competing on Schedule and Quality - A risk analysis approach
    • Risk Exposure: RE = Prob(Loss) × Size(Loss)
      • “Loss” – financial; reputation; future prospects, …
    • For multiple sources of loss: RE = Σ_sources [Prob(Loss) × Size(Loss)]_source
  • 7. Example RE Profile: Time to Ship - Loss due to unacceptable dependability. [Chart: RE = P(L) × S(L) vs. time to ship (amount of testing). Shipping early: many defects, high P(L); critical defects, high S(L). Shipping late: few defects, low P(L); minor defects, low S(L).]
  • 8. Example RE Profile: Time to Ship - Adds loss due to market share erosion. [Chart adds a curve rising with delay: few rivals, low P(L), and weak rivals, low S(L), early; many rivals, high P(L), and strong rivals, high S(L), late.]
  • 9. Example RE Profile: Time to Ship - Sum of risk exposures. [Chart: the sum of the two curves has a minimum, the "sweet spot," marking the risk-balanced amount of testing.]
  • 10. Comparative RE Profile: Safety-Critical System. [Chart: higher S(L) for defects moves the sweet spot toward more testing (High-Q sweet spot) relative to the mainstream sweet spot.]
  • 11. Comparative RE Profile: Internet Startup. [Chart: higher S(L) for delays moves the sweet spot toward less testing (low time-to-market sweet spot) relative to the mainstream sweet spot. TTM: time to market.]
  • 12. Conclusions So Far
    • Unwise to try to compete on both cost/schedule and quality
      • Some exceptions: major technology or marketplace edge
    • There are no one-size-fits-all cost/schedule/quality strategies
    • Risk analysis helps determine how much testing (prototyping, formal verification, etc.) is enough
      • Buying information to reduce risk
    • Often difficult to determine parameter values
      • Some COCOMO II values discussed next
  • 13. Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects. [Chart: relative cost/source instruction (0.8-1.3) vs. RELY rating (Very Low to Very High); defect risk runs from slight inconvenience (rough MTBF, mean time between failures, of 1 hour) to loss of human life (rough MTBF of 100 years). In-house support software sits at Nominal, relative cost 1.0.]
  • 14. [Same chart, adding the commercial cost leader at Low (0.92) and the commercial quality leader at High (1.10).]
  • 15. [Same chart, completed with the startup demo at Very Low (0.82) and safety-critical software at Very High (1.26). The full tradeoff:]
    • Very Low: slight inconvenience; rough MTBF 1 hour; relative cost 0.82 (startup demo)
    • Low: low, easily recoverable loss; rough MTBF 1 day; relative cost 0.92 (commercial cost leader)
    • Nominal: moderate, recoverable loss; rough MTBF 1 month; relative cost 1.0 (in-house support software)
    • High: high financial loss; rough MTBF 2 years; relative cost 1.10 (commercial quality leader)
    • Very High: loss of human life; rough MTBF 100 years; relative cost 1.26 (safety-critical)
  • 16. COCOMO II RELY Factor Dispersion. [Chart: dispersion of RELY ratings Very Low through Very High across the calibration projects (axis 0-100); the RELY effect has t = 2.6, where t > 1.9 is statistically significant.]
  • 17. COCOMO II RELY Factor Phenomenology - activity differences by phase:
    • Rqts. and Product Design. Very Low: little detail; many TBDs; little verification; minimal QA, CM, standards, draft user manual, test plans; minimal PDR. Very High: detailed verification, QA, CM, standards, PDR, documentation; IV&V interface; very detailed test plans, procedures.
    • Detailed Design. Very Low: basic design information; minimal QA, CM, standards, draft user manual, test plans; informal design inspections. Very High: detailed verification, QA, CM, standards, CDR, documentation; very thorough design inspections; very detailed test plans, procedures; IV&V interface; less rqts. rework.
    • Code and Unit Test. Very Low: no test procedures; minimal path test, standards check; minimal QA, CM; minimal I/O and off-nominal tests; minimal user manual. Very High: detailed test procedures, QA, CM, documentation; very thorough code inspections; very extensive off-nominal tests; IV&V interface; less rqts., design rework.
    • Integration and Test. Very Low: no test procedures; many requirements untested; minimal QA, CM; minimal stress and off-nominal tests; minimal as-built documentation. Very High: very detailed test procedures, QA, CM, documentation; very extensive stress and off-nominal tests; IV&V interface; less rqts., design, code rework.
  • 18. “Quality is Free”
    • Did Philip Crosby’s book get it all wrong?
    • Investments in dependable systems
      • Cost extra for simple, short-life systems
      • Pay off for high-value, long-life systems
  • 19. Software Life-Cycle Cost vs. Dependability. [Chart: relative cost to develop vs. COCOMO II RELY rating: 0.82 (Very Low), 0.92 (Low), 1.0 (Nominal), 1.10 (High), 1.26 (Very High).]
  • 20. Software Life-Cycle Cost vs. Dependability. [Chart adds a develop-and-maintain curve (labels 1.23, 1.10, 0.99, 1.07): once maintenance is included, low-RELY systems become relatively more expensive and high-RELY systems relatively less so.]
  • 21. Software Life-Cycle Cost vs. Dependability
    • Low dependability is inadvisable for evolving systems
    [Chart adds a curve for 70% maintenance (labels 1.05-1.20), further eroding the apparent development-cost advantage of the Very Low and Low ratings.]
  • 22. Software Ownership Cost vs. Dependability. [Chart: relative cost to develop, maintain, own, and operate. With operational-defect cost = 0, the curve matches slide 21; with operational-defect cost at Nominal dependability equal to the software life-cycle cost, Very Low rises to 2.55 and Low to 1.52, while the higher ratings drop to about 0.76 and 0.69.]
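To see how the development, maintenance, and operational-defect components combine, here is a minimal sketch under stated assumptions: the development multipliers come from the slides, but the maintenance multipliers and defect-cost scaling below are illustrative values chosen only to reproduce the chart's qualitative shape, not its specific numbers.

```python
# Sketch: relative ownership cost vs. COCOMO II RELY rating.
# DEV_MULT is from the slides; MAINT_MULT and DEFECT_COST are assumptions.

DEV_MULT = {"VL": 0.82, "L": 0.92, "N": 1.00, "H": 1.10, "VH": 1.26}
MAINT_MULT = {"VL": 1.50, "L": 1.20, "N": 1.00, "H": 0.90, "VH": 0.85}   # assumed
DEFECT_COST = {"VL": 2.5, "L": 1.6, "N": 1.0, "H": 0.5, "VH": 0.25}      # assumed

def ownership_cost(rely, maint_share=0.7, defect_cost_at_nominal=0.0):
    """Relative cost to develop, maintain, and operate at one RELY rating.

    maint_share: fraction of life-cycle effort spent in maintenance.
    defect_cost_at_nominal: operational-defect cost at Nominal RELY,
    expressed as a multiple of the software life-cycle cost."""
    dev = (1.0 - maint_share) * DEV_MULT[rely]
    maint = maint_share * DEV_MULT[rely] * MAINT_MULT[rely]
    defects = defect_cost_at_nominal * DEFECT_COST[rely]
    return dev + maint + defects

for rely in DEV_MULT:
    no_defects = ownership_cost(rely)                        # defect cost = 0
    with_defects = ownership_cost(rely, defect_cost_at_nominal=0.5)
    print(f"{rely}: {no_defects:.2f} without / {with_defects:.2f} with defect cost")
```

Whatever the exact factors, the structure makes the slide's point: adding maintenance and operational-defect costs inverts the development-cost ranking, so low RELY is cheapest only for short-life, low-value systems.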
  • 23. Conclusions So Far - 2
    • Quality is better than free for high-value, long-life systems
    • There is no universal dependability sweet spot
      • Yours will be determined by your value model
      • And the relative contributions of dependability techniques
    • Risk analysis helps answer “How much is enough?”
    • COCOMO II provides schedule-cost-quality tradeoff analysis framework
  • 24. Outline
    • Traditional and e-Services Development
    • - USC research model perspectives
    • Software Schedule-Cost-Quality Tradeoffs
    • - Risk exposure
    • - Development and ownership costs
    • The COCOMO Suite of Tradeoff Models
    • - COCOMO II, CORADMO, COQUALMO-ODC
    • The SAIV/CAIV/SCQAIV Process Models
    • Conclusions and References
  • 25. COCOMO II Book Table of Contents - Boehm, Abts, Brown, Chulani, Clark, Horowitz, Madachy, Reifer, Steece, Software Cost Estimation with COCOMO II, Prentice Hall, 2000
    • 1. Introduction
    • 2. Model Definition
    • 3. Application Examples
    • 4. Calibration
    • 5. Emerging Extensions
    • 6. Future Trends
    • Appendices: Assumptions, Data Forms, User’s Manual, CD Content
    • CD: video tutorials, USC COCOMO II.2000, commercial tool demos, manuals, data forms, web site links, Affiliate forms
  • 26. Purpose of COCOMO II
    • To help people reason about the cost and schedule implications of their software decisions
  • 27. Major Decision Situations Helped by COCOMO II
    • Software investment decisions
      • When to develop, reuse, or purchase
      • What legacy software to modify or phase out
    • Setting project budgets and schedules
    • Negotiating cost/schedule/performance tradeoffs
    • Making software risk management decisions
    • Making software improvement decisions
      • Reuse, tools, process maturity, outsourcing
  • 28. Need to ReEngineer COCOMO 81
    • New software processes
    • New sizing phenomena
    • New reuse phenomena
    • Need to make decisions based on incomplete information
  • 29. COCOMO II Model Stages
  • 30. Early Design and Post-Architecture Model
    • Effort: PM = A × Size^E × Π(effort multipliers), with the exponent E determined by the process scale factors
    • Schedule: TDEV = C × PM^F, with F also determined by the scale factors
    • Environment (effort multipliers): product, platform, people, project factors
    • Size: nonlinear reuse and volatility effects
    • Process (scale factors): constraint, risk/architecture, team, maturity factors
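As a concrete sketch of slide 30's structure, the Post-Architecture equations can be written out directly. The constants A, B, C, D below are the published COCOMO II.2000 calibration values; the sample inputs at the bottom are made-up illustrations.

```python
def cocomo2_post_arch(ksloc, effort_multipliers, scale_factors):
    """COCOMO II.2000 Post-Architecture effort (person-months) and
    schedule (months) estimates.

    effort_multipliers: environment factors (product, platform, people,
    project), each 1.0 at Nominal.
    scale_factors: the five process scale factors (PREC, FLEX, RESL,
    TEAM, PMAT), summed into the size exponent."""
    A, B = 2.94, 0.91              # COCOMO II.2000 effort constants
    C, D = 3.67, 0.28              # COCOMO II.2000 schedule constants
    E = B + 0.01 * sum(scale_factors)
    pm = A * ksloc ** E
    for em in effort_multipliers:
        pm *= em                   # multiplicative environment effects
    F = D + 0.2 * (E - B)          # schedule exponent shares the scale factors
    tdev = C * pm ** F
    return pm, tdev

# Illustrative inputs: 30 KSLOC, all multipliers Nominal except RELY = VH
pm, tdev = cocomo2_post_arch(30, effort_multipliers=[1.26],
                             scale_factors=[3.72] * 5)
print(f"Effort ~ {pm:.0f} PM, schedule ~ {tdev:.1f} months")
```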
  • 31. Nonlinear Reuse Effects. [Chart: relative cost vs. amount of software modified. The usual linear assumption runs straight from 0 to 1.0; data on 2954 NASA modules [Selby, 1988] shows a strongly nonlinear curve, with even a 4.6% modification costing about 0.55 of new-development cost, rising through about 0.70 toward 1.0 as the modified fraction grows.]
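This nonlinear penalty is what the COCOMO II reuse model captures. Below is a minimal sketch of its equivalent-size calculation; the AAF/AAM formulas follow the COCOMO II model definition, and the sample parameter values are illustrative.

```python
def equivalent_ksloc(adapted_ksloc, dm, cm, im, su=30.0, aa=4.0, unfm=0.4):
    """Equivalent new size for adapted code, per the COCOMO II reuse model.

    dm, cm, im: percent of design, code, and integration & test modified.
    su: software understanding penalty (10-50); aa: assessment and
    assimilation effort (0-8); unfm: programmer unfamiliarity (0.0-1.0)."""
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im      # adaptation adjustment factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return adapted_ksloc * aam

# Modifying 10% of design, 15% of code, 20% of integration of 20 KSLOC:
print(f"{equivalent_ksloc(20, dm=10, cm=15, im=20):.1f} equivalent KSLOC")
```

Note how the assessment and understanding terms give even tiny modifications a nonzero cost floor, matching the Selby data qualitatively.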
  • 32. COCOMO II.2000 Productivity Ranges. [Bar chart: productivity range (about 1.2 to 2.4) of each driver, from largest to smallest impact: Product Complexity (CPLX), Analyst Capability (ACAP), Programmer Capability (PCAP), Time Constraint (TIME), Personnel Continuity (PCON), Required Software Reliability (RELY), Documentation Match to Life Cycle Needs (DOCU), Multi-Site Development (SITE), Applications Experience (AEXP), Platform Volatility (PVOL), Use of Software Tools (TOOL), Storage Constraint (STOR), Process Maturity (PMAT), Language and Tools Experience (LTEX), Required Development Schedule (SCED), Data Base Size (DATA), Platform Experience (PEXP), Architecture and Risk Resolution (RESL), Precedentedness (PREC), Develop for Reuse (RUSE), Team Cohesion (TEAM), Development Flexibility (FLEX). Scale factor ranges computed at 10, 100, 1000 KSLOC.]
  • 33. COCOMO II Estimation Accuracy - percentage of sample projects within 30% of actuals, without / with calibration to data source:
    • COCOMO 81 (63 projects): effort 81%, schedule 61%
    • COCOMO II.1997 (83 projects): effort 52% / 64%, schedule 62% / 72%
    • COCOMO II.2000 (161 projects): effort 75% / 80%, schedule 65% / 81%
  • 34. COCOMO II RAD Extension (CORADMO). [Diagram: the COCOMO II cost drivers (except SCED) produce a baseline effort and schedule; the COCOMO II phase distributions (COPSEMO) break these into effort and schedule by stage; the RAD extension, driven by language level, experience, and the RVHL, DPRS, CLAB, RESL, PPOS, and RCAP factors, yields RAD effort and schedule by phase.]
  • 35. Effect of RCAP on Cost, Schedule
  • 36. Current COQUALMO System. [Diagram: COCOMO II plus COQUALMO's Defect Introduction and Defect Removal models. Inputs: software size estimate; software platform, project, product, and personnel attributes; defect removal profile levels (automation, reviews, testing). Outputs: software development effort, cost, and schedule estimate; number of residual defects; defect density per unit of size.]
  • 37. Defect Removal Rating Scales (COCOMO II, p. 263):
    • Automated Analysis. Very Low: simple compiler syntax checking. Low: basic compiler capabilities. Nominal: compiler extension, basic rqts. and design consistency. High: intermediate-level module, simple rqts./design. Very High: more elaborate rqts./design, basic distributed processing. Extra High: formalized specification and verification, advanced distributed processing.
    • Peer Reviews. Very Low: no peer review. Low: ad-hoc informal walk-throughs. Nominal: well-defined preparation, review, minimal follow-up. High: formal review roles, well-trained people, basic checklists. Very High: root cause analysis, formal follow-up, use of historical data. Extra High: extensive review checklists, statistical control.
    • Execution Testing and Tools. Very Low: no testing. Low: ad-hoc test and debug. Nominal: basic test, test criteria based on checklists. High: well-defined test sequences and a basic test coverage tool system. Very High: more advanced test tools and preparation, distributed monitoring. Extra High: highly advanced tools, model-based test.
  • 38. Defect Removal Estimates - Nominal Defect Introduction Rates. [Chart: delivered defects per KSLOC vs. composite defect removal rating.]
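The structure behind that chart is simple enough to sketch: defects introduced per KSLOC, then each removal profile removes a fraction of what remains. The introduction rates and removal fractions below are illustrative assumptions, not COQUALMO's calibrated values.

```python
# Sketch of the COQUALMO defect pipeline: introduction, then removal.
# All rates and fractions here are illustrative, not calibrated values.

INTRODUCED_PER_KSLOC = {"rqts": 10, "design": 20, "code": 30}   # assumed rates

# Assumed fraction of remaining defects removed by each profile at a
# given rating (automated analysis, peer reviews, execution testing).
REMOVAL_FRACTION = {
    "automated_analysis": 0.10,
    "peer_reviews": 0.40,
    "execution_testing": 0.50,
}

def residual_defect_density(introduced=INTRODUCED_PER_KSLOC,
                            removal=REMOVAL_FRACTION):
    """Delivered defects per KSLOC after all removal activities."""
    total = sum(introduced.values())
    for fraction in removal.values():
        total *= (1 - fraction)          # each profile removes its share
    return total

print(f"{residual_defect_density():.1f} delivered defects/KSLOC")
```

Raising any one profile's rating (and thus its removal fraction) multiplies down the delivered-defect density, which is why the composite rating axis in the chart falls so steeply.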
  • 39. COQUALMO-ODC Model Objectives
    • Support cost-schedule-quality tradeoff analysis
    • Provide a reference for defect monitoring and control
    • Evolve to cover all major classes of projects
    • - With different defect distributions (e.g., COTS-based)
    • - Start simple; grow opportunistically
  • 40. Example of Desired Model - I. [Chart: effort (30-90 PM) and cost at $20K/PM ($600K-$1800K) vs. time (6-15 mo.). Current x-axis: RELY rating (VL, L, N, H, VH); desired x-axis: delivered defects per KSLOC (100, 10, 1, 0.1, 0.01) or MTBF in hours (1, 24, 720, 17,000, 10^6). Details for any given point on the next slide.]
  • 41. Example of Desired Model - II. 30 KSLOC; RELY = VH; 75 PM; 12 mo.; delivered defect density = 0.3. [Table, flattened in the original: for each phase (rqts., design, code and unit test, integration and test), effort in PM (12.5, 43.5, 19), cost in $K (250, 870, 380), schedule in months (3.1, 5.8, 3.1), and defects in/out/left by type (rqts., design, timing, interface, …), with running totals 60/50/10, 130/116/24, 264/234/30, ending at 8/37.7/0.3 delivered.]
  • 42. Current COQUALMO shortfalls
    • One-size-fits-all model
    • - May not fit COTS/Web, embedded applications
    • Defect uniformity, independence assumptions
    • - Unvalidated hypotheses
    • COCOMO II cost-schedule-quality tradeoffs go only to RELY levels
    • - Not to delivered defect density, MTBF
    • Need for more calibration data
    • - ODC data could extend and strengthen model
  • 43. Outline
    • Traditional and e-Services Development
    • - USC research model perspectives
    • Software Schedule-Cost-Quality Tradeoffs
    • - Risk exposure
    • - Development and ownership costs
    • The COCOMO Suite of Tradeoff Models
    • - COCOMO II, CORADMO, COQUALMO-ODC
    • The SAIV/CAIV/SCQAIV Process Models
    • Conclusions and References
  • 47. The SAIV Process Model
    • 1. Shared vision and expectations management
    • 2. Feature prioritization
    • 3. Schedule range estimation
    • 4. Architecture and core capabilities determination
    • 5. Incremental development
    • 6. Change and progress monitoring and control
  • 48. Shared Vision, Expectations Management, and Feature Prioritization
    • Use stakeholder win-win approach
    • Developer win condition: Don’t overrun fixed 9-month schedule
    • Clients’ win conditions: 24 months’ worth of features
    • Win-Win negotiation
      • Which features are most critical?
      • COCOMO II: How many features can be built within a 9-month schedule?
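That last step can be sketched by reusing the Post-Architecture equations from slide 30's sketch: search for the largest size whose estimated schedule fits within the fixed 9 months. The multiplier and scale-factor inputs below are illustrative, and real SAIV planning would also apply COCOMO II's SCED handling.

```python
def max_ksloc_for_schedule(months, effort_multipliers, scale_factors):
    """Largest size (KSLOC) whose COCOMO II.2000 schedule estimate fits
    the fixed schedule; a simple linear search suffices at this scale."""
    A, B, C, D = 2.94, 0.91, 3.67, 0.28   # COCOMO II.2000 constants
    E = B + 0.01 * sum(scale_factors)
    F = D + 0.2 * (E - B)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    ksloc = 0.0
    while True:
        pm = A * (ksloc + 0.5) ** E * em_product
        if C * pm ** F > months:          # the next step would overrun
            return ksloc
        ksloc += 0.5

# Illustrative: nominal multipliers, mid-range scale factors, 9 months
print(max_ksloc_for_schedule(9.0, [1.0], [3.72] * 5), "KSLOC fit in 9 months")
```

Running this for the estimate range (optimistic to pessimistic inputs) gives the size band that feature prioritization must fit, which is the subject of the next two slides.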
  • 49. COCOMO II Estimate Ranges
  • 50. Software Product Production Function. [Chart: value of software product to organization vs. investment. The curve climbs through the operating system, data management, and basic application functions (high-payoff segment), then main application functions and humanized I/O, then flattens through secondary application functions, animated graphics, tertiary application functions, and natural speech input (diminishing returns). Annotations give availability of delivery by time T: about 90% for the early segments and 50% for the later ones, at T = 6 mo. and T = 12 mo.]
  • 51. Core Capability, Incremental Development, and Coping with Rapid Change
    • Core capability not just top-priority features
      • Useful end-to-end capability
      • Architected for ease of adding, dropping marginal features
    • Worst case: Deliver core capability in 9 months, with some extra effort
    • Most likely case: Finish core capability in 6-7 months
      • Add next-priority features
    • Cope with change by monitoring progress
      • Renegotiate plans as appropriate
  • 52. SAIV Experience I: USC Digital Library Projects
    • Life Cycle Architecture package in fixed 12 weeks
      • Compatible operational concept, prototypes, requirements, architecture, plans, feasibility rationale
    • Initial Operational Capability in 12 weeks
      • Including 2-week cold-turkey transition
    • Successful on 24 of 26 projects
      • Failure 1: too-ambitious core capability
        • Cover 3 image repositories at once
      • Failure 2: team disbanded
        • Graduation, summer job pressures
    • 92% success rate vs. the 16% industry figure in the Standish Report
  • 53. Rapid Value TM Project Approach. [Diagram: four phases (Define, Design, Develop, Deploy) separated by "lines of readiness" asking "Are we ready for the next step?", with iteration on scope, listening, and delivery focus. Define: identify system actors, document business processes, generate use cases, define basic development strategies. Design: object domain modeling; detailed object design and logical data model; object interactions and system services; polish design and build plan. Develop: Build 1, Build 2, stabilization build, release to test. Deploy: beta program, pilot program, production.]
    • 16-24 week fixed schedule
    LA SPIN Copyright © 2001 C-bridge
  • 54. Conclusions: SAIV Critical Success Factors
    • Working with stakeholders in advance to achieve a shared product vision and realistic expectations;
    • Getting clients to develop and maintain prioritized requirements;
    • Scoping the core capability to fit within the high-payoff segment of the application’s production function for the given schedule;
    • Architecting the system for ease of adding and dropping features;
    • Disciplined progress monitoring and corrective action to counter schedule threats.
    • Also works for Cost as Independent Variable
      • And “Cost, Schedule, Quality: Pick All Three”
  • 55. Conclusions
    • Future IT systems require new model perspectives
    • - Product, process, property, success models
    • USC COCOMO and MBASE model families helpful
    • - Tradeoff and decision analysis
    • - Integrated product and process development
    • - Rapid Application Development
    • - Process management and improvement
    • USC-IBM COQUALMO-ODC model a valuable next step
  • 56. Further Information
    • V. Basili, G. Caldiera, and H. Rombach, “The Experience Factory” and “The Goal Question Metric Approach,” in J. Marciniak (ed.), Encyclopedia of Software Engineering, Wiley, 1994.
    • B. Boehm, C. Abts, A.W. Brown, S. Chulani, B. Clark, E. Horowitz, R. Madachy, D. Reifer, and B. Steece, Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
    • B. Boehm and D. Port, “Escaping the Software Tar Pit: Model Clashes and How to Avoid Them,” ACM Software Engineering Notes, January 1999.
    • B. Boehm et al., “Using the WinWin Spiral Model: A Case Study,” IEEE Computer, July 1998, pp. 33-44.
    • R. van Solingen and E. Berghout, The Goal/Question/Metric Method, McGraw-Hill, 1999.
    • COCOMO II, MBASE items: http://sunset.usc.edu
    • CeBASE items: http://www.cebase.org