How to avoid drastic project change (using stochastic stability)
by Tim Menzies, Steve Williams, Oussama El-waras, Barry Boehm, Jairus Hihn

Published in: Technology, Business

Transcript

  • 1. + how to avoid drastic software process change (using stochastic stability) WVU: Tim Menzies, Steve Williams, Ous Elwaras USC: Barry Boehm JPL: Jairus Hihn Apr 6, 2009
  • 2. + 2 this talk: background; digression; internal vs drastic changes; what space do we explore? what is interesting/different here? results (on 4 projects); related work; conclusion & future work; questions? comments?
  • 4. + 4 stochastic stability. “For all is but a woven web of guesses.” -- Xenophanes (570–480 BCE). Seek what holds true over the space of all guesses; surprisingly, happily, such stable conclusions exist. Bad idea for: the safety-critical guidance system of a manned rocket. Good idea for: exploring the myriad space of possibilities associated with software project management.
  • 5. + 5 stochastic stability and option exploration. Software project managers have more options than they think. It is possible (even useful) to push back against drastic change. Q: can we find local options that out-perform drastic change? A: yes, we can. (Internal option: adjust current project options. Drastic changes: fire people; do it all again, in Ada; deliver late.)
  • 6. + 6 expectation management. We explore models built from real project data. We perform experiments with models of software projects, not experiments with actual projects. Such experiments are hard to do: gone are the days of Victor Basili-style SEL experimentation, where the researchers could tell the project what to do; software developers are now more aggressive in selecting their own methods. We hope to apply this to real “lab rats”, soon. Meanwhile, we sharpen our tools and publicize our results to date (see below).
  • 7. + 7 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 8. + 8 timm = ?
  • 9. + 9 timm = nerd hippy (type 3)
  • 10. + 10 timm = nerd hippy (type 3). Type 1: habitat Haight-Ashbury; goals: what goals? Goals are a construct, man. Free your mind! Type 2: habitat (a) MIT (b) Berkeley (c) Portland; example: Richard Stallman; goals: lecturing you on how to do it better. Type 3: habitat Mum’s living room, Helsinki; example: Linus Torvalds; goals: finding out how we can build better tools, together.
  • 12. + 12 timm = nerd hippy (type 3). Type 1: habitat Haight-Ashbury, San Francisco; examples: happily, very few; goals: goals are a construct, man. Free your mind! Type 2: habitat (a) MIT, Boston (b) Berkeley (c) Portland; example: Richard Stallman; goals: lecturing you on how to do it better. Type 3: habitat Mum’s living room, Helsinki; example: Linus Torvalds; goals: finding out how we can build better tools, together.
  • 13. + 13 hippies share (the “PROMISE” project). Repeatable, improvable, (?refutable) software engineering experiments: “put up or shut up”; submit the paper AND the data. Activities: annual conference (this year, co-located with ICSE); journal special issues (2008, 2009, Empirical Software Engineering); on-line repository: http://promisedata.org/data, contributions welcome!
  • 14. + 14 plays well with others. You have data? Ok then… With NASA, IV&V SE research chair: predicting software defects (2001, 2008). With Dan Port: ASE’08, software process models to assess agile programming. With Andrian Marcus: ICSM’08, SEVERIS, automatic audits for text reports of software bugs; ICSM’09 (submitted), incorporating user feedback for better concept location. With Barry Boehm: see below. With Jamie Andrews: TSE’09 (submitted), genetic algorithms to design test cases that maximize code coverage.
  • 15. + 15 plays well with others
  • 16. + 16 Plays well with others (but not as well as some)
  • 17. + 17 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 18. + 18 internal vs drastic changes   Internal changes: within the space of current project options   Drastic change: cry havoc and let slip the dogs of war
  • 19. + 19 internal vs drastic changes   Internal changes: within the space of current project options   Drastic change: cry havoc and let slip the dogs of war Internal choices
  • 20. + 20 internal vs drastic changes   Internal changes: within the space of current project options   Drastic change: cry havoc and let slip the dogs of war Internal choices Drastic changes
  • 21. + 21 internal vs drastic changes   Internal changes: within the space of current project options   Drastic change: cry havoc and let slip the dogs of war Internal choices Drastic changes Can internal choices out-perform drastic change?
  • 22. + 22 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 23. + 23 Estimates = model(P, T), where P = project, T = tunings, G = goals
  • 24. + 24 Estimates = model(P, T): some project options (P = project, T = tunings, G = goals). Note: controllability assumption.
  • 25. + 25 Estimates = model(P, T): some project options, some tuning options. Tuning ranges seen in 161 projects, learned via regression [Boehm 2000]. Decrease effort: acap, apex, ltex, pcap, pcon, plex, sced, site, tool. Increase effort: cplx, data, docu, pvol, rely, ruse, stor, time. Note: controllability assumption. G = goals.
  • 26. + 26 Estimates = model(P, T): some project options, some tuning options (P = project, T = tunings, G = goals). Tuning ranges seen in 161 projects, learned via regression [Boehm 2000]. Decrease effort: acap, apex, ltex, pcap, pcon, plex, sced, site, tool. Increase effort: cplx, data, docu, pvol, rely, ruse, stor, time. Note: controllability assumption. An objective function: find the least p from P that reduces effort (E), defects (D), and time to complete in months (M).
  • 27. + 27 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 28. + 28 what is interesting/different here? It is possible to predict software development effort [Boehm81, Chulani99]. Software development models can be used to debate trade-offs between different management options [Boehm00], and many others besides. Such decision making need not wait on detailed local domain data collection [Fenton08, Menzies08]. AI is useful for software engineering: AI tools can explore and rank more options than humans [Menzies00], and many more besides. AI tools might be better than standard methods (many papers). The options found by the AI tools are better than (at least some) management repair actions (this paper).
  • 30. + 30 what is interesting/different here? Effort = a * KLOC^e * prod(multipliers), where e = b + 0.01 * sum(scale factors); defaults <a,b> = <2.94, 0.91>. Local calibration [Boehm81]: tune <a,b> using local data. Bayesian calibration [Chulani99]: combine expert intuition with historical data. Estimates are within 30% of actuals, 69% of the time. (Chart: relative impact above lowest value for team, flex, resl, pmat, prec, cplx, acap, pcap, time, pcon, rely, site, docu, apex, tool, pvol, stor, sced, ltex, data, plex, ruse.)
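The formula on this slide lends itself to a direct sketch. Below is a minimal Python rendering of the quoted COCOMO-II form, Effort = a * KLOC^e * prod(effort multipliers) with e = b + 0.01 * sum(scale factors) and defaults <a, b> = <2.94, 0.91>; the sample scale-factor and multiplier values in the usage line are illustrative assumptions, not calibrated settings.

```python
def cocomo_effort(kloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """Effort in person-months: a * KLOC^e * prod(effort multipliers)."""
    e = b + 0.01 * sum(scale_factors)   # exponent from the scale factors
    effort = a * (kloc ** e)
    for em in effort_multipliers:       # one term per effort multiplier
        effort *= em
    return effort

# Illustrative inputs only: a 100 KLOC project with made-up settings.
estimate = cocomo_effort(100, [3.72, 3.04, 4.24, 4.38, 4.68], [1.0, 1.1, 0.9])
```

With all scale factors at zero and no multipliers, cocomo_effort(1, [0]*5, []) collapses to a, which is one quick sanity check on the arithmetic.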
  • 32. + 32 what is interesting/different here? Boehm’s value-based SE challenge: most SE techniques are “value-neutral” (euphemism for “useless”); tune recommendations and process decisions to the particulars of the company. E.g. [Huang06] mapped a business into Boehm’s models, developed a “risk exposure measure” combining (a) racing delivery to market with (b) delivered software defects, and ran two scenarios.
  • 34. + 34 what is interesting/different here? Yet another victim of the data drought. [Fenton07]: “...much of the current software metrics research is inherently irrelevant to the industrial mix ... any software metrics program that depends on some extensive metrics collection is doomed to failure.” E.g., after 26 years, Boehm collected fewer than 200 sample projects for the COCOMO effort database.
  • 35. + 35 what is interesting/different here? For project options P and internal model tunings T: Estimates = model(P, T). Tuning uses local data to constrain T (e.g. Boehm’s local calibration constrains <a,b>). If T’s variance dominates P’s variance, then you must tune. But what if P’s variance dominates? Then control estimates by controlling P: keep “t” random (no local data for tuning); find the smallest “p” from P (random project) that most changes estimates. [Menzies08]: var(P) dominates in Boehm’s COCOMO models; just changing P yields estimates similar to standard methods; local data collection is useful, not mandatory. AI finds “p” in Boehm’s effort/time/defect predictors.
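The variance argument on this slide can be illustrated with a small Monte Carlo sketch: hold T fixed while P varies, then hold P fixed while T varies, and compare the spread of the resulting estimates. The toy model() and all its ranges below are assumptions standing in for Boehm's models, chosen only to show the shape of the experiment.

```python
import random
import statistics

def model(p, t):
    """Toy stand-in estimator: effort grows with size, complexity, tunings."""
    return t["a"] * (p["kloc"] ** t["b"]) * p["cplx"]

def estimate_spread(vary_p, runs=1000):
    """Std. dev. of estimates when only P (or only T) is allowed to vary."""
    estimates = []
    for _ in range(runs):
        p = {"kloc": random.uniform(10, 100) if vary_p else 50.0,
             "cplx": random.uniform(0.7, 1.7) if vary_p else 1.0}
        t = {"a": 2.94 if vary_p else random.uniform(2.5, 3.4),
             "b": 0.91 if vary_p else random.uniform(0.85, 1.10)}
        estimates.append(model(p, t))
    return statistics.stdev(estimates)

# If estimate_spread(vary_p=True) dominates estimate_spread(vary_p=False),
# then controlling P matters more than tuning T, as [Menzies08] reports
# for Boehm's COCOMO models.
```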
  • 36. + 36 what is interesting/different here? Mark Harman: search-based SE. Many management decisions are over-constrained: no solution satisfies all users, all criteria (e.g. better, faster, cheaper: pick any two). Many tools: data mining to learn defect predictors (see Jan IEEE TSE ’07); genetic algorithms for test case generation (recall work with Andrews); simulated annealing for software process planning (my ASE’07 paper); AI search for project planning (see below); abduction (a.k.a. partial evaluation + constraints).
  • 37. + 37 what is interesting/different here? The “one slide” rule.
  • 39. + 39 what is interesting/different here? At each “x”, AI search (*) finds and takes the next best decision. After each decision, run Boehm’s models 100 times (for the as-yet undecided options, select their values at random). Prune spurious final decisions with a back-select. Search methods = simulated annealing, beam, issamp, keys, a-star, maxwalksat, dfid, LDS…
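The decision loop this slide describes (greedily take the next best decision, score each candidate with 100 Monte Carlo runs over the still-undecided options, then prune with a back-select) can be sketched as below. The option table and the toy effort model inside score() are illustrative assumptions, not the paper's actual COCOMO-based models.

```python
import random

# Candidate project options; in the paper these would be COCOMO-style
# attributes, here just three made-up attributes with settings 1..3.
OPTIONS = {"pmat": [1, 2, 3], "acap": [1, 2, 3], "sced": [1, 2, 3]}

def score(decided, runs=100):
    """Mean of a toy effort model over Monte Carlo runs; undecided
    options take random values on each run. Lower is better."""
    total = 0.0
    for _ in range(runs):
        p = {k: decided.get(k, random.choice(vs)) for k, vs in OPTIONS.items()}
        total += p["pmat"] + p["acap"] + 2 * p["sced"]
    return total / runs

def forward_select():
    """At each step, fix the single (option, value) pair that most
    improves the Monte Carlo score."""
    decided = {}
    while len(decided) < len(OPTIONS):
        k, v = min(((k, v) for k, vs in OPTIONS.items() if k not in decided
                    for v in vs),
                   key=lambda kv: score({**decided, kv[0]: kv[1]}))
        decided[k] = v
    return decided

def back_select(decided):
    """Prune spurious final decisions: drop any decision whose removal
    does not worsen the score."""
    for k in list(decided):
        trimmed = {x: v for x, v in decided.items() if x != k}
        if score(trimmed) <= score(decided):
            decided = trimmed
    return decided
```

Running forward_select() then back_select() mimics the slide's loop end to end: the greedy pass settles on the low-effort settings, and the pruning pass keeps only decisions that still pay their way.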
  • 40. + 40 what is interesting/different here? IMPORTANT: this representation allows managers to perform trade-offs on our recommendations.
  • 41. + 41 what is interesting/different here? Conventional optimizations: (1) one solution, (2) constraining all choices. AI search: (1) N solutions, (2) provides neighborhood information. Mark Harman: “Solution robustness may be as important as solution functionality. For example, it may be better to locate an area of the search space that is rich in fit solutions, rather than identifying an even better solution that is surrounded by a set of far less fit solutions.”
  • 42. + 42 what is interesting/different here? Other uncertainty-in-SE-estimates work: typically Bayes nets; usually a single goal (defects [Fenton08], effort [Pendharkar05]); little (?no) trade-off analysis to understand the neighborhood of a solution or a minimal solution. We explore multiple, possibly competing, goals: better AND faster AND cheaper. We offer neighborhood solutions; we offer trade-offs between solution size and effectiveness; we can work in very high dimensional spaces.
  • 43. + 43 what is interesting/different here? Even advanced visualization methods fail after 5–10 dimensions.
  • 44. + 44 what is interesting/different here? [Coarfo00] and [Gu97]: AI methods are 100 times faster than (e.g.) integer programming for this kind of task. Our tools are like optimizers that make no assumption of linearity, continuity, single maxima, or even smoothness. Traditional gradient descent optimizers assume smooth surfaces; but what of local maxima? And what if our shapes are not smooth? [Baker07]: learn <a,b> for Boehm’s model on NASA data 1000 times, picking 90% of the data at random each time.
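The [Baker07] experiment this slide cites (refit <a, b> many times on random 90% subsamples and watch how widely the tunings vary) can be sketched as follows. The synthetic project data is an assumption standing in for the NASA dataset, and the log-linear least-squares fit is one common way to calibrate Effort = a * KLOC^b, not necessarily the method Baker used.

```python
import math
import random

random.seed(1)

# Synthetic stand-in for the NASA projects: effort = 2.94 * KLOC^0.91, noised.
projects = [(kloc, 2.94 * kloc ** 0.91 * random.uniform(0.5, 2.0))
            for kloc in [random.uniform(5, 200) for _ in range(50)]]

def fit_ab(sample):
    """Least-squares fit of log(effort) = log(a) + b * log(kloc)."""
    xs = [math.log(k) for k, _ in sample]
    ys = [math.log(e) for _, e in sample]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b * mx), b

# Refit on 1000 random 90% subsamples, as in the slide's description.
fits = [fit_ab(random.sample(projects, int(0.9 * len(projects))))
        for _ in range(1000)]
a_vals, b_vals = zip(*fits)
# The spread of (a, b) across subsamples shows how unstable tunings can be.
```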
  • 46. + 46 what is interesting/different here? P(ground)= Generate consequences of drastic change: Monte Carlo simulation of the above. Contrast with a good selection of internal choices. Control estimates by controlling P: keep “t” random (no local data for tuning); find the smallest “p” from P (random project) that most improves scores from estimates.
  • 47. + 47 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 48.–53. + (results on 4 projects: figure-only slides; no text to transcribe)
  • 54. + 54 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 55. + 55 Related Work. Other search-based SE: focuses on few tools (SA, GA, tabu; we started with SA, but moved on); exciting new generation of tools: constraint satisfaction algorithms, stochastic SAT solvers to explore. Other parametric cost models: PRICE-S, SLIM, or SEER-SEM; not open source (have secrets); ?possibly over-elaborated; may generate a range of estimates, but no search to find better options. Other instance-based effort tools: ANGEL, Shepperd’s nearest neighbor; no parametric form; no way to describe the shape of the theory, no way to explore perturbations of that theory; discrete methods: insightful? fun! Other COCOMO work: very focused on regression methods & tuning; problems with data drought, tuning variance, performance variance.
  • 56. + 56 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 57. + 57 Conclusion. Software project managers have more options than they think. It is possible (even useful) to push back against drastic change. Q: can we find local options that out-perform drastic change? A: yes, we can. Use models whose estimates are dominated by project variance: Estimates = model(P, T). Control estimates via P: keep “t” random (no need for local tuning data); using AI, find the fewest parts of P that most change estimates. (Internal option: adjust current project options. Drastic changes: fire people; do it all again, in Ada; deliver late.)
  • 58. + 58 Future work. Constraint logic programming: we haven’t shown the dark side of our models, the procedural kludges regarding conditional co-dependencies. Much recent interest in constrained regression: users offer hints on what kinds of theories they’d accept, and these hints bias the search algorithms. Is CLP a general/useful framework for adding “hinting” to our current tools?
  • 59. + 59 background   digression   internal vs drastic changes   what space do we explore?   what is interesting/different here?   results (on 4 projects)   this talk related work   conclusion & future work   questions? comments?  
  • 60. + 60 my research question: why are humans so successful? The world is a very complex place: how do dumb humans get by? How did dummies like me (?and you) build things as complex as: the internet? The international airline network? The Apollo moon rocket? (400K parts, 2K contractors, worked flawlessly)
  • 61. + 61 why do dummies get by? Answer #1: we don’t get by. Computers crash; economic systems fail. Sure, we get some failures, but why don’t we fail all the time? (Zipf’s law: reuse frequency of library functions in LINUX, Sun OS, Mac OS.) Answer #2: some of us aren’t so dumb (Kepler, Descartes, Newton, Planck); so few of them, too many “Menzies”. Answer #3: the world is not as complex as it appears. Key variables: a few things set the rest; most possible differences aren’t; more regularities than you might expect (distribution of change-prone classes: Koffice, mozilla).
  • 62. + 62 applications of “keys”. Rewrite all your algorithms, assuming keys. Reduce complex problems to simpler ones: simpler, faster code. Reduce complex answers to simpler ones: shorter, clearer theories.
  • 63. + 63 what is interesting/different here? In effort estimation, useful for bridging expert vs model-based estimation methods. Supports Jorgensen’s expert judgment best practices: 1. evaluate estimation accuracy; 2. avoid conflicting estimation goals; 3. ask the estimators to justify and criticize their estimates; 4. avoid irrelevant and unreliable estimation information; 5. use documented data from previous development tasks; 6. find estimation experts with relevant domain background; 7. estimate top-down + bottom-up; 8. use estimation checklists; 9. combine estimates of many experts + estimation strategies; 10. assess the uncertainty of the estimate; 11. provide feedback on estimation accuracy; 12. provide estimation training opportunities. (M. Jorgensen. A review of studies on expert estimation of software development effort. Journal of Systems and Software, 2004.)
  • 64. + 64 what is interesting/different here? It is possible to predict software In effort estimation, useful for bridging expert vs   development effort [Boehm81, model-based estimation methods Chulani99] Support Jorgensen's Expert Judgment Best Practices: development models can be 1. evaluate estimation accuracy,  Software Cross-val 2. avoid conflicting estimation goals used to debate trade-offs between different management options: 3. ask the estimators to justify and [Boehm00], and many others besides. criticize their estimates 4. avoid irrelevant and unreliable estimation information Such decision making need not wait on   5. use documented data from previous detailed local data domain collection development tasks [Fenton08, Menzies08] 6. find estimation experts with relevant domain background AI is useful for software engineering   7. estimate top-down +bottom-up 8. use estimation checklists 9. combine estimates of many tools can explore and rank more  AI experts +estimation strategies options that humans [Menzies00], and 10. assess the uncertainty of the estimate many more besides 11. provide feedback on estimation accuracy 12. provide estimation training opportunities AI tools might be better than standard   (M. Jorgensen. A review of studies on expert estimation of software methods (many papers). development effort. Journal of Systems and Software, 2004). The options found by the AI tools are   better than (at least some) management repair actions (this paper)
  • 65. + 65 what is interesting/different here? Same slide, annotated "Feature selection, Instance-based learning" against best practice 4 (avoid irrelevant and unreliable estimation information).
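  The "instance-based learning" callout on this slide can be illustrated with a small analogy-based estimator: find the past projects most similar to the new one and reuse their effort. This is a sketch in Python; the project records and the (kloc, team_size) feature choice are hypothetical, not taken from the deck.

  ```python
  def analogy_estimate(new_features, history, k=2):
      """Instance-based (analogy) estimation: find the k most similar past
      projects by Euclidean distance over the features, then average their
      recorded effort."""
      def dist(a, b):
          return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

      nearest = sorted(history, key=lambda rec: dist(new_features, rec[0]))[:k]
      return sum(effort for _, effort in nearest) / k

  # Hypothetical history: (features=(kloc, team_size), effort) records.
  history = [((10, 3), 40), ((12, 4), 48), ((50, 10), 200), ((55, 12), 230)]
  print(analogy_estimate((11, 3), history))  # → 44.0 (mean of the two small projects)
  ```

  Dropping a feature from the distance function here is exactly what feature selection does: it removes "irrelevant and unreliable estimation information" before the analogies are drawn.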
  • 66. + 66 what is interesting/different here? Same slide, annotated "Data mining" against best practice 5 (use documented data from previous development tasks).
  • 67. + 67 what is interesting/different here? Same slide, annotated "Ensemble learning" against best practice 9 (combine estimates of many experts + estimation strategies).
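  The "ensemble learning" callout maps onto best practice 9: combine several independent estimates rather than trusting one. A minimal sketch in Python (the person-month figures are invented for illustration):

  ```python
  import statistics

  def combine_estimates(estimates):
      """Ensemble of estimates (practice 9): take the median of several
      independent effort estimates, which resists a single wild outlier."""
      return statistics.median(estimates)

  # Hypothetical estimates (person-months) from three experts and two models.
  estimates = [120, 135, 90, 400, 110]
  print(combine_estimates(estimates))  # → 120 (the 400 outlier has no pull)
  ```

  The median is only one combination rule; a mean, a trimmed mean, or a weighted vote over learners are equally standard choices in ensemble methods.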
  • 68. + 68 what is interesting/different here? Same slide, annotated "Report variance in cross-validation" against best practice 10 (assess the uncertainty of the estimate).
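  The final callout, reporting variance across cross-validation folds, serves both practice 1 (evaluate estimation accuracy) and practice 10 (assess the uncertainty of the estimate). A sketch in Python; the (loc, effort) records and the proportional-productivity estimator are invented for illustration:

  ```python
  import statistics

  def kfold_mre(data, estimator, k=3):
      """k-fold cross-validation: train on k-1 folds, test on the held-out
      fold, and collect the mean magnitude of relative error (MMRE) per fold."""
      folds = [data[i::k] for i in range(k)]
      scores = []
      for i in range(k):
          test = folds[i]
          train = [row for j, f in enumerate(folds) if j != i for row in f]
          model = estimator(train)
          mres = [abs(model(loc) - actual) / actual for loc, actual in test]
          scores.append(sum(mres) / len(mres))
      return scores

  def productivity_estimator(train):
      """Toy estimator: effort is proportional to size, with the productivity
      ratio fitted as the mean effort-per-LOC of the training records."""
      ratio = statistics.mean(effort / loc for loc, effort in train)
      return lambda loc: ratio * loc

  # Hypothetical (loc, effort) project records.
  projects = [(10, 22), (20, 39), (30, 61), (40, 83), (50, 99), (60, 120)]
  scores = kfold_mre(projects, productivity_estimator, k=3)
  print("per-fold MMRE:", [round(s, 3) for s in scores])
  print("mean:", round(statistics.mean(scores), 3),
        "variance:", round(statistics.variance(scores), 3))
  ```

  Reporting the per-fold spread, not just the mean error, is what distinguishes this from a single train/test split: a low mean with high variance is itself a warning about the estimate's uncertainty.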