Sarcia idoese08

  1. An Approach to Improving Parametric Estimation Models in case of Violation of Assumptions
     Author: S. Alessandro Sarcià (1,2) [email_address]
     Advisors: Giovanni Cantone (1), Victor R. Basili (2,3)
     (1) Dept. of Informatica, Sistemi e Produzione, University of Rome “Tor Vergata”; (2) Dept. of Computer Science, University of Maryland; (3) Fraunhofer Center for ESE, Maryland
  2. Outline
     • Motivation (Why)
     • Objectives (What)
     • Roadmap (How)
     • The problem
     • The solution
     • The application
     • A case study
     • Conclusion & Benefits
     • Questions & Feedback
  3. MOTIVATION
  4. Predicting software engineering variables accurately is the basis for the success of mature organizations; this is still an unsolved problem. Our point of view:
     • Prediction is about estimating values based on mathematical and statistical approaches (no guessing), e.g., regression functions.
     • Variables are cost, effort, size, defects, fault proneness, number of test cases, and so forth.
     • Success refers to delivering software systems on time, on budget, and with the quality initially required. In software estimation, success is about providing estimates as close to the actual values as possible (the error is less than a stated threshold).
     • Focus: we take a wider view of success, namely keeping prediction uncertainty within acceptable thresholds (risk analysis on the estimation model).
     • The organizations we refer to are learning organizations that aim at improving their success over time.
  5. OBJECTIVES
  6. Objectives (EM = Estimation Model)
     • Analyze the estimation risk (uncertainty) of the estimation model, i.e., the behavior of the EM with respect to the estimation error over its history (Is it too risky to use the chosen model? What is the model's reliability?)
     • State a strategy for mitigating the risk of estimation failures (we cannot remove the error completely)
     • State a strategy for improving the estimation model (improvement over time), not for finding the best model (novelty)
  7. ROADMAP
  8. An overview of the approach. To reach our objectives:
     • We removed the assumptions on the regression functions and dealt with the consequences (the problem: analyzing the uncertainty)
     • We tailored the Quality Improvement Paradigm (QIP) into an Estimation Improvement Process (EIP) specific for prediction (the solution)
     • We defined a particular kind of Artificial Neural Network (ANN) and a strategy for analyzing the estimation risk in case of violation of assumptions (implementing the solution)
     • We used this ANN for mitigating the estimation risk (prediction) and improving the model (the application)
  9. THE PROBLEM
  10. Error taxonomy
  11. Regression functions
      EM: y = f(x, β) + ε, with E(ε) = 0 and cov(ε) = Iσ²
      • y: dependent variable (e.g., effort, …)
      • x: independent variables (e.g., size, complexity, …)
      • ε: random error (unknown)
      • β: parameters of the model
      • E(ε): expected value of ε
      • I: identity matrix
      • Var(ε) = σ²
      • f may be linear, non-linear, or even a generalized model
      The fitted model is ŷ = f(x, B), where B estimates β (e.g., Least Squares estimates); in general B ≠ β and y ≠ ŷ, and the residuals r = (y − ŷ) stand in for ε.
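As an illustration (not part of the slides), here is a minimal sketch of obtaining B by least squares for a log-linear effort model, effort = a · size^b, which becomes linear in the parameters after a log transform; the size and effort values are made up.

```python
import numpy as np

# Hypothetical (size, effort) observations -- made-up data for illustration only.
size = np.array([10.0, 23.0, 41.0, 65.0, 90.0, 120.0])       # KSLOC
effort = np.array([35.0, 70.0, 130.0, 190.0, 280.0, 360.0])  # person-months

# Log-linear model: log(effort) = log(a) + b*log(size) + error,
# i.e. linear in the parameters, so ordinary least squares applies.
X = np.column_stack([np.ones_like(size), np.log(size)])
y = np.log(effort)
B, *_ = np.linalg.lstsq(X, y, rcond=None)   # B estimates (log a, b)

a, b = np.exp(B[0]), B[1]
pred = a * size ** b            # y-hat on the original scale
residuals = effort - pred       # r = y - y-hat, a stand-in for the unknown error
print(f"a = {a:.2f}, b = {b:.2f}")
```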
  12. Regression assumptions
      • The random error ε is not correlated with x
      • The variance of the random error is constant (homoscedasticity)
      • ε is not auto-correlated
      • The probability density of the error is Gaussian
      Very often, to have a closed-form solution for B, the model is also assumed to be linear in the parameters (linear or linearized), e.g., polynomials of any degree or log-linear models; generalized models require iterative procedures for calculating B.
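A rough way to probe these assumptions on the residuals of a fitted model is sketched below. These are generic diagnostics (correlations plus a Shapiro-Wilk normality test), not the authors' procedure; `size` and `residuals` are assumed to come from a fit such as the one sketched above.

```python
import numpy as np
from scipy import stats

def check_assumptions(size, residuals):
    """Crude diagnostics for the classical regression assumptions."""
    # 1) Error uncorrelated with x: correlation between residuals and the regressor.
    corr_x = np.corrcoef(size, residuals)[0, 1]

    # 2) Homoscedasticity: |residuals| should not grow with x.
    corr_spread = np.corrcoef(size, np.abs(residuals))[0, 1]

    # 3) No autocorrelation: lag-1 correlation of the residual sequence.
    corr_lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

    # 4) Normality: Shapiro-Wilk test (small p-value suggests non-Gaussian errors).
    _, p_normal = stats.shapiro(residuals)

    return {"corr_with_x": corr_x,
            "corr_abs_with_x": corr_spread,
            "lag1_autocorr": corr_lag1,
            "shapiro_p": p_normal}
```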
  13. Violation of regression assumptions. In case of violations, the prediction interval we build for the uncertainty of the next estimate may be unreliable (type I and II errors): if normality does not hold we cannot use Student's t percentiles, the error variance is no longer constant, the usual estimate is no longer the standard error, and the usual formula no longer gives the spread of the interval; only the point estimate itself may still be correct.
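For reference, a sketch of the textbook prediction interval for a simple one-regressor linear model; its ingredients (the Student's t percentile, the residual standard error, the spread term) are exactly the pieces that become unreliable when the assumptions fail. The data passed to it would be hypothetical.

```python
import numpy as np
from scipy import stats

def classical_prediction_interval(x, y, x0, alpha=0.05):
    """Prediction interval for a new observation at x0 under the classical assumptions."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                        # slope, intercept
    y_hat = b0 + b1 * x
    s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))     # residual standard error
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)            # Student's t percentile
    spread = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    y0 = b0 + b1 * x0
    return y0 - t * spread, y0 + t * spread
```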
  14. Violation of regression assumptions
  15. THE SOLUTION
  16. The mathematical solution. We have to build prediction intervals correctly:
      • based on an empirical approach (observations, without any assumptions)
      • using a Bayesian approach (including prior and posterior information at the same time)
      In particular, to estimate prediction intervals we build a feedforward multilayer Artificial Neural Network for discrimination problems; we call such a network a Bayesian Discrimination Function (BDF).
  17. The Quality Improvement Paradigm
  18. The Estimation Improvement Process
  19. The framework
  20. Building the BDF (figure): projects are split into Class A and Class B by a non-linear, x-dependent median of the relative error RE; the BDF maps size (KSLOC) and RE to a (posterior) probability between 0 and 1, crossing 0.5 at the median; fixing KSLOC gives a family of curves over RE.
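A minimal sketch of what such a discriminating network could look like. This is not the authors' implementation: the history (KSLOC and observed relative error per past project) is made up, and the class label is simply whether RE falls below the sample median.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical history: project size (KSLOC) and observed relative error of past estimates.
ksloc = np.array([0.11, 0.32, 0.55, 0.95, 1.40, 2.10, 3.00, 4.50])
re    = np.array([-0.40, 0.10, -0.15, 0.30, -0.05, 0.25, -0.30, 0.20])

X = np.column_stack([ksloc, re])
labels = (re <= np.median(re)).astype(int)   # Class A: RE below the median; Class B: above

# Feedforward network with sigmoid units: its output is a posterior probability in (0, 1).
bdf = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                    max_iter=5000, random_state=0).fit(X, labels)

# P(Class A | KSLOC, RE): the 0.5 crossing traces the x-dependent median.
print(bdf.predict_proba([[0.55, 0.0]])[:, 1])
```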
  21. Inverting the BDF (the sigmoid is smooth and monotonic): with KSLOC fixed, we fix a credibility range (e.g., 95%, i.e., probabilities 0.025 and 0.975) and invert the BDF to obtain the bounds Me_DOWN and Me_UP, i.e., a (Bayesian) error prediction interval on RE.
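Because the curve is smooth and monotonic in RE at fixed KSLOC, the bounds can be found by a plain root search. The sketch below is self-contained and uses a stand-in logistic curve with invented parameters in place of a trained network; with a real BDF one would instead solve probability(RE) = 0.025 and 0.975 on its predicted output.

```python
import numpy as np
from scipy.optimize import brentq

def bdf_at_fixed_ksloc(re):
    """Stand-in for the trained BDF at a fixed KSLOC: a smooth, monotonic
    sigmoid over the relative error (illustrative parameters only)."""
    return 1.0 / (1.0 + np.exp(-8.0 * (re - 0.05)))

def error_prediction_interval(prob_low=0.025, prob_high=0.975):
    """Invert the monotonic curve to get the Bayesian error prediction interval on RE."""
    me_down = brentq(lambda re: bdf_at_fixed_ksloc(re) - prob_low,  -2.0, 2.0)
    me_up   = brentq(lambda re: bdf_at_fixed_ksloc(re) - prob_high, -2.0, 2.0)
    return me_down, me_up

print(error_prediction_interval())   # roughly (-0.41, 0.51) for this toy curve
```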
  22. Analyzing the model behavior (figure): BDF curves for different sizes (KSLOC = 0.11, 0.32, 0.55, 0.95); flatter curves mean more spread, steeper curves less, and curves centred away from RE = 0 indicate a biased model, while curves centred on 0 indicate an unbiased one.
  23. Estimate prediction interval (M. Jørgensen). With RE = (Act − Est)/Act, we obtain the estimate prediction interval from the error prediction interval by substituting the bounds and inverting the formula: setting [Me_DOWN, Me_UP] = (Act − Est)/Act gives O_N+1_DOWN = Act_DOWN = Est/(1 − Me_DOWN) and O_N+1_UP = Act_UP = Est/(1 − Me_UP).
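A small worked example of the conversion; the point estimate and the RE bounds below are hypothetical numbers, not values from the case study.

```python
def estimate_prediction_interval(est, me_down, me_up):
    """Convert an error prediction interval on RE into an interval on the actual value."""
    act_down = est / (1.0 - me_down)   # O_{N+1} lower bound
    act_up   = est / (1.0 - me_up)     # O_{N+1} upper bound
    return act_down, act_up

# Hypothetical point estimate of 100 person-months with RE bounds [-0.30, 0.25]:
print(estimate_prediction_interval(100.0, -0.30, 0.25))   # ~ (76.9, 133.3)
```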
  24. THE APPLICATION
  25. Scope Error (similarity analysis with estimated data)
  26. Assumption Error (estimated data)
  27. Improving the model (actual data): scope extension
  28. Improving the model (actual data): error magnitude and bias. What we need to worry about is the relative error magnitude, not the bias.
  29. Improving the model (actual data). To shrink the magnitude of the relative error we can (see the sketch below):
      • find and try new variables
      • remove irrelevant variables (PCA, CCA, stepwise selection)
      • consider dummy variables (different populations)
      • improve the flexibility of the model (generalized models)
      • select the right complexity of the model (cross-validation)
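For the last point, a generic sketch of picking model complexity by cross-validation, here the degree of a polynomial regressor; the synthetic data and candidate degrees are placeholders, not the case-study setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
size = rng.uniform(5, 120, 40).reshape(-1, 1)               # KSLOC, made up
effort = 3.0 * size[:, 0] ** 0.9 + rng.normal(0, 10, 40)    # synthetic effort

# Score each candidate complexity with k-fold cross-validation and keep the best.
scores = {}
for degree in (1, 2, 3, 4):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, size, effort, cv=5,
                                     scoring="neg_mean_squared_error").mean()

best = max(scores, key=scores.get)
print(f"Selected degree: {best}")
```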
  30. A CASE STUDY
  31. The NASA COCOMO data set [PROMISE] (figure): relative error of the 16 projects being estimated (from 1985 to 1987) against 77 historical projects (before 1985), with each estimate flagged as unbiased (UB), biased (BS), or a scope extension (EXT); the largest relative errors shown are −0.9 and −2.4.
  32. CONCLUSION & BENEFITS
  33. Benefits of using this approach
      • Continue using parametric estimation models
      • Correct the limitations of parametric models by dealing with the consequences of the violations
      • The approach is systematic (framework and process) and can support learning organizations and improvement paradigms
      • The estimation model's reliability can be evaluated before using it (early risk evaluation)
      • The approach is traceable and repeatable (EIP + framework)
      • The approach can be completely implemented as a software tool that reduces human interaction
      • The approach produces experience packages (e.g., ANNs) that are easier and faster to store and deliver
      • The approach is general, even though we have shown its application only to parametric models
  34. QUESTIONS & FEEDBACK
  35. An Approach to Improving Parametric Estimation Models in case of Violation of Assumptions. S. Alessandro Sarcià, Giovanni Cantone, Victor R. Basili (closing slide, repeating slide 1).
