Promise Keynote

Norman Fenton
Published in: Technology, Education

Notes

  • Joke about Savoy. Who I am – show my book. I’ll have something more to say about this book shortly. One of the best books on software metrics was written by Bob Grady of Hewlett-Packard in 1987. Bob was responsible for what I believe was recognised as the first true company-wide metrics programme, and his book described the techniques and experiences associated with it at HP. A few years later I was at a meeting where Bob told an interesting story about that metrics programme. He said that one of its main objectives was to achieve process improvement by learning from metrics which process activities worked and which didn’t. To do this they looked at the projects that in metrics terms were considered most successful: those with especially low rates of customer-reported defects. The idea was to learn what processes characterised such successful projects. It turned out that what they learnt was very different from what they had expected. I am not going to tell you what they learnt until the end of my presentation; by then you may have worked it out for yourselves.

Transcript

  • 1. New Directions for Software Metrics. Norman Fenton, Agena Ltd and Queen Mary University of London. PROMISE, 20 May 2007
  • 2. Overview
    • History of software metrics
    • Good and bad news
    • Hard project constraints
    • Project trade-offs
    • Decision-making and intervention
    • The true objective of software metrics?
    • Why we need a causal approach
    • Models in action
    • Results
  • 3. Metrics History: Typical Approach [diagram: what I can measure (X) → what I really want to measure (Y), via Y = f(X)]
  • 4. Metrics History: the drivers
    • ‘productivity’ = size / effort
    • ‘effort’ = a * size^b
    • ‘quality’ = defects / size
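
A minimal sketch, not from the slides, of the three driver definitions above in Python. The project figures are invented, and the constants a and b are hypothetical COCOMO-style placeholders:

```python
# Toy computation of the three classic metric definitions.
loc = 12_000          # project size in lines of code (invented)
effort_pm = 30.0      # actual effort in person-months (invented)
defects = 84          # customer-reported defects (invented)

productivity = loc / effort_pm            # 'productivity' = size / effort
a, b = 2.4, 1.05                          # hypothetical power-law constants
predicted_effort = a * (loc / 1000) ** b  # 'effort' = a * size^b (size in KLOC)
defect_density = defects / (loc / 1000)   # 'quality' = defects / size

print(f"productivity:     {productivity:.0f} LOC/person-month")
print(f"predicted effort: {predicted_effort:.1f} person-months")
print(f"defect density:   {defect_density:.1f} defects/KLOC")
```
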
  • 5. Metrics history: size matters!
    • LOC
    • Improved size metrics
    • Improved complexity metrics
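
A minimal sketch, assuming Python as the measured language, of the size and complexity metrics just listed: non-blank LOC and a crude McCabe-style cyclomatic count (1 plus the number of decision points). The counting rule here is an illustration, not any published tool's definition:

```python
import ast

def crude_cyclomatic(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    while x < 10:
        x += 3
    return "big" if x > 12 else "small"
'''

loc = sum(1 for line in sample.splitlines() if line.strip())  # non-blank LOC
print(f"LOC = {loc}, cyclomatic complexity ~ {crude_cyclomatic(sample)}")
```
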
  • 6. Some Decent News About Metrics
    • Empirical results/benchmarks
    • Significant industrial activity
    • Academic/research output
    • Metrics in programmer toolkits
  • 7. … But Now the Bad News
    • Lack of commercial relevance
    • Programmes doomed by data
    • Activity poorly motivated
    • Failed to meet true objective of quantitative risk assessment
  • 8.  
  • 9. Regression models…?
  • 10. Using metrics and fault data to predict quality [scatter plot: pre-release faults (x axis, 0–160) vs post-release faults (y axis, 0–30), with a “?” marking the assumed relationship]
  • 11. Pre-release vs post-release faults: actual [scatter plot: pre-release faults (x axis, 0–160) vs post-release faults (y axis, 0–30)]
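
The two scatter plots make the empirical point behind slides 9–11: fitting Y = f(X) with pre-release faults as X does not predict post-release faults. A minimal sketch with invented fault counts (not Fenton's data) that mimics the slides' pattern:

```python
import numpy as np

# Invented module-level fault counts mimicking the slides' pattern:
# modules with many pre-release faults have few post-release faults.
pre_release  = np.array([10, 25, 40, 60, 90, 120, 140, 160])
post_release = np.array([22, 18, 25,  9,  6,   4,   3,   2])

# The "typical approach": fit a straight line Y = f(X).
slope, intercept = np.polyfit(pre_release, post_release, 1)
r = np.corrcoef(pre_release, post_release)[0, 1]

print(f"fitted line: post = {slope:.3f} * pre + {intercept:.1f}")
print(f"correlation r = {r:.2f}")  # negative: the naive model points the wrong way
```
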
  • 12. What we need [diagram: “What I think is...” ?]
  • 13. The Good News
    • It is possible to use metrics to meet the real objective
    • Don’t need a heavyweight ‘metrics programme’
    • A lot of the hard stuff has been done
  • 14. A Causal Model (Bayesian net) [nodes: Problem complexity, Design process quality, Defects introduced, Testing effort, Defects found and fixed, Residual defects, Operational usage, Operational defects]
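
The slide's node list can be read as a causal chain. A toy Monte Carlo sketch of that structure, with entirely invented distributions (this is not the AgenaRisk model from the talk), shows why “defects found in testing” alone cannot predict operational defects:

```python
import random

random.seed(1)

def simulate_project():
    # Root causes, on invented 0-to-1 scales.
    complexity      = random.random()   # Problem complexity
    process_quality = random.random()   # Design process quality
    testing_effort  = random.random()   # Testing effort
    usage           = random.random()   # Operational usage

    # Defects introduced rise with complexity, fall with process quality.
    introduced = int(100 * complexity * (1.2 - process_quality))
    # Defects found and fixed depend on defects present AND testing effort.
    found = int(introduced * min(1.0, 0.9 * testing_effort))
    residual = introduced - found       # Residual defects
    # Operational defects depend on residual defects AND operational usage.
    operational = int(residual * usage)
    return found, operational

runs = [simulate_project() for _ in range(100_000)]
many_found = [op for fnd, op in runs if fnd > 40]
few_found  = [op for fnd, op in runs if fnd <= 5]
print(f"operational defects | many found in testing: {sum(many_found)/len(many_found):.1f}")
print(f"operational defects | few found in testing:  {sum(few_found)/len(few_found):.1f}")
```

In this toy model the “few found” group mixes genuinely clean projects with barely tested ones, which is exactly the ambiguity a causal model is meant to resolve.
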
  • 15. A Model in action
  • 16.  
  • 17. https://intranet.dcs.qmul.ac.uk/courses/coursenotes/DCS235/
  • 18.  
  • 19.  
  • 20. A Model in action
  • 21.  
  • 22.  
  • 23. A Model in action
  • 24.  
  • 25.  
  • 26.  
  • 27.  
  • 28.  
  • 29. Actual versus predicted defects
  • 30. How did we build the models?
    • Expert judgement
    • Published data
    • Company data
  • 31. Conclusions
    • Heavyweight data and classical statistics NOT the answer
    • Empirical studies laid groundwork
    • Causal models for quantitative risk
  • 32. … And
    • You can use the technology NOW
    • www.agenarisk.com