Promise Keynote

Norman Fenton




  • Joke about Savoy. Who I am – show my book. I’ll have something more to say about this book shortly. One of the best books on software metrics was written by Bob Grady of Hewlett-Packard in 1987. Bob was responsible for what I believe was recognised as the first true company-wide metrics programme, and his book described the techniques and experiences associated with it at HP. A few years later I was at a meeting where Bob told an interesting story about that metrics programme. He said that one of the main objectives of the programme was to achieve process improvement by learning from metrics which process activities worked and which did not. To do this they looked at the projects that in metrics terms were considered most successful: those with especially low rates of customer-reported defects. The idea was to learn what processes characterised such successful projects. It turned out that what they learnt was very different from what they had expected. I am not going to tell you what it was they learnt until the end of my presentation; by then you may have worked it out for yourselves.

Promise Keynote Presentation Transcript

  • New Directions for Software Metrics Norman Fenton Agena Ltd and Queen Mary University of London PROMISE 20 May 2007
  • Overview
    • History of software metrics
    • Good and bad news
    • Hard project constraints
    • Project trade-offs
    • Decision-making and intervention
    • The true objective of software metrics?
    • Why we need a causal approach
    • Models in action
    • Results
  • Metrics History: Typical Approach
    • What I really want to measure (Y)
    • What I can measure (X)
    • Y = f(X)
  • Metrics History: the drivers
    • ‘productivity’ = size / effort
    • ‘effort’ = a × size^b
    • ‘quality’ = defects / size
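As a toy illustration (not from the talk), the three classic driver definitions above can be computed directly. The constants `a` and `b` in the effort function are placeholder values echoing basic COCOMO, used purely for demonstration:

```python
# Sketch of the three classic metric definitions; all values are illustrative.

def productivity(size_loc: float, effort_pm: float) -> float:
    """'productivity' = size / effort (e.g. LOC per person-month)."""
    return size_loc / effort_pm

def effort(size_kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """'effort' = a * size**b; a and b here are placeholder constants
    echoing basic COCOMO, purely as an example."""
    return a * size_kloc ** b

def quality(defects: int, size_kloc: float) -> float:
    """'quality' = defects / size (defect density per KLOC)."""
    return defects / size_kloc

print(productivity(10_000, 20))   # 500.0 LOC per person-month
print(round(effort(10), 2))       # about 26.93 person-months
print(quality(30, 10))            # 3.0 defects per KLOC
```

The point of the slide is that all three are *indirect* definitions: each takes what can be measured (size, effort, defect counts) as a proxy for what we actually care about.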
  • Metrics history: size matters!
    • LOC
    • Improved size metrics
    • Improved complexity metrics
  • Some Decent News About Metrics
    • Empirical results/benchmarks
    • Significant industrial activity
    • Academic/research output
    • Metrics in programmer toolkits
  • ….But Now the Bad News
    • Lack of commercial relevance
    • Programmes doomed by data
    • Activity poorly motivated
    • Failed to meet true objective of quantitative risk assessment
  • Regression models….?
  • Using metrics and fault data to predict quality [scatter plot: pre-release faults (x-axis, 0–160) vs post-release faults (y-axis, 0–30); predicted relationship shown as “?”]
  • Pre-release vs post-release faults: actual [scatter plot: pre-release faults (x-axis, 0–160) vs post-release faults (y-axis, 0–30)]
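To make the regression slides concrete: a quick sketch (with invented per-module fault counts, not Fenton's data) of why a naive `Y = f(X)` fit from pre-release to post-release faults can mislead. Modules that surface many faults in testing may surface few in operation, and vice versa:

```python
# Illustrative only: the per-module fault counts below are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical module data: heavily tested modules show many
# pre-release faults but few post-release ones, and vice versa.
pre_release  = [120, 90, 60, 40, 20, 10]
post_release = [2, 4, 8, 15, 22, 30]

r = pearson(pre_release, post_release)
print(f"correlation: {r:.2f}")  # strongly negative in this toy data
```

A regression trained on data like this would predict the *opposite* of the intuition "many faults found pre-release implies many post-release": the correlation runs through a hidden cause (testing effort) that the plain `Y = f(X)` model cannot represent.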
  • What we need [slide graphic: “What I think is... ?”]
  • The Good News
    • It is possible to use metrics to meet the real objective
    • Don’t need a heavyweight ‘metrics programme’
    • A lot of the hard stuff has been done
  • A Causal Model (Bayesian net): nodes include Problem complexity, Design process quality, Defects Introduced, Testing Effort, Defects found and fixed, Residual Defects, Operational usage, Operational defects
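A heavily simplified sketch of the causal idea behind that slide: a three-node toy Bayesian net with invented probability tables, queried by brute-force enumeration. This is not the actual Agena model from the talk, just the mechanism in miniature:

```python
# Toy Bayesian net: Defects Introduced and Testing Effort are parents
# of Residual Defects. All probabilities are invented for illustration.

P_intro = {True: 0.3, False: 0.7}    # P(defects introduced = high)
P_effort = {True: 0.5, False: 0.5}   # P(testing effort = high)

# P(residual = high | introduced, effort): many residual defects are
# likely mainly when many were introduced AND testing effort was low.
P_resid = {
    (True, True): 0.2,
    (True, False): 0.8,
    (False, True): 0.02,
    (False, False): 0.1,
}

def p_residual_high(effort_high=None):
    """P(residual = high), optionally conditioned on testing effort,
    by enumerating all parent configurations."""
    num = den = 0.0
    for intro in (True, False):
        for eff in (True, False):
            if effort_high is not None and eff != effort_high:
                continue
            w = P_intro[intro] * P_effort[eff]
            num += w * P_resid[(intro, eff)]
            den += w
    return num / den

print(round(p_residual_high(), 3))                   # prior: 0.192
print(round(p_residual_high(effort_high=False), 3))  # low effort: 0.31
```

Even this toy captures what a regression on fault counts cannot: conditioning on a cause (testing effort) changes the predicted residual-defect risk, which is exactly the kind of intervention question project decision-making needs answered.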
  • A Model in action
  • A Model in action
  • A Model in action
  • Actual versus predicted defects
  • How did we build the models?
    • Expert judgement
    • Published data
    • Company data
  • Conclusions
    • Heavyweight data and classical statistics NOT the answer
    • Empirical studies laid groundwork
    • Causal models for quantitative risk
  • … And
    • You can use the technology NOW