This document provides an overview and agenda for a software measurement starter kit. It explains why measurement is necessary, defines key terms such as metrics, and describes the Goal-Question-Metric (GQM) paradigm for developing a measurement program. It also covers establishing measurement objectives by linking them to information needs and business objectives, and recommends starting with measures such as delivery time, cost, quality defects, and customer satisfaction to help improve processes and meet organizational goals.
process - how well is the process working?
product - is the product complete; does it meet user requirements? (e.g., errors in the field; comparison to required performance)
project - is the project following its plan?
The goals provide the foundation for the measurement program. They force those considering measurement to define the program's requirements clearly, in terms of goals and of the strategic or tactical vision. This ensures that data collection has a purpose and is carried out methodically, according to a defined process.
Projects may choose to store project-specific data and results in a project-specific repository. When data are shared more widely across projects, the data may reside in the organization’s measurement repository.
Let us look at an example. An organization’s CEO states that the organization and its projects should meet the delivery time, decrease the cost of poor quality, and meet the functionality promised with each delivery.
What is the information need? Why is there a focus on "Deliver on Time"?
Have the projects not been delivering on time in the past, so that late delivery has become a problem?
Is there a market window that when not met causes great financial loss?
Are other business units dependent on our project’s prompt delivery?
Why make delivering the promised functionality an organizational measurement objective?
Have projects in the past not been able to deliver on the agreed upon date with the full promised functionality?
Is this causing customer dissatisfaction? Is this causing the organization to fall behind its competitors?
Quality and Process Performance Measurement Objectives
Define the criteria for evaluating the utility of the analyzed results – determine:
Were the results provided on a timely basis?
Were the results understandable?
Were the results able to be used in decision making?
Did the measurement work submitted provide clear benefits to the decision makers?
Was there a significant amount of missing data when the analyses were conducted?
Was there a sampling bias?
Is the measurement statistically repeatable?
Information typically stored includes the following:
Measurement Plans
Specifications of measures
Sets of data that have been collected
Analysis reports and presentations
Development Progress tracking requires that activities/work products have predefined entry and exit criteria
Software defect and rework data help determine:
the amount and type of resources needed for rework activities
progress made
the technical quality of the software and the development processes
the expected completion date
the expected resources needed to support the delivered software
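As a minimal sketch of how such defect and rework data might be aggregated for planning, the helper below summarizes a list of defect records; the record fields (`severity`, `fix_hours`) are hypothetical, not a format prescribed by this material.

```python
from collections import Counter

def summarize_defects(defects):
    """Summarize defect records into rework-planning figures.

    Each record is a dict with hypothetical keys:
    'severity' ('major'/'minor') and 'fix_hours' (estimated rework effort).
    """
    by_severity = Counter(d["severity"] for d in defects)
    total_rework_hours = sum(d["fix_hours"] for d in defects)
    return {"counts": dict(by_severity), "rework_hours": total_rework_hours}

defects = [
    {"severity": "major", "fix_hours": 8.0},
    {"severity": "minor", "fix_hours": 1.5},
    {"severity": "major", "fix_hours": 4.5},
]
summary = summarize_defects(defects)
print(summary)  # defect counts by severity plus total rework effort
```

The severity counts feed quality tracking, while the rework-hour total feeds resource and completion-date estimates.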
The rest of this module looks at each of the basic measures and shows some of the potential variations. The main focus of this module, however, is the standardization of the measures and data to be collected. We will use the Companion Manual, which contains guidance on standardizing data.
Technical approach defines a top-level strategy for development of the products. It includes:
Decisions on architectural features such as distributed or client-server systems
Robotics
Composite materials
Geosynchronous versus low-earth-orbit
Artificial intelligence
Other specialty engineering disciplines such as safety, security and ergonomics
Attribute determination depends on the currently available technology
Number of logic gates for integrated circuit design
Lines of code or function points
Complexity of requirements for systems engineering
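Because counting rules for size measures vary, a measurement program must standardize its own definition first. The function below is an illustrative sketch of one common lines-of-code convention (non-blank, non-comment lines), assuming `#`-style line comments; it is not the definition this material mandates.

```python
def count_sloc(source: str) -> int:
    """Count non-blank, non-comment source lines (one common LOC convention).

    Assumes '#'-style line comments; any real measurement program
    must standardize its own counting rules first.
    """
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """# demo module
def add(a, b):
    return a + b

# end
"""
print(count_sloc(sample))  # -> 2
```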
Risks are analyzed to determine the impact, the probability of occurrence, and the time frame in which the problem is likely to occur
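A common way to combine two of these attributes is risk exposure = probability x impact, which supports ranking risks for attention. The sketch below assumes that convention (it is not stated in this material) and uses hypothetical risk names and impact units.

```python
def risk_exposure(probability: float, impact: float) -> float:
    """Risk exposure as probability * impact (a common convention)."""
    return probability * impact

# Hypothetical risks; impact expressed here in staff-days of schedule loss
risks = [
    {"name": "key staff turnover", "p": 0.3, "impact": 40},
    {"name": "late hardware delivery", "p": 0.6, "impact": 25},
    {"name": "requirements churn", "p": 0.5, "impact": 60},
]
# Rank risks from highest to lowest exposure
ranked = sorted(risks, key=lambda r: risk_exposure(r["p"], r["impact"]), reverse=True)
for r in ranked:
    print(r["name"], risk_exposure(r["p"], r["impact"]))
```

Time frame is not folded into the exposure number here; it would typically be tracked alongside it to decide when each risk needs mitigation.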
Data may be:
Reports
Manuals
Notebooks
Charts
Drawings
Specifications
Files
Correspondence
Select and implement methods for providing the necessary knowledge and skills
Training (Internal and External)
Mentoring
Coaching
On-the-job application of learned skills
Staffing
Ramp up and ramp down by life-cycle phase
Handling of resource conflicts
Losses of most concern are those that are unplanned. An additional concern is whether those added are of comparable skill levels to those that have been lost.
Turnover is defined as the number of unplanned staff losses during the reporting period.
If turnover exceeds 10-15%, the manager should investigate the reasons for the departures.
This figure represents the project as a whole. Additional graphs can be drawn for each software component or for each build.
The effect of turnover is that knowledge already built up is lost, and bringing new staff up to speed on the project causes some delays in the schedule.
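The text defines turnover as a count of unplanned losses; to compare it against the 10-15% threshold it must be expressed as a percentage. The sketch below assumes average headcount for the period as the denominator, which is a common convention but not stated in this material.

```python
def turnover_rate(unplanned_losses: int, avg_headcount: float) -> float:
    """Turnover for the reporting period as a percentage of average headcount.

    The denominator choice (average headcount) is an assumption;
    the source defines turnover only as a count of unplanned losses.
    """
    return 100.0 * unplanned_losses / avg_headcount

rate = turnover_rate(unplanned_losses=4, avg_headcount=25)
print(f"{rate:.1f}%")  # -> 16.0%
if rate > 15.0:  # the 10-15% investigation threshold from the text
    print("investigate reasons for departures")
```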
A Stakeholder is a group or individual that is affected by or in some way accountable for the outcome of an undertaking.
For each major activity, the stakeholders that are affected by the activity and those who have expertise needed to conduct the activity should be identified
Stakeholders in the later phases of the lifecycle should have early input to the requirements and design decisions that affect them
Systems Engineering
Systems Test
Software Quality Assurance
Software Configuration Management
Documentation Support
Database Administration
The organizational measurement repository must be designed and implemented including:
Developing the procedures for storing and retrieving the measures
Entering the specified measures
Making people aware of the measurement repository and making its contents available for use throughout the organization
Training people to make effective use of the measures, including how to interpret them for use on their own projects
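The store-and-retrieve procedures above can be pictured with a minimal in-memory sketch. This is an illustration only, assuming hypothetical measure and project names; a real organizational repository would be a managed, persistent database with access control and data-definition metadata.

```python
from collections import defaultdict

class MeasurementRepository:
    """Minimal illustrative sketch of storing and retrieving measures."""

    def __init__(self):
        # measure name -> list of (project, value) observations
        self._data = defaultdict(list)

    def store(self, measure: str, project: str, value: float) -> None:
        """Enter one observation of a specified measure."""
        self._data[measure].append((project, value))

    def retrieve(self, measure: str):
        """Return all stored observations of a measure, across projects."""
        return list(self._data[measure])

repo = MeasurementRepository()
repo.store("defect_density", "Project A", 0.8)
repo.store("defect_density", "Project B", 1.2)
print(repo.retrieve("defect_density"))
```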
Examples of classes of commonly used measures include:
Estimates of work product size (lines of code, function or feature points, complexity)
Estimates of effort and cost
Actual measures of size, effort, and cost
Quality measures
Work product inspection coverage
Test or verification coverage
Reliability measures
Revising the measurement repository as:
additional process measurement data becomes available
processes are revised and new product or process measures are needed
finer granularity of data is required
greater visibility into the process is required
measurements are no longer used or needed
A Major defect is one that could cause a malfunction or unexpected result if uncorrected.
For documents it is major if it could cause the user to make a mistake.
A Minor defect is one that won’t cause a malfunction or unexpected result if uncorrected.
Correct-Fix Rate – the percentage of edit correction attempts that correctly fix a defect without introducing any new defects
Default: 83% (five out of six correction attempts)
Fix-Fail Rate – the percentage of edit correction attempts that either fail to correct the defect or introduce a new defect
Default: 17% (one out of six correction attempts)
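These two rates partition all correction attempts, so they must sum to 100%. The sketch below computes them from correction-attempt records; the record fields (`fixed`, `new_defect`) are hypothetical, not a format prescribed by this material.

```python
def fix_rates(attempts):
    """Compute correct-fix and fix-fail rates from correction-attempt records.

    Each record is a hypothetical dict: {'fixed': bool, 'new_defect': bool}.
    An attempt is a correct fix only if it fixed the defect AND introduced
    no new defect; every other attempt counts toward the fix-fail rate.
    """
    correct = sum(1 for a in attempts if a["fixed"] and not a["new_defect"])
    correct_fix_rate = 100.0 * correct / len(attempts)
    return correct_fix_rate, 100.0 - correct_fix_rate

# Five clean fixes out of six attempts reproduces the default rates
attempts = [{"fixed": True, "new_defect": False}] * 5 + [{"fixed": False, "new_defect": False}]
cfr, ffr = fix_rates(attempts)
print(round(cfr, 1), round(ffr, 1))  # five of six -> 83.3 / 16.7
```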
A – True | A – True
B – True | B – True
C – True | C – False
D – False or True | D – True
E – False or True | E – False

A – True | A – False
B – True | B – True
C – False | C – False or True
D – True | D – False or True
E – True | E – False or True

A – True | A – True
B – True | B – False
C – False | C – False or True
D – False | D – False or True
E – True | E – False or True
Software quality is based on user needs
Operational needs - deals with the use of the software to perform the tasks it was intended to perform
Maintenance needs - deals with modifying the software in one way or another to aid the user
Operational Needs
Functionality deals with what the software does while executing
Performance deals with how well it does it
Example:
Functionality of communication software refers to the ability the software has to transmit and receive interprocessor messages
Performance of communication software refers to the rate at which messages can be transmitted and received using it
Maintenance Needs
Change deals with modifying the software either to correct errors, adapt code to new environments, or add new functionality
Management needs deal with planning for change, controlling versions of the software, testing, and installation
Software quality criteria are defined in terms of the characteristics that the software exhibits
The next four slides provide one-sentence descriptions of the Software Quality Criteria
Two of the 27 definitions of Software Quality Criteria are presented here to give the participants an idea of the level of detail!
The complete set of Software Quality Criteria definitions can be found in the Supplemental Material
The next few slides contain examples of metrics that correspond to the quality factors
This is not an attempt to provide a complete list of metrics, but a few examples that give Project Leaders and metrics engineers hints for creating their own metrics, depending on the quality factors and quality criteria that are required
Metrics should be created by examining the Software Quality Criteria checklists provided in the Supplemental Material -> Instructors are encouraged to turn to the Software Quality Criteria section in the Companion Manual and review a few more of these, or at least point them out to the participants
This shows a translation into design activities!!
It is important to define and measure your processes with an eye towards improving those processes. The earlier an organization can start collecting and archiving meaningful process data, the easier it will be to perform quantitative management activities.
Process performance measures reflect the effectiveness of the process and/or the effectiveness of following the process.
It is a kind of “quality” indicator for the process and/or for following the process. Examples for process performance measures include cycle time, defect removal rate, productivity, severity of defects, peer review coverage, test coverage, change request open time, reliability, defect density, and rework time.
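Two of the listed process performance measures can be sketched directly. The definitions below are common conventions assumed for illustration (defects per KSLOC; removal rate as the share of all known defects found before release), not definitions given in this material.

```python
def defect_density(defects_found: int, size_ksloc: float) -> float:
    """Defects per thousand source lines of code (KSLOC)."""
    return defects_found / size_ksloc

def defect_removal_rate(found_before_release: int, total_found: int) -> float:
    """Share of all known defects removed before release, as a percentage.

    'Total found' includes defects reported after release; one common
    convention among several.
    """
    return 100.0 * found_before_release / total_found

print(defect_density(48, 12.0))        # -> 4.0 defects/KSLOC
print(defect_removal_rate(45, 50))     # -> 90.0%
```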
The CMMI is composed of five maturity levels
Each maturity level, with the exception of Level 1, is composed of several process areas
Each process area is organized into five sections called common features
The common features specify the practices
The practices when viewed collectively should accomplish the goals of the process area
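The containment hierarchy described above (maturity levels contain process areas, which contain common features, which specify practices) can be pictured as a simple data model. This is an illustrative sketch only; the process area named below is one real example, but the structure is not an official CMMI schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Practice:
    text: str

@dataclass
class CommonFeature:
    name: str
    practices: List[Practice] = field(default_factory=list)

@dataclass
class ProcessArea:
    name: str
    common_features: List[CommonFeature] = field(default_factory=list)

@dataclass
class MaturityLevel:
    number: int  # levels 1-5; Level 1 has no process areas
    process_areas: List[ProcessArea] = field(default_factory=list)

# Level 2 with one (abbreviated) process area as an illustration
level2 = MaturityLevel(2, [ProcessArea("Measurement and Analysis")])
print(level2.number, [pa.name for pa in level2.process_areas])
```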
This module will discuss each of these topics in detail and get the participants familiar with the physical layout of information in the CMMI
Instructors are encouraged to have the participants open their CMMI reference manuals and refer them to it often as you present this module
Histograms
Can be used to characterize the observed values of almost any product or process attribute
Examples
Module size
Defect repair time
Time between failure
Defects found per test or inspection
Daily backlogs
Histograms can be helpful for revealing differences that have taken place across processes, projects, or times
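The binning behind such a histogram can be sketched in a few lines. The example below groups hypothetical defect repair times into fixed-width bins and prints a simple text histogram; the data values are invented for illustration.

```python
from collections import Counter

def histogram(values, bin_width):
    """Bin observations into fixed-width intervals and count each bin."""
    counts = Counter(int(v // bin_width) for v in values)
    return {(b * bin_width, (b + 1) * bin_width): counts[b] for b in sorted(counts)}

# Hypothetical defect repair times in hours
repair_hours = [0.5, 1.2, 1.8, 2.4, 3.1, 3.3, 3.9, 7.5]
for (lo, hi), n in histogram(repair_hours, bin_width=2).items():
    print(f"{lo}-{hi}h: {'#' * n}")
```

Plotting the same bins per project or per time period is what makes the cross-process and cross-project comparisons mentioned above visible.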
Examples of methods for selecting defects include:
Pareto analysis
Histograms
Process capability analysis
The project managers have ultimate responsibility for the planning and control of everything related to the project. Thus they must manage the interfaces to external entities that influence the product, staff, or customer.
In most organizations, the project manager (as well as the team) must interact with various groups external to the project team, including
Quality Assurance
Configuration management
Senior management
Customer
The customer may be an actual customer who takes delivery, a marketing group responsible for defining requirements, or another group within the organization that receives the results (e.g., a project defined as the integration test for a large system could define its customer as the qualification test team).
Some organizations have established groups external to projects to handle functions such as documentation, validation, final qualification testing… These, too, may be external groups the project manager needs to interact with.