This document discusses quality assurance metrics and trends in practice. It begins by providing examples of costly technology failures from bugs in various systems. It then defines various known QA metrics that can be classified as product, process, or QA metrics. The document discusses representing metrics and their dependencies using a graph theory approach to analyze and prioritize metrics based on their calculated weights and dependencies. It provides an example of applying this approach to define and analyze key metrics for a test project. The analysis reveals that certain metrics should be prioritized for improvement based on the calculated weights.
Measuring Quality: Testing Metrics and Trends in Practice - TechWell
In today's fast-paced IT world, companies follow “best” testing trends and practices with the assumption that, by applying these methodologies, their product quality will improve. But that does not always happen. Why? Liana Gevorgyan questions and defines, in the language of metrics, exactly what is expected to be changed or improved, and how to implement these improvements. While your project is in progress, choosing the right metrics and looking at their trends help you understand what must change to improve your methodology. Metrics—customer satisfaction, critical/blocking issues ratio with trends for each iteration, gap analysis results and improvement metrics, automation scripts, and test case coverage—and their priority are defined by assigning weight for each based on current project size, process model, technology, time, and goal. With a long list of metrics and measurement techniques, learn to drill down to what really makes sense in your organization. Develop a model that meets your needs and evaluates changes more effectively.
A lean model based outlook on cost & quality optimization in software projects - Sonata Software
A great deal of effort and research has been invested in addressing the cost and quality factors in software projects. Though the solutions, models, and methodologies are well established through proven processes, adopting and optimizing the required parameters for a specific project to obtain predictable, acceptable quality at minimum cost has always remained a challenge.
This paper discusses the Lean process in detail with the help of project data and demonstrates that simple, affordable, and adoptable processes are more economical and focused on quality. It is also observed that the Lean model enables a project to be well monitored and controlled by focusing on critical elements, thereby reducing the overhead of bulky documentation and irrelevant processes. These findings are statistically analyzed using the coefficient of variation, which correlates directly with predictable quality in a project.
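The coefficient of variation mentioned above is a standard statistic (sample standard deviation divided by the mean); a minimal sketch of how it could be computed for, say, per-iteration defect counts (the sample data below is hypothetical):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """Relative dispersion: sample standard deviation divided by the mean.
    Lower values indicate a more stable, more predictable process."""
    return stdev(samples) / mean(samples)

# Hypothetical defect counts per iteration for two projects
stable_project = [12, 11, 13, 12, 12]
erratic_project = [2, 25, 7, 30, 4]

print(coefficient_of_variation(stable_project))   # small: predictable quality
print(coefficient_of_variation(erratic_project))  # large: unpredictable quality
```

Because the coefficient of variation is dimensionless, it lets you compare the stability of projects whose absolute defect counts differ widely.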
Maximo Oil and Gas 7.6.1 HSE: Permit to Work Overview - Helen Fisher
Example use cases:
-Hot work permit
-Confined space entry permit
-Cold work permit
-Work Control Certificate
-Excavation Permit
-Diving Permit
-Breaking containment permit
Software Test Metrics and Measurements - Davis Thomas
Explains in detail, with examples, the calculation of:
1. Percentage of test cases executed (test coverage)
2. Percentage of test cases not executed
3. Percentage of test cases passed
4. Percentage of test cases failed
5. Percentage of test cases blocked/deferred
6. Defect density
7. Defect removal efficiency (DRE)
8. Defect leakage
9. Defect rejection ratio (invalid bug ratio)
10. Percentage of critical defects
11. Percentage of high defects
12. Percentage of medium defects
13. Percentage of low/lowest defects
Autonomous control of a thermal distortion tester - Michael Sallmen
Senior Design Project Presentation Slides.
Project Description:
The Thermal Distortion Tester measures the thermo-mechanical deformation of a sand mold used in metal casting. A closed-loop feedback system implemented in the graphical programming language LabVIEW™ reduces human interaction and thus improves measurement accuracy. A high-capacity power supply enables use of higher temperatures, matching conditions found in the metal casting industry.
Link to final report: https://www.slideshare.net/MichaelSallmen/autonomous-control-of-a-thermal-distortion-tester-final-report?qid=eb1848d2-b81d-4e46-bde2-dda2ddb05b8e&v=&b=&from_search=2
Measurement System Analysis is the first step of the Measure Phase of an improvement project. Before you can pass judgment on the process, you need to ensure that your measurement system is accurate, precise, capable and in control.
Measure, Metrics, Indicators, Metrics of Process Improvement, Statistical Software Process Improvement, Metrics of Project Management, Metrics of the Software Product, 12 Steps to Useful Software Metrics
Otto Vinter - Analysing Your Defect Data for Improvement Potential - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Analysing Your Defect Data for Improvement Potential by Otto Vinter. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Application of on-line data analytics to a continuous process polybutene unit - Emerson Exchange
This Emerson Exchange 2013 presentation summarizes the 2013 field trial results achieved by applying on-line continuous data analytics to Lubrizol’s continuous polybutene process. Continuous data analytics may be used to provide an on-line prediction of quality parameters and enable on-line detection of fault conditions. Information is provided on improvements made in the model used for quality parameter prediction, and how the field trial platform was integrated into the process unit. Presenters Qiwei Li, production engineer, Efren Hernandez and Robert Wojewodka, Lubrizol Corp., and Terry Blevins, principal technologist at Emerson, won best in conference in the process optimization track for this presentation.
The goal of this presentation is to understand how to identify the greatest vendor-related risks for an AMI deployment, understand who the stakeholders are for an AMI roll out, and to review and discuss examples of risk mitigation.
Automation Essentials for the Age of Agile - Applause
Applause automation experts share the steps to successfully implementing automation in your agile QA strategy. Everything is covered, from evaluating your own testing strategy to exploring automation across the SDLC to automation best practices as you mature.
Driving Innovation with Kanban at Jaguar Land Rover - LeanKit
Find out how Kanban is accelerating product design and development at Jaguar Land Rover.
Watch the recorded webinar here: https://vimeo.com/172780037
Hamish McMinn, Automotive and IT Project Manager, will explain how Kanban is improving time, cost and quality across new vehicle development projects at Jaguar Land Rover.
You'll learn:
-Why new product development provides rich opportunities for continuous process improvement.
-Benefits and challenges of transferring agile software techniques to hardware design and development.
-How to visualize work, focus on flow and increase cross-functional collaboration using LeanKit.
Hamish will share learnings from the initial pilot project, and how Kanban is now being scaled across multiple engineering teams.
From sensor readings to prediction: on the process of developing practical so... - Manuel Martín
Automatic data acquisition systems provide large amounts of streaming data generated by physical sensors. This data forms an input to computational models (soft sensors) routinely used for monitoring and control of industrial processes, traffic patterns, environment and natural hazards, and many more. The majority of these models assume that the data comes in a cleaned and pre-processed form, ready to be fed directly into a predictive model. In practice, to ensure appropriate data quality, most of the modelling efforts concentrate on preparing data from raw sensor readings to be used as model inputs. This study analyzes the process of data preparation for predictive models with streaming sensor data. We present the challenges of data preparation as a four-step process, identify the key challenges in each step, and provide recommendations for handling these issues. The discussion is focused on the approaches that are less commonly used, while, based on our experience, may contribute particularly well to solving practical soft sensor tasks. Our arguments are illustrated with a case study in the chemical production industry.
Doing Analytics Right - Designing and Automating Analytics - Tasktop
There is no “one-size-fits-all” for development analytics. It is not as simple as “here are the measures you need, go implement them.” The world of software delivery is too complex, and software organizations differ too significantly, to make it that simple. As discussed in the first webinar, the analytics you need depend on your unique business goals and environment.
That said, the design of your analytics solution will still require:
* The dashboards,
* the required data, and
* an appropriate choice of analytical techniques and statistics to apply to the data.
This webinar will describe a straightforward method for finding your analytic solution. In particular, we will explain how to adapt the Goal, Question, Metric (GQM) method to development processes. In addition, we will explain how to avoid “the light is brighter here” analytics anti-pattern: the idea that organizations tend to design metrics programs around the data they can easily get, rather than figuring out how to get the data they really need.
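The GQM method mentioned above can be illustrated with a toy goal-question-metric tree (the goal, questions, and metric names here are invented for illustration, not taken from the webinar):

```python
# A minimal GQM (Goal, Question, Metric) tree. Each goal is refined into
# questions, and each question is answered by one or more concrete metrics,
# so data collection is driven by goals rather than by whatever is easy to get.
gqm = {
    "goal": "Reduce defects escaping to production",
    "questions": [
        {
            "question": "How effective is testing at catching defects?",
            "metrics": ["defect removal efficiency", "defect leakage"],
        },
        {
            "question": "Where do escaped defects originate?",
            "metrics": ["defects per area", "defects per severity"],
        },
    ],
}

def metrics_for_goal(tree):
    """Collect every metric the goal ultimately requires us to gather."""
    return [m for q in tree["questions"] for m in q["metrics"]]

print(metrics_for_goal(gqm))
```

Working top-down from the goal, as here, is exactly what avoids the "the light is brighter here" anti-pattern: the metric list is derived from the questions, not from the data already at hand.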
One of the most challenging problems that test managers face involves implementing effective, meaningful, and insightful test metrics. Data and measures are the foundation of true understanding, but the misuse of metrics causes confusion, bad decisions, and demotivation. Rex Black shares how to avoid these unfortunate situations by using metrics properly as part of your test management process. How can we measure our progress in testing a project? What can metrics tell us about the quality of the product? How can we measure the quality of the test process itself? Rex answers these questions, illustrated with case studies and real-life examples. Learn how to use test case metrics, coverage metrics, and defect metrics in ways that demonstrate status, quantify effectiveness, and support smart decision making. Exercises provide immediate opportunities for you to apply the techniques to your own testing metrics. Join Rex to jump-start a new testing metrics program or gain new ideas to improve your existing one.
The focus of this approach is on improving the sigma level using the QC story method, which incorporates both quality control and quality improvement. All quality control efforts directly enhance the sigma level of components, as does reducing the number of defectives per million (DPM), which directly affects the sigma level. The paper discusses a specific technique for addressing these problems and goals: improving the sigma level of the shop and reducing the DPM level. During machining operations, several types of defects occur; by categorizing these defects and analyzing them, standards can be set so that the likelihood of the defects recurring is reduced and the sigma level is improved. Knowledge of the quality control procedure has to be passed on to everyone in the company. Total Quality Control can be achieved through proper methodology, and the initial start-up for fully implementing TQM may take a few months before a company can claim to be a TQM company. Thereafter, the standardized procedures must be followed by all concerned to retain the progress achieved.
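The DPM-to-sigma relationship the paper relies on is standard Six Sigma arithmetic; a sketch using the conventional 1.5-sigma long-term shift (the machining-shop defect counts below are hypothetical):

```python
from statistics import NormalDist

def dpm(defectives, opportunities):
    """Defectives per million opportunities."""
    return 1_000_000 * defectives / opportunities

def sigma_level(dpmo):
    """Convert DPM(O) to a sigma level using the conventional 1.5-sigma
    long-term shift, so 3.4 DPM corresponds to roughly 6 sigma."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Hypothetical machining shop: 120 defective parts out of 50,000 produced
d = dpm(120, 50_000)
print(round(d))                    # 2400 defectives per million
print(round(sigma_level(d), 2))    # roughly 4.32 sigma

print(round(sigma_level(3.4), 2))  # ~6.0, the classic Six Sigma target
```

The function makes the paper's point concrete: halving the defective count moves the sigma level up only fractionally, which is why sustained, categorized defect reduction is needed rather than one-off fixes.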
Watch the companion webinar at: http://embt.co/1xNpsuD
With changes in software development methodologies, the role of the data modeler has changed significantly. In many organizations, data modelers now find themselves on the outside looking in, relegated to documentation "after the fact" rather than active participation where the true value is added. In order to participate fully, modelers must not only adapt to an Agile work style, but must also be able to communicate the business value of model-driven development.
This session is based on a real case study where data modeling was introduced part way through a significant software development project that was quickly losing momentum due to high defect levels. Ron Huizenga will show the contrast in metrics and cost when utilizing skilled data modelers versus a development-only approach, with topics including:
Modeler participation in multiple Agile teams
Defect categories and impact
Measurement and analysis techniques
Remediation strategy
Break-through quality improvements
This "must see" session is not only for data modelers and architects, but also the decision makers for these initiatives, with information that is vital to modelers, IT executives and business sponsors. So, bring your boss to the session!
3. 1999: Mars Climate Orbiter Crash
Instead of using the specified metric units for navigation, the contractor carried out measurements in imperial units, and the spacecraft crashed into Mars.
Cost: $135 million
4. 1996: Ariane 5 Failure
The Ariane 5 rocket exploded 36.7 seconds after takeoff. The new rocket was much faster than the previous models, but a software bug in code reused from them went unnoticed.
Cost: >$370 million
5. 2003: EDS Fails Child Support
EDS created an IT system for the Child Support Agency in the UK that had many software incompatibility errors.
Cost: $1.1 billion
6. 2013: NASDAQ Trading Shutdown
On August 22, 2013, the NASDAQ stock market shut down trading for three hours because of a computer error.
Cost: $2 billion
7. 1985-1987: Therac-25 Medical Accelerator
A software failure caused wrong dosages of X-rays: dosages hundreds or thousands of times greater than normal, resulting in death or serious injury.
Cost: 5 human lives
8. Technology in Our Daily Life
Average usage of electronic systems in developed countries:
- One PC or desktop in each home
- 80% of people use mobile phones
- 40% of people drive cars with various electronic systems
- People travel by train or plane on average once a year
- Dozens of other embedded systems in our homes
- Dozens of software programs in our workplaces and service systems
The quality of all these systems equals the quality of life!
11. Several Known QA Metrics and Trends
- Manual & automation time ratio during the regression cycle
- Script maintenance time during the delivery iteration
- Daily manual test case execution
- Automation effectiveness for issue identification
- Issues found per area during regression
- Areas impacted after new feature integration
- Issue identification behavior after major refactoring
- Software process timetable metrics
- Delivery process productivity metric
- Software system availability metrics
- Test case coverage
- Automation coverage
- Issues defined based on gap analysis
- Ambiguities per requirement
- Identified issues by criticality
- Identified issues by area
- Issue resolution turnaround time
- Backlog growth speed
- Release patching tendency and costs
- Customer escalations by Blocker/Critical/Major issues per release
- QA engineer performance
- Continuous integration efficiency
17. Real Life
- Delivery is not always ideal
- We are all familiar with patching a release
- Lack of process tracking data for analysis
- Experimental delivery models are not exactly best-practice models
20. Agile Process
[Scrum process diagram: the client-prioritized Product Backlog feeds a Sprint Backlog of features assigned to the sprint, estimated by the team, and backed by a team commitment. Time-boxed test/develop cycles, with Scrum meetings every 24 hours, turn backlog tasks into working code ready for deployment; testing, validation, and verification run throughout.]
21. Agile Process Metrics
Scrum team sprint metrics:
- Scrum team's understanding of sprint scope and goal
- Scrum team's adherence to Scrum rules & engineering practices
- Scrum team's communication
- Retrospective process improvement
- Team enthusiasm
- Quality delivered to customer
- Team velocity
- Technical debt management
- Actual stories completed vs. planned
22. Processes Are Not Always Best Practices
- A unique way of doing Agile
- Transition from Waterfall to Agile
- Transition from Agile to Kanban
23. Metrics Set Definition for Your Project
- Process
- Technology
- Iterations
- Project/Team Size
- Goal
24. SECTION 4: Weight-Based Analysis for QA Metrics & Measurements - Mapping with Graph Theory
25. Metrics and Trends for Your Project
- You are watching a set of metrics/trends; are they the right ones?
- The trends are in an acceptable range, but the product's quality is not improving?
- Trying to improve one metric while another goes down?
- How do you analyze and fix it?
26. Mapping QA Metrics Into Graph Theory
- Process metrics: A = Metric 1, B = Metric 2, …
- Product metrics: C = Metric 3, D = Metric 4, …
- Actions/data sets that affect metrics: A1, A2, …
- Metric dependencies on specific actions
27. Preconditions & Definitions for the Metrics & Actions Mapped Model
- A node's initial weight is predefined and has a value from 1-10
- An edge's weight is predefined and has a value from 1-10
- Connections between nodes are defined based on the dependencies of metrics on each other and on actions
- All actions have a fixed weight of 1
28. Initial Metrics Model & Dependencies
Assume the current metric set is 2 process metrics (M1, M2) and 2 product metrics (M3, M4), where M1 depends on M3, M1 depends on M4, and M2 depends on M3.
There are 3 actions or data sets that affect some of the metrics: A1, A2, A3, where M1 depends on A1 and A2, and M4 depends on A3.
Initial priority based on best practices:
W(M1) = 5
W(M2) = 4
W(M3) = 3
W(M4) = 2
29. Metrics Visualization via Graph
[Graph diagram: metric nodes M1 (weight 5), M2 (weight 4), M3 (weight 3), and M4 (weight 2) plus action nodes A1, A2, A3, connected by the dependency edges defined in the initial model. Process metrics, product metrics, actions/data sets, and metric dependencies are distinguished in the legend.]
30. Weight Assignment on Undirected Graph
[The same graph with weights assigned to the edges: the metric-to-metric edges carry weights such as 2, 3, 5, and 6, while every action edge carries the fixed weight 1.]
31. Calculation Formula for the Metrics' New Priority
The priority of a node is calculated from:
- the initial priority of the node,
- the node weight assigned by the user, and
- the cumulative weight W(A) of the node's edges.
34. Metrics Priorities: Current vs. Calculated
Initial priority based on best practices: M1, M2, M3, M4
Project-dependent calculated priority: M1, M3, M2, M4
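The re-prioritization in this example can be sketched in code. The slides do not spell out the exact combining formula, so the sketch assumes the simplest rule consistent with the stated definitions (priority = the node's own weight plus the cumulative weight of its incident edges); the metric-to-metric edge weights are hypothetical, chosen only to reproduce the ordering shown above:

```python
# Metric nodes with user-assigned weights (from the initial model) and the
# dependency edges from the graph slides. Metric-to-metric edge weights here
# are hypothetical; action edges carry the fixed weight 1.
node_weight = {"M1": 5, "M2": 4, "M3": 3, "M4": 2}

edges = [
    ("M1", "M3", 6),  # hypothetical edge weight
    ("M1", "M4", 2),  # hypothetical edge weight
    ("M2", "M3", 5),  # hypothetical edge weight
    ("M1", "A1", 1),  # action edge: fixed weight 1
    ("M1", "A2", 1),  # action edge: fixed weight 1
    ("M4", "A3", 1),  # action edge: fixed weight 1
]

def calculated_priority(metric):
    """Assumed rule: node weight plus cumulative weight of incident edges."""
    incident = sum(w for a, b, w in edges if metric in (a, b))
    return node_weight[metric] + incident

ranked = sorted(node_weight, key=calculated_priority, reverse=True)
print(ranked)  # ['M1', 'M3', 'M2', 'M4'] -- matches the calculated priority
```

Under this rule M3 overtakes M2 because it sits on two heavily weighted dependency edges, which is exactly the kind of re-ordering the slides argue a best-practices-only priority would miss.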
36. Metrics Definition for a Test Project
- Process: Agile with area ownership
- Technology: SaaS-based enterprise web & mobile app
- Iteration: 2 weeks
- Project size: 5 Scrum teams
- Goal: customer satisfaction; no blocker/critical issues escalated by customers
37. Key Metrics and Dependencies
Metrics:
- M1: Customer escalations per defect severity (product metric)
- M2: Opened valid defects per area (product metric)
- M3: Rejected defects (process metric)
- M4: Test case coverage (process metric)
- M5: Automation coverage (process metric)
- M6: Defect fixes per criticality (product metric)
Actions and data sets:
- A1: Customer types per investment and escalations per severity
- A2: Most buggy areas
41. Key Metric Changes & Improvement Plans
Metrics by calculated priority: M1 (customer escalations per defect severity), M3 (rejected defects), M6 (defect fixes per criticality per team), M2 (opened valid defects per area), M4 (test case coverage), M5 (automation coverage).
- Group defects by severity and by customer investment to understand the real picture; 1,000 minor issues can cost more than one high-severity issue.
- Provide training to lower defect rejection, so developers do not spend time analyzing invalid issues.
- Make sure defect fixes proceed in parallel with new feature development in each sprint.
- Continuously update test cases after each new issue to maintain good coverage.
- Automate as much as possible to cut costs and increase coverage.
42. Monitoring of Trend-Based Priority Metrics Based on Process Changes
[Line chart tracking M1, M3, and M6 from January through April; the three series read approximately 60, 70, 68, 75; 20, 25, 29, 34; and 70, 80, 82, 78.]
44. Thank You
About Us: InfoStretch Corporation is a leading provider of next-gen mobile application lifecycle services, ranging from design and development to testing and sustenance.
Global footprint: Corporate HQ in Silicon Valley; offices in Conshohocken (PA), Ahmedabad (India), Pune (India), and London (UK).
45. References
- Narsingh Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice Hall, 1974.
- A. A. Shariff K, M. A. Hussain, and S. Kumar, "Leveraging unstructured data into intelligent information – analysis and evaluation," Int. Conf. on Information and Network Technology, IPCSIT, vol. 4, IACSIT Press, Singapore, pp. 153-157, 2011.
- http://en.wikipedia.org/wiki/List_of_software_bugs
- http://www.starshipmodeler.com/real/vh_ari52.htm
- http://news.nationalgeographic.com/news/2011/11/pictures/111123-mars-nasa-rover-curiosity-russia-phobos-lost-curse-space-pictures/
- http://www.bloomberg.com/news/articles/2013-08-22/nasdaq-shuts-trading-for-three-hours-in-latest-computer-error