This article from Minitab Inc. helps you understand how to improve inspection processes.
To learn more about using Minitab Statistical Software in inspection processes, visit
http://minitab.blackberrycross.com
Better Managed Outsourced Testing
by Nathan Shearer
Introduction
This guide focuses on three key failure points:
1) misunderstanding of the cost of quality;
2) misalignment of contractual arrangements to the required testing/delivery outcomes; and
3) lack of effective metrics.
Conclusion
Organisations that fail to fully understand their outsourced contracts, and that lack effective metrics and role accountabilities to monitor them, often see cost overruns, project delays, and reduced ROI against the outsourced agreement.
Although the main driver for outsourced testing is often cost reduction, relying on a contractual arrangement to drive or manage this outcome often leads to a unit-cost-of-testing KPI instead of a metric based on the total cost of quality. Because KPIs drive behaviour, the organisation will be left with statistics proving the KPI was met, but a less than optimal quality outcome. The outsourced testing provider’s approach to this situation would be to engage a professional, external organisation to perform a capability review. That review would recommend improvements or enhancements to the organisation’s governance of the commercial arrangement and the testing outcomes, together with changes to the metrics used to report against them.
Outsourced testing can be successful: it can speed up test execution, improve quality and reduce costs. It is important to ensure the total cost of quality is understood and that the quality outcomes are defined and aligned to the commercial agreement. Appropriate metrics should also be created, recorded and measured by both the provider and the client.
Measurement System Analysis is the first step of the Measure Phase of an improvement project. Before you can pass judgment on the process, you need to ensure that your measurement system is accurate, precise, capable and in control.
Measurement Systems Analysis Training Module (Frank-G. Adler)
The Six Sigma Measurement Systems Analysis (MSA) Training Module includes a 62-slide MS PowerPoint presentation covering an introduction to Measurement Systems Analysis (relevance, discrimination, accuracy, stability, linearity, precision), a Variable Gage R&R study, and an Attribute Gage R&R study.
RCA on Residual defects – Techniques for adaptive Regression testing (Indium Software)
One important issue associated with a system-lifespan view, largely ignored in past years, is
the effect of enduring defects, defects that persist undetected across several releases of a
system. Many studies to date have evaluated regression-testing techniques only in limited
contexts, such as short-term assessments, that do not fully reflect industry conditions.
Reports estimate that regression testing consumes as much as 80% of the overall testing
budget and can consume up to 50% of the cost of software maintenance.
Join us as Howard Zion, Transcat's Director of Service Application Engineering, explains how an improper test uncertainty ratio directly impacts false acceptance/rejection of product. This webinar, entitled “In-Tolerance Non-Conformance Investigations”, will help you understand:
- The familiar concept of OOT Measurement Risk
- The less-familiar concept of In Tolerance Measurement Risk
- The purpose of Guard Band
- How these concepts compare to your organization’s current definitions
- How to apply these concepts to control of your measurement processes in order to reduce/eliminate product false accept/reject situations
The focus of the approach is on raising the sigma level by using the QC story, which
incorporates both quality control and quality improvement. All quality control efforts directly
improve the sigma level of components, as does reducing the number of defectives per million
(DPM), which directly affects the sigma level. The paper discusses a specific technique to
address these problems, the goals being an improved sigma level for the shop and a reduced
DPM level. During machining operations, several types of defects can occur. These defects are
categorized and analyzed, and standards are then established so that the likelihood of the
defects recurring is reduced and the sigma level is improved. Knowledge of the quality control
procedure has to be passed on to everyone in the company. Total Quality Control is achieved
through proper methodology; the initial start-up of a full TQM implementation may take a few
months before a company can claim to be a TQM company. Thereafter, the standardized
procedures must be followed by all concerned to retain the progress achieved.
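The relationship between DPM and sigma level can be sketched with the standard normal quantile function. The 1.5-sigma shift below is the usual Six Sigma long-term convention, and the DPM figures are the familiar table values, not data from this paper:

```python
from statistics import NormalDist

def sigma_level(dpm, shift=1.5):
    """Convert defectives per million (DPM) to a process sigma level.

    Applies the conventional Six Sigma 1.5-sigma long-term shift,
    so 3.4 DPM corresponds to roughly 6 sigma.
    """
    yield_fraction = 1 - dpm / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

print(round(sigma_level(3.4), 2))    # ~6.0
print(round(sigma_level(66807), 2))  # ~3.0
```

Reducing DPM therefore raises the sigma level directly, which is why defect categorization and prevention translate into a measurable sigma improvement.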
Software testing effort estimation with cobb douglas function a practical app... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Software testing effort estimation with cobb douglas function- a practical ap... (eSAT Journals)
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for the project’s effort estimation, planning, scheduling and budget planning. This paper illustrates a model whose objective is to depict the accuracy and bias variation of an organization’s estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of estimates. Data were gathered for completed projects in the organization, covering about 13 releases. All variables in this model were statistically significant at the p < 0.05 and p < 0.01 levels. The Cobb-Douglas function was selected and used for software testing effort estimation. The results achieved with CDF were compared with the estimates provided by the area expert; the model’s estimates were more accurate than expert judgment. CDF is thus an appropriate technique for estimating software testing effort. The model’s accuracy is 93.42%.
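As a sketch of how a Cobb-Douglas effort model can be fitted: taking logarithms turns the power-law form into a linear regression. The data below are synthetic, generated from known parameters, and the driver names are invented for illustration; they are not the paper's 13-release dataset:

```python
import numpy as np

# Synthetic data generated from a known Cobb-Douglas relationship
# (A = 2.0, a = 0.7, b = 0.3); the drivers are invented for illustration.
test_cases = np.array([120., 200., 150., 300., 250., 180.])
requirements = np.array([30., 55., 40., 80., 70., 45.])
effort = 2.0 * test_cases**0.7 * requirements**0.3

# Cobb-Douglas: effort = A * x1^a * x2^b. Taking logarithms gives a
# linear model, ln(effort) = ln(A) + a*ln(x1) + b*ln(x2), which can
# be fitted by ordinary least squares.
X = np.column_stack([np.ones_like(effort), np.log(test_cases),
                     np.log(requirements)])
(lnA, a, b), *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)

# Recovers approximately A = 2.0, a = 0.7, b = 0.3
print(round(float(np.exp(lnA)), 3), round(float(a), 3), round(float(b), 3))
```

With real project data the fit would not be exact; the residuals are what the paper's accuracy and bias analysis examines.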
20% of software projects fail. Failed projects cost the world economy over $500 billion a year. The hidden costs of making, finding, and fixing software defects consume 30% or more of the IT budget, and these figures continue to get worse. Software bugs are eating your lunch. You can reverse the trend, as many companies have, by applying some proven, practical methods: Good Software Engineering Practice. You can do much of this on your own if you analyze the Cost of Quality and use it as a fundamental metric to demonstrate the effectiveness and efficiency of everything you do. If you add Fagan Inspections (static testing) to your existing software testing, you can start reducing the cost of quality immediately. In this study of one company’s results over several years, we show you step by step how they demonstrated over $6 million in cost savings and avoidance. You can do the same and more. We can help you realize your own dramatic improvements in bottom- and top-line results. Rebecca Staton-Reinstein, president, Advantage Leadership, Inc.
A lean model based outlook on cost & quality optimization in software projects (Sonata Software)
A large quantum of effort and research is being invested to address the Cost and Quality factors in software projects. Though the solutions, models and methodologies are well established through experimented processes, adoption and optimization of the required parameters for a specific project to obtain predictable and acceptable quality with minimum costs has always remained a challenge.
This paper discusses the Lean process in detail with the help of project data and demonstrates that simple, affordable and adoptable processes are more economical and focused on quality. Also, it is observed that the Lean Model enables a project to be well monitored and controlled by focusing on critical elements, thereby reducing overheads of bulky documentation and irrelevant processes. The above findings are statistically analyzed using the coefficient of variation, which strikes a direct correlation with the predictable quality in a project.
From Data to Insights: How IT Operations Data Can Boost Quality (Cognizant)
By leveraging deeply analyzed operational data - the voice of customers, machines and tests - quality assurance (QA) and IT groups can achieve major gains in application quality and user experience.
One of the most challenging problems that test managers face involves implementing effective, meaningful, and insightful test metrics. Data and measures are the foundation of true understanding, but the misuse of metrics causes confusion, bad decisions, and demotivation. Rex Black shares how to avoid these unfortunate situations by using metrics properly as part of your test management process. How can we measure our progress in testing a project? What can metrics tell us about the quality of the product? How can we measure the quality of the test process itself? Rex answers these questions, illustrated with case studies and real-life examples. Learn how to use test case metrics, coverage metrics, and defect metrics in ways that demonstrate status, quantify effectiveness, and support smart decision making. Exercises provide immediate opportunities for you to apply the techniques to your own testing metrics. Join Rex to jump-start a new testing metrics program or gain new ideas to improve your existing one.
Before you can fix a problem, you must first see it. However, the longer you're in the same place, the more difficult it is to see the waste around you.
Taking a 'waste walk' is one way to make the waste visible again. A waste walk is more than just going to the gemba. It is a planned visit to where work is being performed to observe what's happening and to specifically look for waste.
Linda Dodge and Janell Vickers are Lean Six Sigma Black Belts from Catholic Health Partners (CHP). In a webinar hosted by MoreSteam, Linda and Janell shared their experiences utilizing Waste Walks in hospital settings and physician practices to help front line staff open their eyes to find the invisible waste.
These slides cover the following key points:
The key objectives of a waste walk
Finding your own 'waste eyes' and helping others to find theirs
How to use waste walks to engage employees in problem-solving and operational excellence
A map to conduct your own waste walk
More information:
www.blackberrycross.com
https://www.youtube.com/user/blackberryandcross
Slides from Webcast: Deploying Storytelling at Mercy Health
Stories are powerful change agents. How do you tell a story? How does your organization tell a story? What if we told a story the same way? What if goals were aligned on impactful work?
Join Joe Schultz to hear about Mercy Health's journey to tell stories in an impactful way using an A3 Storyteller. Over the past nine months, the Toledo and Lima regions have set upon a mission to use an A3 Storyteller that combines problem-solving elements from the healthcare industry (SBAR: Situation Background Assessment Recommendation, PDSA, and poster presentation style). Learn from the trials and tribulations that Mercy encountered during the phases of its deployment: assembling the team, designing the tool, piloting with one site, deploying across seven hospitals, and then setting up accountability structures for outcome monitoring.
In this session, the following key points will be covered:
The power of stories, standard work, and making it sticky
How to get senior leaders engaged
Create an A3 your way
Generate momentum around cascading goals / key performance indicators
Create communication strategy and roll-out plan
Create accountability structures and sustaining change
Learn more and download the slides of this webcast:
e-learning course on LEAN Methods:
http://bit.ly/lean-methods-bbcross
All copyrights belong to MoreSteam LLC. Used here with authorization from MoreSteam.
These slides are part of the Webcast:
A3 Reports: Polishing the Elevator Speech
"So, how's your project going? Hey, I only have 10 minutes, so can you just give me a quick update? No need to spend the time on the detail, just hit the highlights..."
Sound familiar? Maybe it's time for you to use the A3 Report, a Lean tool meant to identify and communicate critical project information and to facilitate decision-making. The A3 has been widely adopted for use in tollgate project reviews; however, expanding its functionality beyond that realm and getting Belts to internalize 'A3 thinking' can be a formidable challenge.
About the webcast: a one-hour webcast led by Clopay's Tor Chamberlain on how his organization has made great strides in communication and transparency using the A3 Report. There's nothing scary about A3 Reports!
In this session, the following key points will be covered:
Types of projects most suitable for A3s: short-duration Kaizen activities and/or complex DMAIC projects
Use of A3 methodology to communicate effectively to different levels in the organization
Use of A3 methodology to identify and track LSS projects
How to coach A3 thinking
This material is the property of MoreSteam LLC. Used with authorization.
Watch the webcast:
Learn more about A3:
www.moresteam.com
www.blackberrycross.com
From unicorns to race horses. Es el momento de Machine Learning para Excelenc... (Blackberry&Cross)
From Minitab Inc. comes this excellent content showing the evolution of the digital world and how, without being an expert in Python, C#, C or other languages, you can use Minitab technology to raise your business performance to new and better levels.
#Minitabwebinar
Modern tool kit for process excellence, thanks to Minitab Inc. (Blackberry&Cross)
From the webinar of the same name, Minitab Inc., a Blackberry&Cross partner for all of Central America since 2004, brings us these valuable ideas on how to combine DMAIC with Machine Learning tools.
Excellent content.
Discover more about Minitab at http://minitab.blackberrycross.com
Machine learning: The next step in manufacturing performance (Blackberry&Cross)
Minitab, our partner since 2004, gave this presentation on Machine Learning, and here we share the future with you.
Click more, code less.
This material is provided by Prezi.
Blackberry&Cross is a partner of Prezi Business, and we want to help you be more successful in your sales, marketing and presentations.
Material in Portuguese.
Cpk: indispensable index or misleading measure? by PQ Systems (Blackberry&Cross)
Capability analysis is a set of calculations used to assess whether a system is able to meet a set of requirements. Customers, engineers, or managers usually set the requirements, which can be specifications, goals, aims, or standards.
The primary reason for doing a capability analysis is to answer the question: Can we meet customer requirements? To be more specific: Can our system produce consistently within tolerances required by the customer now and in the future?
Capability analysis involves two entities: 1) the producer and 2) the consumer. The consumer sets the requirements and the producer must be able to meet the requirements.
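A minimal sketch of the Cp and Cpk calculation behind such a capability analysis. These are the standard textbook formulas; the sample data and specification limits are made up, and the overall sample standard deviation stands in for the within-subgroup sigma a full study would estimate:

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Process capability indices from sample data and spec limits.

    Cp compares the spec width to the process spread (6 sigma);
    Cpk also penalizes processes that are off-center. Uses the overall
    sample standard deviation as a simple stand-in for within-subgroup
    sigma.
    """
    mean = statistics.fmean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative measurements against spec limits 9.7 - 10.3
sample = [10.01, 9.98, 10.05, 9.95, 10.02, 10.00, 9.97, 10.04]
print(cp_cpk(sample, lsl=9.7, usl=10.3))
```

A Cpk well below Cp signals that the process is capable but off-center, which is exactly the distinction the title's "indispensable index or misleading measure?" question probes.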
PQ Systems and Blackberry&Cross have been partners since 2006.
Visit: http://www.blackberrycross.com
BPM: Business Process Management with FireStart BPM Suite by Prologics (Blackberry&Cross)
Thanks to the Prologics-Blackberry&Cross alliance, you can take part in web conferences, demonstrations and trials to implement and deploy BPM in your company.
More information at www.BlackberryCross.com
Universities, technical colleges and academic high schools can acquire Snagit, Camtasia, Minitab, Companion and many other software products through Blackberry&Cross. Special pricing for the academic sector.
8 WAYS TO BOOST BUSINESS WITH SMART DATA ANALYSIS: Minitab Insights Promo e-book (Blackberry&Cross)
A series of success stories in problem solving using #smart data, by means of Minitab Statistical Software.
Blackberry&Cross has been the official partner of Minitab Inc. in Central America since 2005.
Minitab Power User: Highlighting points for analysis (Blackberry&Cross)
Minitab's "highlight" mode is a powerful data visualization tool used as part of the analysis.
This presentation (created with TheBpr, in 18 seconds) shows you the steps.
More information at www.blackberrycross.com
Four steps to an audit proof measurement system by PQ Systems (Blackberry&Cross)
Four steps to achieve a "bulletproof" measurement system.
Improve your measurement system, assure your quality.
Thanks to PQ Systems, a Blackberry&Cross partner since 2006.
More information: http://www.blackberrycross.com
Warning: this presentation contains embedded videos that may not be available in the format you are accessing. To view the videos, write to soporte@blackberrycross.com
Build an inexpensive and fun catapult to teach your students statistical principles such as DOE (Design of Experiments), hypothesis testing, regression and ANOVA, among others. Inspired by articles from the Minitab Blog, this Blackberry&Cross presentation can be complemented with articles and practical tips available at http://i4is.blackberrycross.com. An ideal game for teachers and students of statistics, industrial engineering, quality and productivity courses, or LEAN Six Sigma programs.
Advanced DOE with Minitab (presentation in Costa Rica) (Blackberry&Cross)
DOE: Design of Experiments
This presentation was given by Minitab Inc. in Costa Rica in 2007, as part of the work of Blackberry&Cross, Minitab Inc.'s partner for Central America, in promoting and disseminating STEM topics and in marketing Minitab Statistical Software.
Concurrent licensing to reduce costs: a case example (Blackberry&Cross)
Your team needs software, but there are budget constraints. We all have them.
It may be worth exploring how concurrent licensing can help lower costs, and above all the initial investment. That way you equip your teams with the tools they need while making better use of your money at the same time.
This presentation invites industrial engineering students to review the need to strengthen and reinforce several technical skills in their careers. The skills mentioned are derived from research and empirical evidence collected by Blackberry&Cross and SixSigmaCR.com between 2005 and 2015 in Central America.
This presentation is not for profit, and when it is delivered, a presenter certified by Blackberry&Cross is required to elaborate on it.
A #quiz is also given to show students that it is fine to keep up with pop, sports or political culture, but that it is also essential to know the work, and its application, of the great figures of continuous improvement and quality engineering.
Disclaimers on the use of images, trademark rights and other matters are attached.
Summary of a talk given by Blackberry&Cross (http://www.bbcross.com) about LEAN (LEAN Six Sigma) in a context of sustainability, conservation and better management of environmental resources, as an integral part of continuous improvement management. Errors or contradictions found in field research are mentioned.
Blackberry&Cross, its shareholders, employees, partners and suppliers are not responsible for the use the reader makes of this material.
For more information contact Blackberry&Cross: soporte@blackberrycross.com
Hierarchical Digital Twin of a Naval Power System (Kerry Sado)
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the small maximum deviations between the developed digital twin hierarchy and the hardware.
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions (Victor Morales)
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy use. Optimizing the design of solar PV panels and pumped hydro energy supply systems, examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, which is particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas.
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort: we repeatedly find the extreme element (the maximum, for a max-heap) and move it to its final position, then repeat the process for the remaining elements.
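A minimal Python sketch of heapify, build-heap, and heap sort, using a max-heap so the largest remaining element is swapped to the end of the array on each pass:

```python
def heapify(arr, n, i):
    # Sift the element at index i down so the subtree rooted at i
    # satisfies the max-heap property (parent >= children).
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max-heap by heapifying every internal node, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Repeatedly move the max (root) to the end and restore the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```

Building the heap costs O(n) and each of the n extractions costs O(log n), giving the familiar O(n log n) total.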
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Breakthrough Improvement for Your Inspection Process
By Louis Johnson, Minitab Technical Training Specialist
Introduction
At final inspection, each manufactured part is often visually evaluated for defects and a decision
is made to pass or fail the part. These pass/fail decisions are extremely important to
manufacturing operations because they have a strong impact on scrap rates as well as process
control decisions. In fact, because Six Sigma projects focus on defect reduction, these pass/fail
decisions are often the basis for determining project success. Yet the same company that
demands precision for continuous measurements to less than 1% of the specification range often
fails to assess—and thus may never improve—their visual inspection processes.
This article presents a process to increase the accuracy of pass/fail decisions made on visual
defects and reduce these defects, advancing your quality objectives in an area that is commonly
overlooked. The Six Step Method for Inspection Improvement is illustrated using a case study
from Hitchiner Manufacturing Company, Inc., a manufacturer of precision metal parts that
successfully implemented this method.
The Six Step Method
The Six Step Method for Inspection Improvement applies established quality tools, such as
Pareto charts, attribute agreement analysis, inspector certification, and operator-by-part matrix,
to:
• Identify the area of greatest return
• Assess the performance of your inspection process
• Fix the issues affecting the accuracy of pass/fail decisions
• Ensure process improvements are permanent
An outline of the Six Step method is shown in Figure 1. The method is not designed to address
factors affecting the detection of defects, such as time allowed for inspection, skill and
experience of the inspector, or visibility of the defect. Instead, the primary goal is to optimize the
accuracy of pass/fail decisions made during visual inspection.
Figure 1. Six Step Method for Inspection Improvement
Six Step in Action: Case Study and Process Description
Hitchiner Manufacturing produces precision metal parts for automotive, gas turbine, and
aerospace industries. Its manufacturing operation involves melting metal alloys, casting
individual ceramic molds, vacuum-forging metal parts in the molds, peening to remove the mold,
machining to dimensions, inspecting, and packaging parts for shipment. Before initiating Six
Sigma, Hitchiner realized that defect reduction projects would rely on data from the inspection
process. Therefore, ensuring the accuracy of the inspection data was a critical prerequisite for
implementing Six Sigma. Hitchiner accomplished this by applying the process outlined in the Six
Step Method for Inspection Improvement.
Laying the Foundation for Improvement
Step 1: Analyze a Pareto Chart of Defects
The first step to improve your inspection process is to identify the visual defects that will net the
greatest return for your quality improvement efforts. Most inspection processes involve far too
many defects and potential areas for improvement to adequately address them all. The key to
your project success is to quickly home in on the most significant ones. To order the defects by
frequency and assess their cumulative percentages and counts, create a Pareto chart of defects. If
the impact is based not only on defect counts, but is also related to cost, time to repair,
importance to the customer, or other issues, use a weighted Pareto chart to evaluate the true cost
of quality. Another option is to conduct a simple survey of inspectors to determine which defects
are the most difficult to evaluate and therefore require the most attention. No matter which
approach you use, your key first step of the project is to identify the significant few defects on
which to focus your efforts.
Hitchiner used a Pareto analysis to compare the number of defective units for each type of defect
(Figure 2). The chart allowed them to prioritize the defects by their frequency and select six key
inspection defects. As shown by the cumulative percentages in the Pareto chart, these six defects
comprise nearly three-fourths (73%) of their total inspection decisions. In contrast, the other 26
defects represent only about one-quarter (27%) of the visual defects. Allocating resources and
time to improve the decision-making process for these 26 defects would not produce nearly as
high a payback for their investment.
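The ordering and cumulative percentages described above are easy to compute directly. The sketch below is illustrative only: the defect names and counts are hypothetical, loosely modeled on Figure 2, not Hitchiner's actual data.

```python
# Sketch of a Pareto analysis: order defects by count and compute
# cumulative percentages to identify the "significant few".
# The counts below are hypothetical, for illustration only.

def pareto(defect_counts):
    """Return (defect, count, cumulative %) rows, largest count first."""
    total = sum(defect_counts.values())
    rows, cum = [], 0
    for defect, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        cum += count
        rows.append((defect, count, round(100 * cum / total, 1)))
    return rows

counts = {"Wax drop": 469, "Y-Slot": 429, "Flat alignment": 348,
          "Bearing damage": 201, "Handling issue": 109, "Profile": 101,
          "Other": 56}
for defect, count, cum_pct in pareto(counts):
    print(f"{defect:15s} {count:5d} {cum_pct:6.1f}%")
```

A weighted Pareto, as mentioned above, is the same calculation with each count multiplied by a cost or severity weight before sorting.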
Figure 2. Pareto of Inspection Defects by Defect Type
Step 2: Agree on Defect Specifications
In the second step, have key stakeholders—Quality, Production, and your customers—agree on
definitions and specifications for the selected defects. Base the specifications on the ability to use
the part in your customer's process or on your end-user's quality requirements. When
successfully implemented, this step aligns your rejection criteria with your customer needs so
that "failed" parts translate into added value for your customers. Collecting inputs from all
stakeholders is important to understand the defect’s characteristics and their impact on your
customers. Ultimately, all the major players must reach consensus on the defect definition and
limits before inspectors can work to those specifications.
At Hitchiner, the end product of step 2 was a set of visual inspection standards for defining
acceptable and unacceptable defect levels for its metal parts. Figure 3 shows an example of one
of these standards for indent marks. By documenting the defects with example photographs and
technical descriptions, Hitchiner established clear and consistent criteria for inspectors to make
pass/fail decisions. For daily use on the plant floor, actual parts may provide more useful models
than photographs, but paper documentation may be more practical when establishing standards
across a large organization.
Figure 3. Defect Definition and Specification
Step 3: Evaluate Current Inspection Performance using Attribute Agreement Analysis
After defining the defect specifications, you're ready to assess the performance of the current
inspection process using an attribute agreement analysis. An attribute agreement study provides
information on the consistency of inspectors’ pass/fail decisions. To perform the study, have a
representative group of inspectors evaluate about 30 to 40 visual defects and then repeat their
evaluations at least once. From this study, you can assess the consistency of decisions both
within and between inspectors and determine how well the inspectors’ decisions match the
known standard or correct response for each part.
Table 1 summarizes the results of an attribute agreement study from Hitchiner. Five inspectors
evaluated 28 parts twice. Ideally, each inspector should give consistent readings for the same part
every time. Evaluate this by looking at the column # Matched within Inspector. In Table 1, you
can see that inspector 27-3 matched results on the first and second assessments for 19 of 28 parts
(68% agreement). Inspectors 17-2 and 26-1 had the highest self-agreement rates, at 85% and
100%, respectively.
Table 1. Assessment of Inspection Process Performance
Inspector                # Inspected   # Matched within       Kappa within   # Matched Standard   Kappa vs.
                                       Inspector / Percent*   Inspector      / Percent**          Standard
Inspector 17-1                28       17 / 61%               0.475          14 / 50%             0.483
Inspector 17-2                28       24 / 85%               0.777          21 / 75%             0.615
Inspector 17-3                28       17 / 61%               0.475          13 / 46%             0.365
Inspector 26-1                28       28 / 100%              1.000          26 / 93%             0.825
Inspector 27-3                28       19 / 68%               0.526          16 / 57%             0.507
Sum of all Inspectors        140       105 / 75%              0.651          90 / 64%             0.559

* Matched within Inspector = both decisions on the same part matched across trials
** Matched Standard = both decisions on the same part matched the standard value

Inspector                # Inspected   # Matched All          Kappa between   # Matched to         Overall
                                       Inspectors / Percent   Inspectors      Standard / Percent   Kappa
Overall Inspectors            28       7 / 25%                0.515           6 / 21%              0.559
In addition to matching their own evaluations, the inspectors should agree with each other. As
seen in the column # Matched All Inspectors, the inspectors in this study agreed with each other
on only 7 of 28 parts (25% agreement). If you have a known standard or correct response, you
also want the inspectors’ decisions to agree with the standard responses. In the case study, all
assessments for all inspectors matched the standard response for only 6 of 28 parts (21%
agreement).
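The agreement percentages described above follow directly from the raw study data. A minimal sketch, using made-up pass/fail responses rather than the case study's data:

```python
# Sketch: summarize an attribute agreement study from raw pass/fail data.
# Each inspector rates every part twice; 'standard' holds the known
# correct answers. Data here are hypothetical, for illustration only.

def agreement_summary(trial1, trial2, standard):
    """Within-inspector and vs.-standard agreement for one inspector."""
    n = len(standard)
    within = sum(a == b for a, b in zip(trial1, trial2))
    vs_std = sum(a == b == s for a, b, s in zip(trial1, trial2, standard))
    return {"n": n,
            "matched_within": within, "pct_within": round(100 * within / n),
            "matched_standard": vs_std, "pct_standard": round(100 * vs_std / n)}

standard = list("PFPFFPPPFP")
trial1   = list("PFPFFFPPFP")   # one decision differs from the standard
trial2   = list("PFPFFFPPFP")   # the inspector repeats the same mistake
print(agreement_summary(trial1, trial2, standard))
```

Note that an inspector can be perfectly self-consistent (100% matched within) while still disagreeing with the standard, as in this example.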
The percent correct statistics provide useful information, but they shouldn't be your sole basis for
evaluating the performance of the inspection process. Why? First, the percentage results may be
misleading. The evaluation may resemble a test in high school where 18 out of 20 correct
classifications feels pretty good, like a grade of “A-”. But of course, a 10% misclassification rate
in our inspection operations would not be desirable. Second, the analysis does not account for
correct decisions that may be due to chance alone. With only two answers possible (pass or fail),
random guessing results in, on average, a 50% agreement rate with a known standard!
An attribute agreement study avoids these potential pitfalls by utilizing kappa, a statistic that
estimates the level of agreement (matching the correct answer) in the data beyond what one
would expect by random chance. Kappa, as defined in Fleiss (1), is a measure of the proportion
of beyond-chance agreement shown in the data.
Kappa ranges from -1 to +1, with 0 indicating a level of agreement expected by random chance,
and 1 indicating perfect agreement. Negative kappa values are rare, and indicate less agreement
than expected by random chance. The value of kappa is affected by the number of parts and
inspectors, but if the sample size is large enough, the rule of thumb for relating kappa to the
performance of your inspection process (2) is:
Kappa       Inspection Process
≥ 0.9       Excellent
0.7 – 0.9   Good
< 0.7       Needs Improvement
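The chance correction behind kappa can be made concrete with a small sketch. For two sets of pass/fail ratings (an inspector's two trials, or one trial against the standard), kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from the marginal pass/fail rates. This is Cohen's two-rater form, shown here as a simplified illustration of the idea rather than the exact multi-rater calculation Minitab performs:

```python
# Sketch of the kappa statistic for two sets of pass/fail ratings:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from the marginal P/F rates.

def cohens_kappa(ratings_a, ratings_b, categories=("P", "F")):
    n = len(ratings_a)
    # Observed proportion of matching decisions
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the marginal proportions of each category
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = list("PPFFPPFFPP")
b = list("PPFFPPFFPP")
print(round(cohens_kappa(a, b), 3))   # 1.0 for identical ratings
```

Because p_e is subtracted from both numerator and denominator, an inspector who merely guesses at the chance rate scores a kappa near 0, not 50%, which is exactly why kappa avoids the pitfalls described above.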
In Table 1, the within Inspector kappa statistic evaluates each inspector’s ability to match his or
her own assessments for the same part. The kappa value ranges from 0.475 to 1.00, indicating
that the consistency of each inspector varies from poor to excellent. The Kappa vs. Standard
column shows the ability of each inspector to match the standard. For operator 27-3, kappa is
0.507—about midway between perfect (1) and random chance (0) agreement, but still within the
less-than-acceptable range. The kappa statistic between inspectors (.515) and the overall kappa
statistic (.559) show poor agreement in decisions made by different inspectors and poor
agreement with the standard response. Inspectors do not seem to completely understand the
specifications or may disagree on their interpretation. By identifying the inconsistency, Hitchiner
could now begin to address the issue.
Step 4: Communicate Results using an Operator-by-Part Matrix
In step 4, convey the results of the attribute agreement study to your inspectors clearly and
objectively, indicating what type of improvement is needed. An operator-by-part matrix works
well for this.
The operator-by-part matrix for Hitchiner's inspection process is shown in Figure 4. The matrix
shows inspectors' assessments for 20 parts, with color-coding to indicate correct and incorrect
responses. If an inspector passed and failed the same part, an incorrect response is shown. The
matrix also indicates the type of incorrect response: rejecting acceptable parts or passing
unacceptable parts.
An operator-by-part matrix clearly communicates the overall performance of the inspection
process. Each inspector can easily compare his or her responses with those of the other
inspectors to assess relative performance. By also considering each inspector's ability to match
assessments for the same part (the individual within-inspector kappa value), you can use the
matrix to determine whether the incorrect responses are due to uncertainty about the defect (low
self-agreement) or a consistent use of the wrong criteria (low agreement with others). If many
incorrect responses occur in the same sample column, that defect or defect level may not have a
clear standard response. Potentially, inspectors do not agree on a clear definition or specification
for the defect and may require further training to agree on a severity level for rejection.
Figure 4. Operator-by-Part Matrix
For the Hitchiner study, Figure 4 shows that both types of incorrect decisions were made:
passing an unacceptable part (13 occurrences) and failing an acceptable part (20 occurrences).
Follow-up training with specific operators could reduce the number of these incorrect decisions.
However, to fully improve the accuracy of pass/fail decisions, the defect specifications for parts
5, 10, 14 and 15 must also be more clearly defined. In this way, the operator-by-part matrix
provides an excellent summary of the data and serves as a straightforward communication tool
for indicating appropriate follow-up action.
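A matrix like Figure 4 can be built mechanically from the study data by comparing each response to the standard and classifying the two kinds of incorrect decisions. A minimal sketch with hypothetical inspectors and parts:

```python
# Sketch: build an operator-by-part matrix from study data and flag each
# response against the standard. Inspectors and responses are hypothetical.

def operator_by_part(assessments, standard):
    """assessments: {inspector: [P/F responses, one per part]}.
    Marks each response OK (matches standard), FA (failed an acceptable
    part), or PU (passed an unacceptable part)."""
    matrix = {}
    for inspector, responses in assessments.items():
        row = []
        for resp, std in zip(responses, standard):
            if resp == std:
                row.append("OK")
            elif resp == "F":        # rejected a part the standard passes
                row.append("FA")
            else:                    # passed a part the standard fails
                row.append("PU")
        matrix[inspector] = row
    return matrix

standard = list("PFPF")
data = {"Insp 17-1": list("PFPF"), "Insp 27-3": list("PFFP")}
m = operator_by_part(data, standard)
print(m["Insp 27-3"])   # ['OK', 'OK', 'FA', 'PU']
```

Scanning the resulting rows by inspector shows who needs retraining; scanning the columns by part shows which defect samples lack a clear standard.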
Verifying and Maintaining Your Improved Performance
Step 5: Reevaluate the Process
Once you resolve issues for specific defects and provide new training and samples as needed,
reevaluate your inspection process by repeating steps 3 and 4. The "bottom line" in gauging
improvement to the inspection process is the kappa statistic that measures the overall agreement
between the inspectors' assessments and the standard. This value tells you to what extent the
inspectors are correctly passing or failing parts.
After completing steps 1-4, Hitchiner reassessed its inspection process in step 5 and discovered a
dramatic improvement in the accuracy of pass/fail decisions. The overall kappa level for
matching the standard increased from 0.559 to 0.967. In other words, inspection performance
improved from less-than-acceptable to excellent. With an overall kappa value greater than 0.9,
Hitchiner was confident that inspectors were correctly classifying visual defects.
The company experienced another unexpected benefit as well: by improving the accuracy of
their pass/fail decisions on visual defects, they increased the overall throughput of good pieces
by more than two-fold (Figure 5). Such an improvement is not at all uncommon. Just as a gage
R&R study on a continuous measurement can reveal a surprising amount of measurement
variation caused by the measurement process itself, attribute agreement analysis often uncovers a
large percentage of the overall defect level due to variation in pass/fail decisions. More accurate
pass/fail decisions eliminate the need to “err on the side of the customer” to ensure good quality.
Also, more accurate data lead to more effective process control decisions. In this way, the
improved accuracy of inspection decisions can reduce the defect rate itself.
Figure 5. Relative Good Pieces Throughput per Week over Time (process stages: Benchmark, Implementation, New Process)
* Data omitted for Special Cause
Step 6: Implement an Inspector Certification Program
It's human nature to slip back into old habits. There's also a tendency to inspect to the current
best-quality pieces as the standard, rather than adhering to fixed specifications. Therefore, to
maintain a high level of inspection accuracy, the final step in improving visual inspections is to
implement a certification and/or audit program. If no such program is implemented, inspection
performance may drift back to its original level.
Hitchiner incorporated several smart features into its program. Inspectors are expected to review
test samples every two weeks to stay on top of defect specifications. Each inspector
anonymously reviews the samples and logs responses into a computer database for easy storage,
analysis, and presentation of results. This process makes the computer the expert on the standard
response rather than the program owner, thereby minimizing differences of opinion on the
definition of the standard. Your test samples should include the key defects and all possible
levels of severity. The ability to find the defect is not being tested, so clearly label the defect to
be evaluated and include only one defect per sample. If inspectors memorize the characteristic of
each defect with the appropriate pass/fail response, then the audit serves as a great training tool
as well. However, memorizing the correct answer for a specific sample number defeats the
purpose of the review. To prevent this, Hitchiner rotated through three different sets of review
samples every six weeks.
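The rotation policy described above, biweekly reviews cycling through three sample sets over six weeks, can be sketched as a simple schedule. This is an assumed implementation of the policy, not Hitchiner's actual system:

```python
# Sketch of the audit rotation: inspectors review samples every two weeks,
# cycling through three sample sets so a full cycle takes six weeks.
# The scheduling rule is an assumption, illustrating the policy only.

def sample_set_for_review(review_number, num_sets=3):
    """Return which sample set (1-based) to use for a given biweekly review."""
    return (review_number % num_sets) + 1

# Reviews 0, 1, 2 use sets 1, 2, 3; review 3 starts the cycle over.
print([sample_set_for_review(r) for r in range(6)])   # [1, 2, 3, 1, 2, 3]
```

Rotating the sets this way keeps any one sample from appearing often enough for inspectors to memorize its correct answer.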
Finally, offer recognition for successful completion of the audit. Reviewing the audit data at
regularly scheduled meetings helps to underscore the importance of the inspection process and
the accuracy of the pass/fail decisions as part of your organization’s culture. The method you use
will depend on your application, but regularly reviewing and monitoring the defect evaluation
performance is crucial to maintain the improvements to your inspection process.
Conclusion
The decision to pass or fail a manufactured part based on visual inspection is extremely
important to a production operation. In this decision, the full value of the part is often at stake, as
well as the accuracy of the data used to make process engineering decisions. All parts of the Six
Step Method for Inspection Improvement are necessary to increase the accuracy of these
decisions and improve the performance of inspection processes, which are often neglected in
manufacturing operations. When used as a critical step of this method, attribute agreement
analysis will help determine where inconsistencies in the pass/fail decisions are occurring, within
inspectors or between inspectors, and from which defects. With this knowledge, the project team
can identify and fix the root causes of poor inspection performance.
References
1. Joseph L. Fleiss, Statistical Methods for Rates and Proportions, Wiley Series in
Probability and Mathematical Statistics, John Wiley and Sons, 1981.
2. David Futrell, "When Quality Is a Matter of Taste, Use Reliability Indexes," Quality
Progress, Vol. 28, No. 5, pp. 81-86.
About the Author
Louis Johnson is a technical training specialist for Minitab Inc. in State College, PA. He earned a
master’s degree in applied statistics from The Pennsylvania State University. He is a certified
ASQ Black Belt and Master Black Belt.