This document provides an overview of analyzing measurement systems, specifically for attributive or subjective measurements. It discusses evaluating measurement systems using the Kappa coefficient, which calculates inter-rater reliability for classification data. A Kappa value above 0.7 indicates an acceptable measurement system, while a value near 1 represents an excellent system. The document uses examples to demonstrate how to calculate Kappa values from classification data involving two or more raters. Analyzing measurement systems with Kappa helps determine if a measurement process is reliable before making process changes.
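The Kappa calculation the summary describes can be sketched for the two-rater case (Cohen's kappa); the part labels and classification data below are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters classify identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Two raters classifying 10 parts as pass/fail (made-up data).
a = ["pass","pass","fail","pass","fail","pass","pass","fail","pass","pass"]
b = ["pass","pass","fail","fail","fail","pass","pass","fail","pass","pass"]
kappa = cohens_kappa(a, b)   # ~0.78, above the 0.7 acceptance threshold
```

For more than two raters, a generalization such as Fleiss' kappa is used instead; the acceptance rule (above 0.7 acceptable, near 1 excellent) applies the same way.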
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Analysis of Measure... (J. García-Verdugo)
This document provides an introduction to measurement system analysis. It discusses key concepts like accuracy, precision, bias, repeatability, reproducibility and linearity. Accuracy refers to how close a measurement is to the true value, while precision describes the variation of repeated measurements. Sources of variation include the measurement system itself and actual process variation. The document emphasizes that the measurement system variation must be determined and separated from the process variation in order to improve the actual process. It provides examples of stability, correlation and the precision to tolerance ratio as a way to evaluate measurement systems.
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Z Transformation (J. García-Verdugo)
The document provides an overview of Z-transformation and capability calculations based on defective units. It explains the theoretical background of Z-transformation and how it can be used to determine the portion of production outside specifications. An example shows how to calculate the percentage of a normal distribution that is above a given value using Z-transformation. The document also discusses how to calculate sigma values from defect portions and provides tables to convert between defects per million opportunities (DPMO) and sigma.
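The Z-transformation workflow the summary outlines can be sketched in a few lines; the process mean, standard deviation, and specification limit below are hypothetical:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

# Fraction of production above an upper spec limit (hypothetical values).
mu, sigma, usl = 100.0, 2.0, 104.0
z = (usl - mu) / sigma          # Z-transformation
frac_above = 1 - nd.cdf(z)      # portion outside specification (~2.3%)
dpmo = frac_above * 1_000_000   # defects per million opportunities

# Going the other way: the sigma value implied by a defect portion
# (long-term, i.e. without the conventional 1.5-sigma shift).
z_from_dpmo = nd.inv_cdf(1 - dpmo / 1_000_000)
```

This is the calculation the conversion tables tabulate: a given DPMO corresponds to a given Z (sigma) value and vice versa.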
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Statistical Methods (J. García-Verdugo)
This document discusses statistical process control and provides examples of statistical concepts and tools. It begins by explaining why statistics are needed for process improvement, specifically to understand variability and stability. It then gives examples of X-bar control charts to show differences between a stable, controlled process (Process A) and an unstable, uncontrolled process (Process B). Further concepts introduced include sources of variation, process capability, probability, the normal distribution, and descriptive statistics. Analytical approaches and a selection of statistical techniques are presented for analyzing different data types.
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Non Normal Data (J. García-Verdugo)
This document discusses evaluating non-normally distributed data sets and transforming them toward normality. It describes the Box-Cox and Johnson transformations, which can be used to normalize data: Box-Cox transforms the data directly, while Johnson "distorts" a normal distribution to model the data's distribution. The document provides examples applying the Johnson transformation to uniform and other non-normal data sets. Graphical analyses show the transformations successfully produce normal distributions suitable for analysis and process capability evaluations.
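The Box-Cox transform itself is a one-line formula; a minimal sketch for a fixed lambda is below (real workflows, e.g. in Minitab, estimate the best lambda by maximum likelihood, and the data here are made up):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of strictly positive data for a given lambda.
    lam == 0 is defined as the log transform (the limiting case)."""
    if any(v <= 0 for v in x):
        raise ValueError("Box-Cox requires strictly positive data")
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v**lam - 1) / lam for v in x]

# Right-skewed hypothetical data; lambda = 0 (log) often normalizes such data.
data = [1.2, 1.5, 2.0, 3.1, 4.8, 7.5, 12.0, 19.0]
transformed = box_cox(data, 0)
```

The strict-positivity requirement is the practical reason the Johnson family is sometimes preferred: it can handle data the Box-Cox cannot.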
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Design Of Experimen... (J. García-Verdugo)
The document provides an introduction to design of experiments (DOE). It discusses key components of experiments, including input variables, output variables, experimental design, and validity. It also covers 2^k factorial designs, introducing factors and their levels; 2^k factorial experiments are presented as an uncomplicated entry point to DOE techniques. The final pages provide examples of 2^k factorial design matrices and of calculating effects to determine influential factors.
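The effect calculation from a 2^k design matrix can be sketched as follows, using a hypothetical 2^2 experiment with made-up response values:

```python
from itertools import product

# 2^2 factorial: factors A and B at coded levels -1/+1, one response per run.
runs = list(product([-1, 1], repeat=2))   # (-1,-1), (-1,1), (1,-1), (1,1)
response = [45.0, 48.0, 55.0, 66.0]       # hypothetical measurements

def effect(index):
    """Main effect: mean response at the +1 level minus mean at the -1 level."""
    hi = [y for run, y in zip(runs, response) if run[index] == 1]
    lo = [y for run, y in zip(runs, response) if run[index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_a = effect(0)   # main effect of factor A
effect_b = effect(1)   # main effect of factor B
# Interaction AB: contrast weighted by the product of the coded levels.
interaction_ab = (sum(y * run[0] * run[1] for run, y in zip(runs, response))
                  / (len(runs) / 2))
```

The largest absolute effects point to the influential factors, which is the screening logic the summary refers to.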
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Multi-vari Studies (J. García-Verdugo)
This document discusses procedures for conducting and analyzing multi-vari studies. It begins with an overview, explaining that multi-vari studies examine how multiple factors interact and influence process outputs. The document then covers planning a study, collecting data, analyzing the data, and reporting results. It provides an example study of contamination levels and examines the effects of factors such as day, shift, and time using tools like ANOVA. The results indicate that shift has a significant impact on contamination levels.
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Chi Square Test (J. García-Verdugo)
The Chi Square test can be used for three purposes: testing goodness of fit, testing independence, and testing homogeneity. It is used to determine if a sample comes from a known distribution, if two characteristics are independent of each other, or if multiple samples come from the same population. The test calculates an observed Chi Square value and compares it to a critical value from Chi Square tables to determine if the null hypothesis can be rejected or not. For a goodness of fit test on coin flipping data, the observed Chi Square value exceeded the critical value, indicating the coin was likely manipulated rather than fair.
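The coin-flipping goodness-of-fit test described above can be reproduced in a few lines; the observed counts are hypothetical, but the critical value for alpha = 0.05 and one degree of freedom is the standard table entry:

```python
# Goodness-of-fit test for a coin: H0 says heads and tails are equally likely.
observed = [72, 28]     # hypothetical counts from 100 flips
expected = [50, 50]     # expected counts under a fair coin

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - e)**2 / e for o, e in zip(observed, expected))

# Critical value for alpha = 0.05, 1 degree of freedom (from chi-square tables).
CRITICAL_95_DF1 = 3.841
reject_h0 = chi2 > CRITICAL_95_DF1   # True here: the coin is unlikely fair
```

The same statistic, with degrees of freedom computed from the contingency-table dimensions, drives the independence and homogeneity tests.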
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Process Capability (J. García-Verdugo)
The document discusses process capability analysis and metrics. It provides information on calculating and interpreting process capability ratios Cp and Cpk using Minitab. Key steps in building a capability study include identifying rational subgroups, collecting a short-term dataset of 30-50 points, and analyzing the data to determine if the process is stable and normally distributed. Process capability can be estimated using pooled standard deviation for potential capability or overall standard deviation for true process capability.
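The Cp and Cpk ratios mentioned above reduce to two short formulas; a minimal sketch with hypothetical data and specification limits:

```python
from statistics import mean, stdev

# Hypothetical short-term data set and specification limits.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9]
lsl, usl = 9.0, 11.0

mu, s = mean(data), stdev(data)
cp = (usl - lsl) / (6 * s)                 # potential capability (spread only)
cpk = min(usl - mu, mu - lsl) / (3 * s)    # also penalizes off-center processes
```

Cpk is always at most Cp; the gap between them measures how far the process mean sits from the center of the tolerance. A real study would use 30-50 points in rational subgroups, as the summary notes, and would choose pooled versus overall standard deviation depending on whether potential or true capability is wanted.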
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Cause and Effect An... (J. García-Verdugo)
This document discusses a cause and effect matrix tool used in Six Sigma process improvement. It provides instructions for creating a cause and effect matrix, including identifying key customer outputs, rating their importance, evaluating the correlation between process inputs and each output, and calculating total scores to identify important inputs. The document includes an example of a cause and effect matrix applied to a cleaning process, rating inputs like training and regulations on their impact to outputs of clean, undamaged parts. It suggests the matrix helps determine which inputs and process steps require further investigation.
Javier García-Verdugo Sánchez - Six Sigma Training - W32 DOE Center Points... (J. García-Verdugo)
The document discusses using center points in two-level factorial designs to check for linearity. It provides an example of a chemical engineer running a 2x2 design on reaction time and temperature and adding center points. The results show the center points do not significantly deviate from linearity, indicating the model is linear. A second example shows center points having a significant effect, suggesting curvature. The document also discusses incorporating block factors into a design, including an example of using two types of catalysts as block factors.
Javier García-Verdugo Sánchez - Six Sigma Training - W1 Minitab - Graphical... (J. García-Verdugo)
This document introduces various graphical methods in Minitab that can be used at different stages of the DMAIC cycle for process improvement projects. It discusses histograms, run charts, control charts, box plots, dot plots, scatter plots, marginal plots, matrix plots, and Pareto diagrams. For each type of graph, it provides an example using sample data and step-by-step instructions for creating the graph in Minitab. The document emphasizes that graphics are useful for visualizing relationships in data and communicating findings to others.
Javier García-Verdugo Sánchez - Six Sigma Training - W3 Full Factorial Desi... (J. García-Verdugo)
The document discusses full factorial designs for analyzing experiments with two or more factors and levels. It provides examples of 2-factor and 3-factor full factorial designs, and how to customize, evaluate, and analyze the results using statistical software. Graphical analysis methods like effect plots and residual diagnostics are demonstrated. Response surface methodology for investigating quadratic effects is also introduced.
Javier García-Verdugo Sánchez - Six Sigma Training - W3 Confidence Intervals (J. García-Verdugo)
1) The document discusses confidence intervals, which provide a range of values that are likely to include an unknown population parameter based on a sample.
2) Confidence intervals can be calculated for a mean, standard deviation, and proportion based on the sample size and desired confidence level, usually 95%.
3) Examples are provided for calculating 95% confidence intervals for the mean, standard deviation, and proportion to estimate unknown population parameters based on sample data.
Javier García-Verdugo Sánchez - Six Sigma Training - W3 Complex Designs (J. García-Verdugo)
The document discusses evaluating experiments with multiple responses that may conflict. It provides examples of using a response optimizer and overlaid contour plots in Minitab to determine optimal factor settings that maximize desired results. The response optimizer calculates optimal settings by maximizing a desirability function. Overlaid contour plots show the factor space where all responses meet limits, allowing selection of preferred regions. An example optimizes a rubber mix for tires using these tools.
Javier García-Verdugo Sánchez - Six Sigma Training - W3 Fractional Factoria... (J. García-Verdugo)
This document discusses fractional factorial designs, which reduce the number of experimental runs needed compared to full factorial designs. It provides examples of fractional factorial designs with different numbers of factors and resolutions. Lower resolutions mean greater confounding between effects. Plackett-Burman designs can assess more factors than fractional factorials with the same number of runs. Fractional designs allow screening of many factors efficiently while sequential or folded designs provide more information.
Electronics Reliability Prediction Using the Product Bill of Materials (Cheryl Tulkoff)
Common MTBF Misconceptions
It is difficult to represent field failures with calculated MTBF models. It is important for consumers to know how MTBFs were generated and what the limitations of those calculations are.
Six Sigma Mechanical Tolerance Analysis 1 (David Panek)
David A. Panek has 18 years of experience in cost engineering. He has expertise in tolerance analysis, Monte Carlo techniques, cost estimation, and neural costing. The document discusses different methods for tolerance analysis including worst case, statistical, and six sigma approaches. It also defines terms related to process variation and discusses measures of process capability like Cp and Cpk. Guidelines are provided for designing optimized tolerances through establishing process standard deviation and computing probabilities to achieve tight assembly gaps.
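The worst-case and statistical stack-up methods mentioned above differ in one formula. The sketch below contrasts them for a hypothetical stack of three tolerances; the statistical (root-sum-square) result assumes independent, roughly normal part dimensions, and six-sigma approaches additionally widen the estimate to account for process capability (Cpk):

```python
import math

# Hypothetical stack of three part tolerances (plus/minus values).
tolerances = [0.10, 0.05, 0.08]

# Worst case: every part simultaneously at its tolerance limit.
worst_case = sum(tolerances)

# Statistical (RSS): independent, normally distributed dimensions.
statistical = math.sqrt(sum(t**2 for t in tolerances))
```

The RSS result is always tighter than the worst case, which is why statistical tolerancing permits looser (cheaper) part tolerances for the same assembly gap.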
The document summarizes an intern's work on an EMS engineering internship. It discusses their tasks analyzing transformer tap positions and voltage/VAR performance across different utility areas, identifying issues affecting state estimator residuals. Notable findings included improved residuals from setting transformer taps to nominal values and enabling tap estimation. However, tap estimation worsened residuals for some utilities with low observability. The intern highlighted areas for further model and measurement improvements and automated their analysis for future use.
This document describes Delta Air Lines' expendable parts inventory management system. It discusses key considerations like when to order parts and how much to order. It also outlines the various costs involved like ordering costs, carrying costs, and stockout costs. The goal is to minimize total quantifiable costs while meeting a target customer service level. The system uses a reorder point-reorder quantity model and safety stocks to prevent stockouts. It provides an example of how the economic order quantity is calculated and how safety stocks are determined. The system was implemented in 1996 and provided tangible cost savings and intangible benefits in optimizing inventory levels.
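The economic order quantity calculation referenced above is a single formula balancing ordering cost against carrying cost; a minimal sketch with hypothetical demand and cost figures:

```python
import math

def eoq(annual_demand, order_cost, carrying_cost_per_unit):
    """Economic order quantity: the order size that minimizes the sum of
    annual ordering costs and annual carrying costs."""
    return math.sqrt(2 * annual_demand * order_cost / carrying_cost_per_unit)

# Hypothetical: 1200 units/year demand, $50 per order, $6 per unit-year to carry.
q = eoq(1200, 50.0, 6.0)   # ~141 units per order
```

In a reorder point-reorder quantity system like the one described, EOQ sets the order quantity, while the reorder point adds safety stock sized to the target service level.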
The document discusses various facility layout strategies and concepts. It defines facility layout as determining the placement of departments, workgroups, machines, and stock areas. Key layout formats discussed include process layout, product layout, group technology layout, and fixed-position layout. Assembly line balancing concepts are also covered, including precedence diagrams and determining cycle times and workstation loads.
Computational fluid dynamics (CFD) is a powerful tool to simulate, analyze, and optimize designs. The leading CFD providers discuss software features and functionality, including flow-modeling features and benefits and solver technology, and describe a real-world example of CFD use.
The document describes the steps taken to perform a CFD analysis of flow over a horizontal plate using ANSYS Fluent. The analysis used the Spalart-Allmaras, k-epsilon, and k-omega turbulence models to simulate flow of air and water over the plate at velocities of 10m/s, 100m/s, and 350m/s. The steps included setting up the geometry, meshing, selecting models and materials, applying boundary conditions, running calculations to convergence, and examining results through contours.
This document discusses computational fluid dynamics (CFD). CFD uses numerical analysis and algorithms to solve and analyze fluid flow problems. It can be used at various stages of engineering to study designs, develop products, optimize designs, troubleshoot issues, and aid redesign. CFD complements experimental testing by reducing costs and effort required for data acquisition. It involves discretizing the fluid domain, applying boundary conditions, solving equations for conservation of properties, and interpolating results. Turbulence models and discretization methods like finite volume are discussed. The CFD process involves pre-processing the problem, solving it, and post-processing the results.
This document provides an overview of performing steady-state and transient thermal analyses in ANSYS Workbench. It discusses geometry considerations, contact between assemblies, defining heat loads, material properties, and solution options. Key steady-state assumptions are that there are no transient effects and heat transfer is governed by Fourier's law. Contact regions between assemblies automatically generate where solid bodies touch, allowing heat transfer normal to the interface. Finite thermal contact conductance can be defined to model temperature drops between parts.
CFD troubleshooting guide: if you are having issues with a simulation, run through these points.
Also check out other articles I wrote on the topic:
Modern CFD tips, tricks and best practices, from someone who sees engineers fall into the same old traps: https://www.linkedin.com/pulse/modern-cfd-tips-tricks-best-practices-from-someone-who-hashan-mendis/
Better meshing using ANSYS Fluent Meshing: https://www.linkedin.com/pulse/better-meshing-using-ansys-fluent-hashan-mendis/
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Statistical Process... (J. García-Verdugo)
The document provides an introduction to statistical process control (SPC). It discusses how SPC helps recognize normal variation in processes to avoid overreactions, identify sources of variation, and determine actions to reduce variation. The document outlines different types of SPC charts including X-bar and R charts used to monitor average values and variation within subgroups over time. It also discusses how to create X-bar and R charts in Minitab and the guidelines for determining whether a process is in or out of control.
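The X-bar and R chart limits described above come from subgroup means and ranges scaled by tabulated constants; a minimal sketch with hypothetical subgroups of size 5 (A2, D3, D4 are the standard SPC table values for n = 5):

```python
from statistics import mean

# Hypothetical subgroups of size n = 5.
subgroups = [[10.1, 9.9, 10.0, 10.2, 9.8],
             [10.0, 10.1, 9.9, 10.0, 10.1],
             [9.8, 10.0, 10.2, 9.9, 10.1]]
A2, D3, D4 = 0.577, 0.0, 2.114   # control chart constants for n = 5

x_bars = [mean(g) for g in subgroups]          # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]  # subgroup ranges
x_dbar, r_bar = mean(x_bars), mean(ranges)     # grand average, average range

ucl_x, lcl_x = x_dbar + A2 * r_bar, x_dbar - A2 * r_bar   # X-bar chart limits
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                     # R chart limits
```

A real chart would use many more subgroups (typically 20-25) before the limits are trusted; points outside the limits, or non-random patterns within them, signal an out-of-control process.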
Javier García-Verdugo Sánchez - Six Sigma Training - W2 Measurement System ... (J. García-Verdugo)
This document discusses measurement system analysis for continuous measurements. It introduces the Gage R&R study as a tool to assess measurement systems. Key indices for evaluating measurement systems are the Percentage of Tolerance (P/T) and Percentage of Range and Repeatability (%R&R). P/T assesses how much of the specification tolerance is used by measurement error while %R&R evaluates measurement error relative to total process variation. The document provides guidelines for properly conducting a Gage R&R study and interpreting its results.
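The two Gage R&R indices named above are simple ratios once the study has estimated the measurement-system standard deviation; the sigmas and spec limits below are hypothetical, and the 5.15 multiplier (covering ~99% of the measurement distribution) is one common convention, with 6 also widely used:

```python
# Hypothetical standard deviations from a Gage R&R study.
sigma_measurement = 0.02   # combined repeatability + reproducibility
sigma_total = 0.10         # total observed variation (process + measurement)
usl, lsl = 10.5, 9.5       # specification limits

# P/T: share of the tolerance consumed by measurement error.
p_t = 5.15 * sigma_measurement / (usl - lsl)

# %R&R: measurement error relative to total observed variation.
pct_rr = sigma_measurement / sigma_total
```

A common rule of thumb holds both ratios acceptable below roughly 10% and marginal up to 30%, though the exact cutoffs vary by source.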
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W1 Cause and Effect An...J. García - Verdugo
This document discusses a cause and effect matrix tool used in Six Sigma process improvement. It provides instructions for creating a cause and effect matrix, including identifying key customer outputs, rating their importance, evaluating the correlation between process inputs and each output, and calculating total scores to identify important inputs. The document includes an example of a cause and effect matrix applied to a cleaning process, rating inputs like training and regulations on their impact to outputs of clean, undamaged parts. It suggests the matrix helps determine which inputs and process steps require further investigation.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W32 DOE Center Points...J. García - Verdugo
The document discusses using center points in two-level factorial designs to check for linearity. It provides an example of a chemical engineer running a 2x2 design on reaction time and temperature and adding center points. The results show the center points do not significantly deviate from linearity, indicating the model is linear. A second example shows center points having a significant effect, suggesting curvature. The document also discusses incorporating block factors into a design, including an example of using two types of catalysts as block factors.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W1 Minitab - Graphical...J. García - Verdugo
This document introduces various graphical methods in Minitab that can be used at different stages of the DMAIC cycle for process improvement projects. It discusses histograms, run charts, control charts, box plots, dot plots, scatter plots, marginal plots, matrix plots, and Pareto diagrams. For each type of graph, it provides an example using sample data and step-by-step instructions for creating the graph in Minitab. The document emphasizes that graphics are useful for visualizing relationships in data and communicating findings to others.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Full Factorial Desi...J. García - Verdugo
The document discusses full factorial designs for analyzing experiments with two or more factors and levels. It provides examples of 2-factor and 3-factor full factorial designs, and how to customize, evaluate, and analyze the results using statistical software. Graphical analysis methods like effect plots and residual diagnostics are demonstrated. Response surface methodology for investigating quadratic effects is also introduced.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Confidence Intervals J. García - Verdugo
1) The document discusses confidence intervals, which provide a range of values that are likely to include an unknown population parameter based on a sample.
2) Confidence intervals can be calculated for a mean, standard deviation, and proportion based on the sample size and desired confidence level, usually 95%.
3) Examples are provided for calculating 95% confidence intervals for the mean, standard deviation, and proportion to estimate unknown population parameters based on sample data.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Complex Designs J. García - Verdugo
The document discusses evaluating experiments with multiple responses that may conflict. It provides examples of using a response optimizer and overlaid contour plots in Minitab to determine optimal factor settings that maximize desired results. The response optimizer calculates optimal settings by maximizing a desirability function. Overlaid contour plots show the factor space where all responses meet limits, allowing selection of preferred regions. An example optimizes a rubber mix for tires using these tools.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Fractional Factoria...J. García - Verdugo
This document discusses fractional factorial designs, which reduce the number of experimental runs needed compared to full factorial designs. It provides examples of fractional factorial designs with different numbers of factors and resolutions. Lower resolutions mean greater confounding between effects. Plackett-Burman designs can assess more factors than fractional factorials with the same number of runs. Fractional designs allow screening of many factors efficiently while sequential or folded designs provide more information.
Electronics Reliability Prediction Using the Product Bill of MaterialsCheryl Tulkoff
Common MTBF Misconceptions
It is difficult to represent field failures with calculated MTBF models.
It is important for consumers to know how MTBFs were generated
and what the limitations are for those
calculations.
Six Sigma Mechanical Tolerance Analysis 1David Panek
David A. Panek has 18 years of experience in cost engineering. He has expertise in tolerance analysis, Monte Carlo techniques, cost estimation, and neural costing. The document discusses different methods for tolerance analysis including worst case, statistical, and six sigma approaches. It also defines terms related to process variation and discusses measures of process capability like Cp and Cpk. Guidelines are provided for designing optimized tolerances through establishing process standard deviation and computing probabilities to achieve tight assembly gaps.
The document summarizes an intern's work on an EMS engineering internship. It discusses their tasks analyzing transformer tap positions and voltage/VAR performance across different utility areas, identifying issues affecting state estimator residuals. Notable findings included improved residuals from setting transformer taps to nominal values and enabling tap estimation. However, tap estimation worsened residuals for some utilities with low observability. The intern highlighted areas for further model and measurement improvements and automated their analysis for future use.
This document describes Delta Air Lines' expendable parts inventory management system. It discusses key considerations like when to order parts and how much to order. It also outlines the various costs involved like ordering costs, carrying costs, and stockout costs. The goal is to minimize total quantifiable costs while meeting a target customer service level. The system uses a reorder point-reorder quantity model and safety stocks to prevent stockouts. It provides an example of how the economic order quantity is calculated and how safety stocks are determined. The system was implemented in 1996 and provided tangible cost savings and intangible benefits in optimizing inventory levels.
The document discusses various facility layout strategies and concepts. It defines facility layout as determining the placement of departments, workgroups, machines, and stock areas. Key layout formats discussed include process layout, product layout, group technology layout, and fixed-position layout. Assembly line balancing concepts are also covered, including precedence diagrams and determining cycle times and workstation loads.
Computational fluid dynamics (CFD) is a powerful tool to simulate, analyze, and optimize designs. The leading CFD providers will discuss software features and functionality such as flow features and benefits, solver technology, as well as describe an example of CFD use in the real world.
The document describes the steps taken to perform a CFD analysis of flow over a horizontal plate using ANSYS Fluent. The analysis used the Spalart-Allmaras, k-epsilon, and k-omega turbulence models to simulate flow of air and water over the plate at velocities of 10m/s, 100m/s, and 350m/s. The steps included setting up the geometry, meshing, selecting models and materials, applying boundary conditions, running calculations to convergence, and examining results through contours.
This document discusses computational fluid dynamics (CFD). CFD uses numerical analysis and algorithms to solve and analyze fluid flow problems. It can be used at various stages of engineering to study designs, develop products, optimize designs, troubleshoot issues, and aid redesign. CFD complements experimental testing by reducing costs and effort required for data acquisition. It involves discretizing the fluid domain, applying boundary conditions, solving equations for conservation of properties, and interpolating results. Turbulence models and discretization methods like finite volume are discussed. The CFD process involves pre-processing the problem, solving it, and post-processing the results.
This document provides an overview of performing steady-state and transient thermal analyses in ANSYS Workbench. It discusses geometry considerations, contact between assemblies, defining heat loads, material properties, and solution options. Key steady-state assumptions are that there are no transient effects and heat transfer is governed by Fourier's law. Contact regions between assemblies automatically generate where solid bodies touch, allowing heat transfer normal to the interface. Finite thermal contact conductance can be defined to model temperature drops between parts.
CFD troubleshooting guide, if you having issues on a simulation run through these dot points.
Also check out other articles I wrote on the topic :
Modern CFD tips, tricks and best practices, from someone who sees engineers fall into the same old traps : https://www.linkedin.com/pulse/modern-cfd-tips-tricks-best-practices-from-someone-who-hashan-mendis/
Better meshing using ANSYS Fluent Meshing? : https://www.linkedin.com/pulse/better-meshing-using-ansys-fluent-hashan-mendis/
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W2 Statistical Process...J. García - Verdugo
The document provides an introduction to statistical process control (SPC). It discusses how SPC helps recognize normal variation in processes to avoid overreactions, identify sources of variation, and determine actions to reduce variation. The document outlines different types of SPC charts including X-bar and R charts used to monitor average values and variation within subgroups over time. It also discusses how to create X-bar and R charts in Minitab and the guidelines for determining whether a process is in or out of control.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W2 Measurement System ...J. García - Verdugo
This document discusses measurement system analysis for continuous measurements. It introduces the Gage R&R study as a tool to assess measurement systems. Key indices for evaluating measurement systems are the Percentage of Tolerance (P/T) and Percentage of Range and Repeatability (%R&R). P/T assesses how much of the specification tolerance is used by measurement error while %R&R evaluates measurement error relative to total process variation. The document provides guidelines for properly conducting a Gage R&R study and interpreting its results.
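As a rough sketch of how these two indices come out of a Gage R&R study's variance components (the 6-sigma spread multiplier and all numeric values below are illustrative assumptions; some references use 5.15 instead of 6):

```python
import math

def gage_indices(var_repeat, var_reprod, var_part, lsl, usl, k=6.0):
    """Compute P/T and %R&R from Gage R&R variance components.

    var_repeat -- repeatability (equipment) variance
    var_reprod -- reproducibility (operator) variance
    var_part   -- part-to-part variance
    k          -- spread multiplier (6.0 here; 5.15 in older practice)
    """
    var_ms = var_repeat + var_reprod          # measurement-system variance
    var_total = var_ms + var_part             # total observed variance
    pt = k * math.sqrt(var_ms) / (usl - lsl)  # fraction of tolerance consumed
    rr = math.sqrt(var_ms / var_total)        # fraction of total variation
    return pt * 100, rr * 100

# Illustrative variance components and specification limits
pt, rr = gage_indices(0.0004, 0.0001, 0.0095, lsl=9.7, usl=10.3)
```

A commonly cited guideline is that values under roughly 10% indicate an acceptable measurement system and values between 10% and 30% a marginal one, though the thresholds vary by source.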
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W1 Six Sigma IntroductionJ. García - Verdugo
Six Sigma is a structured methodology that uses statistical methods to eliminate defects and reduce process variation. It is applied through projects led by specialists using the DMAIC cycle of Define, Measure, Analyze, Improve, and Control. The goals of Six Sigma are to fully meet customer needs economically and achieve breakthrough process improvement and profitability. It supports existing quality programs to make them more successful by providing a framework to consistently deliver precisely defined financial contributions.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W2 Control PlanJ. García - Verdugo
The document provides guidance on developing effective control strategies and control plans as part of the Six Sigma DMAIC process. It discusses criteria for control strategies, such as linking customer requirements to process inputs and outputs. It also provides a template for a control plan that describes process steps, inputs and outputs, specifications, measurement systems, and actions for improvement. Realistic tolerances are important to define an optimal production window that ensures process outputs meet specifications despite natural variation in inputs. An example shows how to determine tolerances based on the relationship between a process input and output variable.
This document discusses the application of statistical process control (SPC) in automotive manufacturing. It provides 4 case studies that demonstrate both basic and advanced applications of SPC, including using SPC with multivariate analysis and design of experiments. It also describes a case where SPC was ignored, leading to failed experiments. The case studies illustrate how SPC can be used to monitor and improve processes, reduce variation, and gain understanding of process capabilities.
C charts, also known as count charts, are used to monitor the number of defects found in individual units of uniform size. The c chart graphs the total number of nonconformities found in each inspected piece. To create a c chart, the average number of defects (c bar) and standard deviation are calculated from process data. The upper and lower control limits are determined by adding or subtracting 3 times the standard deviation from the average. Individual unit data is then plotted and assessed against the control limits to determine whether the process is in or out of control. C charts can be used in various manufacturing and processing applications to monitor defect counts, such as the number of paint defects on cars.
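The c-chart construction described above can be sketched in a few lines (the defect counts are made-up illustration data):

```python
import math

def c_chart_limits(counts):
    """Compute c-chart centerline and 3-sigma control limits.

    counts -- defect counts per inspected unit (units of equal size)
    """
    c_bar = sum(counts) / len(counts)   # average defects per unit
    sigma = math.sqrt(c_bar)            # Poisson model: variance equals mean
    ucl = c_bar + 3 * sigma
    lcl = max(0.0, c_bar - 3 * sigma)   # counts cannot be negative
    return c_bar, lcl, ucl

# Illustrative paint-defect counts on ten cars
counts = [4, 7, 5, 3, 6, 5, 4, 8, 2, 6]
c_bar, lcl, ucl = c_chart_limits(counts)
out_of_control = [c for c in counts if c < lcl or c > ucl]
```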
1. The document discusses the Measure phase of the DMAIC process for Six Sigma innovation projects.
2. Key aspects of the Measure phase include selecting Critical to Quality characteristics, defining performance standards and specifications, establishing a data collection plan, and validating measurement systems.
3. Tools discussed that are useful for the Measure phase include process mapping, fishbone diagrams, Pareto analysis, and Failure Mode and Effects Analysis (FMEA). FMEA involves identifying failure modes, causes, and effects to determine appropriate actions.
Building best in-class quality in footwear manufacturingTony Lopez
The document provides an overview of Tony Lopez's experience and strategies for building best-in-class quality in footwear manufacturing. It details his 28 years of experience in quality roles across various industries. As Director of Quality & Process Engineering at New Balance, his mission was to build a world-class quality mindset and continuous improvement culture. Key strategies included training programs to develop problem-solving skills, building quality awareness, and establishing metrics to drive data-driven improvements. Examples demonstrate focusing investigations on root causes rather than blame, and cross-functional collaboration to resolve complex issues.
This document discusses statistical process control and quality management. It introduces common and assignable causes of variation, measures of process capability like three and six sigma, and how to establish control charts to monitor processes. Control charts can be used for variables, like weight or time, and attributes, like defects. The document provides examples of how to construct X-bar and R charts for variables and P and C charts for attributes using sample data. Quality control is important for both manufacturing and service industries.
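A minimal sketch of X-bar and R chart limit construction, assuming subgroups of size 5 (A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard tabulated constants for n = 5; the data are illustrative):

```python
def xbar_r_limits(subgroups, a2=0.577, d3=0.0, d4=2.114):
    """X-bar and R chart limits; default constants are for subgroup size n=5."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbar_bar = sum(xbars) / len(xbars)   # grand average (centerline of X-bar chart)
    r_bar = sum(ranges) / len(ranges)    # average range (centerline of R chart)
    return {
        "xbar": (xbar_bar - a2 * r_bar, xbar_bar, xbar_bar + a2 * r_bar),
        "r": (d3 * r_bar, r_bar, d4 * r_bar),
    }

subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.0],
    [9.9, 10.2, 10.1, 9.8, 10.0],
]
limits = xbar_r_limits(subgroups)   # (LCL, centerline, UCL) for each chart
```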
This document discusses approaches for assessing the reproducibility of two measurement systems. It describes three common approaches: 1) Performing a standard R&R study treating the measurement devices as operators, 2) Creating an iso-plot to visually assess reproducibility, and 3) Creating a Bland Altman plot to analyze differences between the devices and determine if one is biased. The document emphasizes the importance of ensuring measurement devices provide consistent results to maintain operator confidence.
This document describes a project to improve the reliability of gas chromatography (GC) analysis for quality control of custom refrigerant blends produced by Honeywell. The GC was found to have poor repeatability and a wide range of signals for sample compositions. Root cause analysis identified errors in the weighing procedure for preparing standard mixtures. Improving the standard operating procedure for weighing mixtures validated more accurate composition percentages. While the GC issues were not fully resolved, the project provided insight into prioritizing analytical chemistry work for a refrigeration department.
Statistical process control (SPC) techniques apply statistical methods to measure and analyze variation in manufacturing processes. SPC uses control charts to distinguish between common cause variation inherent to the process and special cause variation that can be assigned to a specific reason. Control charts monitor process data over time against statistical control limits. Process capability analysis compares process variation to product specifications to determine if the process is capable of meeting specifications. Key metrics like Cp, Cpk and Cpm indices quantify a process's capability relative to the specifications. For a process to have a valid capability analysis, it must meet assumptions of statistical control, normality, sufficient representative data, and independence of measurements.
This document provides an overview of metrology and measurements. It discusses key concepts in metrology including calibration, traceability, uncertainty, and accreditation. It defines metrology as the science of measurement and explains its importance. Metrology covers defining measurement units, establishing measurement standards, and documenting measurement accuracy. There are different categories of metrology including scientific, industrial, and legal metrology. The document also discusses various measurement tools and gauges used in industrial metrology.
This document discusses gauge repeatability and reproducibility (GR&R) studies, which are used to assess the reliability of measurement systems. It defines GR&R as a technique that uses analysis of variance to evaluate the repeatability and reproducibility of a measurement process. Repeatability refers to a gauge's ability to provide consistent results, while reproducibility captures variability from operators. The document outlines the benefits of GR&R studies, such as improving process understanding and identifying problems. It also provides examples of how to plan and conduct a proper GR&R study in five steps.
This document provides guidance on calculating and interpreting the process capability index Cpk. It defines Cpk as a ratio that compares the specification tolerance to the process variation expressed in terms of standard deviations. It explains how to calculate Cpk and discusses factors that influence Cpk values such as sample size, process centering, and measurement uncertainty. The document also provides examples of the expected defective parts per million that correspond to different Cpk values and factors to consider when improving Cpk, such as machine, tooling, workholding, and workpiece variables.
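The Cp/Cpk calculation it describes can be sketched as follows (specification limits and sample data are illustrative):

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Process capability indices from sample data.

    Cp compares the tolerance to the process spread and ignores centering;
    Cpk penalizes an off-center process mean.
    """
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)   # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

cp, cpk = cp_cpk([9.9, 10.0, 10.1, 10.0, 10.0], lsl=9.7, usl=10.3)
```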
This document summarizes work done to redevelop the GUM.validate function in the metRology package for R. The author conducted an experiment measuring the output of a 3D printer to characterize measurement uncertainty. They analyzed the results using the metRology package, which led them to make changes to the GUM.validate function to better model correlation between variables using copulas. The changes improved the function's ability to validate uncertainty analyses. The author gained experience with uncertainty analysis, metrology, and programming in R through this project.
This document discusses improving the quality sigma level of copper terminals through applying a QC story methodology. It begins by introducing QC stories and their use in systematically solving problems to improve sigma level and reduce defects per million (DPM). The paper then describes analyzing production data for copper terminals to identify problematic components, defects, and potential causes. Various quality control tools are applied including Pareto charts, cause-and-effect diagrams, and why-why analysis to validate root causes. Corrective actions include modifying fixtures to eliminate misalignment and allow manufacturing two components per cycle. Experimental results show reductions in DPM levels and increases in sigma level and process capability, demonstrating the effectiveness of applying a QC story approach.
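The DPM-to-sigma-level conversion mentioned here is commonly done with the inverse normal CDF plus the conventional 1.5-sigma long-term shift; a minimal sketch:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Convert defects-per-million-opportunities to a (shifted) sigma level.

    The conventional 1.5-sigma long-term shift is added, so that
    3.4 DPMO maps to roughly six sigma.
    """
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift
```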
This document describes the development of a customer segmentation tool using machine learning. The goal was to segment customers into 3-4 groups based on survey responses. Unsupervised learning identified the segments, then supervised learning with GLMNET predicted segment membership with 85% accuracy. Dimensionality reduction distilled the predictive models. The final tool embedded 3 logistic regression models into Excel to assign customers to segments.
Mba om 14_statistical_qualitycontrolmethodsNiranjana K.R.
This document provides an overview of statistical quality control techniques including:
- Describing categories of statistical quality control and how to measure quality characteristics.
- Explaining sources of variation, process capability, and how to set control limits for control charts.
- Detailing different types of control charts for variables and attributes including x-bar, R, p, and c charts.
- Defining three sigma and six sigma process capability and how they relate to acceptable defect levels.
- Discussing challenges in measuring quality in service organizations and potential metrics that could be monitored.
Similar to Javier Garcia - Verdugo Sanchez - Six Sigma Training - W1 Attributive Data (MSA) (20)
The document discusses the concept of poka yoke, which are error-proofing devices used in manufacturing to prevent defects. It describes Shigeo Shingo's definition of poka yoke as preventing inadvertent mistakes. It also discusses different types of inspection processes and how poka yoke aims to eliminate defects by detecting errors early in the production process through the use of simple error-proofing devices built into operations. Shingo's method uses poka yoke systems and devices to achieve zero defects, zero waste, and zero delays in production.
Javier Garcia - Verdugo Sanchez - Trabajo en equipo y dirección de reunionesJ. García - Verdugo
This document describes work teams and how they function effectively. It explains that teams are groups of people committed to a common objective who work interdependently. It details the characteristics of an effective team, including establishing norms for communication, decision-making, and use of time. It also explains that effective teams run their meetings through cycles that include defining objectives and giving feedback.
Javier Garcia - Verdugo Sanchez - The 8D (Eighth Disciplines) MethodologyJ. García - Verdugo
This document provides an overview of the 8-discipline (8-D) problem solving methodology. It discusses the origins of the 8-D system in a US military standard from 1974. The 8-D process uses a team-based approach and eight disciplines to identify and address problems, with the goals of preventing recurrences and achieving continuous improvement. The document also references additional files and graphics that are included to help explain 8-D concepts.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Autocorrelation and...J. García - Verdugo
The document discusses autocorrelation and cross correlation analysis of time series data. It provides an example of measuring daily body weight over 4 weeks and finds autocorrelation at a lag of 1 day. This indicates dependence between successive daily measurements. The document also analyzes viscosity measurements taken hourly and finds autocorrelation up to a lag of 4 hours. An autoregressive model is fitted to account for this autocorrelation. Finally, the document examines cross correlation between methane feed rate and CO2 concentration measurements taken minute-by-minute. The largest correlation is found at a lag of -1 minute, suggesting the CO2 is affected by methane feed rate from the previous minute.
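The lag-k sample autocorrelation used in such an analysis can be computed directly; a minimal sketch:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# A perfectly alternating series has a strongly negative lag-1 autocorrelation
r1 = autocorrelation([1, -1] * 10, lag=1)
```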
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Reliability J. García - Verdugo
The document discusses reliability, including definitions of reliability, reliability phases, reliability importance, reliability calculations for serial and parallel systems, and Weibull analysis. Reliability is defined as the probability that a product or system will function as intended without failure over a specified period of time. There are generally three failure phases: infant mortality with early high failure rates, random failures, and wear out with increasing failure rates over time. Reliability is important for customers, cost savings, and competitiveness. Calculations can determine the reliability of serial and parallel systems based on component reliabilities. Weibull analysis involves plotting failure data to determine the appropriate failure distribution.
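The serial and parallel system calculations mentioned can be sketched as follows (component reliabilities are illustrative):

```python
import math

def reliability_series(components):
    """A series system works only if every component works."""
    return math.prod(components)

def reliability_parallel(components):
    """A parallel (redundant) system fails only if every component fails."""
    return 1 - math.prod(1 - r for r in components)

# Two components, each 90% reliable
r_series = reliability_series([0.9, 0.9])     # redundancy absent: lower
r_parallel = reliability_parallel([0.9, 0.9]) # redundancy present: higher
```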
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Monte Carlo Simulat...J. García - Verdugo
This document describes using Monte Carlo simulation to analyze variations in manufacturing processes and electrical circuits. It discusses generating random input variables based on their distributions, calculating output results using equations, and analyzing the output distribution to determine process capability and variation. An example simulates variations in five stacked metal parts and calculates the total dimension distribution. Another simulates variations in resistor values in an electrical circuit and calculates the distribution of the output voltage.
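A minimal Monte Carlo sketch of the stacked-parts idea, with made-up nominal dimensions and standard deviations (not the values from the original exercise):

```python
import random
import statistics

random.seed(42)

# (nominal thickness, standard deviation) of five stacked parts -- illustrative
parts = [(10.0, 0.05), (5.0, 0.03), (5.0, 0.03), (2.0, 0.02), (8.0, 0.04)]

# Draw each part from its distribution and sum to get the total dimension
totals = [
    sum(random.gauss(mu, sd) for mu, sd in parts)
    for _ in range(100_000)
]

mean_total = statistics.mean(totals)   # ~30.0 (sum of nominals)
sd_total = statistics.stdev(totals)    # ~sqrt(sum of variances) ~0.079
```

The simulated standard deviation matches the analytic root-sum-of-squares of the part standard deviations, which is what makes Monte Carlo useful as a cross-check for non-linear cases where no closed form exists.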
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Lean IntroJ. García - Verdugo
The document provides an overview of lean manufacturing principles and concepts. It discusses the history and evolution of manufacturing approaches from hand crafting to mass production to lean. Key lean concepts are defined, including just-in-time, single piece flow, takt time, cycle time, visual controls, 5S, waste elimination and pull vs push systems. The phases of a typical lean project are outlined as define, measure, analyze, improve and control. Overall the document serves as an introductory guide to lean manufacturing principles, tools and terminology.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Statistical Toleran...J. García - Verdugo
The document discusses statistical tolerance analysis and six sigma tolerancing. It covers topics like worst case tolerancing, root sum of squares tolerancing, statistical tolerancing for linear and non-linear applications. Various tolerance analysis methods like worst case analysis, statistical tolerance analysis and vector analysis are described. Examples of calculating tolerances for assembly gaps and performing statistical tolerance analysis on automotive brake disk assembly are presented.
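The worst-case vs. root-sum-of-squares (RSS) stack-up comparison can be sketched with illustrative tolerances:

```python
import math

tolerances = [0.10, 0.05, 0.05, 0.02, 0.08]   # individual part tolerances

# Worst case assumes every part sits at its tolerance limit simultaneously
worst_case = sum(tolerances)

# RSS assumes independent, centered variation, so extremes rarely coincide
rss = math.sqrt(sum(t ** 2 for t in tolerances))
```

RSS yields a noticeably tighter (and usually more realistic) assembly tolerance than the worst-case sum, which is the central argument for statistical tolerancing.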
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Taguchi Robust DesignsJ. García - Verdugo
The document describes Taguchi's method for robust design. It discusses using design of experiments to create robust products and processes by minimizing variation from the target value. The key aspects are:
- Taguchi proposed applying DOE techniques to optimize settings for factors influencing a process to make it robust against noise factors like materials and environment.
- Taguchi advocated designing processes to be "on-target" rather than just meeting specifications to reduce costs from variation.
- An example application looks at factors influencing the size of ceramic pieces after baking, with the furnace position as the noise factor. Analysis of the experiment aims to find settings minimizing output variation.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Analysis of CovariatesJ. García - Verdugo
This document discusses the analysis of covariates to account for variation from uncontrollable factors in experimental designs. It provides an example of using filament diameter as a covariate in an experiment investigating the effect of production lines on tensile strength. Accounting for diameter significantly increases the amount of variation explained. The document also examines using locator pin position as a covariate in an experiment on printed circuit board assembly.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 The Binary Logistic...J. García - Verdugo
The document discusses binary logistic regression and provides an example. It analyzes data from a study of 100 men investigating the relationship between age and risk of coronary heart disease. Logistic regression is used to estimate the effect of age on the probability of disease. The analysis finds that for each one year increase in age, the odds of disease increase by 13% (odds ratio of 1.13).
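A sketch of how the reported odds ratio translates into a logistic model. Only the 1.13 odds ratio comes from the summary above; the intercept is a hypothetical value for illustration:

```python
import math

beta_age = math.log(1.13)   # slope implied by the stated odds ratio of 1.13
beta_0 = -5.3               # hypothetical intercept, for illustration only

def p_disease(age):
    """Logistic model: probability of disease at a given age."""
    z = beta_0 + beta_age * age
    return 1 / (1 + math.exp(-z))

def odds(age):
    """Odds of disease at a given age; the ratio of odds one year
    apart recovers the 1.13 odds ratio."""
    p = p_disease(age)
    return p / (1 - p)
```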
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 Multiple RegressionJ. García - Verdugo
The document discusses multiple regression analysis for modeling the relationship between the inputs and outputs of a process. One example uses measurements from 8 X-ray tubes to determine which input parameters influence the critical outputs: the width, length, and unbalancedness of the focal spot. A second example applies regression to a cooling process, where ambient temperature and humidity define the necessary evaporation temperature, and the required air amount can be determined from the water temperature to keep ammonia loss within set limits.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W4 StarterJ. García - Verdugo
This document outlines the agenda and expectations for a Six Sigma training workshop taking place over the course of a week. The workshop will focus on teaching tools used in the Six Sigma DMAIC process for defining, measuring, analyzing, improving, and controlling processes. Participants will review work from the previous week and learn new tools such as regression analysis, design of experiments, statistical tolerancing, and reliability analysis. Time will also be spent on project reviews where teams will report their progress, results, and next steps. The goals are for participants to learn how to apply these tools in their work to drive improvements and financial benefits in their projects.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 QFD Customer Requir...J. García - Verdugo
This document provides an overview of Quality Function Deployment (QFD). QFD is a structured approach to defining customer needs and translating them into product design requirements and specifications. It involves gathering customer requirements, prioritizing them based on importance, and determining how well competitors meet those needs. Multiple "Houses of Quality" are used to map customer needs to functional requirements to design parameters and ensure the final product will satisfy customers. The process involves gathering customer input, analyzing competitor performance, setting goals for improvements, and calculating development priorities to guide product planning and design.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Financial Integration J. García - Verdugo
This document provides guidelines for calculating the financial benefits of Six Sigma projects. It discusses categories for project benefits such as cost reduction, cost avoidance, cash flow improvement, and growth. It explains how to calculate benefits for different types of projects, including volume projects, cost reduction projects, and cost avoidance projects. Required data for calculating benefits includes production costs, capacities, part numbers, and cost centers. Financial benefits are typically tracked over a period of 12 months after a project is completed.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Median Tests J. García - Verdugo
This document discusses median tests that can be used as alternatives to the analysis of variance (ANOVA) when its assumptions are violated. It describes Mood's median test and the Kruskal-Wallis test, indicating that Mood's test is robust against outliers while Kruskal-Wallis is more robust against unequal distributions. An example analyzes reject rates from a pneumatic module test using both tests to support the conclusions from an initial ANOVA.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 Sample Size J. García - Verdugo
A sample size that is too small increases the risks of overlooking important effects and detecting effects that are not truly present. With a larger sample size, the risks decrease but costs and time increase. The key factors in determining sample size are the desired power, significance level, expected effect size, and standard deviation. Sample size calculators can then determine the necessary sample for a given hypothesis test based on specifying values for these factors.
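A common closed-form approximation for the two-sample case, built from exactly the factors listed above (the default significance level and power below are conventional choices, not values from the document):

```python
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample z-test.

    delta -- smallest effect (difference in means) worth detecting
    sigma -- expected standard deviation
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # risk of a false positive
    z_beta = z.inv_cdf(power)            # risk of missing a real effect
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)
```

Halving the detectable effect roughly quadruples the required sample, which is the cost/precision trade-off the text describes.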
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W3 StarterJ. García - Verdugo
This document provides an agenda and overview for a Green Belt Six Sigma training workshop taking place over one week. It outlines the safety procedures, timing, purpose and expectations for the training. The agenda lists the topics to be covered each day, including recaps of previous weeks, exercises using Six Sigma tools like DOE and RSM, and project reviews. Participants are expected to actively engage in team exercises and apply what they learn to their projects. The training aims to teach the structured use of Six Sigma tools to help participants meet project expectations and goals.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application-layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this, the paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM). A recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, was used to train and test the model. Experiments show that the CNN-LSTM method outperforms other deep learning classification algorithms at finding smart grid intrusions, improving accuracy, precision, recall, and F1 score and achieving a detection accuracy of 99.50%.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed; the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show small synchronous tracking errors and significantly reduced chattering.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Javier Garcia - Verdugo Sanchez - Six Sigma Training - W1 Attributive Data (MSA)
1. Analysis of Measurement Systems
Part 2: Attributive Data
Week 1
Knorr-Bremse Group
About this Module
Based on this technique you can assess and judge measurement systems much better than described in the ISO 9000 standard.
• Part 1: Introduction of Measurement System Analysis
– Concept definition and describing the basic terms
• Part 2: Attributive Measurements
– Kappa Analysis
• Part 3: Continuous Measurements
– The method for the Gage R&R Study
• Some exercises
Knorr-Bremse Group 17 BB W1 Attributive MSA 08, D. Szemkus/H. Winkler Page 2/30
2. The DMAIC Cycle
Typical tools used in each phase of the DMAIC cycle:
• Define: Project charter (SMART), Business Score Card, QFD + VOC, Strategic Goals, Project strategy, Documentation
• Measure: Process Map, C + E Matrix, Measurement System, Definition of critical Inputs, FMEA, Baseline Analysis
• Analyze: Statistical Tests, Multi-Vari Studies, Regression, Process Capability
• Improve: FMEA, Statistical Tests, Simulation, Tolerancing, Adjustment to the Optimum
• Control: SPC, Control Plans, Maintain Improvements
Content and Terminology
• Discrimination
• Terms connected with accuracy:
– True value
– Systematic error / Bias
– Linearity
• Terms connected with precision:
– Repeatability
– Reproducibility
– Stability (over time)
• P/T ratio (precision to tolerance)
• %R&R (repeatability and reproducibility)
• Process-capability-related variation from the measurement system
3. Possible Sources for Process Variation
Observed process variation splits into actual process variation and measurement variation:
• Actual process variation: short-term process variation, long-term process variation, variation within a sample
• Measurement variation:
– Variation due to the measurement system: repeatability (precision), calibration, stability, linearity
– Variation due to the operator
In order to work on the actual process variation, the measurement variation has to be determined and separated from the process variation.
Sources of Measurement Variation
Typical contributors to measurement variation, grouped fishbone-style:
• Tool: mechanical instability, electrical instability, wear, maintenance standard, calibration frequency, algorithm instability
• Work methods: standard procedures, ease of data entry, sufficient work time
• Operator: training, technique
• Environment: humidity, cleanliness, vibration, line voltage variation, temperature fluctuation
4. Needed Information
• How big is the measurement error?
• What are the sources of the measurement error?
• Is the gauge stable over time?
• Is the gauge suitable for this examination?
• How can we improve the measurement system?
• Measurement tools (hardware and software)
• All procedures for using the tools
• Which operator?
• Set-up and handling procedures
• Off-line calculations and data entry
• Calibration frequency and technique
Effects of Measurement Error
• Accuracy: the measurement system bias (average), determined through a "calibration study":
µ_total = µ_product + µ_measurement
• Precision: the measurement system variability, determined through an "R&R study":
σ²_total = σ²_product + σ²_measurement
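Because variances add while standard deviations do not, the measurement contribution can be subtracted out of the observed total; a sketch with illustrative numbers:

```python
import math

sigma_total = 0.25   # observed (total) standard deviation -- illustrative
sigma_meas = 0.10    # measurement-system standard deviation from an R&R study

# Variances add: sigma_total^2 = sigma_product^2 + sigma_meas^2,
# so the true product variation is recovered by subtracting variances.
sigma_product = math.sqrt(sigma_total**2 - sigma_meas**2)
```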
5. The True Process Variation
Observed variation (total variation) = actual process variation + measurement variation
Can we observe the truth?
Attributive Measurements

Most administrative assessments are subjective in nature. We are talking about a good vs. bad classification or, if possible, an assessment sorted into groups.

These attributive results can be evaluated by applying the Kappa calculation, using contingency tables.

With physical measurements we mostly get continuous results. Here we can calculate means and standard deviations and evaluate the root causes of variation.

It is often observed in practice that criteria which could be checked continuously are judged as attributes.
6. Questions for Measurement Systems

Which information about the measurement process exists?
• Is there a description or instruction for the execution?
• Is there a detailed flowchart available?
• Are the inspectors qualified?

Which information do we have about:
• Discrimination
• Repeatability
• Reproducibility
• Which correlation is there to customers or suppliers?
• What is the variation for the process and the measurement system?

Our knowledge determines the further procedure.
Attributive Measurements

• Attributive measurements are based on subjective classifications and ratings.
• Examples:
  – Rating of features as good or bad
  – Classification of wine aroma or taste
  – Rating of employee satisfaction on a scale of 1 to 5
  – Rating of a service as acceptable or unacceptable

We should evaluate these measurement systems before we change processes. Otherwise we may overlook an important factor which could be a major portion of the observed variation.
7. Reliability Coefficient Kappa

• A statistical method to evaluate attributive data sets is the reliability coefficient. It informs about how strongly the ratings differ from random chance.
• All differences in the rating are handled equally; there is no direction given.
• There are several ways to perform the evaluation. A single rater can be evaluated, but also several raters against each other. Furthermore, more than 2 classes can be evaluated separately.
The Kappa Technique

This method judges classification data. The following conditions should be adhered to during data collection to get a meaningful result:

• The inspectors make their decisions independently
• Use at least two categories (classes)
• A category can be used more frequently than another
• The categories exclude each other (they are mutually exclusive)

Kappa (K) is defined as the share of agreement between inspectors or categories out of the maximum possible agreement.
8. The Kappa Coefficient

The Kappa (K) equation:

K = (P_observed - P_chance) / (1 - P_chance)

Description:
• P_observed = the proportion of results in agreement (both inspectors assess good, or both inspectors assess bad)
• P_chance = the proportion of results in agreement by chance
  = (proportion of units rated good by inspector A × proportion of units rated good by inspector B) + (proportion of units rated bad by inspector A × proportion of units rated bad by inspector B)
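The equation above can be sketched in a few lines of Python. The function below is a plain two-rater Kappa written directly from the definitions on this page (not the Minitab implementation), and it is checked against the numbers of Example 1.

```python
from collections import Counter

def kappa(ratings_a, ratings_b):
    """Two-rater Kappa: K = (P_observed - P_chance) / (1 - P_chance)."""
    n = len(ratings_a)
    # P_observed: share of parts on which both raters agree
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # P_chance: agreement expected by chance, from each rater's
    # own category proportions
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Example 1 from the slides: 16 parts rated good by both, 7 rated bad
# by both, one part rated good by A but bad by B.
a = ["g"] * 16 + ["b"] * 7 + ["g"]
b = ["g"] * 16 + ["b"] * 7 + ["b"]
print(round(kappa(a, b), 3))  # 0.903 (the slides show 0.905 due to rounding)
```

The small difference from the slide value (0.905) comes only from the slides rounding the proportions to three digits before dividing.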
For Clarification

Example 1: 24 parts assessed by 2 inspectors with 96% agreement

Ratings per part (g = good, b = bad; Rater A / Rater B):
1 g/g, 2 g/g, 3 g/g, 4 g/b, 5 g/g, 6 b/b, 7 g/g, 8 g/g, 9 g/g, 10 b/b, 11 g/g, 12 b/b, 13 g/g, 14 g/g, 15 g/g, 16 b/b, 17 g/g, 18 b/b, 19 g/g, 20 g/g, 21 g/g, 22 b/b, 23 g/g, 24 b/b

Number of parts (columns: Rater A; rows: Rater B):

          A Good   A Bad   Total
B Good      16       0      16
B Bad        1       7       8
Total       17       7      24

Portion of parts:

          A Good    A Bad     Total
B Good    0,6667    0         0,6667
B Bad     0,0417    0,2917    0,3333
Total     0,7083    0,2917    1

P_observed = (0,667 + 0,292) = 0,959
P_chance = (0,667 × 0,708) + (0,333 × 0,292) = 0,570
K = (0,959 - 0,570) / (1 - 0,570) = 0,905
9. For Clarification

Example 2: 24 parts assessed by 2 inspectors with 83% agreement

Ratings per part (g = good, b = bad; Rater A / Rater B):
1 g/g, 2 g/g, 3 g/g, 4 g/b, 5 g/g, 6 b/b, 7 g/b, 8 g/g, 9 g/g, 10 b/b, 11 b/g, 12 b/b, 13 g/g, 14 g/g, 15 g/g, 16 b/b, 17 b/g, 18 b/b, 19 g/g, 20 g/g, 21 g/g, 22 b/b, 23 g/g, 24 b/b

Number of parts (columns: Rater A; rows: Rater B):

          A Good   A Bad   Total
B Good      13       2      15
B Bad        2       7       9
Total       15       9      24

Portion of parts:

          A Good    A Bad     Total
B Good    0,5417    0,0833    0,625
B Bad     0,0833    0,2917    0,375
Total     0,625     0,375     1

P_observed = (0,542 + 0,292) = 0,834
P_chance = (0,625 × 0,625) + (0,375 × 0,375) = 0,531
K = (0,834 - 0,531) / (1 - 0,531) = 0,646
The Kappa Coefficient

• Kappa can have a value between -1 and 1.
• A value of 1 is achieved at absolute agreement.
• A practical rule is that we do not accept Kappa values < 0.7.
• At values around 0.9 we talk about an excellent measurement system.
• A value around zero means that the rating of a part as "good" or "bad" is the same as would be expected by chance.
• A value of -1 means that the ratings are exactly contrary, e.g. appraiser against appraiser or appraiser against a standard.

Kappa values can be calculated for several persons as well as for a single person. We also have the possibility of rating classes (categories). Examples will follow.

Poor Kappa ratings are usually caused by an inadequate "Operational Definition" or a poorly trained rater.
10. Example: Leakage Test Evaluation

Measurement System Analysis (MSA) for attributive data: acceptable (P) / not acceptable (F).

Due to customer complaints about the leakage test reliability, it was decided to analyze the current measurement system capability. The analysis was performed with 49 samples, including 5 not-acceptable parts, and with three appraisers. A decision for or against an investment in a new test bench was made based on the results of this MSA.

File: Leak Test Attribute Study.mtw
3 appraisers: George, Kevin and Paul
3 ratings per appraiser
49 independent parts (samples)

[Data table, columns George 1-3, Kevin 1-3, Paul 1-3: every part was rated P in all nine trials, except parts 10, 13, 26 and 44 (F in all nine trials) and part 21 (F in all trials except George's third rating, which was P).]
Example: Leakage Test Evaluation

Stat > Quality Tools > Attribute Agreement Analysis…
11. The Graphical Analysis

Minitab represents the agreement in percent, with a 95% confidence interval as additional information.

[Chart: Assessment Agreement, Within Appraisers; percent matched per appraiser (George, Kevin, Paul) with 95,0% CI]

The numbers for the graphic:

Appraiser   # Inspected   # Matched   Percent   95% CI
George           49           48        97,96   (89,15; 99,95)
Kevin            49           49       100,00   (94,07; 100,00)
Paul             49           49       100,00   (94,07; 100,00)
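The confidence intervals above are exact binomial intervals on the matched proportion. Assuming Minitab uses the exact (Clopper-Pearson) method, which matches the numbers shown, a minimal stdlib-only sketch reproduces George's interval:

```python
import math

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, here applied to '# Matched' out of '# Inspected'."""
    alpha = 1 - conf

    def cdf(x, p):  # P(X <= x) for X ~ Binomial(n, p); decreasing in p
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(x + 1))

    def invert(f, target):  # bisection for a decreasing f on [0, 1]
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: p with P(X >= k) = alpha/2, i.e. cdf(k-1, p) = 1 - alpha/2
    lower = 0.0 if k == 0 else invert(lambda p: cdf(k - 1, p), 1 - alpha / 2)
    # Upper bound: p with P(X <= k) = alpha/2
    upper = 1.0 if k == n else invert(lambda p: cdf(k, p), alpha / 2)
    return lower, upper

# George matched on 48 of 49 parts:
lo, hi = clopper_pearson(48, 49)
print(f"({100 * lo:.2f}; {100 * hi:.2f})")  # (89.15; 99.95)
```

For Kevin and Paul (49 of 49 matched) the upper bound is exactly 100%, and the lower bound evaluates to the 94,07 shown in the table.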
The Evaluation in the Session Window

Attribute Agreement Analysis for George 1; George 2; George 3; Kevin1; ...

Within Appraisers

Assessment Agreement
Appraiser   # Inspected   # Matched   Percent   95% CI
George           49           48        97,96   (89,15; 99,95)
Kevin            49           49       100,00   (94,07; 100,00)
Paul             49           49       100,00   (94,07; 100,00)
# Matched: Appraiser agrees with him/herself across trials.

Fleiss' Kappa Statistics
Appraiser   Response   Kappa     SE Kappa    Z         P(vs > 0)
George      F          0,92105   0,0824786   11,1672   0,0000
George      P          0,92105   0,0824786   11,1672   0,0000
Kevin       F          1,00000   0,0824786   12,1244   0,0000
Kevin       P          1,00000   0,0824786   12,1244   0,0000
Paul        F          1,00000   0,0824786   12,1244   0,0000
Paul        P          1,00000   0,0824786   12,1244   0,0000

Between Appraisers

Assessment Agreement
# Inspected   # Matched   Percent   95% CI
     49           48        97,96   (89,15; 99,95)
# Matched: All appraisers' assessments agree with each other.

Fleiss' Kappa Statistics
Response   Kappa      SE Kappa    Z         P(vs > 0)
F          0,974754   0,0238095   40,9397   0,0000
P          0,974754   0,0238095   40,9397   0,0000

The analysis showed excellent agreement within the appraisers and also between the appraisers.
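For more than two raters the session window reports Fleiss' Kappa. The function below is a compact sketch of the overall multi-rater statistic written from its textbook definition (Minitab's per-response and per-appraiser variants differ in detail); the toy data are illustrative, not from the study.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from a table of category counts per subject:
    counts[i][j] = how many raters put subject i into category j."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Per-subject agreement P_i: share of agreeing rater pairs
    p_i = [(sum(c * c for c in row) - n_raters) /
           (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the overall category proportions
    total = n_subjects * n_raters
    p_j = [sum(row[j] for row in counts) / total
           for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Toy data (not from the slides): 5 parts, 3 raters, categories (P, F)
counts = [[3, 0], [3, 0], [2, 1], [0, 3], [3, 0]]
print(round(fleiss_kappa(counts), 3))  # 0.659
```

As with the two-rater Kappa, a value of 1 means all raters agree on every subject, while values near 0 mean the agreement is no better than chance.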
12. The Worksheet Modification

You may want to analyse the data in one attribute column. In the first step we stack the results for each appraiser in a separate column; then we stack the results of all appraisers in one column (operator).

Data > Stack > Columns…

For the analysis we need to store the operator identification.
The Worksheet Modification

Calc > Make Patterned Data > Simple Set of Numbers…

In addition we need to create one column to identify the samples.
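The stacking step can also be mimicked outside Minitab. A minimal Python sketch (column names and values are illustrative) that turns the wide per-appraiser layout into one long column with operator and sample identifiers:

```python
# Wide layout as exported from the worksheet: one column per
# appraiser/trial (column names here are illustrative).
wide = {
    "Sample": [1, 2, 3],
    "George 1": ["P", "P", "F"],
    "George 2": ["P", "P", "F"],
    "Kevin1":   ["P", "P", "F"],
}

# Stack into the long layout: one row per (sample, operator) rating,
# keeping the operator identification alongside each result.
long_rows = []
for col, ratings in wide.items():
    if col == "Sample":
        continue
    for sample, rating in zip(wide["Sample"], ratings):
        long_rows.append({"Sample": sample, "Operator": col, "Rating": rating})

print(len(long_rows))  # 9 rows: 3 samples x 3 rating columns
```

This is exactly what the two Minitab steps above produce: one attribute column plus one operator column plus one sample column.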
13. Example: Surface Inspection

The surface quality of the base material for PCBs has to be very high. Classification is done in accordance with the norm MIL 13949 in classes A, D, B or scrap. In this example 10 panels have been assessed by 3 inspectors, 3 times each.

Legend: Class 1 (MIL B), Class 2 (MIL D), Class 3 (MIL A), Scrap (S)

File: Attribute Gage Study.xls

Ratings per sample (Mary / Paul / Suzanne):

Trial 1: 1 A/A/S, 2 A/A/A, 3 D/D/A, 4 B/B/B, 5 B/D/B, 6 A/A/A, 7 S/S/S, 8 D/B/D, 9 B/D/D, 10 A/S/A
Trial 2: 1 A/A/A, 2 A/A/A, 3 D/D/A, 4 D/B/B, 5 B/D/B, 6 A/A/A, 7 S/S/S, 8 D/B/D, 9 B/D/B, 10 A/S/A
Trial 3: 1 A/A/S, 2 A/A/A, 3 A/D/A, 4 B/B/B, 5 B/D/B, 6 A/S/A, 7 S/S/S, 8 D/B/D, 9 B/D/B, 10 A/S/A

Disagreements by class pair: S vs A: 6, S vs D: 0, S vs B: 0, A vs D: 3, A vs B: 0, D vs B: 10
The Evaluation with Minitab

After checking the table in the worksheet we can start the evaluation.

Stat > Quality Tools > Attribute Agreement Analysis…
14. The Graphical Analysis

Minitab represents the agreement in percent, with a 95% confidence interval as additional information.

[Chart: Assessment Agreement, Within Appraisers; percent matched per appraiser (Mary, Paul, Suzanne) with 95,0% CI]

The numbers for the graphic:

Appraiser   # Inspected   # Matched   Percent (%)   95,0% CI
Mary             10            8         80,0       (44,4; 97,5)
Paul             10            9         90,0       (55,5; 99,7)
Suzanne          10            8         80,0       (44,4; 97,5)
The Evaluation in the Session Window

Fleiss' Kappa Statistics (Within Appraisers)

Appraiser   Response   Kappa     SE Kappa   Z         P(vs > 0)
Mary        A          0,86425   0,182574   4,73371   0,0000
            B          0,82955   0,182574   4,54361   0,0000
            D          0,58333   0,182574   3,19505   0,0007
            S          1,00000   0,182574   5,47723   0,0000
            Overall    0,80707   0,113821   7,09075   0,0000
Paul        A          0,82955   0,182574   4,54361   0,0000
            B          1,00000   0,182574   5,47723   0,0000
            D          1,00000   0,182574   5,47723   0,0000
            S          0,81366   0,182574   4,45662   0,0000
            Overall    0,91045   0,106205   8,57258   0,0000
Suzanne     A          0,86425   0,182574   4,73371   0,0000
            B          0,82955   0,182574   4,54361   0,0000
            D          0,71154   0,182574   3,89726   0,0000
            S          0,76000   0,182574   4,16269   0,0000
            Overall    0,80831   0,112123   7,20908   0,0000

Within Appraisers: if we consider the overall results of Kappa > 0.7, then we could say that all appraisers are qualified. But have a look at the details! Two of the three appraisers show weakness with the stability (repeatability)!
15. The Evaluation in the Session Window

Now have a look at the agreement between the appraisers. Between the appraisers we find a weak agreement. This has to be improved. The two classes with the highest quality deliver the poorest results. It seems that parts with minor failures have the highest chance of misinterpretation.

Fleiss' Kappa Statistics

Response   Kappa      SE Kappa    Z         P(vs > 0)
A          0,645483   0,0527046   12,2472   0,0000
B          0,518717   0,0527046    9,8420   0,0000
D          0,299481   0,0527046    5,6823   0,0000
S          0,600000   0,0527046   11,3842   0,0000
Overall    0,525026   0,0312782   16,7857   0,0000

In such cases the appraisers will receive tasks according to their experience.
Example: Document Assessment

An additional example is found in the file Attribute Gage Study.xls. Here 3 inspectors assessed 15 documents (invoices) two times each.

Sample   A First   A Second   B First   B Second   C First   C Second
1        good      good       good      good       good      good
2        bad       bad        good      bad        bad       bad
3        good      good       good      good       good      good
4        good      bad        good      good       good      good
5        bad       bad        bad       bad        bad       bad
6        good      good       good      good       good      good
7        bad       bad        bad       bad        bad       bad
8        good      good       bad       good       good      bad
9        good      good       good      good       good      good
10       bad       bad        bad       bad        bad       bad
11       good      good       good      good       good      good
12       good      good       good      bad        good      good
13       bad       bad        bad       bad        bad       bad
14       good      good       bad       good       good      good
15       good      good       good      good       good      good
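From this table the within-appraiser repeatability (the share of invoices each inspector rated identically in both assessments) can be computed directly. A small Python sketch using the ratings transcribed from the table above:

```python
# Ratings transcribed from the document-assessment table:
# per inspector, the first and second assessment of 15 invoices.
first = {
    "A": ["good","bad","good","good","bad","good","bad","good","good",
          "bad","good","good","bad","good","good"],
    "B": ["good","good","good","good","bad","good","bad","bad","good",
          "bad","good","good","bad","bad","good"],
    "C": ["good","bad","good","good","bad","good","bad","good","good",
          "bad","good","good","bad","good","good"],
}
second = {
    "A": ["good","bad","good","bad","bad","good","bad","good","good",
          "bad","good","good","bad","good","good"],
    "B": ["good","bad","good","good","bad","good","bad","good","good",
          "bad","good","bad","bad","good","good"],
    "C": ["good","bad","good","good","bad","good","bad","bad","good",
          "bad","good","good","bad","good","good"],
}

# Within-appraiser repeatability: share of documents each inspector
# rated identically in both assessments.
for name in first:
    matched = sum(a == b for a, b in zip(first[name], second[name]))
    print(f"{name}: {matched}/15 = {matched / 15:.0%}")
# A: 14/15 = 93%   B: 11/15 = 73%   C: 14/15 = 93%
```

Inspector B repeats their own judgement on only 11 of 15 invoices, so by the Kappa > 0.7 rule of thumb this inspector's repeatability would deserve a closer look before the system is accepted.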