1. DEPARTMENT OF STATISTICS
DR. RICK EDGEMAN, PROFESSOR & CHAIR – SIX SIGMA BLACK BELT
REDGEMAN@UIDAHO.EDU OFFICE: +1-208-885-4410
FAILURE MODES & EFFECTS ANALYSIS
MEASUREMENT SYSTEMS ANALYSIS
AND VALIDATION
2. SIX SIGMA IS A HIGHLY STRUCTURED STRATEGY FOR
ACQUIRING, ASSESSING, AND ACTIVATING CUSTOMER,
COMPETITOR, AND ENTERPRISE INTELLIGENCE, LEADING TO
SUPERIOR PRODUCT, SYSTEM, OR ENTERPRISE INNOVATIONS
AND DESIGNS THAT PROVIDE A SUSTAINABLE COMPETITIVE
ADVANTAGE.
3. What Measurements are Important and
What Tools Should be Used?
1. Select Customer Critical to Quality (CTQ) Characteristics;
2. Define Performance Standards (Numbers & Units);
3. Establish the Data Collection Plan;
4. Validate the Measurement System;
5. Collect the Necessary Data.
4. Quality Function Deployment (QFD) which relates CTQs to
measurable internal sub-processes or product characteristics.
Process Maps create a shared view of the process, reveal redundant or
unnecessary steps, and compare the “actual” process to the ideal one.
Fishbone Diagrams provide a means of revealing the causes of an effect.
Pareto Analysis provides a useful quantitative means of separating
the vital few causes of the effect from the trivial many, but requires
valid historical data.
Failure Modes and Effects Analysis (FMEA) identifies ways that
a sub-process or product can fail and develops plans to prevent those
failures. FMEA is especially useful with high-risk projects.
5. FAILURE MODES AND EFFECTS ANALYSIS (FMEA)
The FMEA Process is a structured approach that has the goal of
linking the FAILURE MODES to an EFFECT over time for the
purpose of prevention. The structure of FMEA is as follows:
Preparation → FMEA → Process Improvement
a. Select the team
b. Develop the process map and steps
c. List key process outputs to satisfy internal and external
customer requirements
d. Define the relationships between outputs and process variables
e. Rank inputs according to importance.
6. FAILURE MODES AND EFFECTS ANALYSIS
Preparation → FMEA → Process Improvement
a. Identify the ways in which process inputs can vary (causes)
and identify associated FAILURE MODES. These are ways
that critical customer requirements might not be met.
b. Assign severity, occurrence and detection ratings to each
cause and calculate the RISK PRIORITY NUMBERS (RPNs).
c. Determine recommended actions to reduce RPNs.
d. Estimate time frames for corrective actions.
e. Take actions and put controls in place.
f. Recalculate all RPNs.
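Steps b and f above — assign ratings, then recalculate RPNs after corrective action — can be sketched in Python. The ratings below are hypothetical, chosen only to illustrate the before/after comparison:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: product of the three 1-10 ratings."""
    return severity * occurrence * detection

# Hypothetical cause rated before any corrective action:
before = rpn(8, 6, 7)
# After a new control lowers the occurrence and improves detection
# (severity typically stays fixed unless the design itself changes):
after = rpn(8, 3, 3)
print(before, after)  # 336 72
```

Recalculating the RPN with the new ratings shows whether the corrective action actually reduced the risk.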
7. FAILURE MODES AND EFFECTS ANALYSIS
Preparation → FMEA → Process Improvement
Vocabulary:
FAILURE MODE: How a part or process can fail to meet
specifications.
CAUSE: A deficiency that results in a failure mode; causes are
sources of variation.
EFFECT: Impact on customer if the failure mode is not
prevented or corrected.
8. FMEA STANDARDIZED RATING SYSTEM

Rating 1
  Severity: Customer will not notice the adverse effect or it is insignificant.
  Occurrence: Likelihood of occurrence is remote.
  Detection: Sure that the potential failure will be found or prevented before reaching the next customer.

Rating 2
  Severity: Customer will probably experience slight annoyance.
  Occurrence: Low failure rate with supporting documentation.
  Detection: Almost certain that the potential failure will be found or prevented before reaching the next customer.

Rating 3
  Severity: Customer will experience annoyance due to slight degradation of performance.
  Occurrence: Low failure rate without supporting documentation.
  Detection: Low likelihood that the potential failure will reach the next customer undetected.

Rating 4
  Severity: Customer dissatisfaction due to reduced performance.
  Occurrence: Occasional failures.
  Detection: Controls may not detect or prevent the potential failure from reaching the next customer.

Rating 5
  Severity: Customer is made uncomfortable or their productivity is reduced by the continued degradation of the effect.
  Occurrence: Relatively moderate failure rate with supporting documentation.
  Detection: Moderate likelihood that the potential failure will reach the next customer.

Rating 6
  Severity: Warranty repair or significant manufacturing or assembly complaint.
  Occurrence: Moderate failure rate without supporting documentation.
  Detection: Controls are unlikely to detect or prevent the potential failure from reaching the next customer.

Rating 7
  Severity: High degree of customer dissatisfaction due to component failure without complete loss of function. Productivity impacted by high scrap or rework levels.
  Occurrence: Relatively high failure rate with supporting documentation.
  Detection: Poor likelihood that the potential failure will be detected or prevented before reaching the next customer.

Rating 8
  Severity: Very high degree of dissatisfaction due to the loss of function without a negative impact on safety or governmental regulations.
  Occurrence: High failure rate without supporting documentation.
  Detection: Very poor likelihood that the potential failure will be detected or prevented before reaching the next customer.

Rating 9
  Severity: Customer endangered due to the adverse effect on safe system performance, with warning before failure, or violation of governmental regulations.
  Occurrence: Failure is almost certain based on warranty data or significant DV testing.
  Detection: Current controls probably will not even detect the potential failure.

Rating 10
  Severity: Customer endangered due to the adverse effect on safe system performance, without warning before failure, or violation of governmental regulations.
  Occurrence: Assured of failure based on warranty data or significant DV testing.
  Detection: Absolute certainty that the current controls will not detect the potential failure.

1 ≤ RPN = (Degree of Severity) × (Likelihood of Occurrence) × (Ability to Detect) ≤ 1000
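The RPN bounds follow directly from the three 1–10 ratings. A minimal sketch in Python (the function name and checks are my own):

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: product of the three ratings, each 1-10."""
    for name, rating in (("severity", severity),
                         ("occurrence", occurrence),
                         ("detection", detection)):
        if not 1 <= rating <= 10:
            raise ValueError(f"{name} rating must be 1-10, got {rating}")
    return severity * occurrence * detection

print(rpn(7, 4, 5))     # 140
print(rpn(1, 1, 1))     # 1, the smallest possible RPN
print(rpn(10, 10, 10))  # 1000, the largest possible RPN
```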
9. Failure Modes and Effects Analysis (FMEA) Worksheet

Header fields:
Process or Product Name: __________  Prepared by: __________  Page ____ of ____
Responsible: __________  FMEA Date (Orig) ______________ (Rev) _____________

Worksheet columns, with the guiding question for each:
Process Step/Part Number — What are the process steps?
Potential Failure Mode — In what ways can the process step go wrong?
Potential Failure Effects — What is the impact of the Failure Mode on the customer?
SEV — How severe is the effect on the customer?
Potential Causes — What are the causes of the Failure Mode?
OCC — How often does the Cause or Failure Mode occur?
Current Controls — What are the existing controls and procedures that prevent the Cause or Failure Mode?
DET — How well can you detect the Cause or Failure Mode?
RPN — Calculated: SEV × OCC × DET.
Actions Recommended — What are the actions for reducing the occurrence, decreasing severity, or improving detection?
Resp. — Who is responsible for the recommended action?
10. FAILURE MODES AND EFFECTS ANALYSIS (FMEA)
Preparation → FMEA → Process Improvement
Develop and implement plans to reduce RPNs.
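One common way to develop such plans is to rank failure modes by RPN so that improvement effort targets the worst risks first. A minimal sketch with made-up rows (the failure modes and ratings are hypothetical):

```python
# Hypothetical FMEA rows: (failure mode, severity, occurrence, detection).
rows = [
    ("ticket misrouted",   6, 5, 4),
    ("form field missing", 4, 7, 2),
    ("no completion call", 8, 3, 6),
]

# Rank by RPN, highest first.
ranked = sorted(rows, key=lambda r: r[1] * r[2] * r[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"{mode:20s} RPN = {s * o * d}")
# no completion call   RPN = 144
# ticket misrouted     RPN = 120
# form field missing   RPN = 56
```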
11. Measurement System Analysis & Validation:
Define Performance Standards: Numbers & Units
Translate customer needs into clearly defined measurable traits.
OPERATIONAL DEFINITION: This is a precise description
that removes any ambiguity about a process and provides a clear
way to measure that process. An operational definition is a key
step towards getting a value for the CTQ that is being measured.
USEFUL TOOL: Outside-In-Thinking
Measurement System Analysis & Validation
12. Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
Outside-In-Thinking refers to understanding a process from a
customer perspective, a key element that feeds customer
satisfaction.
The idea is to enable customers to “feel” and “experience” Six
Sigma.
This requires so-called “wing-to-wing” thinking. Wing-to-wing
thinking assists in discovery of the customer’s scope of the
process.
In other words, according to the customer, when does the
process start and stop? This is the “wing-to-wing” perspective.
An example of wing-to-wing thinking follows.
13. Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
A Green Belt was focusing on reducing the cycle time to complete a
change request to the email system. She began by focusing on the
following scope: change request ticket open to change request ticket
closed.
Upon talking to the customer (anyone who submits a change request),
the Green Belt realized there is more to this process than just opening
and closing the ticket. Before the ticket is opened, the customer fills
out a request form and e-mails it to the appropriate mailbox. The
customer does not know that the work is complete until s/he receives
a call verifying completion. Based on this information, the Green Belt
changed her scope to: customer submits a request form to user
receives call that work is complete.
14. Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
HOW DO I DO IT?
Identify your customer. Although the concept of Outside-In-Thinking
is typically used in conjunction with an external customer, the same
theory applies for a project that has an internal customer.
Understand the process from the customer’s perspective. You can talk
to the customer directly or to experts on your team who have direct
contact with the customer. Use a wing-to-wing perspective, that is:
according to the customer, when does the process start and stop?
Your team can evaluate the process start / stop and decide if this is an
appropriate scope the team should focus on for improvement. Be
realistic in this decision. Make sure that the scope isn’t too big and
that you can realistically influence the improvement effort.
15. Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
Tips: These questions can assist in becoming more customer-centric:
What does the customer need from the process?
How is our process performing from the customer’s perspective?
How does the customer measure the process?
How does the customer view the process?
What can we do better?
How would the customer like for our process to perform?
Tip: Whether or not you have direct contact with an external customer,
your project must be customer-focused. Identify the customer of your
process and understand the “pain” they feel. This will help drive your
improvement efforts so that the customer feels the impact of Six Sigma.
16. Measure: Define Performance Standards: Numbers & Units (Cont.)
TARGET PERFORMANCE: Where a process or product
characteristic is “aimed.” If there were no variation in the
product/process, this is the value that would always occur.
SPECIFICATION LIMIT: The amount of variation that the
customer is willing to tolerate in a process or product. This is
usually shown by “upper” and “lower” boundaries which, if
crossed, will cause the customer to reject the process or product.
DEFECT DEFINITION: Any process or product characteristic
that deviates outside of specification limits.
Measurement System Analysis & Validation
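The defect definition above translates directly into a membership test against the specification limits. A minimal sketch (the limits and values are made up for illustration):

```python
def is_defect(value: float, lsl: float, usl: float) -> bool:
    """A defect is any characteristic outside the specification limits."""
    return not (lsl <= value <= usl)

# Hypothetical spec limits: 9.5 (lower) and 10.5 (upper).
print(is_defect(10.2, 9.5, 10.5))  # False -- within spec
print(is_defect(10.7, 9.5, 10.5))  # True  -- beyond the upper limit
```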
17. Measure
3. Establish Data Collection Plan, Validate the
Measurement System, and Collect Data.
A Good Data Collection Plan:
a. Provides a clearly documented strategy for collecting
reliable data;
b. Gives all team members a common reference;
c. Helps to ensure that resources are used effectively to collect
only critical data. The cost of obtaining new data should be
weighed vs. its benefit. There may be viable historical data
available.
Measurement System Analysis & Validation
18. Measure: 3. Establish Data Collection Plan, Validate
the Measurement System, and Collect Data.
We refer to “actual process variation” and measure
“actual output”:
a. What is the measurement process used?
b. Describe that procedure.
c. What is the precision of the system?
d. How was precision determined?
e. What does the gage supplier state about:
   * Accuracy * Precision * Resolution
f. Do we have results of either a:
   * Test-Retest Study or * Gage R&R Study?
Measurement System Analysis & Validation
19. Measure:
3. Establish Data Collection Plan, Validate the Measurement
System, and Collect Data.
Note that our measurement process may also have variation.
a. Gage Variability: problems of precision, of accuracy, or both.
Measurement System Analysis & Validation
20. Measure: 3. Establish Data Collection Plan, Validate
the Measurement System, and Collect Data.
b. Operator Variability: Differences between operators
related to measurement.
c. Other Variability: Many possible sources.
Repeatability: Assesses the effects within ONE unit of your
measurement system, e.g., the variation in the measurements
of ONE device.
Reproducibility: Assesses the effects across the measurement
process, e.g., the variation between different operators.
Resolution: The smallest increment that the measurement device can distinguish.
Measurement System Analysis & Validation
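A simplified numeric illustration of repeatability vs. reproducibility. This is a variance-component sketch with made-up measurements, not a full ANOVA Gage R&R:

```python
from statistics import mean, pvariance

# Hypothetical data: three operators each measure the same part three times.
measurements = {
    "operator_A": [10.1, 10.2, 10.1],
    "operator_B": [10.4, 10.3, 10.4],
    "operator_C": [10.2, 10.2, 10.3],
}

# Repeatability: variation WITHIN one unit of the measurement system
# (here, the average within-operator variance).
repeatability = mean(pvariance(vals) for vals in measurements.values())

# Reproducibility: variation ACROSS the measurement process
# (here, the variance between the operator means).
reproducibility = pvariance([mean(vals) for vals in measurements.values()])

print(f"repeatability   = {repeatability:.5f}")
print(f"reproducibility = {reproducibility:.5f}")
```

In this made-up data reproducibility dominates: operators disagree with each other more than each disagrees with their own repeated readings.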
21. Measure:
3. Establish Data Collection Plan, Validate the
Measurement System, & Collect Data.
GAGE R&R (Repeatability & Reproducibility) STUDY:
a. Operators – at least 3 recommended;
b. Part – the product or process being measured. At
least 10 representative parts per study reflecting the
range of parts possible are recommended with each
operator measuring the same parts.
c. Trial – each time the item is measured. There
should be at least 3 trials per part, per operator.
Measurement System Analysis & Validation
22. Measure: 3. Establish Data Collection Plan, Validate the
Measurement System, & Collect Data.
GAGE R&R (Repeatability & Reproducibility) STUDY:
Source of Variation                           % Contribution
Total Gage Repeatability & Reproducibility    R1 + R2
  Repeatability                               R1
  Reproducibility                             R2
Part-to-Part                                  100% - (R1 + R2)
Total Variation                               100%
Measurement System Analysis & Validation
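With estimated variance components in hand, the % Contribution column above is just each component divided by the total. The numbers below are made up for illustration:

```python
# Hypothetical variance components from a Gage R&R study.
total_variance  = 0.0500
repeatability   = 0.0040  # R1
reproducibility = 0.0010  # R2

gage_rr = repeatability + reproducibility  # R1 + R2
part_to_part = total_variance - gage_rr    # remainder is part-to-part

def pct(component):
    """Component's share of total variation, as a percentage."""
    return 100 * component / total_variance

print(f"Total Gage R&R  {pct(gage_rr):5.1f}%")       # 10.0%
print(f"Part-to-Part    {pct(part_to_part):5.1f}%")  # 90.0%
```

Part-to-part variation should dominate: a large Gage R&R share means the measurement system, not the parts, is driving what you observe.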
23. DEPARTMENT OF STATISTICS
End of Session
FAILURE MODES & EFFECTS ANALYSIS
MEASUREMENT SYSTEMS ANALYSIS
AND VALIDATION