Design For Six Sigma Overview


Intro to the DFSS methodology.

  • Design Failure Mode and Effects Analysis (DFMEA): an analytical technique used by a design-responsible engineer or team as a means to assure, to the extent possible, that potential failure modes and their associated causes/mechanisms have been considered and addressed.
  • A reliability prediction is simply the analysis of parts and components in an effort to predict and calculate the rate at which an item will fail. It is one of the most common forms of reliability analysis for calculating failure rate and MTBF. What is MTBF? There are many forms of the MTBF definition. In general, MTBF (Mean Time Between Failures) is the mean value of the lengths of time between consecutive failures, under stated conditions, for a stated period in the life of a functional unit. A simpler definition for reliability predictions: MTBF is the average time (usually expressed in hours) that a component works without failure. To perform a reliability prediction, you gather information about the components in your system, then use this data in mathematical equations to calculate failure rate or MTBF. The prediction models employed do not contain listings of failure rate values for devices; rather, they include equations for calculating the failure rates of various devices. The complexity and required parameters of these equations vary depending on device type.
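As an illustration of the parts-count style of prediction described in this note, here is a minimal Python sketch. It assumes constant (exponential) failure rates, under which the failure rate of a series system is the sum of its component rates and MTBF is the reciprocal of the system rate. The component names and rate values are invented for illustration, not taken from any handbook.

```python
# Minimal sketch of a parts-count reliability prediction.
# Assumption: constant (exponential) failure rates and a series system,
# so the system failure rate is the sum of component rates and
# MTBF = 1 / lambda_system. All values below are illustrative.

component_failure_rates = {       # failures per 10^6 hours (made-up values)
    "capacitor": 0.5,
    "resistor": 0.1,
    "microcontroller": 2.0,
}

lambda_system = sum(component_failure_rates.values())  # failures per 10^6 h
mtbf_hours = 1e6 / lambda_system                       # mean time between failures

print(f"system failure rate: {lambda_system} per 1e6 h")
print(f"MTBF: {mtbf_hours:.0f} hours")
```

Real prediction models (MIL-HDBK-217 and similar) replace the fixed rates above with equations whose parameters depend on device type, stress, and environment.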
  • This standard representation of the loss function demonstrates a few key attributes of loss. For example, the target value and the bottom of the parabolic function intersect, implying that when parts are produced at the nominal value, little or no loss occurs. The curve also flattens near the target value and steepens away from it: the loss incurred near the nominal is less than the loss incurred far from it. Any departure from the nominal value results in a loss. Loss can be measured per part, and measuring loss encourages a focus on achieving less variation. Once we understand how even a little variation from the nominal results in a loss, the tendency is to keep product and process as close to the nominal value as possible. This is what is so beneficial about the Taguchi loss function: it keeps our focus on the need to continually improve.
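The parabolic loss described above is the standard Taguchi quadratic loss, L(y) = k(y − T)², where T is the nominal (target) value and k a cost coefficient. The sketch below uses made-up target and cost values to show zero loss at the nominal and quadratically growing loss on either side.

```python
# Taguchi quadratic loss: L(y) = k * (y - T)^2.
# T is the nominal (target) value; k scales deviation into cost.
# The target and k below are illustrative assumptions.

def taguchi_loss(y, target, k):
    """Loss (e.g. in dollars) for one part measured at y."""
    return k * (y - target) ** 2

target = 10.0   # nominal dimension, e.g. mm
k = 4.0         # cost coefficient ($ per mm^2 of deviation), assumed

print(taguchi_loss(10.0, target, k))   # on target: 0.0
print(taguchi_loss(10.5, target, k))   # small deviation: 1.0
print(taguchi_loss(11.0, target, k))   # larger deviation: 4.0
```

Note the contrast with a step-function "in spec / out of spec" view: here a part at 10.5 mm still incurs a loss even if it is inside the specification limits.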
  • Traditional life data analysis involves analyzing times-to-failure data (of a product, system or component) obtained under normal operating conditions in order to quantify the life characteristics of the product, system or component. In many situations, and for many reasons, such life data (or times-to-failure data) is very difficult, if not impossible, to obtain. The reasons for this difficulty can include the long lifetimes of today's products, the short time between design and release, and the challenge of testing products that are used continuously under normal conditions. Given this difficulty, and the need to observe failures of products to better understand their failure modes and life characteristics, reliability practitioners have devised methods to force these products to fail more quickly than they would under normal use conditions; in other words, they accelerate their failures. Over the years, the term accelerated life testing has been used to describe all such practices. More specifically, accelerated life testing can be divided into two areas: qualitative accelerated testing and quantitative accelerated life testing. In qualitative accelerated testing, the engineer is mostly interested in identifying failures and failure modes without attempting to make any predictions as to the product's life under normal use conditions. In quantitative accelerated life testing, the engineer is interested in predicting the life of the product (or, more specifically, life characteristics such as MTTF, B(10) life, etc.) at normal use conditions, from data obtained in an accelerated life test.
  • Metrology: applied or industrial metrology concerns the application of measurement science to manufacturing and other processes and their use in society, ensuring the suitability of measurement instruments, their calibration, and the quality control of measurements.
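One common quantitative acceleration model (not named in the deck, but standard for temperature stress) is the Arrhenius relationship: the acceleration factor between use and stress temperatures is AF = exp((Ea/kB)(1/T_use − 1/T_stress)), with temperatures in kelvin. The activation energy and temperatures below are illustrative assumptions.

```python
import math

# Arrhenius temperature-acceleration sketch for quantitative ALT.
# AF = exp((Ea / k_B) * (1/T_use - 1/T_stress)), temperatures in kelvin.
# The activation energy (Ea) and temperatures are assumed values.

K_B = 8.617e-5   # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor between use and stress temperatures (Celsius)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(t_use_c=25, t_stress_c=125, ea_ev=0.7)
# Estimated life at use conditions ~ AF x life observed under stress.
print(f"acceleration factor: {af:.0f}")
```

With these assumed values, hours of stress testing at 125 °C stand in for several hundred times as many hours at 25 °C, which is exactly the leverage quantitative ALT is after.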
  • Repeatability is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. Accuracy is the degree of conformity of a measured or calculated quantity to its actual (true) value. Calibration refers to the process of determining the relation between the output (or response) of a measuring instrument and the value of the input quantity or attribute, against a measurement standard.
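The definitions above can be sketched numerically. In this minimal example (all readings and the reference value are made-up), repeatability is summarized as the standard deviation of repeated readings of one part by one appraiser with one gage, and bias, a basic accuracy measure, is the mean reading minus the calibrated reference value.

```python
import statistics

# Sketch: one appraiser measures the same unit repeatedly with the same
# gage under the same conditions. Repeatability ~ std dev of the readings;
# bias (accuracy) = mean reading - reference value. Data is illustrative.

readings = [10.02, 9.98, 10.01, 10.00, 9.99]   # mm, same part, same gage
reference = 10.00                               # calibrated true value, mm

repeatability = statistics.stdev(readings)      # sample standard deviation
bias = statistics.mean(readings) - reference

print(f"repeatability (std dev): {repeatability:.4f} mm")
print(f"bias: {bias:+.4f} mm")
```

A full measurement system evaluation (gage R&R) would also involve multiple appraisers to separate repeatability from reproducibility; this sketch covers only the single-appraiser part.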
  • The Kaizen definition has been Americanized to mean "continual improvement." A closer rendering of the Japanese meaning of kaizen is "to take apart and put back together in a better way." According to Webster, blitz is short for blitzkrieg: "any sudden overpowering attack." A Kaizen Blitz could therefore be defined as "a sudden overpowering effort to take something apart and put it back together in a better way." What is taken apart is usually a process, system, product, or service. See Eliyahu Goldratt, author of The Goal. A poka-yoke device is any mechanism that either prevents a mistake from being made or makes the mistake obvious at a glance. The ability to find mistakes at a glance is essential.

    1. Design For Six Sigma (Travis Eck)
    2. Selection
    3. Outline
       • Tools for Design: design failure mode and effects analysis, reliability prediction
       • Tools for Design Development: the Taguchi loss function, optimizing reliability
       • Tools for Design Verification: reliability testing, measurement system evaluation, process capability evaluation
       • Tools for Process Improvement
         • Basic tools: flow charts, run charts, control charts, check sheets, histograms, Pareto diagrams, cause and effect diagrams, scatter diagrams
         • Other tools: Kaizen Blitz, poka-yoke, process simulation
       • Engaging the Workforce: skills for team leaders, skills for team members
    4. Tools for Design
       • Design failure mode and effects analysis (DFMEA)
       • Design FMEAs should be used throughout the design process, from the preliminary design to when the product goes into production. Design FMEAs uncover potential failures associated with the product that could cause product malfunctions, shortened product life, and safety hazards, to name a few.
    5. Tools for Design
       • Reliability prediction
       • The ability to perform over time
       • A numerical measurement between 0 and 1
       • Functional failure
       • Reliability failure
    6. Tools for Design Development
       • The Taguchi loss function: "a minimal loss at the nominal value, and an ever-increasing loss with departure either way from the nominal value." (W. Edwards Deming, Out of the Crisis, p. 141)
       • Measuring loss keeps focus
       • Keep product and process as close to the nominal value as possible
    7. Tools for Design Development
       • Optimizing reliability: crucial steps
       • Assess process capabilities to achieve critical design parameters and meet CTQ limits
       • Optimize the design to minimize sensitivity of CTQs to process parameters
       • Design for robust performance and reliability
       • Error proofing
       • Establish statistical tolerances
       • Optimize sigma and cost
       • Commission and start up
    8. Tools for Design Verification
       • Reliability testing: life testing, accelerated life testing, component stress testing
       • Metrology: accuracy, precision, repeatability
    9. Tools for Design Verification
    10. Tools for Design Verification
       • Process capability evaluation
       • Where is the process centered?
       • How much variability exists in the process?
       • Is the performance relative to specifications acceptable?
       • What proportion of output will be expected to meet expectations?
       • What factors contribute to variability?
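The process capability questions on this slide are conventionally answered with the Cp and Cpk indices: Cp compares the specification width to the process spread (six standard deviations), and Cpk additionally penalizes off-center processes. The sketch below uses made-up sample data and assumed specification limits.

```python
import statistics

# Process capability sketch:
#   Cp  = (USL - LSL) / (6 * sigma)            -- spread vs. spec width
#   Cpk = min(USL - mean, mean - LSL) / (3*sigma)  -- also penalizes off-center
# Sample data and specification limits below are illustrative.

data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
lsl, usl = 9.4, 10.6          # assumed lower/upper specification limits

mu = statistics.mean(data)
sigma = statistics.stdev(data)

cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)

print(f"mean={mu:.3f} sigma={sigma:.4f} Cp={cp:.2f} Cpk={cpk:.2f}")
```

Here Cpk is slightly below Cp because the process mean sits a little above the midpoint of the specification, directly answering "where is the process centered?" and "how much variability exists?" in one pass.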
    11. Tools for Process Improvement
    12. Tools for Process Improvement
    13. Basic Tools: Flow and Run Charts
       • Easy to use
       • Clear communication
       • Identify areas of opportunity
       • Familiar in the business world
       • Great for comparing data
       • Good for project identification
    14. Basic Tools: Control Charts and Check Sheets
       • Control charts: identify variation, predict when a process is out of control, anticipate change, tell a story
       • Check sheets: provide clarity of ideas, useful in keeping discussion on target, quickly created, easily reorganized
    15. Basic Tools: Histograms and Pareto Diagrams
       • Histograms: prioritize opportunities, provide a base for projects, recognizable
       • Pareto diagrams: prioritize work; based on the principle that roughly 80% of problems come from 20% of the causes
       [Histogram: "Daily Inspections"; x-axis: Business Days (ten bins, 0.00 to 7.22), y-axis: Counts (0 to 10,000)]
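As a sketch of the 80/20 principle behind Pareto diagrams, the snippet below (all defect categories and counts are made-up) sorts causes by frequency and prints the cumulative percentage each contributes; with these numbers, the top two of five causes account for over 80% of the defects.

```python
# Pareto analysis sketch: sort defect categories by count, then report
# the cumulative share each contributes. Counts are illustrative.

defects = {"scratch": 120, "misalignment": 45, "crack": 20,
           "discoloration": 10, "other": 5}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:15s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
```

The printed cumulative column is exactly what the line overlaid on a Pareto chart shows; a team would focus improvement work on the causes left of the point where it crosses roughly 80%.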
    16. Basic Tools: Cause and Effect Diagrams
       • Also known as fishbone or Ishikawa diagrams; often used together with the 5 whys
       • Root cause analysis
       • Structure thoughts
       • Keep the group on track
       • Direct improvements to the right areas
    17. Group Activity
    18. Basic Tools: Scatter Diagrams
       • Help define the correlation between characteristics
       • Can indicate a cause and effect relationship
    19. Other Tools
       • Kaizen Blitz
       • Poka-yoke: "The causes of defects lie in worker errors, and defects are the results of neglecting those errors. It follows that mistakes will not turn into defects if worker errors are discovered and eliminated beforehand." (Shingo 1986, p. 50)
       • Process simulation
    20. Engaging the Workforce
       • Change the culture to an improvement focus
       • Develop programs that encourage and develop the right skills for team leaders
       • Develop training and communication that provide additional skills to all team members
    21. Questions?