Tqm3 ppt

Total Quality Management unit III ppt by c.coomarasamy, Professor TEC, Trichy.


    1. 1. MG 1401 TOTAL QUALITY MANAGEMENT 3. Statistical Process Control 2010-2011
    2. 2. Statistical Process Control: SPC is an effective system for controlling process parameters by comparing them with standards and taking corrective action, using statistical methods, whenever there is a deviation. SPC covers all uses of statistical techniques for the analysis of data that may be applied in the control of product quality.
    3. 3. Statistical Process Control, statistical techniques: 1. The seven tools of quality used by quality circles (SEVEN QUALITY CONTROL TOOLS, or OLD SEVEN TOOLS); 2. Control charts: control charts for variables (X-bar and R charts) and process capability, and control charts for attributes: defectives (p and np charts) and defects (c and u charts); 3. Concept of Six Sigma; 4. Management tools: the new seven management tools.
    4. 4. The Total Statistical Process Control (TSPC) System
    5. 5. The seven tools of quality, the (old) Q-7 tools (SEVEN QUALITY CONTROL TOOLS, or OLD SEVEN TOOLS): 1. Check lists / check sheets; 2. (Frequency) histograms or bar graphs; 3. Process flow diagrams / charts; 4. Cause and effect (fishbone, Ishikawa) diagram; 5. Pareto diagrams; 6. Scatter diagrams / plots; 7. Control charts.
    6. 6. The seven tools of quality (Q-7 tools):
       | S.No | Tool | Question it answers | Problem-solving step |
       | 1 | Check lists / check sheets | How often is it done? | For finding faults |
       | 2 | (Frequency) histograms or bar graphs | What do variations look like? | For identifying problems |
       | 3 | Process flow diagrams / charts | What is done? | For understanding the "mess" |
       | 4 | Cause and effect (fishbone, Ishikawa) diagram | What causes the problem? | For generating ideas |
       | 5 | Pareto diagrams | Which are the big problems? | For identifying problems |
       | 6 | Scatter diagrams / plots | What are the relationships between factors? | For developing solutions |
       | 7 | Control charts | Which variations to control, and how? | For implementation |
    7. 7. Check sheets / tally sheets (data collection sheets): the intent and purpose of collecting data is to control the production process, to see the relationship between cause and effect, or to continuously improve the processes that produce any type of defect or nonconforming product.
    8. 8. Check sheets / tally sheets (data collection sheets): a check sheet collects data and compiles it in such a way that it can be easily used, understood and analyzed. As it is being completed, it actually becomes a graphical representation of the data being collected, so no computer software or spreadsheet is needed to record the data; it can be done simply with pencil and paper. It is a data recording form designed so that results can be readily interpreted from the form itself, and it needs to be designed for the specific data it is to gather.
    9. 9. Check sheets/tally sheet (Data collection sheet)- used for the collection of quantitative or qualitative repetitive data.- adaptable to different data gathering situations.- minimal interpretation of results required.- easy and quick to use.- no control for various forms of bias – exclusion, interaction, perception, operational, non-response, estimation.
    10. 10. CHECK SHEET (or) DEFECT CONCENTRATION DIAGRAM. Description: a check sheet is a structured, prepared form for collecting and analyzing data; it is a generic tool that can be adapted for a wide variety of purposes. When to use: when data can be observed and collected repeatedly by the same person or at the same location; when collecting data on the frequency or patterns of events, problems, defects, defect location, defect causes, etc.; when collecting data from a production process.
    11. 11. Check sheet
    12. 12. Check sheet (continuous data use): example production check sheet for alternator pulley bolt torque (specification 2.2 +/- 0.5), 185 inspections, lot number 1631. Tally marks recorded against dimension classes from 1.5 to 3.2 form a histogram directly on the sheet, with the specification limits (LSL and USL) marked on the same scale.
    13. 13. Histogram or bar graph: the word histogram is derived from Greek: histos, anything set upright (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram), and gramma, drawing, record, writing. A generalization of the histogram is kernel smoothing, which constructs a very smooth probability density function from the supplied data. A histogram is a graphic summary of variation in a set of data; it enables us to see patterns that are difficult to see in a simple table of numbers, and it can be analyzed to draw conclusions about the data set. A continuous variable is clustered into categories and the value of each cluster is plotted to give a series of bars; without some form of graphic this kind of problem can be difficult to analyze, recognize or identify.
    14. 14. Histogram: in statistics, a histogram is a graphical display of tabulated frequencies. It shows what proportion of cases fall into each of several categories. A histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height; this is a crucial distinction when the categories are not of uniform width. The categories are usually specified as non-overlapping intervals of some variable, and the categories (bars) must be adjacent.
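To make the grouping step concrete, the short sketch below bins a small set of measurements into adjacent classes and prints a text histogram; the data values and class edges are hypothetical, chosen only for illustration.

```python
# Minimal sketch: grouping measurements into classes and printing a text histogram.
# The data and the class edges are made up for illustration.
from collections import Counter

data = [2.1, 2.3, 2.2, 2.4, 2.2, 2.1, 2.5, 2.3, 2.2, 2.6, 2.0, 2.3]

# Adjacent, non-overlapping class intervals.
edges = [2.0, 2.2, 2.4, 2.6, 2.8]

counts = Counter()
for x in data:
    for lo, hi in zip(edges, edges[1:]):
        if lo <= x < hi:
            counts[(lo, hi)] += 1
            break

for lo, hi in zip(edges, edges[1:]):
    n = counts[(lo, hi)]
    print(f"[{lo:.1f}, {hi:.1f}) | {'X' * n} {n}")
```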
    15. 15. Histogram or bar graph (example figure).
    16. 16. Histogram or Bar Graph
    17. 17. HistogramGrouping a set of measurements into a Histogram
    18. 18. Histogram shapes (example figures): low with gaps; isolated-peaked; high with few bars; cog-toothed (or comb); skewed (positively or negatively); plateau; exponential; edge-peaked; missing bars; dual-peaked (bimodal); truncated.
    19. 19. Bar chart: using the bar chart in problem-solving (turning numbers into bars).
    20. 20. Pie chart and band chart (example figures).
    21. 21. Flow charts: pictures, symbols or text coupled with lines, with arrows on the lines showing the direction of flow. Flow charts enable modelling of processes, problems/opportunities, decision points, etc., and develop a common understanding of a process among those involved. There is no particular standardization of symbology, so communication to a different audience may require considerable time and explanation.
    22. 22. Basic flowchart elements; decisions in flowcharts (example figures).
    23. 23. Continuing flowcharts across pages; the delay symbol; sub-processes (example figures).
    24. 24. . Example Flowcharts
    25. 25. Deployment Flowchart
    26. 26. Flow chart
    27. 27. Flow chart
    28. 28. Cause & effect diagram (cause and effect, fishbone, Ishikawa diagram): it is the brainchild of Kaoru Ishikawa, who pioneered quality management processes in the Kawasaki shipyards and in the process became one of the founding fathers of modern management. It is used to explore all the potential or real causes (or inputs) that result in a single effect (or output). [Photo caption: Dr. Kaoru Ishikawa (1915–1989), disciple of Juran & Feigenbaum; TQC in Japan, SPC, cause & effect diagram, QC.]
    29. 29. Cause & effect diagram (also called Ishikawa or fishbone chart). Description: identifies many possible causes for an effect or problem; can be used to structure a brainstorming session; immediately sorts ideas into useful categories. When to use: when identifying possible causes for a problem, especially when a team's thinking tends to fall into ruts. [Photo caption: Dr. Kaoru Ishikawa (1915–1989), disciple of Juran & Feigenbaum; TQC in Japan, SPC, cause & effect diagram, QC.]
    30. 30. Cause & effect diagram: causes are arranged according to their level of importance or detail, resulting in a depiction of relationships and a hierarchy of events. This can help you search for root causes, identify areas where there may be problems, and compare the relative importance of different causes. Causes in a cause & effect diagram are frequently arranged into four major categories. While these categories can be anything, you will often see: manpower, methods, materials, and machinery (recommended for manufacturing); or equipment, policies, procedures, and people (recommended for administration and service).
    31. 31. Cause & effect diagram: it is a method for analyzing process dispersion; its purpose is to relate causes and effects. There are three basic types: 1. dispersion analysis, 2. process classification and 3. cause enumeration. Effect = problem to be resolved, opportunity to be grasped, result to be achieved. It is excellent for capturing team brainstorming output and for filling in from the wide picture; it helps organize and relate factors, providing a sequential view; it deals with time direction but not quantity; it can become very complex, and it can be difficult to identify or demonstrate interrelationships.
    32. 32. Cause & Effect diagram
    33. 33. Cause & Effect diagram
    34. 34. Cause & Effect diagram
    35. 35. Cause & Effect diagram
    36. 36. Cause & Effect diagramThis fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic iron contamination.The team used the six generic headings to prompt ideas.Layers of branches show thorough thinking about the causes of the problem.
    37. 37. Cause & Effect diagram
    38. 38. Cause & Effect diagram
    39. 39. Cause & Effect diagram (Standard )
    40. 40. Pareto diagrams: Pareto diagrams are named after Vilfredo Pareto (1848-1923), an Italian sociologist and economist, who invented this method of information presentation toward the end of the 19th century. The chart is similar to the histogram or bar chart, except that the bars are arranged in decreasing order from left to right along the abscissa. The fundamental idea behind the use of Pareto diagrams for quality improvement is that the first few contributing causes to a problem (as presented on the diagram) usually account for the majority of the result. Thus, targeting these "major causes" for elimination results in the most cost-effective improvement scheme.
    41. 41. Pareto diagrams, the Pareto principle: the Pareto principle suggests that most effects come from relatively few causes. In quantitative terms: 80% of the problems come from 20% of the causes (machines, raw materials, operators etc.); 80% of the wealth is owned by 20% of the people, etc. Therefore effort aimed at the right 20% can solve 80% of the problems. Double (back-to-back) Pareto charts can be used to compare before-and-after situations. General use: to decide where to apply initial effort for maximum effect.
    42. 42. Pareto diagrams (example figure). Vilfredo Pareto (1848-1923).
    43. 43. Pareto chart (or) Pareto diagram (or) Pareto analysis: a Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or money), and the bars are arranged with the longest on the left and the shortest on the right. The aim is to identify the 'VITAL FEW FROM THE TRIVIAL MANY' and to concentrate on the vital few for improvement. A Pareto diagram indicates which problem we should solve first in eliminating defects and improving the operation. The Pareto 80/20 rule: 80% of the problems are produced by 20% of the causes.
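As a concrete illustration of the idea, the sketch below sorts hypothetical defect categories by count and accumulates percentages, showing how the first one or two categories account for most of the defects; the category names and counts are invented for the example.

```python
# Minimal sketch of a Pareto analysis: sort causes by frequency and
# accumulate percentages to find the "vital few". Data are hypothetical.
defects = {"scratches": 62, "dents": 23, "misalignment": 9, "wrong colour": 4, "other": 2}

total = sum(defects.values())
cumulative = 0.0
print(f"{'cause':<15}{'count':>7}{'cum %':>8}")
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:<15}{count:>7}{cumulative:>8.1f}")
```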
    44. 44. Vilfredo Pareto (1848-1923) (example figure).
    45. 45. Pareto diagrams
    46. 46. The Pareto chart: finding the right Pareto chart and prioritizing the action; convex Pareto vs. concave or spiky Pareto (the latter clearly allows you to prioritize the action) (example figures).
    47. 47. Before-and-after Pareto charts; the sub-Pareto chart; the Pareto curve (example figures).
    48. 48. Scatter diagrams are used to study possible relationships between two variables. Although these diagrams cannot prove that one variable causes the other, they do indicate the existence of a relationship, as well as the strength of that relationship. A scatter diagram is composed of a horizontal axis containing the measured values of one variable and a vertical axis representing the measurements of the other variable.
    49. 49. Scatter diagram (or) scatter plot (or) X-Y graph: the purpose of the scatter diagram is to display what happens to one variable when another variable is changed; it is used to test a theory that the two variables are related. The type of relationship that exists is indicated by the slope of the diagram. The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a relationship between them. If the variables are correlated, the points will fall along a line or curve; the better the correlation, the tighter the points will hug the line.
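The following sketch quantifies how tightly paired data hug a line by computing the Pearson correlation coefficient for a small, hypothetical set of (x, y) pairs; the data are invented for illustration.

```python
# Minimal sketch: Pearson correlation for paired data (hypothetical values).
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sx = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
sy = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = cov / (sx * sy)
print(f"correlation r = {r:.3f}")  # close to +1 -> strong positive relationship
```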
    50. 50. Scatter diagrams
    51. 51. Scatter diagrams
    52. 52. Scatter diagram types (example figures): positive correlation; negative correlation; degrees of correlation (none, low, high, perfect); curved; part-linear.
    53. 53. Control chart: in statistical process control, the control chart, also known as the Shewhart chart or process-behaviour chart, is a tool used to determine whether a manufacturing or business process is in a state of statistical control or not. If the chart indicates that the process is currently under control, then it can be used with confidence to predict the future performance of the process. [Photo caption: Dr. Walter A. Shewhart (1891–1967), TQC & PDSA.]
    54. 54. Control chart: if the chart indicates that the process being monitored is not in control, the pattern it reveals can help determine the source of variation to be eliminated to bring the process back into control. A control chart is a specific kind of run chart that allows significant change to be differentiated from the natural variability of the process. This is key to effective process control and improvement. [Photo caption: Dr. Walter A. Shewhart (1891–1967), TQC & PDSA.]
    55. 55. Control chart: control charts enable the control of the distribution of variation rather than attempting to control each individual variation. Upper and lower control and tolerance limits are calculated for a process, and sampled measures are regularly plotted about a central line between the two sets of limits. The plotted line corresponds to the stability/trend of the process. Action can be taken based on trend rather than on individual variation. This prevents over-correction/compensation for random variation, which would lead to many rejects. [Photo caption: Dr. Walter A. Shewhart (1891–1967), TQC & PDSA.]
    56. 56. Control charts (example figures): mean and control limits; how control limits catch shifts; additional limit lines.
    57. 57. Control chart
    58. 58. Control chart
    59. 59. Stratification analysis: stratification analysis determines the extent of the problem for relevant factors. Is the problem the same for all shifts? Do all machines, spindles, and fixtures have the same problem? Do customers in various age groups or parts of the country have similar problems? The important stratification factors will vary with each problem, but most problems will have several factors. Check sheets can be used to collect data. Essentially this analysis seeks to develop a Pareto diagram for the important factors.
    60. 60. Stratification Analysis• The hope is that the extent of the problem will not be the same across all factors.• The differences can then lead to identifying root cause.• When the 5W2H and Stratification Analysis are performed, it is important to consider a number of indicators.• For example, a customer problem identified by warranty claims may also be reflected by various in-plant indicators.• Sometimes, customer surveys may be able to define the problem more clearly.• In some cases analysis of the problem can be expedited by correlating different problem indicators to identify the problem clearly
    61. 61. Stratification (or) Flowchart (or) Run chart: stratification is a technique used in combination with other data analysis tools. When data from a variety of sources or categories have been lumped together, the meaning of the data can be impossible to see. When to use: before collecting data; when data come from several sources or conditions, such as shifts, days of the week, suppliers or population groups; when data analysis may require separating different sources or conditions.
    62. 62. Benefit from stratification.• Always consider before collecting data whether stratification might be needed during analysis. Plan to collect stratification information. After the data are collected it might be too late.• On your graph or chart, include a legend that identifies the marks or colors used.
    63. 63. Data analysis. What is data analysis? Data analysis is statistics + visualization + know-how. Many of the methods for data analysis are based on multivariate statistics, which poses an additional problem to the beginner: multivariate statistics cannot be understood without a profound knowledge of simple statistics. Furthermore, several fields in science and engineering have developed their own nomenclature, assigning different names to the same concepts.
    64. 64. Data analysis: thus one has to gather considerable knowledge and experience in order to perform the analysis of data efficiently. Possible applications of statistical methods are in the fields of medicine, engineering, quality inspection, election polling, analytical chemistry, physics, and gambling. Statistics and statistical methodology, as the basis of data analysis, are concerned with two basic types of problems: (1) summarizing, describing, and exploring the data; (2) using sampled data to infer the nature of the process which produced the data. The first type of problem is covered by descriptive statistics, the second by inferential statistics. Another important aspect of data analysis is the data itself, which can be of two different types: qualitative data and quantitative data.
    65. 65. Data analysis: qualitative data does not contain quantitative information; qualitative data can be classified into categories. In contrast, quantitative data represent an amount of something. A third distinction can be made according to the number of variables involved in the data analysis. If only one variable is used, the statistical procedures are summarized as univariate statistics; more than one variable results in multivariate statistics. A special case of multivariate statistics with only two variables is sometimes called bivariate statistics.
    66. 66. Statistical Process Control (SPC): measures the performance of a process; uses mathematics (i.e., statistics); involves collecting, organizing, and interpreting data. Objective: regulate product quality. Used to control the process as products are produced and to inspect samples of finished products.
    67. 67. Statistical process control. What is a process? Inputs -> PROCESS -> Outputs. A process can be described as a transformation of a set of inputs into desired outputs.
    68. 68. WHY STATISTICS? THE ROLE OF STATISTICS: statistics is the art of collecting, classifying, presenting, interpreting and analyzing numerical data, as well as making conclusions about the system from which the data was obtained.
    69. 69. Descriptive statistics: descriptive statistics is the branch of statistics with which most people are familiar. It characterizes and summarizes the most prominent features of a given set of data (means, medians, standard deviations, percentiles, graphs, tables and charts).
    70. 70. Inferential statistics: inferential statistics is the branch of statistics that deals with drawing conclusions about a population based on information obtained from a sample drawn from that population.
    71. 71. Statistics, MEASURES OF CENTRAL TENDENCY: usually, the first topic in statistics is descriptive statistics. The mean, median and mode are used to describe statistical data; variance and standard deviation help to understand how the data is spread out. Central tendency is a typical or representative score. The three measures of central tendency are the mode, median, and mean.
    72. 72. The term "measures of central tendency" refers to finding the mean, median and mode. Mean: average; the sum of a set of data divided by the number of data values (do not round your answer unless directed to do so). Median: the middle value, or the mean of the middle two values, when the data is arranged in numerical order (think of a "median" being in the middle of a highway). Mode: the value (number) that appears the most. It is possible to have more than one mode, and it is possible to have no mode; if there is no mode, write "no mode", do not write zero (0).
    73. 73. Mode: the mode is the data value that occurs at a greater frequency than the others. Data: 1, 2, 3, 3, 3, 4, 4, 5; mode = 3. The mode, symbolized by Mo, is the most frequently occurring score value. If the scores for a given sample distribution are 32 32 35 36 37 38 38 39 39 39 40 40 42 45, then the mode would be 39, because a score of 39 occurs 3 times, more than any other score. The mode may be seen on a frequency distribution as the score value which corresponds to the highest point. For example, the following is a frequency polygon of the data presented above:
    74. 74. Statistics - Mode: frequency polygon of the scores from 32 to 45, with its highest point at 39 (example figure).
    75. 75. Statistics - Mode: a distribution may have more than one mode if the two most frequently occurring scores occur the same number of times. For example, if the earlier score distribution were modified as follows: 32 32 32 36 37 38 38 39 39 39 40 40 42 45, then there would be two modes, 32 and 39. Such distributions are called bimodal. The frequency polygon of a bimodal distribution is presented below.
    76. 76. Statistics - Mode: frequency polygon of the bimodal distribution, with peaks at 32 and 39 (example figure).
    77. 77. Statistics - ModeIn an extreme case there may be no unique mode, as in the case of a rectangular distribution.The mode is not sensitive to extreme scores.Suppose the original distribution was modified by changing the last number, 45, to 55 as follows: 32  32  35  36  37  38  38  39 39 39  40  40  42  55 The mode would still be 39.In any case, the mode is a quick and dirty measure of central tendency.Quick, because it is easily and quickly computed.Dirty because it is not very useful; that is, - it does not give much information about the distribution.
    78. 78. Statistics - Mode
    79. 79. Statistics - Median: the median is the exact middle value of a set of data values that have been sorted from the lowest value to the highest. If the number of data values is even, then the median is the average of the two middle values. Examples: Data: 1, 2, 3, 4, 5; median: 3. Data: 1, 2, 3, 4, 5, 6; median: 3.5.
    80. 80. Statistics - Median: the median, symbolized by Md, is the score value which cuts the distribution in half, such that half the scores fall above the median and half fall below it. Computation of the median is relatively straightforward. The first step is to rank order the scores from lowest to highest. The procedure branches at the next step: one way if there is an odd number of scores in the sample distribution, another if there is an even number of scores. If there is an odd number of scores, as in the distribution 32 32 35 36 36 37 38 38 39 39 39 40 40 45 46, then the median is simply the middle number. In the case above the median would be the number 38, because there are 15 scores all together, with 7 scores smaller and 7 larger.
    81. 81. Statistics - Median: if there is an even number of scores, as in the distribution 32 35 36 36 37 38 38 39 39 39 40 40 42 45, then the median is the midpoint between the two middle scores, in this case the value 38.5. It was found by adding the two middle scores together and dividing by two: (38 + 39)/2 = 38.5. If the two middle scores are the same value, then the median is that value. In the above system, no account is paid to whether there is a duplication of scores around the median. In some systems a slight correction is performed to correct for grouped data, but since the correction is slight and the data is generally not grouped for computation in calculators or computers, it is not presented here.
    82. 82. Statistics - Median: the median, like the mode, is not affected by extreme scores, as the following distribution of scores indicates: 32 35 36 36 37 38 38 39 39 39 40 40 42 55. The median is still the value of 38.5. The median is not as quick and dirty as the mode, but generally it is not the preferred measure of central tendency.
    83. 83. Statistics – The Mean: the mean, symbolized by X-bar, is the sum of the scores divided by the number of scores. The following formula both defines and describes the procedure for finding the mean: X-bar = ΣX / N, where ΣX is the sum of the scores and N is the number of scores.
    84. 84. Statistics – The Mean: the mean most frequently used is the arithmetic mean, which is the same as the average, although the geometric mean is also used at times. It is the arithmetic mean that is referred to when the word mean is used by itself; expected value is another way of saying the mean. mean = sum of data values / number of data values, e.g., mean μ = (1 + 2 + 3) / 3 = 6 / 3 = 2. The lower-case Greek letter μ is used to represent the population mean; if the mean is from a sample of data, then x-bar is used to represent the sample mean. Variables and sigma notation are used to write the general form of the mean.
    85. 85. Statistics – The Mean: application of this formula to the data 32 35 36 37 38 38 39 39 39 40 40 42 45 yields a mean of 500/13, approximately 38.5. Use of means as a way of describing a set of scores is fairly common; batting average, bowling average, grade point average, and average points scored per game are all means. Note the use of the word "average" in all of the above terms. In most cases when the term "average" is used, it refers to the mean, although not necessarily; when a politician uses the term "average income", for example, he or she may be referring to the mean, median, or mode.
    86. 86. Statistics – The Mean: the mean is sensitive to extreme scores. For example, the mean of the data 32 35 36 37 38 38 39 39 39 40 40 42 55 is 510/13, approximately 39.2, somewhat larger than in the preceding example. In most cases the mean is the preferred measure of central tendency, both as a description of the data and as an estimate of the parameter. In order for the mean to be meaningful, however, acceptance of the interval property of measurement is necessary; when this property is obviously violated, it is inappropriate and misleading to compute a mean.
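The sketch below computes the mean, median, and mode for the two score distributions used above, reproducing the mode of 39 and showing how the mean is pulled upward when the extreme score 55 replaces 45.

```python
# Minimal sketch: mean, median and mode for the score distributions above.
from statistics import mean, median, mode

scores = [32, 35, 36, 37, 38, 38, 39, 39, 39, 40, 40, 42, 45]
with_extreme = scores[:-1] + [55]  # replace 45 with the extreme score 55

for name, data in (("original", scores), ("with extreme score", with_extreme)):
    print(f"{name}: mean={mean(data):.2f} median={median(data)} mode={mode(data)}")
```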
    87. 87. Kiwi Bird Problem: as is commonly known, KIWI-birds are native to New Zealand. They are born exactly one foot tall and grow in one-foot intervals; that is, one moment they are one foot tall and the next they are two feet tall. They are also very rare. An investigator goes to New Zealand and finds four birds. The mean of the heights of the four birds is 4, the median is 3, and the mode is 2. What are the heights of the four birds? Hint: examine the constraints of the mode first, the median second, and the mean last.
    88. 88. Statistics, skewed distributions and measures of central tendency: skewness refers to the asymmetry of the distribution, such that a symmetrical distribution exhibits no skewness. In a symmetrical distribution the mean, median, and mode all fall at the same point, as in the following distribution.
    89. 89. Statistics: an exception to this is the case of a bimodal symmetrical distribution. In this case the mean and the median fall at the same point, while the two modes correspond to the two highest points of the distribution. An example follows:
    90. 90. Statistics: a positively skewed distribution is asymmetrical and points in the positive direction. If a test was very difficult and almost everyone in the class did very poorly on it, the resulting distribution would most likely be positively skewed. In the case of a positively skewed distribution, the mode is smaller than the median, which is smaller than the mean. This relationship exists because the mode is the point on the x-axis corresponding to the highest point, that is, the score with the greatest frequency. The median is the point on the x-axis that cuts the distribution in half, such that 50% of the area falls on each side.
    91. 91. Statistics: the mean is the balance point of the distribution. Because points further away from the balance point change the center of balance, the mean is pulled in the direction the distribution is skewed. For example, if the distribution is positively skewed, the mean would be pulled in the direction of the skewness, that is, toward larger numbers. One way to remember the order of the mean, median, and mode in a skewed distribution is to remember that the mean is pulled in the direction of the extreme scores; in a positively skewed distribution, the extreme scores are larger, thus the mean is larger than the median.
    92. 92. Statistics
    93. 93. Statistics: a negatively skewed distribution is asymmetrical and points in the negative direction, such as would result with a very easy test. On an easy test, almost all students would perform well and only a few would do poorly. The order of the measures of central tendency would be the opposite of the positively skewed distribution, with the mean being smaller than the median, which is smaller than the mode.
    94. 94. Statistics, MEASURES OF VARIABILITY: variability refers to the spread or dispersion of scores. A distribution of scores is said to be highly variable if the scores differ widely from one another. Three statistics will be discussed which measure variability: the range, the variance, and the standard deviation; the latter two are very closely related. Range: the range is the highest data value minus the lowest data value. Data: 1, 2, 3, 4, 5, 6, 7; highest data value: 7; lowest data value: 1; range: 7 - 1 = 6. It is a quick and dirty measure of variability, although when a test is given back to students they very often wish to know the range of scores. Because the range is greatly affected by extreme scores, it may give a distorted picture of the scores.
    95. 95. Statistics: the following two distributions have the same range, 13, yet appear to differ greatly in the amount of variability. Distribution 1: 32 35 36 36 37 38 40 42 42 43 43 45. Distribution 2: 32 32 33 33 33 34 34 34 34 34 35 45. For this reason, among others, the range is not the most important measure of variability. Variance: variance is used to measure how far the data are away from the mean. The distance of a data point from the mean is a deviation. The deviations are added together to get a value representing all the deviations together; however, since some deviations can be negative, the total could be zero. To account for this, the deviations are squared and then added together; when divided by the number of deviations, the result is the variance.
    96. 96. Statistics, standard deviation: the standard deviation is just the square root of the variance. A statistic is an algebraic expression combining scores into a single number. Statistics serve two functions: they estimate parameters in population models and they describe the data (i.e., population variance and standard deviation; sample variance and standard deviation). The variance, symbolized by s², is a measure of variability; the standard deviation, symbolized by s, is the positive square root of the variance.
    97. 97. Statistics, variance formula: s² = Σ(X - X-bar)² / (N - 1). Note that the variance would be the average squared deviation around the mean if the expression were divided by N rather than N - 1. It is divided by N - 1, called the degrees of freedom (df), for theoretical reasons: if the mean is known, as it must be to compute the numerator of the expression, then only N - 1 scores are free to vary. That is, if the mean and N - 1 scores are known, then it is possible to figure out the Nth score. One needs only recall the KIWI-bird problem to convince oneself that this is in fact true.
    98. 98. Statistics: the formula for the variance presented above is a definitional formula; it defines what the variance means. The variance may be computed from this formula, but in practice this is rarely done. The computation is performed in a number of steps, presented below. Steps: 1. Find the mean of the scores. 2. Subtract the mean from every score. 3. Square the results of step 2. 4. Sum the results of step 3. 5. Divide the result of step 4 by N - 1. 6. Take the square root of step 5. The result at step 5 is the sample variance; at step 6, the sample standard deviation.
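A short sketch of the step-by-step procedure just listed, applied to one of the score distributions used earlier; it follows the N - 1 (degrees of freedom) form of the sample variance.

```python
# Minimal sketch of the variance steps above (sample form, dividing by N - 1).
import math

scores = [32, 35, 36, 37, 38, 38, 39, 39, 39, 40, 40, 42, 45]

n = len(scores)
mean = sum(scores) / n                         # step 1
deviations = [x - mean for x in scores]        # step 2
squared = [d ** 2 for d in deviations]         # step 3
ss = sum(squared)                              # step 4 (sum of squares)
variance = ss / (n - 1)                        # step 5
std_dev = math.sqrt(variance)                  # step 6

print(f"mean={mean:.2f} variance={variance:.2f} standard deviation={std_dev:.2f}")
```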
    99. 99. Measures of dispersion, quartiles: if we divide a cumulative frequency curve into quarters, the value at the lower quarter is referred to as the lower quartile, the value at the middle gives the median, and the value at the upper quarter is the upper quartile. A set of numbers may be as follows: 8, 14, 15, 16, 17, 18, 19, 50. The mean of these numbers is 19.625; however, the extremes in this set (8 and 50) distort the range. The interquartile range is a method of measuring the spread of the numbers by finding the middle 50% of the values; it is useful since it ignores the extreme values. For a cumulative frequency curve, the lower quartile is the (n+1)/4 th value (n is the cumulative frequency, i.e., 157 in this example) and the upper quartile is the 3(n+1)/4 th value; the difference between these two is the interquartile range (IQR). In the example, the upper quartile is the 118.5th value and the lower quartile is the 39.5th value. If we draw a cumulative frequency curve, we see that the lower quartile is about 17 and the upper quartile is about 37; therefore the IQR is 20 (bear in mind that this is a rough sketch: if you plot the values on graph paper you will get a more accurate value).
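To make the quartile positions concrete, the sketch below computes the lower quartile, median, upper quartile, and IQR for the small data set mentioned above, using the simple (n+1)/4 position rule; interpolation conventions differ between textbooks, so treat this as one reasonable choice rather than the only one.

```python
# Minimal sketch: quartiles and interquartile range using the (n + 1)/4 position rule.
def quantile(sorted_data, position):
    """Value at a 1-based (possibly fractional) position, with linear interpolation."""
    lower = int(position) - 1                      # 0-based index below the position
    fraction = position - int(position)
    if lower + 1 >= len(sorted_data):
        return sorted_data[-1]
    return sorted_data[lower] + fraction * (sorted_data[lower + 1] - sorted_data[lower])

data = sorted([8, 14, 15, 16, 17, 18, 19, 50])
n = len(data)

q1 = quantile(data, (n + 1) / 4)
q2 = quantile(data, (n + 1) / 2)
q3 = quantile(data, 3 * (n + 1) / 4)

print(f"Q1={q1} median={q2} Q3={q3} IQR={q3 - q1}")
```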
    100. 100. Measures of dispersion measure how spread out a set of data is. Variance and standard deviation: the formulae are given below (m means the mean of the data). Variance = s² = Σ(xr - m)² / n; the standard deviation, s, is the square root of the variance. What the formula means: (1) xr - m means take each value in turn and subtract the mean from it; (2) (xr - m)² means square each of the results obtained from step (1), to get rid of any minus signs; (3) Σ(xr - m)² means add up all of the results obtained from step (2); (4) for the variance, divide step (3) by n, which is the number of values; (5) for the standard deviation, take the square root of the answer to step (4). Example: find the variance and standard deviation of 1, 3, 5, 5, 6, 7, 9, 10. The mean m = 46/8 = 5.75.
        | x | xr - m | (xr - m)² |
        | 1 | -4.75 | 22.563 |
        | 3 | -2.75 | 7.563 |
        | 5 | -0.75 | 0.563 |
        | 5 | -0.75 | 0.563 |
        | 6 | 0.25 | 0.063 |
        | 7 | 1.25 | 1.563 |
        | 9 | 3.25 | 10.563 |
        | 10 | 4.25 | 18.063 |
        Σx = 46 (n = 8); Σ(xr - m)² = 61.504; variance = 61.504/8 = 7.69 (3 s.f.); standard deviation = √7.69 = 2.77 (3 s.f.).
    101. 101. Grouped data: there are many ways of writing the formula for the standard deviation; the one above is for a population of numbers. When the data are grouped, variance = Σfx²/Σf - (Σfx/Σf)². Example: the table shows marks (out of 10) obtained by 20 people in a test. In such questions, it is often easiest to set your working out in a table:
        | Mark (x) | Frequency (f) | fx | fx² |
        | 1 | 0 | 0 | 0 |
        | 2 | 1 | 2 | 4 |
        | 3 | 1 | 3 | 9 |
        | 4 | 3 | 12 | 48 |
        | 5 | 2 | 10 | 50 |
        | 6 | 5 | 30 | 180 |
        | 7 | 5 | 35 | 245 |
        | 8 | 2 | 16 | 128 |
        | 9 | 0 | 0 | 0 |
        | 10 | 1 | 10 | 100 |
        Σf = 20, Σfx = 118, Σfx² = 764. Variance = 764/20 - (118/20)² = 38.2 - 34.81 = 3.39.
    102. 102. Population vs. sample (certainty vs. uncertainty): a sample is just a subset of all possible values in the population. Since the sample does not contain all the possible values, there is some uncertainty about the population; hence any statistics, such as the mean and standard deviation, are just estimates of the true population parameters.
    103. 103. Population and sample. A population is the entire (complete) collection of all the measurements of an observed quality characteristic; its variation pattern is not known. A sample is a collection of measurements selected from some larger source or population, i.e., a part of the population. Population: smooth curve; the prime symbol (') is used to identify parameters. Parameters: mean (μ), population standard deviation (σ). A population may have a finite number of items, e.g., the production of shafts in a day; it is impossible to measure all of the population, so conclusions about the population are derived from the mean and standard deviation of a sample. Types: finite population (finite number of items); infinite population (infinite number); existent population (concrete individuals); hypothetical population (possible outcomes, e.g., the population of heads and tails obtained by tossing a coin an infinite number of times).
    104. 104. Population and sample. Sample: the statistics are the average (x-bar) and the sample standard deviation (s); the sample is summarized by a histogram. To analyze and draw conclusions about the universe, a sample is selected at random to represent the population; the small section selected is the sample, and the process of such selection is sampling.
    105. 105. Population and sample (e.g.,)
    106. 106. Normal curve: the normal curve is the most important frequency curve. It is also known as the Gaussian curve and the probability curve; it is symmetrical, unimodal, and bell-shaped, with the mean, median, and mode having the same value. The normal distribution is fully defined by the population mean and the population standard deviation. Area under the normal distribution curve: ±0.6745σ about μ covers 50.00%, ±1σ covers 68.26%, ±2σ covers 95.45%, and ±3σ covers 99.73%.
    107. 107. Normal curve: the mean (μ) and standard deviation (σ) are the population parameters; the mean (x-bar) and standard deviation (s) are for the sample drawn from the population. For practical usage it is necessary to convert from mean values and standard deviations other than zero and one respectively. This procedure, called normalizing, involves substituting z = (x - μ) / σ; the values read from the table then represent the area under the normal curve from -∞ to z. (Figure: normal curves with different standard deviations, σ = 1.5, 3.0 and 4.5, but identical means.)
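The sketch below applies the normalizing substitution z = (x - μ)/σ and uses the standard normal CDF (built from the error function in Python's math module) to reproduce the areas quoted above, e.g., about 99.73% within ±3σ; the example mean and standard deviation are arbitrary.

```python
# Minimal sketch: normalize x to z and find the area under the normal curve.
import math

def normal_cdf(z):
    """Area under the standard normal curve from -infinity to z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 20.0, 3.0        # arbitrary population parameters for illustration

for k in (1, 2, 3):
    lower_z = (mu - k * sigma - mu) / sigma   # normalize the lower limit
    upper_z = (mu + k * sigma - mu) / sigma   # normalize the upper limit
    area = normal_cdf(upper_z) - normal_cdf(lower_z)
    print(f"within +/-{k} sigma: {area * 100:.2f}%")
```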
    108. 108. Normal curve
    109. 109. Variance Shown in a Probability Distribution
    110. 110. Control chart (or) statistical process control, VARIATIONS: different types of control charts can be used, depending upon the type of data. The two broadest groupings are for variable data and attribute data. Variable data are measured on a continuous scale; for example, time, weight, distance or temperature can be measured in fractions or decimals. The possibility of measuring to greater precision defines variable data.
    111. 111. Attribute data are counted and cannot have fractions or decimals.Attribute data arise when you are determining only the presence or absence of something: success or failure, accept or reject, correct or not correct.For example, a report can have four errors or five errors, but it cannot have four and a half errors.
    112. 112. Control chart: a control chart is a graphical representation of the collected information (the information may be measured quality characteristics); it detects variation in the process and warns if there is any deviation from the specification. Essential features of a control chart: the variable values are plotted against time, with an upper control limit, a central line, and a lower control limit.
    113. 113. Control chart purposes: show changes in the data pattern, e.g., trends, so corrections can be made before the process goes out of control; show causes of changes in the data: assignable causes (data outside control limits or a trend in the data) and natural causes (random variations around the average). In the charts, if all the points (sample averages and ranges) are within the control limits, the process is said to be in "statistical control"; if one or more points go outside the control limits, the process is said to be "out of control".
    114. 114. Quality characteristics. Variables: 1. characteristics that you measure, e.g., weight, length; 2. may be whole or fractional numbers; 3. continuous random variables. Attributes: 1. characteristics for which you focus on defects; 2. classify products as either 'good' or 'bad', or count defects, e.g., a radio works or not; 3. categorical or discrete random variables.
    115. 115. Control chart types: control charts divide into variables charts (X-bar chart and R chart) and attributes charts (p & np charts, c & u charts).
    116. 116. Control chart classification. For variables: X-bar and R charts. Measures where the metric consists of a number which indicates a precise value are called variable data, e.g., time, miles/hr. For variables we compute the sample average and range, and the grand average and average range. Control limits: the X-bar chart has an upper and a lower limit, and the R chart has an upper and a lower limit; both charts should be plotted together. If the subgroup size is 6 or less, LCL_R = 0.
    117. 117. Variables charts– X and R chart (also called averages and range chart)– X and s chart– chart of individuals (also called X chart, X-R chart, IX- MR chart, Xm R chart, moving range chart)– moving average–moving range chart (also called MA– MR chart)– target charts (also called difference charts, deviation charts and nominal charts)– CUSUM (also called cumulative sum chart)– EWMA (also called exponentially weighted moving average chart)– multivariate chart (also called Hotelling T2)
    118. 118. X-bar chart: a type of variables control chart for interval- or ratio-scaled numerical data. It shows sample means over time and monitors the process average, telling whether changes have occurred. These changes may be due to: 1. tool wear, 2. an increase in temperature, 3. a different method used in the second shift, 4. a new, stronger material. Example: weigh samples of coffee, compute the means of the samples, and plot them.
    119. 119. R chart: a type of variables control chart for interval- or ratio-scaled numerical data. It shows sample ranges over time (the difference between the smallest and largest values in the inspection sample) and monitors variability in the process; it tells us the loss or gain in dispersion. This change may be due to: 1. a worn bearing, 2. a loose tool, 3. an erratic flow of lubricant to the machine, 4. sloppiness of the machine operator. Example: weigh samples of coffee, compute the ranges of the samples, and plot them.
    120. 120. Construction of X-bar and R charts. Step 1: select the characteristic for applying a control chart. Step 2: select the appropriate type of control chart. Step 3: collect the data. Step 4: choose the rational subgroup, i.e., the sample. Step 5: calculate the average (X-bar) and range (R) for each sample. Step 6: calculate the average of the averages (X-double-bar) and the average range (R-bar). Step 7: calculate the limits for the X-bar and R charts. Step 8: plot the centre line (CL), UCL and LCL on the chart. Step 9: plot the individual X-bar and R values on the chart. Step 10: check whether the process is in control or not. Step 11: revise the control limits if any points fall outside.
    121. 121. X-bar chart control limits: UCL = X-double-bar + A2 R-bar and LCL = X-double-bar - A2 R-bar, where A2 is taken from tables. Subgroup average X-bar = (x1 + x2 + x3 + x4 + x5)/5 (for a subgroup of size 5); subgroup range R = maximum value - minimum value.
    122. 122. R chart control limits: UCL_R = D4 R-bar and LCL_R = D3 R-bar, where D3 and D4 are taken from tables. (Problem 8.1 from TQM by V. Jayakumar, page 8.5.)
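The sketch below computes X-bar and R chart control limits from a few hypothetical subgroups of size 5, using the standard control chart constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the measurement data are invented for illustration.

```python
# Minimal sketch: X-bar and R chart limits for subgroups of size 5.
# Constants for n = 5 from standard control chart tables: A2 = 0.577, D3 = 0, D4 = 2.114.
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical subgroups (5 measurements each).
subgroups = [
    [2.20, 2.25, 2.18, 2.22, 2.21],
    [2.19, 2.23, 2.24, 2.20, 2.22],
    [2.26, 2.21, 2.19, 2.23, 2.25],
    [2.18, 2.20, 2.22, 2.24, 2.19],
]

xbars = [sum(s) / len(s) for s in subgroups]          # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]         # subgroup ranges

xbarbar = sum(xbars) / len(xbars)                     # grand average
rbar = sum(ranges) / len(ranges)                      # average range

print(f"X-bar chart: CL={xbarbar:.4f} UCL={xbarbar + A2 * rbar:.4f} LCL={xbarbar - A2 * rbar:.4f}")
print(f"R chart:     CL={rbar:.4f} UCL={D4 * rbar:.4f} LCL={D3 * rbar:.4f}")
```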
    123. 123. Control chart description: control charts for individual measurements (e.g., sample size = 1) use the moving range of two successive observations to measure the process variability. The combination of the X chart for individuals and the moving range chart is often called an X and Rm, or XmR, chart.
    124. 124. X-Bar R Chart (Mean-Range Chart)
    125. 125. X-Bar Sigma Chart (Mean-Sigma Chart)
    126. 126. I-R Chart (Individual Range)
    127. 127. Median-Range Chart
    128. 128. X-Bar Sigma Chart with Variable Subgroup Sample Size
    129. 129. EWMA (Exponentially Weighted Moving Average Chart)
    130. 130. MA Chart (Moving Average Chart)
    131. 131. CuSum Chart (Tabular Cumulative Sum Chart )
    132. 132. Types of control charts for attribute data. Measures where the metric is composed of a classification into one of two (or more) categories are called attribute data.
        | Description | Type | Sample size |
        | Control chart for the proportion of nonconforming units (defectives) | p chart | may change |
        | Control chart for the number of nonconforming units in a sample (defectives) | np chart | must be constant |
        | Control chart for the number of nonconformities in a sample (defects) | c chart | must be constant |
        | Control chart for the number of nonconformities per unit (defects) | u chart | may change |
    133. 133. p chart for attributes (also called proportion chart): a type of attributes control chart for nominally scaled categorical data, e.g., good/bad, yes/no. It shows the percentage of nonconforming items. Example: count the number of defective chairs, divide by the total chairs inspected, and plot; a chair is either defective or not defective.
    134. 134. p chart (also called proportion chart): p = np/n, where p = fraction defective, np = number of defectives, and n = number of items inspected in the subgroup. p-bar = average fraction defective = Σnp / Σn = CL. UCL_p = p-bar + z √(p-bar (1 - p-bar)/n) and LCL_p = p-bar - z √(p-bar (1 - p-bar)/n).
    135. 135. p chart control limits: UCL_p = p-bar + z √(p-bar (1 - p-bar)/n) and LCL_p = p-bar - z √(p-bar (1 - p-bar)/n), with z = 3 for 99.7% limits.
    136. 136. Purpose of the p chart: identify and correct causes of bad quality; give the average proportion of defective articles submitted for inspection over a period; suggest where X-bar and R charts should be used; determine the average quality level. (Problem 9.1, page 9.3, TQM by V. Jayakumar.)
    137. 137. np chart: the p and np charts are quite similar. Whenever the subgroup size is variable, the p chart is used; if the subgroup size is constant, then the np chart is used. Formulae: central line CL_np = n p-bar; upper control limit UCL_np = n p-bar + 3√(n p-bar (1 - p-bar)); lower control limit LCL_np = n p-bar - 3√(n p-bar (1 - p-bar)), where p-bar = Σnp/Σn = average fraction defective and n = number of items inspected in the subgroup. (Problem 9.11, page 9.11, TQM by V. Jayakumar.)
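A short sketch of the p and np chart limit calculations above, using hypothetical subgroup inspection results with a constant subgroup size; with a constant n, the np chart limits are simply the p chart limits scaled by n.

```python
# Minimal sketch: p chart and np chart limits for a constant subgroup size.
import math

n = 100                                   # items inspected per subgroup (constant)
defectives = [4, 6, 3, 5, 7, 2, 5, 4]     # hypothetical count of defectives per subgroup

p_bar = sum(defectives) / (n * len(defectives))        # average fraction defective
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)

ucl_p = p_bar + 3 * sigma_p
lcl_p = max(0.0, p_bar - 3 * sigma_p)                  # fraction defective cannot be negative

print(f"p chart:  CL={p_bar:.4f} UCL={ucl_p:.4f} LCL={lcl_p:.4f}")
print(f"np chart: CL={n * p_bar:.2f} UCL={n * ucl_p:.2f} LCL={n * lcl_p:.2f}")
```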
    138. 138. c Chart• (also called count chart)• Type of attributes control chart – Discrete quantitative data• Shows number of nonconformities (defects) in a unit – Unit may be chair, steel sheet, car etc. – Size of unit must be constant• Example: Count no of defects (scratches, chips etc.) in each chair of a sample of 100 chairs; Plot
    139. 139. c chart control limits: UCL_c = c-bar + 3√(c-bar) and LCL_c = c-bar - 3√(c-bar), using 3 for 99.7% limits.
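A matching sketch for the c chart limits, using a hypothetical series of defect counts per inspected unit (e.g., defects per chair); a negative lower limit is clipped to zero since a count cannot be negative.

```python
# Minimal sketch: c chart limits from defect counts per inspected unit (hypothetical data).
import math

defects_per_unit = [3, 5, 2, 4, 6, 3, 4, 2, 5, 3]

c_bar = sum(defects_per_unit) / len(defects_per_unit)    # average number of defects
ucl_c = c_bar + 3 * math.sqrt(c_bar)
lcl_c = max(0.0, c_bar - 3 * math.sqrt(c_bar))           # clip: a count cannot be negative

print(f"c chart: CL={c_bar:.2f} UCL={ucl_c:.2f} LCL={lcl_c:.2f}")
```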
    140. 140. Control Chart• Attribute Control Chart Templates• p-Chart (Fraction or Percent of Defective Parts, Fraction or Percent Non-Conforming),• np-Chart (Number of Defective Parts, Number of Non- Conforming),• c-Chart (Number of Defects, Number of Non-Conformities ) and• u-Chart (Number of Defects per Unit, Number of Non-Conformities Per Unit ).• The p-Chart and u-Chart templates come in versions which support variable subgroup sample sizes.
    141. 141. p-Chart (Percent Defective Parts, or Percent Defective Non-Conforming)
    142. 142. np-Chart (Number Defective Parts, or Number Non-Conforming
    143. 143. c-Chart (Number Defects, or Number Non-Conformities)
    144. 144. u-Chart (Number of Defects Per Unit, or Number of Non-Conformities per Unit)
    145. 145. u-Chart with Variable Subgroup Sample Size
    146. 146. SPC control limits at the 1-, 2- and 3-sigma levels
    147. 147. Process capability• Control limits- as a function of the averages• Specifications- permissible variation in the size of the part, and are therefore, for individual values• The specification or tolerance limits are established by design engineers to meet a particular function• The specifications have an optional location• The control limits, process spread ( process capability), distribution of averages, and distribution of individual values are interdependent and determined by the
    148. 148. Process capability: even if the process (the average value of items) is in control, individual items may not be within the limits, so it is necessary to see whether the process is capable of producing items within the specified limits. This can be checked by carrying out a process capability study. Process capability is an industrial term that characterizes how the tolerance specification of a product relates to the centering (bias) and variation (process capability standard deviation, SD or s) of the process. High capability means that the process can readily produce products within the tolerance specification.
    149. 149. Process capability: low capability means that the process will likely produce products outside the tolerance specifications (i.e., defective products or defects). Process capability may be defined as the minimum spread of the specific quality characteristic measurements. The quality characteristic will have a normal distribution with mean μ and standard deviation σ; the upper natural tolerance limit is μ + 3σ and the lower natural tolerance limit is μ - 3σ. The spread of the normal distribution between the natural tolerance limits, 6σ, is the process capability. If the process capability 6σ is less than the specification width (USL - LSL), the process is capable; otherwise it is not.
    150. 150. Process capability: the process capability ratio (PCR or Cp) is the ratio between the specification width and the process capability: PCR or Cp = (USL - LSL) / 6σ. If Cp > 1.00, the process is capable of meeting the specifications; if Cp < 1.00, it is not. One common measure of process capability is the process capability index Cpk, calculated as Cpk = (tolerance specification - bias)/3SD: upper capability index CpU = (USL - μ) / 3σ, lower capability index CpL = (μ - LSL) / 3σ, and Cpk = min{CpU, CpL}. If the tolerance specification were 12%, the SD 2%, and the bias 0.0%, Cpk would be 2.00, which is considered the ideal capability, i.e., a six-sigma process, because six multiples of the SD fit within the tolerance specification.
    151. 151. Process capability: if the tolerance were 12%, SD 4%, and bias 0.0%, Cpk would be 1.00, which is considered the minimum capability for a production process and corresponds to a three-sigma process. If the tolerance were 12%, SD 2%, and bias 3.0%, Cpk would be 1.50; although this initially starts out as a six-sigma process when there is no bias, the effect of a bias of 1.5 sigma reduces the process capability and makes this equivalent to a four-and-a-half-sigma process. This would still be considered a good production process if adequately controlled, but it would still be desirable to eliminate the bias if possible.
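The sketch below reproduces the Cpk arithmetic from the worked examples above, with the tolerance expressed as an allowable deviation from the target on either side, so Cpk = (tolerance - bias)/(3 SD); the numbers are the same ones used in the text.

```python
# Minimal sketch of the Cpk arithmetic used above: Cpk = (tolerance - bias) / (3 * SD),
# with the tolerance expressed as an allowable deviation from the target on either side.
def cpk(tolerance, bias, sd):
    return (tolerance - abs(bias)) / (3.0 * sd)

examples = [
    (12.0, 0.0, 2.0),   # six-sigma process, Cpk = 2.00
    (12.0, 0.0, 4.0),   # three-sigma process, Cpk = 1.00
    (12.0, 3.0, 2.0),   # six-sigma precision with a 1.5-sigma bias, Cpk = 1.50
]

for tol, bias, sd in examples:
    print(f"tolerance={tol}% bias={bias}% SD={sd}% -> Cpk={cpk(tol, bias, sd):.2f}")
```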
    152. 152. Process CapabilityParts per Million• Many companies now measure defects in parts per million.• We will recall that 3 sigma deviations each side of the process mean will encompass 99.73% of the population.• We have been looking at Process Capability using +- 3 sigma so we are really looking at 99.73% of the population.• To give us some safety, we wanted the +- 3 sigma to fall within 75% of the tolerance.• This equates to +- 4 sigma at 100% of the tolerance.• If the +- 3 sigma had covered the total tolerance 0.27% would not be encapsulated in the spread that we were using.• This would equate to 2,700 defective parts per million, 1,350 exceeding top limit and 1,350 failing to reach bottom limit.• Many companies now try for figures much less than this.• If you use +-5 sigma instead of +- 3 sigma in your calculations you will be fairly close to 1 part per million defects provided the process remains centralized and in control.
    153. 153. Capability ratio: % total = 6 sigma / total tolerance; Cm = total tolerance / 6 sigma.
    154. 154. Process Capability Analysis
    155. 155. .
    156. 156. Process capability
    157. 157. Six sigma: Six Sigma was intended to improve the quality of processes that are already under control, i.e., the major special causes of process problems have been removed. The output of these processes usually follows a normal distribution, with the process capability defined as ±3 sigma. The process mean will vary each time a process is executed using different equipment, different personnel, different materials, etc.; the observed variation in the process mean was ±1.5 sigma. Motorola, one of the world's leading manufacturers of electronic equipment, introduced the concept of 6 sigma process quality in the 1980s, under the then CEO Bob Galvin, to enhance the quality and reliability of its products. Motorola decided a design tolerance (specification width) of ±6 sigma was needed so that there would be only 3.4 ppm defects (measurements outside the design tolerance); this was defined as Six Sigma quality.
    158. 158. 1
    159. 159. Six sigma: since shifts or biases equivalent to 1.5 sigma are difficult to detect by statistical QC, a six-sigma process provides a better guarantee that products will be produced within the desired specifications and with a low defect rate. Another way of looking at this is that a six-sigma process can be monitored with any QC procedure, e.g., with 3 SD limits and low N, and any important problems or errors will be detected and can be corrected. As process capability decreases from five-sigma to four-sigma to three-sigma, the choice of QC procedure becomes more and more important in order to detect important problems. Processes with lower capability may not even be controllable to a defined level of quality.
160. 160. Allowable Total Error, TEa
• This latter situation is illustrated in the accompanying figure, where the tolerance specification is replaced by a Total Error specification, a common form of quality specification for a laboratory test.
• For example, the CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa; there is a published list of TEa specifications for regulated analytes.
• In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5 × (TEa/6), i.e., TEa/4.
• In terms of industrial process capability, the combination of the six-sigma precision and accuracy goals results in a Cpk of 1.5.
161. 161. Laboratory TE Criteria vs Process Capability
• Laboratories evaluate process capability when they perform method validation studies.
• They don't usually calculate an index such as Cpk, but they do combine the effects of inaccuracy and imprecision for comparison with the allowable total error.
• Commonly used TE criteria include TEa > bias + 4SD, TEa > bias + 3SD, and TEa > bias + 2SD, all of which are used in a decision-making tool called the Method Decision Chart.
• If the criterion requires that TEa > bias + 4SD, this corresponds to a four-sigma process when there is no bias; e.g., if TEa is 12%, bias is 0%, and SD is 3%, Cpk would be 1.33, which is a good production process that should be controllable to the desired quality.
162. 162. Laboratory TE Criteria vs Process Capability
• If the criterion requires that TEa > bias + 3SD, this corresponds to a three-sigma process when there is no bias; e.g., if TEa is 12%, bias is 0%, and SD is 4%, Cpk would be 1.00, the minimum capability needed for a production process.
• If the criterion requires that TEa > bias + 2SD, this corresponds to a two-sigma process when there is no bias; e.g., if TEa is 12%, bias is 0%, and SD is 6%, Cpk would be 0.67, which is unacceptable for production according to industrial guidelines.
• Process performance, as evaluated by commonly used laboratory TE criteria, does not approach the six-sigma capability desired for industrial processes. Improvements in laboratory methods are still needed to achieve five-sigma to six-sigma capability.
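The laboratory examples above can be checked with the same Cpk arithmetic, treating TEa as the tolerance specification. A sketch (plain Python; the sigma-metric form (TEa − bias)/SD is a common laboratory rearrangement offered here for illustration, not a formula stated on these slides):

```python
def lab_cpk(tea, bias, sd):
    """Cpk with the allowable total error TEa used as the tolerance specification."""
    return (tea - abs(bias)) / (3 * sd)

def sigma_metric(tea, bias, sd):
    """Sigma metric = (TEa - bias) / SD, i.e. 3 * Cpk."""
    return (tea - abs(bias)) / sd

# TEa 12%, bias 0%, varying imprecision (SD), matching the slide examples:
for sd in (3, 4, 6):
    print(f"SD {sd}%: Cpk = {lab_cpk(12, 0, sd):.2f}, "
          f"sigma = {sigma_metric(12, 0, sd):.1f}")
# SD 3% -> Cpk 1.33 (four-sigma), SD 4% -> 1.00 (three-sigma), SD 6% -> 0.67 (two-sigma)
```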
163. 163. Stages or fundamentals in Six Sigma: the DMAIC Roadmap
    164. 164. DMAIC and Lean tools deployed in the Shewhart Cycle
165. 165. The DMAIC Six Sigma Approach
• The six sigma approach for projects is DMAIC (Define, Measure, Analyze, Improve and Control).
• These steps are the most common six sigma approach to project work.
• Some organizations omit the D in DMAIC because it is really management work; with the D dropped, the Black Belt is charged with MAIC only.
• We believe Define is too important to be left out, and sometimes management does not do an adequate job of defining a project.
• Our six sigma approach is the full DMAIC.
166. 166. Define (DMAIC)
• Define is the first step in the DMAIC six sigma approach.
• DMAIC first asks leaders to define the core processes.
• It is important to define the selected project's scope, expectations, resources and timelines.
• The definition identifies specifically what is part of the project and what is not, and explains the scope of the project.
• Many times, process documentation is only at a general level; additional work is often required to adequately understand and correctly document the processes.
• As the saying goes, "The devil is in the details."
167. 167. Measure (DMAIC)
• The most important thing to know is where we are going, but some of the first information needed before starting any journey is the current location.
• The six sigma approach asks us to quantify and benchmark the process using actual data.
• At a minimum, consider the mean or average performance and some estimate of the dispersion or variation (perhaps even calculate the standard deviation); trends and cycles can also be very revealing.
• Taking two data points and extrapolating to infinity is not a six sigma approach.
• Process capabilities can be calculated once performance data are available; a small worked sketch follows below.
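As a concrete illustration of the Measure step, the sketch below (Python, assuming NumPy; the sample readings and specification limits are made up purely for illustration) quantifies a baseline: mean, standard deviation and process capability.

```python
import numpy as np

# Hypothetical baseline measurements from the current process (illustrative only)
data = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2])
usl, lsl = 10.6, 9.4           # assumed specification limits

mean = data.mean()
sd = data.std(ddof=1)          # sample standard deviation
cp = (usl - lsl) / (6 * sd)
cpk = min(usl - mean, mean - lsl) / (3 * sd)

print(f"mean={mean:.2f}  sd={sd:.3f}  Cp={cp:.2f}  Cpk={cpk:.2f}")
```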
168. 168. Analyze (DMAIC)
• Once the project is understood, the baseline performance is documented and it is verified that there is a real opportunity, an analysis of the process can begin.
• The six sigma approach applies statistical tools to validate the root causes of problems; any number of tools and tests can be used (one example is sketched below).
• The objective is to understand the process at a level sufficient to formulate options for improvement.
• Compare the various options with each other to determine the most promising alternatives.
• As with many activities, balance must be achieved: superficial analysis and understanding will lead to unproductive options being selected, forcing a recycle through the process to make improvements, while at the other extreme lies the paralysis of analysis.
• Striking the appropriate balance is what makes six sigma highly valuable.
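One of many possible statistical tools for validating a suspected root cause is a simple two-sample comparison. A sketch (assuming SciPy; the machine names and measurements are hypothetical, invented only to show the idea):

```python
from scipy.stats import ttest_ind

# Hypothetical output measurements grouped by the suspected cause (machine A vs machine B)
machine_a = [10.1, 10.0, 10.2, 9.9, 10.1, 10.0]
machine_b = [10.4, 10.5, 10.3, 10.6, 10.4, 10.5]

t_stat, p_value = ttest_ind(machine_a, machine_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant -> machine is a plausible root cause")
else:
    print("No significant difference -> look elsewhere for the root cause")
```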
169. 169. Improve (DMAIC)
• In the Improve step, ideas and solutions are put to work, having discovered and validated all known root causes for the existing opportunity.
• The six sigma approach requires us to identify solutions.
• Few ideas or opportunities are so good that all are an instant success; there must be checks to assure that the desired results are being achieved, and some experiments and trials may be required in order to find the best solution.
• When making trials and experiments, it is important that all project associates understand that these are trials and really are part of the six sigma approach.
170. 170. Control (DMAIC)
• Many people believe the best performance you can ever get from a process is at the very beginning; over time there is an expectation that things will slowly get a little worse until finally it is time for another major improvement effort.
• Contrasted with this is the Kaizen approach, which seeks to make everything incrementally better on a continuous basis; the sum of all these incremental improvements can be quite large.
• As part of the six sigma approach, performance tracking mechanisms and measurements are put in place to assure, at a minimum, that the gains made in the project are not lost over time (a control-chart sketch follows below).
• As part of the Control step we encourage sharing with others in the organization.
• With this, the six sigma approach really starts to create phenomenal returns: ideas and projects in one part of the organization are translated very rapidly into implementation in another part of the organization.
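A common tracking mechanism for the Control step is a control chart on the improved process. A minimal X-bar sketch (assuming NumPy; the subgroup data are invented for illustration, and the A2 = 0.577 constant is the standard Shewhart table value for subgroups of 5):

```python
import numpy as np

# Hypothetical post-improvement data: 3 subgroups of 5 readings each (illustrative only)
subgroups = np.array([
    [10.1, 10.0, 10.2, 9.9, 10.1],
    [10.0, 10.2, 10.1, 10.0, 9.9],
    [10.2, 10.1, 10.0, 10.1, 10.0],
])

xbar = subgroups.mean(axis=1)                          # subgroup means
rng = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
A2 = 0.577                                             # Shewhart constant for n = 5

centre = xbar.mean()
ucl = centre + A2 * rng.mean()     # upper control limit for the X-bar chart
lcl = centre - A2 * rng.mean()     # lower control limit

print(f"centre={centre:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("out-of-control subgroups:", [i for i, x in enumerate(xbar) if not lcl <= x <= ucl])
```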
171. 171. Dr. Kaoru Ishikawa, Professor at Tokyo University and Father of QC in Japan
• CAUSE ANALYSIS TOOLS: cause-and-effect diagram, Pareto analysis and scatter diagram.
• EVALUATION AND DECISION-MAKING TOOLS: decision matrix and multivoting.
• DATA COLLECTION AND ANALYSIS TOOLS: check sheet, control charts, DOE, scatter diagram, stratification, histogram, survey.
• IDEA CREATION TOOLS: brainstorming, benchmarking, affinity diagram, nominal group technique.
• PROJECT PLANNING AND IMPLEMENTATION TOOLS: Gantt chart and PDCA cycle.
172. 172. Second seven tools
• In the quality improvement movement in Japan in the latter half of the 20th century, the Japanese Union of Scientists and Engineers (JUSE) was influential in defining a set of basic tools that could be used for improving processes. These came to be known as the first seven tools.
• These were mostly useful for quantitative problems, so a second set of seven tools was defined for the more qualitative problems that arise, such as those around customer needs. These are:
• Relations Diagram
• Affinity Diagram
• Tree Diagram
• Matrix Diagram
• Matrix Data Analysis Chart
• Process Decision Program Chart
• Activity Network
• Just to complicate things, the Matrix Data Analysis Chart, which is somewhat complex to use, is often replaced with the Prioritization Matrix. Alternative names are also used; for example, the Relations Diagram is sometimes called the Interrelationship Digraph.
173. 173. Relations Diagram
• In many problem situations there are multiple complex relationships between the different elements of the problem, which cannot be organized into familiar structures such as hierarchies or matrices. The Relations Diagram addresses these situations by showing relationships between items with a network of boxes and arrows.
• The most common use of the Relations Diagram is to show the relationship between one or more problems and their causes, although it can also be used to show any complex relationship between problem elements, such as information flow within a process.
174. 174. Affinity Diagram (KJ diagram)
• A diagram used as a method of sorting qualitative data, which usually comes in the form of short phrases or sentences (e.g., "Customers are unhappy with delivery delays").
• It is often done with Post-it notes, although the original method used 3" x 5" cards.
• It is a great method for working as a group to sort out issues and fuzzy situations.
• It is also useful for sorting data such as customer comments from surveys.
• Building an Affinity Diagram is often known as doing a KJ, after its originator, Kawakita Jiro (surname first, as in the Japanese tradition).
175. 175. Affinity diagram (KJ)
    176. 176. Affinity Diagram components
177. 177. Affinity Diagram – Moving the cards
178. 178. Affinity Diagram: exit interview comments from checkout operators
    179. 179. Affinity diagram
