6 Sigma


This presentation covers all aspects of the Six Sigma quality methodology. You can use it as a reference for anything related to Six Sigma, and it is one of the best materials to review before implementing Six Sigma in your organization, whether in the service sector or in manufacturing.



  1. 1. ADVISORY SERVICES Six Sigma Green Belt Training Program
  2. 2. Prelude to Six Sigma Expectations from the Organization
  3. 3. Drivers of Project Selection Voice Of The Employee Business Big Ys Process Ys Y Y Y Y Project Y X1 X2 X3 Any parameter that influences the Y Bigger Ys Keep an Outside-In perspective Strong linkage between Projects & Big Ys is important Key project metric defined from the customer perspective Key output metrics that summarize process performance Key output metrics that are aligned with the strategic goals / objectives of the business Big Ys provide a direct measure of business performance Voice Of The Customer Voice Of The Shareholder
  4. 4. Gather VOC List your customers Define Customer Segments Narrow the list of customers Review existing VOC data Decide on what to collect Use appropriate tools to gather VOC Surveys Interviews Be a customer Focus Group Customer Observation Listening Posts Competitive Comparison Collect data Customer requirements determine quantifiable process metrics (CTQs) Identify customer segments that need to be targeted Gather verbatim VOC & Determine Service Quality Issue Translate to needs statement & develop a CTQ - Project Y metric output characteristic
  5. 5. Gather VOC Organize all customer data Translate VOC to specific needs Define CTQs for needs Prioritize CTQs Contain problem if necessary Identify customer segments that need to be targeted Gather verbatim VOC & Determine Service Quality Issue Translate to needs statement & develop a CTQ - Project Y metric output characteristic
  6. 6. Affinity Diagram – Credit Card Example Organize VOC into broad categories Low Interest Rate Variable Terms Pay Back When I Want No Prepayment Penalties/ Charges Pre-Approved Credit Easy Application Easy Access To Capital Quick Decision Know Status Of Loan (Post- Approval) Will Come To My Facility Available Outside Normal Business Hours Available When I Need To Talk Responsive To My Calls Knowledgeable Reps Professional Make Me Feel Comfortable Patient During Process Knows About My Finances Knows About My Business Makes Finance Suggestions Talk To One Person Friendly Cares About My Business Has Access To Experts Provides Answers To Questions Calls If Problems Arise All Charges Clearly Stated Know Status Of Loan During Application Preference If Bank Customer Can Apply Over Phone Verbatim VOC High-Level Needs Flexible Product Easy Process Availability Personal Interface Advice/ Consulting
  7. 7. Translating VOC into CTQ’s Identify customer segments that need to be targeted Gather verbatim VOC & Determine Service Quality Issue Translate to needs statement & develop a CTQ - Project Y metric output characteristic VOC CTQs Customer requirements determine quantifiable process metrics (CTQs)
  8. 8. Example: Translating VOC to CTQ’s What gets measured gets managed… ensure measurable CTQs “You take too much time in getting back to me!” “These forms are too cumbersome!” - Quick Response - User-Friendly Forms - Process Turn Around Time not more than 10 min. - Form < 2 pages & < 10 minutes to complete Validate CTQ with customer Verbatim Specific Needs CTQs
  10. 10. Prioritizing CTQ’s – Kano Model Kano Model helps to prioritize our efforts towards satisfying customers Satisfaction + Dissatisfaction One-Dimensional Delighters Must Be Innovation Competitive Priority Critical Priority Functional Dysfunctional Safe arrival Accurate booking Baggage arrives with passenger 99% system uptime Seat comfort Quality of refreshments Friendliness of staff Baggage speed On-Time arrival Free upgrades Individual movies and games Special staff attention/services Computer plug-ins (power sources)
  11. 11. Prioritizing CTQ’s - Kano Model Kano Model / Theory: The Kano theory helps prioritize the team's improvement efforts after needs & CTQ’s have been established. The team should refrain from translating CTQ’s to the Kano Model directly, but rather have customer VOC verify where the need / CTQ should feature in the model For some customer needs, customer satisfaction is proportional to the extent to which the product or service is fully functional The horizontal axis indicates how fully functional a product/service is; the vertical axis indicates customer satisfaction Over the product life cycle, a feature can be expected to shift from being a Delighter to One-Dimensional to a Must-Be (e.g., the invention of TV)
  12. 12. Prioritization of Six Sigma Projects Financials help prioritize projects Focus & Motivate Teams to deliver results Incremental Revenue Volume Driven Metrics Increased Capacity Improved product quality/service Improved commercial processes Price Driven Metrics Decreased Cost Reduced Rework Increased Labor efficiency Reduced Operating expenses (fixed and variable costs) Reduced Plant and equipment depreciation or lease expense Projects can focus on Hard Gains or Soft gains
  13. 13. Work out session – Identify area of improvement
  14. 14. Contents Define Customer Expectations of the process? Measure What is the frequency of defects? Analyze Why, Where & When do defects occur? Improve How can we fix the Process? Control How can we keep the process fixed?
  15. 15. Define Customer Expectations of the Organization
  16. 16. Contents - Define CHARTER DETERMINE THE PROJECT CTQs Business Case Problem & Goal Statement Project Scope Milestones Roles & Responsibility Identify your customers Gather “VOC” Organize VOCs Prioritize VOCs Translate VOC to CTQs MAP THE PROCESS Process Operational Definition Benefits of Process Mapping SIPOC Model Levels of Mapping Mapping Guidelines DEFINE THE PROJECT In-Frame/Out-Frame 15 Word Flip Chart Backward Imaging Deliverables: CTQs Identified Charter SIPOC 1 2 3 4
  17. 17. Charter A Charter: Clarifies what is expected of the team Keeps the team focused Keeps the team aligned with organizational priorities
  18. 18. Sample Project Charter
  19. 19. Sample Project Charter CTQ: Accuracy
  20. 20. Elements of a Project Charter Business Case / Benefits Problem / Opportunity Statement Goal Statement Project Scope Milestone / Project Plan Resources / Team Members Roles Business Big Ys Process Ys Does project “Y” link to business Y’s? The problem statement is a description of what is wrong & where? (quantify - % / $$$) Start with a verb (e.g., reduce, eliminate, control, increase) Focus of project (cycle time, accuracy, etc.) Target (by 50%, by 75%) Deadline What is the improvement team seeking to achieve? What is & what is not included? What are the boundaries? In scope/ out scope? Key Milestones / Timelines / Detailed Plan Who are the key resources ? What will be the roles of BB’s / GB’s / Sponsor / MBB’s
  21. 21. Project Scope What process will the team focus on? What are the boundaries of the process we are to improve? Start point? Stop point? What resources are available to the team? What (if anything) is out of bounds for the team? What (if any) constraints must the team work under? What is the time commitment expected of team members? What will happen to our “regular jobs” while we are doing the project?
  22. 22. In Frame / Out Frame - Project Scoping Tool Draw a large square “picture frame” on a flip chart (or use tape on a wall) and use this metaphor to help the team identify what falls inside the picture of their project and what falls out. This may be in terms of scope, goals, and roles
  23. 23. Process Definition & Elements A collection of activities that takes one or more inputs and transforms them into outputs that are of value to the customer “The Business Process” Supplier(s) Customer(s) Inputs Outputs
  24. 24. Process Mapping & Benefits S I P O C Process Map Suppliers Inputs Process Outputs Customers CTQs Measures
  25. 25. Process Mapping & Benefits Supplier: The provider of inputs to your process Input: Materials, resources or data required to execute your process Process: A collection of activities that takes one or more kinds of input and creates output that is of value to the customer Output: The products or services that result from the process Customer: The recipient of the process output – may be internal or external CTQs: Critical to quality characteristics; A specific attribute or quality of the output that is a key requirement for customer satisfaction Boundary: The limits of a particular process, that define the start and stop points of the process
  26. 26. Levels of Process Mapping Core Process (Level 1) Subprocesses (Level 2) Subprocesses (Level 3) Microprocesses (Level 4 & Below) Exports Imports Negotiation Credit Ops Messaging Payments Collections Trades Billing
  27. 27. Process Mapping Guidelines Define business process to be reviewed – name it – agree on beginning and end of process – bound it. Refer to CTQ work to identify primary outputs, the customers who receive them, and the customers’ CTQs Use nouns for outputs (e.g., sales call, proposal, etc.) Use adjectives for CTQs (e.g., timely, knowledgeable, accurate) Identify the process steps using brainstorming and affinity techniques Write large – One step per card – Don't try to establish order All steps should begin with a verb – Don't discuss process steps in detail Use brainstorming and affinity techniques to identify critical inputs which affect the quality of the process For each critical input, identify the “supplier” who provides it Validate to be sure the map represents the situation as it really is today (the “as is” map) – not how you think it is, or how it should be Suppliers Customers Inputs Outputs CTQs Process
  28. 28. G.R.P.I G.R.P.I. Checklist - This tool is based on a simple model for team formation. It challenges the team to consider four critical and interrelated aspects of teamwork: Goals, Roles, Processes and Interpersonal relationships. It is invaluable in helping a group become a team.
  29. 29. G.R.P.I Uses: An excellent organizing tool for newly-formed teams, or for teams that have been underway for a while but have never taken time to look at their teamwork. Ideally, this tool should be used at one of the first team meetings. It can and should be updated as the project unfolds. Steps: Distribute copies of the checklist to all team members prior to a team meeting. Invite team members to add details/examples on each of the four dimensions of the checklist. Ask each team member to bring his/her completed checklist to the team meeting. At the team meeting, discuss and resolve issues related to the checklist. Share certain aspects with the Champion/Functional Leader if appropriate. OPTION: When there is considerable disagreement or tension within the team environment, team members can choose to complete the questionnaire individually and turn it in to a neutral party who will collate the data and give it back to the team in aggregate (thus protecting the anonymity of individual team members).
  30. 30. G.R.P.I Expanded Version of the Tool: Useful when a more detailed look at team elements is required. How would you rate the degree to which your team presently has CLARITY, AGREEMENT and EFFECTIVENESS on the following GRPI-related elements?
  31. 31. G.R.P.I
  32. 32. Great Project A great project should: Be clearly bounded, with defined goals (if it looks too big, it is) Be aligned with critical business issues and initiatives, enabling full support of the business Be felt by the customer - there should be a significant impact Work with other projects for combined effect (global or local “Beta Themes”) Show improvement that is locally actionable Relate to your day job
  33. 33. Project Selection Success Factors Project scope is manageable Project has identifiable defect Project has identifiable impact Adequate buy-in from key stakeholders To be successful Set up project scope charter and have it reviewed Measure where defects occur in the process Assess and quantify potential impact up-front Perform stakeholder analysis
  34. 34. Project Selection Common Pitfalls Resourcing of the project is inadequate Duplicating another project Losing project momentum Picking the easy Y, not the critical Y Avoiding Pitfalls Identify and get committed resources up-front Research the database and translate where possible Set up milestones and a communications plan
  35. 35. Define Deliverables Identify project through business theme and personal issue Review Six Sigma Quality Project Tracking database for similar projects Identify internal/external CTQs Create high-level process map Identify potential data sources Identify team members and business functions required Identify Information Technology (IT) requirements Identify financial impact Open project in Six Sigma Quality Project Tracking database
  36. 36. Work out session – Create a Project Charter
  37. 37. Key Learning Points and Summary
  38. 38. Measure Quantification of problem and causes
  39. 39. Measure Deliverables: Data Collection Plan MSA Results Data Plots VARIATION SELECT PROJECT Y Display & Describe Variation Causes of Variation Run Chart Bar Chart Normal Curve Measures CTQ Prioritization Types of Data Fishbone Diagram DEVELOP DATA COLLECTION PLAN 1 2 Establish Data collection Plan Define Operational Definitions Define Sampling Procedures Measurement System Analysis 3
  40. 40. Select Project Y Essential to validate the linkage between Project CTQ, Process CTQs & Big Ys VOC CTQs Process CTQs/ Process Metrics Big ‘Y’s Project CTQ/ Project Ys Retention Rate Employer of Choice Attrition % Example :
  41. 41. Select Project Y The foundation of the Measure phase, as the name suggests, lies in the data collection plan, keeping the targets of the project, process and business in mind The above steps have an inherent linkage, in the sense that they enable us to cross-check whether our project CTQ is aligned to the process output characteristic The process output characteristic can then be rolled up into process and business metrics
  42. 42. Understanding Measures Measure : Any element relating to the process under study which is operationally definable & quantifiable. Measures can be understood better by classifying them in the following ways: Classification of Measures Based on SIPOC Based on Focus Based on Statistics Input Process Output Continuous Discrete Efficiency Effectiveness
  43. 43. Based on SIPOC Y = f(X1, X2, X3, …, Xn) Output Y is a function of various Inputs and Process Steps Measures can be classified as Input, Process or Output Measures Garbage in, Garbage out How good is my process? Accurate? Timely? Productive? INPUT PROCESS OUTPUT
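The Y = f(X) idea can be made concrete with a toy transfer function (a hedged sketch: the metric, the function name and the numbers are illustrative, not from the deck):

```python
def turnaround_time(queue_length, processing_rate):
    """Toy transfer function: the output Y (turnaround time, in minutes)
    expressed as a function of two Xs - queue length (items waiting) and
    processing rate (items per minute). Illustrative only."""
    return queue_length / processing_rate

# Improving an input X moves the output Y:
y_before = turnaround_time(40, 4)  # 10.0 minutes
y_after = turnaround_time(40, 8)   # doubling the rate halves Y: 5.0 minutes
```

Six Sigma projects work in exactly this direction: measure the Xs, understand f, then move the Xs that matter.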
  44. 44. Based on SIPOC Where should parameters for measuring be focused? Output - because that is what our customers get Process - because that’s the way we perform & Input - because potentially “Garbage in could be Garbage out” Y is the output Measure, whereas the Xs relate to the various Input and Process Measures
  45. 45. Based on Focus Examples Percent Defective Response Time Billing Accuracy Examples Cost per transaction Turnaround time Time per activity Amount of rework Effectiveness Efficiency Measures for looking at Process Performance The amount of resources allocated in meeting and exceeding customer CTQs The degree to which customer CTQs are met and exceeded Customer Focussed Process Focussed
  46. 46. Based on Statistics Binary Ordered categories Count Classified into one of two categories Rankings or ratings Counted discretely Measured on a continuous scale Is this group ready to travel 100 km daily to reach office Yes or No Decision of Group Description Example Discrete Continuous Continuum of Data Types Classification of the group into categories based on distance each travels daily to reach office Categories : (0 - 2 Kms) (2- 5 Kms) (5-10 Kms) (10-20 Kms) (20-50 Kms) (50-100 Kms) Number of people who have come late today Actual distance traveled by each in this group measured in Kms
  47. 47. Exercise – Type of Data Percent defective parts in hourly production Percent cream content in milk bottles (comes in four-bottle container sets) Amount of time it takes to respond to a request Number of blemishes per square yard of cloth, where pieces of cloth may be of variable size Daily test of water acidity Number of accidents per month Number of defective parts in lots of size 100 Length of screws in samples of size ten from production lots Number of employees who had an accident
  48. 48. Based on Statistics Some more examples: An instrument processed is defective or non-defective Turn Around Time for the batch processing Ratings for a trainer on a point scale (say 5-point) Points to Remember: It is possible to convert Continuous Data to Discrete; however, the reverse is not possible Continuous Data gives more insight into process performance than Discrete
  49. 49. Must-haves for “Measures” The following criteria should be kept in mind while choosing measures: Choose the Output Measure from the Customer's View (Outside-In) The cost-benefit ratio for the data collection should be justifiable The measure should be relevant to the process A clear operational definition of the measure has been agreed upon by all stakeholders The measure and subsequent data collection should enable analysis, evaluation and action The measure should truly be representative of the deepest understanding of the process Include efficiency & effectiveness measures for process excellence
  50. 50. Cause & Effect Diagram Why do we have Late Processing? Creation of Bank ID, Cust ID Bunching of Documents Printer Problem Awaiting Memo Replies (Clarifications) Late Scanning & Receipt Forced Priority of deals Lack of Typing Skill Lack of Training Program for new recruits Fear of committing errors Availability of Imex for long hours Alterations of errors made in previous steps Instructions not clear Handwriting not legible Printing not clear Learning Curve Too many amendments Too many deals held with same person Complicated Deals Linked Deals not getting released on time Absenteeism Bank Profile with wrong Swift IDs Full doc not scanned Correction of errors in previous deal delays current deal Changes made in operational data Changes in Bill category by Spoke Man Material Machine Measurement Mother Nature Method High-level prompters to help generate possible causes Write the effect here Write the possible causes here Or use the 4 P’s
  51. 51. Cause & Effect Diagram How do we carry it out? Use brainstorming, without any prior preparation, as the methodology Be sure that everyone in the team agrees on the problem statement Bring the problem statement down to a level more detailed than the statement in the Project Charter Follow the principle of the “5 Whys”: keep asking ”Why” till the group arrives at a logical end to the chain of causes Include the maximum possible information on What, Where, When & How Much Minitab software helps you generate the template of a Fish-Bone Diagram. You can draw a blank diagram, or a diagram filled in as much as you like. Although there is no "correct" way to construct a fishbone diagram, some types lend themselves well to many different situations. Stat > Quality Tools > Cause-and-Effect Menu Pull-Down Sequence in Minitab
  52. 52. Data Collection Plan The purpose of the Data Collection exercise Pilot Collection and Validation Plan Train Data Collectors Monitor and Improve Develop Measurement System Analysis Test and Validate Formulate Data Collection Plan Sampling Strategy Pilot Data Collection Plan Identify Measure Define Operational Definitions What data is to be collected Word of Caution: Wrong Data Leads to Wrong Conclusions Why collect Data? What to Collect? How to Collect? Ensure Consistency & Stability Collect Data
  53. 53. Data Collection Plan Data collection occurs multiple times throughout DMAIC. The data collection plan described here can be used as the guide for data collection. This helps us ensure that we collect the useful, accurate data needed to answer our process questions It is important to be clear about the data collection goals to ensure the right data is collected. If your data is in the wrong form or format, you may not be able to use it in your analysis An operational definition helps guide thinking on what properties will be measured and how they will be measured. There is no single right way to write an operational definition; there is only what people agree to for a specific purpose. The critical factor is that any two people using the operational definition will be measuring the same thing in the same manner.
  54. 54. Template - Data Collection Plan A format of an Excel spreadsheet that can be used for the data collection plans for our projects. The sample data collection plan above is for the Project CTQ - in this example, Number of Rejections. However, one should remember that, based on the C-E Diagram and the SIPOC, all those Xs which the team feels have an influence on the Y should also be included in the data collection plan. Example of a Data Collection Plan
  55. 55. Work out session – Complete till Fish Bone and Data Collection in the project
  56. 56. Sampling What is Sampling all about? Sampling is the process of collecting a portion, or a sub-set, of the total data available. This total data, available or potentially available, is often referred to as the population. The sample drawn from the population should help us draw conclusions about the population. Some points to consider while collecting data: What amount of time is needed? How much cost is involved? Is it really possible?
  57. 57. Sampling The Six Sigma team would always face a question “How much data do we need to have a valid sample?” Though an important part of data collection is to obtain a sample of reasonable size, it is one of many questions to be addressed during the planning and development of a data collection strategy. Sample size is just one aspect of a valid data collection activity The validity of the data is impacted by many things – for example, operational definitions, data collection procedures and recording In process improvement, there are a number of questions to keep in mind relative to sampling: Is the data representative of the situation or is bias possible? Why am I sampling? To improve or control a process or to describe some characteristic of a population? What are the key considerations for either a process or population situation? What is the approach to sampling (e.g., random, systematic, etc.) and approximately how many to sample?
  58. 58. Sampling When to sample? Collecting all the data is impractical High cost implications due to population study Time availability Data collection can be a destructive process (crash testing of cars) When measuring a high-volume process When not to sample? The subset of data so collected may not be truly representative of the population thus leading to a wrong conclusion (every unit is unique – e.g., structured deals or auctioning events)
  59. 59. Types of Sampling Examples: Cycle tyres being produced on the assembly line have a random check on the daily production 10% quality check on daily processed transactions Examples: Every third tyre used with an aircraft is cut open and checked for quality Sampling for deducing the error rates during high volumes / near cut-off times Random / Probability Non-Random / Judgmental Types of Sampling All items in the population have an equal chance of getting selected The group's knowledge and opinions are used to identify items from the population
  60. 60. Approaches for Sampling Process Approach Assess the stability of the process over time Are shifts, trends, or cycles occurring? Do I take a special- or common-cause variation approach to process improvement? What do you want to study… the Process or the Output? Population Approach Make probability statements about the population from the sample I have 95% confidence that the mean of the population for Turn Around Time is between 1.5 and 2.5 seconds Process Data Population Data
  61. 61. Approaches for Sampling Measurement of AC Room Temperature: collect one data point every hour for the entire day. While studying the arrival rate of documents as dispatched by the customer, the entire day is split up into quadrants, the rationale being that the arrival rate of documents is similar within each quadrant and different between quadrants. Survey across our organization to know what percentage of our employees have visited the intranet in the last 7 days: select employees from the population at random and collect data. Average Cycle Time for the LC Issuance Process of different countries: each country is a stratum (segment); collect random data from each stratum. Approaches for Sampling Process Systematic Sampling Population Rational Sub-grouping Random Sampling Stratified Random Sampling
  62. 62. Approaches for Sampling Systematic sampling is typically used in process sampling situations when data is collected “real time” during process operation. Unlike population sampling, a frequency for sampling must be selected Systematic sampling involves taking samples according to some systematic rule – e.g., every 4th unit, the first five units every hour, etc. One danger of using systematic sampling is that the systematic rule may match some underlying structure and bias the sample For example, the manager of a billing center is using systematic sampling to monitor processing rates. At random times around each hour five consecutive bills are selected and the processing time measured Like random samples, stratified random samples are used in population sampling situations, when reviewing historical or batch data Stratified random sampling is used when the population has different groups (strata) and we need to ensure that those groups are fairly represented in the sample. In stratified random sampling, independent samples are drawn from each group. The size of each sample is proportional to the relative size of the group
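The systematic and stratified schemes described above can be sketched in Python (a hedged illustration: the function names are ours, and the strata names and sizes for the LC Issuance example are invented):

```python
import random

def systematic_sample(items, k, start=0):
    """Systematic sampling: take every k-th unit, beginning at `start`."""
    return items[start::k]

def stratified_sample(strata, total_n, seed=0):
    """Stratified random sampling with proportional allocation: each
    stratum contributes a share of the sample proportional to its share
    of the population, drawn at random within the stratum."""
    rng = random.Random(seed)
    population = sum(len(items) for items in strata.values())
    return {name: rng.sample(items, round(total_n * len(items) / population))
            for name, items in strata.items()}

# Countries as strata for the LC Issuance cycle-time study (sizes invented):
strata = {"IN": list(range(600)), "SG": list(range(300)), "HK": list(range(100))}
picked = stratified_sample(strata, total_n=50)
# Proportional allocation gives 30 from IN, 15 from SG and 5 from HK.
```

Note the danger flagged above still applies: if the systematic rule (every k-th unit) matches an underlying structure in the process, the sample will be biased.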
  63. 63. Rational Subgrouping What does it mean? Sample at point “A” in the process every Xth hour to discover the patterns in operation - hunch plays a role - data drives the decision Points to Remember Monitor the process frequently enough to catch it in various phases of operation Several small subgroups to be measured at different points in time Higher frequency of measurements for an unstable & unpredictable process The frequency can be lowered in the case of a stable process Processes working on lower Turn Around Times need a higher frequency, and vice versa 12noon to 12.20pm 12.45pm to 1pm 1.20pm to 1.45pm Subgroup of samples The Process Sample
  64. 64. Rational Subgrouping Rational subgrouping is the process of putting measurements into meaningful groups to better understand the important sources of variation Rational subgrouping is typically used in process sampling situations when data is collected “real time” during process operations. It involves grouping measurements produced under similar conditions, sometimes called short-term variation. This type of grouping assists in understanding the sources of variation between subgroups, sometimes called long-term variation Guidelines For Using Rational Subgrouping Form subgroups with items produced under similar conditions To ensure items in a subgroup were produced under similar conditions, select items produced close together in time Minimize the chance of “special causes” of variation within the subgroup, and maximize the chance for “special causes” between subgroups Subgrouping over time is the most common approach; subgrouping can be done by other suspected sources of variation (e.g., location, customer, supplier, etc.)
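The short-term (within-subgroup) versus long-term (between-subgroup) distinction can be sketched numerically (a hedged illustration with invented measurements):

```python
import statistics

def subgroup_variation(subgroups):
    """Estimate short-term variation (the average within-subgroup standard
    deviation) and overall variation (the standard deviation of all points
    pooled together) from rational subgroups of measurements."""
    within = statistics.mean(statistics.stdev(g) for g in subgroups)
    overall = statistics.stdev([x for g in subgroups for x in g])
    return within, overall

# Three subgroups taken close together in time (e.g. 20-minute windows):
groups = [[10.1, 10.2, 9.9], [12.0, 12.1, 11.9], [9.0, 9.1, 8.9]]
within, overall = subgroup_variation(groups)
# Overall variation far exceeds within-subgroup variation here, so most of
# the variation occurs *between* subgroups - a long-term source to chase.
```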
  65. 65. Sampling Bias Sampling Bias Opinion on the abolition of a Students' Union is being surveyed at a university… the target audience is students… a large number of those surveyed are involved in some political activity Deduction: 93% of the respondents were against the move. Bias occurs when systematic differences are introduced into the sample as a result of the selection process Such a sample is not representative of the population and will lead to incorrect conclusions about the population
  66. 66. Determining the Sample Size 3 metrics determine the Sample Size Level of confidence (Zc): The team needs to be sure of the adequate representation of the population by the sample chosen The term used for this is the level of confidence, which increases with an increase in the sample size Precision (Δ): How accurate is the result as a deduction from the sample… what level of accuracy is needed in my sample… what kind of process are we dealing with Precision increases as sample size increases Standard deviation of the population (s): How much variation is in the total data population? As the standard deviation increases, a larger sample size is needed to obtain reliable results Sample Size is not dependent on the “Population Size”
  67. 67. Sample Size for Continuous Data Zc is defined as the Z value at confidence level “c”; for example, at a 95% level of confidence the Z value is 1.96 The Sample Size (n) can then be determined as n = ((Zc × s) / Δ)² Hence at a 95% level of confidence, n = ((1.96 × s) / Δ)² The Six Sigma Team can thus infer that the population mean lies within the sample mean ± Δ with 95% confidence For service industry purposes we will use a 95% confidence interval What do we infer? The measured value can lie between these limits
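The continuous-data formula, n = ((Zc × s) / Δ)², can be sketched in Python (a hedged illustration: the function name and the example values are not from the deck):

```python
import math

def sample_size_continuous(s, delta, z_c=1.96):
    """Sample size needed to estimate a population mean to within +/- delta.

    s     : estimated population standard deviation
    delta : desired precision (half-width of the confidence interval)
    z_c   : Z value for the chosen confidence level (1.96 for 95%)
    """
    return math.ceil((z_c * s / delta) ** 2)  # round up: no fractional samples

# e.g. s = 10 minutes of turnaround-time variation, precision +/- 2 minutes:
# n = (1.96 * 10 / 2)^2 = 96.04, rounded up to 97 samples.
n = sample_size_continuous(s=10, delta=2)
```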
  68. 68. Sample Size for Discrete Data p is the estimate of the proportion defective for the population The maximum sample size is obtained at p = 0.5 The Sample Size (n) can then be determined as n = (Zc / Δ)² × p(1 − p) Hence at a 95% level of confidence, n = (1.96 / Δ)² × p(1 − p) The Six Sigma Team can thus infer that the population proportion defective lies within p ± Δ with 95% confidence What do we infer? The measured value can lie between these limits
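Similarly, the discrete-data formula, n = (Zc / Δ)² × p(1 − p), can be sketched (a hedged illustration; the function name and numbers are ours):

```python
import math

def sample_size_discrete(p, delta, z_c=1.96):
    """Sample size needed to estimate a proportion defective to within
    +/- delta; the requirement is maximized at p = 0.5 (the worst case)."""
    return math.ceil((z_c / delta) ** 2 * p * (1 - p))

# Worst case p = 0.5 with precision +/- 5 percentage points at 95% confidence:
# n = (1.96 / 0.05)^2 * 0.25 = 384.16, rounded up to 385.
n = sample_size_discrete(p=0.5, delta=0.05)
```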
  69. 69. Measurement System Analysis (MSA) TOTAL VARIANCE Apparent variation = Process variation + Measurement variation (the variances add: σ²apparent = σ²process + σ²measurement) Double Challenge: Reduce variation in both processes!
  70. 70. Terms of MSA Accuracy Repeatability Reproducibility Measures the differences between Observed average measurement against a Standard Measures variation when one Operator repeatedly takes the measurements of the same unit with the same measuring equipment Measures variation when various Operators measure the same unit with the same measuring equipment
  71. 71. GRR Example
  72. 72. GRR Example
  73. 73. GRR Example
  74. 74. GRR Example
  75. 75. Analyzing GRR Results Analyzing Gage R&R Results R&R% of Tolerance R&R less than 10% - Measurement System “acceptable” R&R 10% to 30% - May be acceptable – make decision based on classification of Characteristic, Application, Customer Input, etc. R&R over 30% - Not acceptable. Find problem, re-visit the Fishbone Diagram, remove Root Causes. Is there a better gage on the market, is it worth the additional cost? % Contribution (or Gage R&R StdDev): GR&R Variance should be “small” compared to Part-to-Part Variance Number of Distinct Categories < 2 - no value for process control, parts all “look” the same = 2 - can see two groups—high/low, good/bad = 3 - can see three groups—high/mid/low 4 or more acceptable measurement system (higher is better) Effective Resolution 50%, or more, of Xbar Chart outside control limits—implies part variation “exceeds” Measurement System variation
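The rules of thumb above can be encoded directly (a hedged sketch: the function names are ours, the %R&R thresholds come from the slide, and ndc = 1.41 × σ(part) / σ(gage) is the standard AIAG rule for distinct categories):

```python
def classify_grr(rr_pct_of_tolerance):
    """Classify a measurement system by its %R&R of tolerance."""
    if rr_pct_of_tolerance < 10:
        return "acceptable"
    if rr_pct_of_tolerance <= 30:
        # Decide based on characteristic, application, customer input, etc.
        return "may be acceptable"
    return "not acceptable"  # find the problem, revisit the fishbone

def distinct_categories(sd_part, sd_gage):
    """Number of distinct categories the gage can resolve (truncated);
    4 or more indicates an acceptable measurement system."""
    return int(1.41 * sd_part / sd_gage)
```

For example, a gage at 7% R&R of tolerance is acceptable, one at 20% is a judgment call, and one at 35% is not acceptable.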
  76. 76. AAA - Example
  77. 77. AAA - Example
  78. 78. AAA - Example
  79. 79. Analyzing AAA Results Within Appraiser Assessment Agreement graph shows you the consistency of each operator's answers. For each appraiser, the graph consists of the following: Blue circle, which provides the actual percent matched Red line, which provides a 95.0% confidence interval for percent matched Blue cross and blue X, which provide the lower and upper limits for the 95.0% confidence interval, respectively. Appraiser versus standard graph If you have a known standard for each part, then the percent matched between appraiser and standard is plotted. The Appraiser versus Standard Assessment Agreement graph shows you the correctness of each operator's answers. For each appraiser, the graph consists of the following: Blue circle, which provides the actual percent matched Red line, which provides a 95.0% confidence interval for percent matched Blue cross and blue X, which provide the lower and upper limits for the 95.0% confidence interval, respectively
  80. 80. Analyzing AAA Results Kappa If kappa = 1, there is perfect agreement. If kappa = 0, agreement is the same as would be expected by chance. The higher the value of kappa, the stronger the agreement between rating and standard. Negative values occur when agreement is weaker than expected by chance, but this rarely happens. Depending on the application, kappa less than 0.7 indicates the measurement system needs improvement; kappa values above 0.9 are considered excellent. Kendall's Use the p-values to choose between two opposing hypotheses, based on your sample data: H0: there is no association among multiple ratings made by an appraiser; H1: the ratings are associated with one another. The p-value gives the likelihood of obtaining your sample, with its particular Kendall's coefficient of concordance, if the null hypothesis (H0) is true. If the p-value is less than or equal to a predetermined level of significance (α-level), you reject the null hypothesis and claim support for the alternative hypothesis. Here, rejecting the null is the desirable outcome, since it indicates the agreement among ratings is statistically significant rather than due to chance.
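As an illustration of the kappa statistic described above, Cohen's kappa for two raters can be computed from an agreement table; the 2×2 table in the example is made-up data, not from the slides.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table, where
    table[i][j] = count of items rater A classed as i and rater B as j.
    kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_chance = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Two appraisers rating 50 parts good/bad (hypothetical counts):
kappa = cohens_kappa([[20, 5], [10, 15]])   # 0.4 - needs improvement per the slide's 0.7 rule
```

By the slide's guideline, a kappa of 0.4 flags the attribute measurement system for improvement.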
  81. 81. Analyzing AAA Results Choosing between Kappa statistics or Kendall's coefficients When your classifications are nominal (true/false, good/bad, crispy/crunchy/soggy), use kappa statistics. When your classifications are ordinal (ratings made on a scale), in addition to kappa statistics, use Kendall's coefficient of concordance. When your classifications are ordinal and you have a known standard for each trial, in addition to kappa statistics, use Kendall's correlation coefficient. Kappa statistics represent absolute agreement among ratings. Kappa statistics treat all misclassifications equally. Kendall's coefficients measure the associations among ratings. Kendall's coefficients do not treat all misclassifications equally. The consequences of misclassifying a perfect (rating = 5) object as bad (rating = 1) are more serious than misclassifying it as very good (rating = 4).
  82. 82. Work out session – Measurement System Analysis
  83. 83. Tools Displaying Data Variation Discrete data: Pie Chart, Bar Chart. Continuous data: Frequency Diagram, Box Plot, Histogram (for a period of time); Run Chart (over a period of time).
  84. 84. Characteristics of Data A set of data points can be compared and interpreted based on their central tendency and variation. Central Tendency: Mean, Median, Mode. Dispersion (Variation): Standard Deviation (SD), Range, Span, Quartile Values, Stability Factor.
  85. 85. Central Tendency Consider the time taken for people to withdraw cash (including queue time) from two different ATM counters: which counter is better? Mean: the arithmetic average; can get skewed by extreme observations; deviations are measured as distances from the mean. Median (the “middle” data point): the position within the ordered data set determines the median; it takes all values into account but does not get skewed, so the median is not affected by the presence of extreme values. Mode: the data point which occurs most often; used extensively with discrete data in ordered categories.
  86. 86. Dispersion Standard Deviation (When central tendency is mean) Range (When central tendency is mean) Span (When central tendency is median) How much does each point in the data set differ from the mean The aggregate score can help determine variation of the process Range is the difference between the highest and the lowest value of a set of data points The range of the data helps determine the span Extreme 5% observations on either side can be discounted s
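The central-tendency and dispersion measures above can be computed with Python's standard `statistics` module; the ATM wait times below are made-up illustration data.

```python
import statistics

# Hypothetical ATM withdrawal times in minutes (not from the slides)
data = [4.2, 4.8, 5.0, 5.1, 5.3, 5.5, 5.9, 7.4]

mean = statistics.mean(data)            # arithmetic average: 5.4
median = statistics.median(data)        # middle value: 5.2
stdev = statistics.stdev(data)          # sample standard deviation
value_range = max(data) - min(data)     # highest minus lowest: 3.2
```

Note that the single extreme value 7.4 pulls the mean above the median, the skewing effect the slide warns about.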
  87. 87. Normal Curve A normal curve originates from a histogram. A histogram is a frequency distribution chart showing the number of times a given value of the parameter we are trying to measure occurs. Minitab can be used to display a histogram, a histogram with a normal curve, and other graphical summaries to be learnt later. Menu pull-down sequence in Minitab: Stat > Basic Statistics > Display Descriptive Statistics > Graphs. (Figure: a histogram showing the number of times “55” has occurred in the data, with the normal curve generated by Minitab for curve fitting.)
  88. 88. Normal Curve Properties of a Normal Curve: The curve never touches the X axis…asymptotic Mean, Median and Mode coincide Curve can be divided in half with equal pieces falling either side of the most frequently occurring value The peak of the curve represents the center of the process The area under the curve represents virtually 100% of the product the process is capable of producing 68.26% 95.46% 99.73% 34.13% 34.13% 13.60% 13.60% 2.14% 2.14% 0.13% 0.13% -3s -2s -1s 0 +1s +2s +3s Percentage of Data getting covered under the curve
  89. 89. Normality Plot Stat > Basic Statistics > Normality Test Generates a normal probability plot and performs a hypothesis test to examine whether or not the observations follow a normal distribution. The minitab output of normality plot for a data set Menu Pull-Down Sequence in Minitab ‘ P-Value’ Determines the normality of the data
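Minitab's normality test (Anderson-Darling by default) reports the p-value mentioned above. As an illustrative stand-in using only the standard library, a rough normality check can be built from sample skewness and kurtosis (the Jarque-Bera statistic, not what Minitab uses, but the same idea: a test statistic compared against a threshold).

```python
def jarque_bera(data):
    """Rough normality check: Jarque-Bera statistic from sample
    skewness and kurtosis. Under normality JB ~ chi-square(2), so
    JB > 5.99 rejects normality at about the 5% level. Illustrative
    only; Minitab's normality test uses Anderson-Darling instead."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n   # variance (population form)
    m3 = sum((x - mean) ** 3 for x in data) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in data) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
```

Symmetric data yields a small JB; heavily skewed data (one giant outlier, say) yields a much larger one.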
  90. 90. Normal Probability Plot (Figure: four panels pair histograms with their normal probability plots. A roughly normal distribution plots as a straight line; a long-tailed distribution plots as an “S” curve; a bimodal distribution plots as a bimodal curve; an exponential distribution plots as an exponential curve.)
  91. 91. Types of Distribution The shape of the distribution depicted by the data can be studied with the help of a histogram or, more precisely, with a normal probability plot. (Figure: histograms of a roughly normal distribution, a bimodal distribution, and an exponential distribution.)
  92. 92. Quartile Value Quartile Values (Q1, Q3) Of great Use when the data is highly skewed Aims at dividing the data set in 4 quartiles and measuring the range in each subsequently Each Quartile contains 25 percentile of the data Q1 Q3
  93. 93. Stability Factor Skewed or Stable Ops Distributions: the stability factor (SF) is the ratio of the first quartile to the third quartile. The formula for the stability factor is SF = Q1/Q3. The closer SF is to 1, the less variation there is in the process; the further SF is from 1 (toward 0), the more variation there is. (Figure: sketches of Q1 and Q3 for distributions with SF < 1, more variation, and SF near 1, less variation.)
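The SF = Q1/Q3 ratio defined above is straightforward to compute with the standard library; the two data sets below are hypothetical.

```python
import statistics

def stability_factor(data):
    """Stability factor SF = Q1/Q3, as defined on the slide.
    Values near 1 indicate little spread relative to the median;
    values near 0 indicate heavy spread."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # default 'exclusive' method
    return q1 / q3

spread_out = stability_factor([1, 2, 3, 4, 5, 6, 7])          # Q1=2, Q3=6 -> 0.33
tight = stability_factor([9, 10, 10, 10, 10, 10, 11])          # Q1=Q3=10 -> 1.0
```

Be aware that quartile conventions differ between tools; `statistics.quantiles` also offers an `'inclusive'` method if you need to match another package's numbers.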
  94. 94. Box Plot Box Plot is another tool to visually display process dispersion. (Figure labels: highest value; third quartile (75%) value; median; first quartile (25%) value; lowest value. Each segment represents 25% of the data points; asterisks (*) mark outliers.)
  95. 95. Box Plot Box Plots are graphical summaries of the patterns of variation in sets of data The horizontal lines at the top and bottom of the plot represent the highest and lowest values beyond which the data becomes an outlier. The horizontal line in the middle of the solid box represents the median, and the solid box represents the middle 50% of the data, with 25% of the data on the top side of the box and 25% of the data on the bottom side of the box Points noted with an asterisk(*) represent extreme values or outliers
  96. 96. Descriptive Statistics Descriptive Statistics can be run on Minitab. The result will be shown as above. One would be able to get information on Normality of the data, visual display of data ( Normal curve), Box Plot, outliers, Mean, Median, Standard Deviation, Quartile values. Stat > Basic Statistics > Display Descriptive Statistics Menu Pull-Down Sequence in Minitab
  97. 97. Run Chart A tool used to depict the behavior of the process under investigation over a period of time A run is defined as a single point or a series of sequential points where no point is on the other side of the median Time Median Parameter
  98. 98. Patterns Observed in Run Charts Shift / Mixtures Trend Same Value / Clusters Cycle / Oscillation
  99. 99. Run Charts
  100. 100. Run Charts
  101. 101. Causes of Variations To distinguish between common-cause and special-cause variation, use display tools that study variation over time, such as run charts and control charts. Common Cause characteristics: inherent to the process, expected, predictable, normal, random. Special Cause characteristics: not always present, unexpected, unpredictable, not normal, not random.
  102. 102. Analyze Confirm the assumptions
  103. 103. Analyze Deliverables: Process Sigma, Verified Root Causes, Estimate of Benefit. 1 PROCESS CAPABILITY: baselining, DPMO/DPU, Cp/Cpk, calculate sigma. 2 QUANTIFY THE OPPORTUNITY: determine the quantum of benefits from root causes. 3 IDENTIFY POSSIBLE CAUSES: segmentation & stratification, sub-process mapping, process map analysis, graphical analysis tools, work value analysis, MUDA/MURI. 4 NARROW TO ROOT CAUSES: hypothesis testing, ANOVA, t-tests, chi-square test, regression analysis.
  104. 104. Rolled Throughput Yield Final Yield (FY) Yield at the end of the process excluding scrap Final Yield, FY = U/S = Units Passed/Units Submitted FY = (100-30)/100 = .70 or 70% Step 1 Step 2 Step 3 70 Units 100 Units 10 Units 10 Units 10 Units Scrap
  105. 105. Rolled Throughput Yield Final Yield Ignores the Hidden Factory Step 1 Step 2 Step 3 70 Units 100 Units 10 Units 10 Units 10 Units Scrap Rework 10 Units 10 Units 10 Units HIDDEN FACTORY
  106. 106. Rolled Throughput Yield Classical Yield Ignores the Role of Hidden Factory Operation Verify Product Rework HIDDEN FACTORY Scrap
  107. 107. Rolled Throughput Yield Rolled Throughput Yield (RTY) is the product of the throughput yields across the entire process, i.e. the probability of zero defects. In the example (100 units in, 10 scrapped and 10 reworked at each of Steps 1–3, 70 good units out): per-step FTY = 90%, 88.9%, 87.5% (overall FTY 70%); per-step TPY = 80%, 77.8%, 75.0%; RTY = 0.80 × 0.778 × 0.75 = 46.7%.
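The RTY product can be reproduced in a couple of lines; the three step yields are the ones from the example above.

```python
def rolled_throughput_yield(step_yields):
    """RTY = product of the throughput yields (TPY) of every process
    step, i.e. the probability a unit passes all steps defect-free."""
    rty = 1.0
    for tpy in step_yields:
        rty *= tpy
    return rty

# Step TPYs from the slide's example: 80%, 77.8%, 75.0%
rty = rolled_throughput_yield([0.80, 0.778, 0.75])   # about 0.467
```

Because RTY multiplies yields, it punishes long processes: even three fairly good steps compound down to under 47%.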
  108. 108. Rolled Throughput Yield With an RTY of 46.7% (0.80 × 0.778 × 0.75) once the hidden factory of scrap and rework is counted, the questions become: How long can we sustain this situation? How much is this performance level costing us? Where should we focus to improve the process?
  109. 109. Sigma Calculation Process Sigma (Zst) vs. Defects Per Million Opportunities (DPMO): 6σ = 99.99966% yield, 3.4 DPMO; 5σ = 99.977%, 233 DPMO; 4σ = 99.379%, 6,210 DPMO; 3σ = 93.32%, 66,807 DPMO; 2σ = 69.1%, 308,537 DPMO. (Percentage is yield with respect to performance standards; DPMO counts defects in the process.)
  110. 110. Common Terms Critical-To-Quality Characteristics (CTQ): Customer performance requirements of a product Defect: Any event that does not meet the specifications of a CTQ Defect Opportunity: Any event which can be measured that provides a chance of not meeting a customer requirement Defective: A unit with one or more defects Dashboard: Graphical representation of process Measures providing a snapshot of process health
  111. 111. Sigma Calculation Calculating Process Sigma: record the number of units processed / sample size observed (N), the number of defects observed (D), and the number of opportunities for a defect per unit (O). Defects Per Opportunity (DPO) = D / (N × O). Defects Per Million Opportunities (DPMO) = D / (N × O) × 1,000,000. Look up the sigma value in the sigma table. Yield = 1 − proportion defective (the goodness of the process output).
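The DPMO arithmetic and the sigma-table lookup can be sketched as follows; `NormalDist().inv_cdf` plays the role of the abridged sigma table, and the conventional 1.5σ long-term shift is applied to report short-term sigma. The example counts are hypothetical.

```python
from statistics import NormalDist

def process_sigma(defects, units, opportunities, shift=1.5):
    """DPMO and short-term process sigma. Z_st = Z_lt + 1.5 by the
    usual long-term shift convention; inv_cdf replaces the printed
    sigma table."""
    dpo = defects / (units * opportunities)
    dpmo = dpo * 1_000_000
    sigma = NormalDist().inv_cdf(1 - dpo) + shift
    return dpmo, sigma

# Hypothetical: 34 defects over 100,000 units with 100 opportunities each
dpmo, sigma = process_sigma(34, 100_000, 100)   # 3.4 DPMO -> about 6 sigma
```

Running the table's other rows through the same function (e.g. 308,537 DPMO) reproduces the familiar 2σ entry.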
  112. 112. Abridged Sigma Table
  113. 113. How to improve the process performance The statistical problem can be deduced from the data by concentrating our efforts in two directions. The Six Sigma challenge: center the mean and/or reduce the variation. Are we missing the target mean? Or is the problem the variation?
  114. 114. The Statistical Problem Shift the process mean (technology case) The process variation is well under control The problem lies with the centering or the mean target of the process output Indicates a need for a look at the technology being used for the process Starting with a DMAIC….if need be a DMADV/DFSS approach can also be taken up Target Target Reduce the process variation (control case) The process is capable of performing around the mean target of the process However the variation is beyond acceptable limits The technology is supportive but the control mechanisms are not very effective Look at process improvements USL LSL USL LSL
  115. 115. Process Capability What is it and how does it help us? Process capability is a simple tool which helps us determine whether a process, given its natural variation, is capable of meeting the customer requirements or specifications. It tells us if the process is capable, helps determine if there has been a change in the process, and enables the Six Sigma team to determine the percentage of the product/service not meeting the customer requirement, to enable root cause analysis. A stable process can be represented by a measure of its variation: six standard deviations. Process capability is derived by comparing six standard deviations of the process variation to the customer specifications: Cp = (USL − LSL) / 6s, where Cp is the process capability, USL the upper specification limit, LSL the lower specification limit, and s the estimated standard deviation. While Cp looks at the spread of variation, it does NOT look at how well the process is centered on the target value. Menu pull-down sequence in Minitab: Stat > Quality Tools > Capability Analysis (Normal).
  116. 116. Process Capability Process Variation exceeds Specification in this case. Process is producing Defectives Process is just about meeting the specification but Centering needs attention from the six sigma team Process Variation well under control. Keep a check on the centering of the process Upper Spec Limit Lower Spec Limit +1 +2 +3 -1 -2 -3 C p <1 C p =1 C p >1
  117. 117. Other Indices The indices Cpl and Cpu (for single-sided specification limits) and Cpk (for two-sided specification limits) measure process variation with respect to specifications and the location of the process average: Cpl = (X̄ − LSL) / 3s and Cpu = (USL − X̄) / 3s. Cpk is considered a measure of process capability and is taken as the smaller of Cpl and Cpu: Cpk = min(Cpl, Cpu). Target a Cp greater than 1 and a Cpk near 1; unlike Cp alone, Cpk lets us estimate process capability in terms of centering. Last but not least, Minitab helps us do all of it.
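The Cp/Cpk definitions above translate directly into code; the means, standard deviation, and spec limits in the examples are hypothetical.

```python
def capability(mean, stdev, lsl, usl):
    """Cp and Cpk as defined on the slides:
    Cp  = (USL - LSL) / 6s          (spread only)
    Cpk = min((mean - LSL) / 3s,
              (USL - mean) / 3s)    (spread and centering)."""
    cp = (usl - lsl) / (6 * stdev)
    cpl = (mean - lsl) / (3 * stdev)
    cpu = (usl - mean) / (3 * stdev)
    return cp, min(cpl, cpu)

centered = capability(mean=10, stdev=1, lsl=4, usl=16)   # Cp = Cpk = 2.0
off_center = capability(mean=12, stdev=1, lsl=4, usl=16) # Cp = 2.0, Cpk = 1.33
```

The second call shows why Cpk matters: the spread is unchanged, but shifting the mean toward a spec limit drags Cpk down while Cp stays the same.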
  118. 118. Control Impact Matrix When we know the possible root causes, can we attack all of them? The matrix crosses Impact (high/low) with In Our Control (high/low): high impact and in our control – target it now; low impact and in our control – why wait, just do it; high impact and out of our control – the difficult piece, use a change-management strategy; low impact and out of our control – check the effort vis-à-vis the results.
  119. 119. Control Impact Matrix Classify all the causes that the group arrives at in the brainstorming session into the Control and Impact Matrix. Use your team's process knowledge and business experience to list possible Xs in a Control/Impact Matrix, then use process data to verify or disprove the placement of the Xs. Prioritization steps: using the Control/Impact Matrix, examine each X in light of two questions – what is the impact of this X on our process, and is this X in our team's control or out of it? With your team, place each X in the appropriate box on the matrix, then use process data to verify or disprove your assertions. The validated matrix is a guide to addressing the Xs; begin with the “High Impact / In Our Control” category. “Out of Our Control” Xs may require special solutions and CAP (Change Acceleration Process) tools for successful, sustainable solutions.
  120. 120. Segmentation and Stratification Segmentation: a process used to divide a large group of data into smaller, logical categories for analysis. Segmentation is commonly used in day-to-day business to understand and interpret information. Example: segmentation of customers based on cities. Stratification: a process which uses summary metrics (central tendency, dispersion) to decide when to separate different processes for continued analysis. Unlike segmentation, stratification involves the use of data rather than just “information”: values of central tendency and dispersion are used to stratify the given data set. Example: stratification of customers based on the business volumes they provide us.
  121. 121. Segmentation Tools Can be used to represent the % break up or the absolute numbers Use excel to develop the graph Further break up of each slice is also possible Various combinations & patterns of the Bar chart make it the most widely used tool Can depict process performance in various pockets as against the targets Multiple axes can help depict comparative effects of factors Pie Charts Bar Charts
  122. 122. Pareto Chart Approximately 80% of defects come from defect types D, B and F: 80% of the effect on Y is caused by 20% of the factors (X). (Figure: frequency bars per defect type – D, B, F, A, C, E, Other – with a cumulative-percentage line, the Cumulative Summation or Cum Sum line, read against a right-hand 0–100% axis. Number of units investigated: 8,000, April 1 – June 30. Legend: A = typographical errors, B = incomplete info, C = sign not verified, D = illegible, E = process understanding, F = poor scan quality.)
  123. 123. Pareto Chart A fully documented Pareto chart will include the “Cumulative Summation line” or Cum Sum line, which depicts the running total of the frequency of each subsequent bar (stratification level). The right- hand axis on the graph will show the cumulative percent of defects. By reading the Cum Sum line against the cumulative percentage, your team can determine which of the stratification levels comprise 80% of total for the problem, and direct their attention to those levels The pareto chart is an important tool to further focus improvement efforts. However, the top bars on the chart are not root causes and the team may need to fully investigate each category in order to pinpoint the root cause. Like the “Five Whys”, the pareto chart can be used in an iterative fashion in order to cascade deeper into a process
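Building the Cum Sum line described above is a simple sort-and-accumulate; the defect tallies in the example are hypothetical, not the slide's 8,000-unit data.

```python
def pareto(counts):
    """Sort defect categories by frequency (largest first) and attach
    the cumulative percentage used to find the 'vital few'."""
    total = sum(counts.values())
    ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    running, rows = 0, []
    for name, count in ordered:
        running += count
        rows.append((name, count, round(100 * running / total, 1)))
    return rows

# Hypothetical defect tallies keyed by the slide's category letters
rows = pareto({"D": 180, "B": 120, "F": 90, "A": 50, "C": 30, "E": 20, "Other": 10})
```

Reading down the cumulative column, the first categories to cross roughly 80% (here D, B, F at 78%) are the ones to attack first.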
  124. 124. Pareto Chart The Pareto chart can be used in the Define phase to narrow the scope of the project and focus on key factors, if historical data is available. It can be used in the Measure and Analyze phases to analyze the data gathered and identify the 20% of root causes that drive 80% of the effect. A Pareto chart should not be applied repeatedly in series to the same set of causes, as this narrows the focus area to 80% of 80%, i.e. 64%.
  125. 125. Process Mapping Points to Remember: People who work on the process know it the best. Involve people who know (focus on) the “As Is Process” Decide, Clarify and Agree upon Process Boundaries Use Group Activities like Brainstorming Use verb - noun format (e.g., Prepare contract not contracting ) Don’t aim at the person taking care of the activity Respect the boundaries Don’t start “problem solving” Validate and refine before analyzing
  126. 126. Process Mapping Earlier in Define, you developed a high-level or SIPOC process map. By looking at a process from a “big picture” perspective, you evaluated customer needs and supplier inputs, and determined initial measurement objectives Now, you will look in more detail at the sub processes defined in the SIPOC map. Sub process maps provide specifics on the process flow that you can then analyze using several useful techniques. Choose what to sub process map by determining which of the major steps in the SIPOC have the biggest impact on the output (Ys). The block (or blocks) selected is the one on which you create a sub process map – using it to understand how and why it impacts the output If the output is a time measure, which of the blocks consumes the largest portion of total time, or which one has the most variation or delays? If the output is a cost measure, which of the blocks adds the most cost? If the output is a function measure, which block has the most errors or problems? Like working with a puzzle, you begin to assemble the pieces of an area on which it makes sense to focus our efforts
  127. 127. Sub Process Mapping Core Process Business Process SIPOC Sub Process map Suppliers S (ext.) Customers (int.) Cust. Service Dept. C Tasks Procedures
  128. 128. Sub Process Mapping Here are some guidelines on building a sub process map. These are not absolute – but they should help you avoid some of the pitfalls of process mapping: Focus on “As Is” – To find out why problems are occurring in a process, you need to concentrate on how it’s working now Clarify Boundaries – If you’re working from a well-done high level map, this should be easy. If not, you’ll need to clarify start and stop points Brainstorm Steps – It’s usually much easier to identify the steps before you try to build the map Starting each step description with a verb (e.g., “Collate Orders”; “Review Credit Data”) helps you focus on action in the process Who does the step is best left in parentheses (or left out) – you want to avoid equating a person with the process step
  129. 129. Examples: Sub Process Mapping Process flowchart Top Down Flow Chart Deployment Or cross-functional map/flowchart No Yes Planning for a party Determine Party Size Find Location Invite Guests 1.0 2.0 3.0 Decide on Budget Decide Theme Complete Invitations Decide on Guest list Select Location Send Invitations 1.1 2.1 3.1 1.2 2.2 3.2 Dept 1 Dept 2 Dept No Creates List Writes invitation Yes Is the Guest List covered Sends out the invitation Completes List Invitation task completed
  130. 130. LEAN Muda means waste, where waste is any activity that does not add value. Reducing or eliminating muda is, of course, one of the fundamental objectives of any quality-oriented person. The wastes: defects; overproduction; inventories; unnecessary processing; unnecessary movement of people; unnecessary transport of goods; waiting; designing goods and services that don't meet customers' needs. Muda is one of the '3Ms': muda (waste), mura (irregular, uneven or inconsistent), and muri (unreasonable or excessive strain).
  131. 131. Types of Waste Type of waste – example: Complexity – unnecessary steps, excessive documentation, too many permissions needed. Labor – inefficient operations, excess head count. Overproduction – producing more than the customer demands, or before the customer needs it. Space – storage for inventory, parts awaiting disposition or rework, scrap storage, excessively wide aisles, other wasted space. Energy – wasted power or human energy. Defects – repair, rework, repeated service, multiple calls to resolve problems. Materials – scrap, ordering more than is needed. Idle materials – material that just sits; inventory. Time – wasted time. Transportation – movement that adds no value. Safety hazards – unsafe or accident-prone environments.
  132. 132. Work Value Analysis VALUE-ADDED WORK: steps that are essential for the process, change the form of the input in some way, are done right the first time, and that the customer is willing to pay for (example: processing the data input). NON-VALUE-ADDED WORK: steps that are non-essential for the process, do not change the form of the input, are repetitive in nature, and that the customer is NOT willing to pay for (examples: app waiting in the queue, rechecking). VALUE-ENABLING WORK: steps that are essential for the process even if the customer is not willing to pay for them, like training for employees (example: developmental training).
  133. 133. Work Value Analysis Value-enabling work is just a different type of nonvalue-added work. While this terminology can be difficult, it does reflect the perspective: “If the customer isn't willing to pay for it, it must be nonvalue-added, by definition.” Value-enabling tasks and steps may still be needed given today's conditions; consider what it would be like if things were perfect. Clues to nonvalue-added work. Symptoms: your organization is grouped by functions; to get work done, you need multiple approvals; your policies address only control issues. REs are things in your process that are done more than one time (REdo, REcall, REissue). REs usually cause you to loop back to an earlier point in your process. REs consume time, add to the complexity, use additional resources, and cost the organization more in dollar terms.
  134. 134. Cycle Time Analysis Experts say that 66% of the time spent in a process can be targeted for reduction; it is usually spent in activities like movement, storage, waiting time, etc. With a simple cycle time analysis we can select and target such potential areas/activities. (Example flow: movement of the goods 5 hrs, processing 2 hrs, movement 4 hrs, processing 4 hrs, output waiting to be shipped 4 hrs.) The entire process is taking 1.5 days: can we reduce the TAT?
  135. 135. Cycle Time Constituents Process Time + Delay Time = Total Cycle Time. Where and why are we spending the most time? What kind of activities are these: VA / NVA / VE? Why should we retain activities which are NVA and contribute to the total TAT? (Example: 19 process steps with times, in minutes, of 1, 120, 15, 120, 3, 180, 7, 1, 120, 5, 10, 15, 90, 15, 120, 2, 120, 5 and 8, totalling 957 minutes, i.e. 100%.)
  136. 136. Process Flow Analysis Looking at the process together: work flow and value analysis. For the 19-step, 957-minute example, the time splits as: Value-Added 48 min (5%); Nonvalue-Added – Internal Failure 180 min (18.8%), Control/Inspection 8 min (0.8%), Delay 690 min (72.1%), Move 30 min (3.1%); Value-Enabling 1 min (0.1%); Total 957 min (100%).
  137. 137. Process Flow Analysis Cycle Time The total time from the point in which a customer requests a good or service until the good or service is delivered to the customer. Cycle time includes process time and delay time Process Time The total time that a unit of work is having something done to it other than time due to delays or waiting . It includes the time taken for value-added steps, internal failure, external failure, control, inspection, preparation/set-up, and move Delay Time Total time in which a unit of work is waiting to have something done to it Linking value analysis with cycle time analysis creates a compelling business case for change Displaying this information on process maps emphasizes focus areas for improvement and begins building data for Quantify the Opportunity Tips/Traps Don’t get stuck on deciding if a step is value-added or nonvalue-added Allow team to discuss it Consensus with 2/3’s majority vote Push team to make process map a true “as is.” The ratio for an average “as is” map is 20% value-added, 80% nonvalue-added
  138. 138. Hypothesis Testing What is Hypothesis Testing? When you suspect some of the Xs to cause differences (variation) on the Y, then that is the ‘hypothesis’. Hypothesis Testing involves statistical validation of the Hypothesis. Examples : Two of the processors (X) have same efficiency (Y) There has been no increase in business volumes (Y) this year (X) compared to last When to test for Hypothesis ? Increases your confidence that probable Xs are statistically significant Used when you need to be certain that a statistical difference exists
  139. 139. Hypothesis Testing What kind of differences are we trying to detect with the help of Hypothesis Testing? Continuous data: Differences in averages Differences in variation Differences in distribution “shape” of values Discrete data: Differences in proportions
  140. 140. Types of Hypothesis Testing There are different types of the Hypothesis Tests. The type of the test depends on Statistical definition (Discrete or Continuous) of Xs & Y Each of these have a differential statistical treatment However the interpretation of results are common. Normal Data A N O V A 1- Sample t Test 2- Sample t Test Regression Chi-Square Test
  141. 141. Hypothesis Testing ANOVA – The simplest type of problem studied with the analysis of variance is one in which a single factor is under consideration, and this factor possesses several categories or levels which could have a possible effect on a variable of interest. For example, suppose an office manager wants to know which of four brands of typewriters should be purchased for the office; the hypothesis tested is that all brands of typewriters yield the same result. Regression – Regression analysis is useful for two primary purposes: it can be used to predict the value of one variable when the value(s) of other related variable(s) is (are) known, and it can be used to examine whether there is a relationship between two or more variables and to describe the strength of that relationship.
  142. 142. Hypothesis Testing One sample t test – Example: A state agro canning company uses an automatic machine to fill cans to a specified weight. As the automatic machine has a tendency to drift out of adjustments, the quality assurance department draws a random sample of 20 cans from each day’s production and weighs these cans. The average weight of the sample can is found to be 485 g instead of the specified weight of 500 g and the sample standard deviation was 16.5 g. Would you specify, on the basis of the sample information, a readjustment for the automatic machine? Two sample t test – Test concerned two populations and two samples. Is the average expenditure of the male customer is equal to the average expenditure of the female customer. Sales of the product before and after an advertising campaign.
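The canning example above can be worked through numerically: with only summary statistics available, the one-sample t statistic is computed directly from t = (x̄ − μ0) / (s / √n).

```python
import math

def one_sample_t(sample_mean, mu0, s, n):
    """One-sample t statistic from summary statistics:
    t = (sample mean - hypothesized mean) / (s / sqrt(n))."""
    return (sample_mean - mu0) / (s / math.sqrt(n))

# Canning example from the slide: x-bar = 485 g, mu0 = 500 g,
# s = 16.5 g, n = 20 cans
t = one_sample_t(485, 500, 16.5, 20)   # about -4.07
```

Since |t| ≈ 4.07 far exceeds the two-sided 5% critical value for 19 degrees of freedom (about 2.093), the sample supports readjusting the machine.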
  143. 143. Hypothesis Testing of Mean – Roadmap Parametric tests for variance: F test and Bartlett's test; Levene's test. Non-parametric alternatives: Mann–Whitney for the 2-sample t test; Sign test for the 1-sample t test; Mood's Median for ANOVA. Decision flow: Are the Ys continuous? If Y is discrete, use the χ² (Chi-Square) test. If Y is continuous and the Xs are discrete: comparing to a standard → 1-Sample t-Test; comparing only 2 groups → 2-Sample t-Test; multiple groups → ANOVA. If the Xs are continuous, see Regression topics.
  144. 144. Minitab Directions There are two ways to make a two-group comparison, depending on how the data is recorded in Minitab: if each group is in its own column, select the Samples in different columns option; if all the data is in one column with factor labels in a second column, select the Samples in one column option. While you can assume that the variation in the two groups is equal, it is usually more appropriate to leave the “Assume equal variances” box unchecked. Likewise, there are two ways to compare several groups: if all the response data is in one column and the group identity labels are in another column, choose the One-way option; if each group is in its own column, choose the One-way (Unstacked) option. Menu paths: Stat > Basic Statistics > 1-Sample t; Stat > Basic Statistics > 2-Sample t; Stat > ANOVA; Stat > Tables > Chi-Square Test.
  145. 145. T-test: An example The t-test is used when we wish to compare the average performance of two groups. The statistical test examines whether the data supports or disputes the Null Hypothesis – no difference between the means of the two populations: μ1 = μ2. Alternate Hypothesis – the means are “not equal” (μ1 ≠ μ2) when you don’t have a clear preference, or “less than” / “greater than” (μ1 < μ2 or μ1 > μ2) when you do. Example data: brand A bulbs vs. brand B bulbs.
  146. 146. One Sample t Test
  147. 147. One Sample t Test
  148. 148. Homogeneity of Variance
  149. 149. Homogeneity of Variance
  150. 150. Two sample t test
  151. 151. Two sample t test
  152. 152. ANOVA
  153. 153. Chi Square Test – An example A process was producing 106 non-defectives and 376 defectives per day. After improvement it was performing at a level of 503 non-defectives and 0 defectives. Do the two processes differ significantly? Observed (expected) counts – Before: 106 (298.01) non-defectives, 376 (183.99) defectives, total 482; After: 503 (310.99) non-defectives, 0 (192.01) defectives, total 503; column totals: 609 non-defectives, 376 defectives, grand total 985. Chi-Sq = 123.712 + 200.374 + 118.547 + 192.008 = 634.640, DF = 1, P-Value = 0.000. Since the p value is less than 0.05, we reject the Null Hypothesis: the two processes are proved to be significantly different from each other by the Chi Square test on the before and after scenarios.
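The chi-square statistic on this slide can be reproduced from the observed counts alone. A minimal plain-Python sketch (the helper name `chi_square_2x2` is made up for illustration; each expected count is row total × column total / grand total):

```python
def chi_square_2x2(observed):
    """Chi-square statistic for a 2x2 table of counts [[a, b], [c, d]]."""
    row_totals = [sum(r) for r in observed]
    col_totals = [sum(c) for c in zip(*observed)]
    grand = sum(row_totals)
    chi_sq = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / grand  # expected count
            chi_sq += (o - e) ** 2 / e
    return chi_sq

# Before: 106 non-defectives, 376 defectives; After: 503 non-defectives, 0 defectives
stat = chi_square_2x2([[106, 376], [503, 0]])
print(round(stat, 2))  # ~634.64 with df = 1, matching the slide
```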
  154. 154. Chi Square
  155. 155. Chi Square
  156. 156. Chi Square
  157. 157. Types of Hypothesis Testing Non Normal Data The following tests are used when the data is not normal: Kruskal-Wallis, Runs Test, Mann-Whitney, 1-Sample Sign Test, Mood's Median Test, Chi Square.
  158. 158. Hypothesis Testing 1 Sample sign test – Used to test whether the sample median differs from a hypothesized value by analyzing the signs of the differences. Run Test – Used to test the sample for randomness.
  159. 159. Correlation & Regression To narrow down the key variables (root causes), we next turn to a topic that helps us determine the NATURE & STRENGTH of the relationship between two variables.
  160. 160. Correlation A correlation problem arises when the Six Sigma team asks whether there is any relationship between a pair of variables. A Scatter Diagram is a graphical tool for exploring the relationship between predictor variables (Xs) and the response variable (Y) (i.e., causes and the effect). The equation under consideration remains Y = f(x). This technique can be used to see whether the potential cause variable and the effect variable are related to one another. Range Of The Predictor Variable (Xs): the range is the difference between the largest and smallest values; the range of the potential cause variable should be wide enough to show possible relationships with the effect variable. Irregularities In The Data Pattern: check whether the data pattern indicates possible problems in the data.
  161. 161. Scatter Diagram A Scatter Diagram is an important graphical tool for exploring the relationship between predictor variables (Xs) and the response variable (Ys) (i.e., causes and the effect). Example plot: Cycle Time in days (Y, from 5 to 40) against Amount to be Sanctioned (X, from 1K to 10K).
  162. 162. Some Scenarios For all charts: Y = Participant satisfaction (scale: 1 – worst to 100 – best) X = Trainer experience (# of hours) No Correlation Positive Correlation Strong Positive Correlation Other Pattern Negative Correlation Strong Negative Correlation 1 2 3 4 5 6
  163. 163. Some Scenarios Interpretation Of Patterns: Positive Relationship: X Increases/Y Increases Graphs 1 and 2 – as X increases, Y increases Graph 1 – Strong, positive relationship between X and Y. Not much scatter. Points form a line with a positive slope Graph 2 – Weaker, positive relationship between X and Y. Scatter is wider. Points still form a line with a positive slope Negative Relationship: X Increases/Y Decreases Graphs 3 and 4 – as X increases, Y decreases; as trainer experience increases, participant satisfaction decreases Graph 3 – strong negative relationship between X and Y. Not much scatter. Points form a line with a negative slope Graph 4 – weaker negative relationship between X and Y. Scatter is wider No Relationship: For any value of X, there are many values of Y Graph 5 – there is no relationship between X and Y. No one line best describes the data Non-Linear relationship – The relationship between X and Y is complex and cannot be summarized with a straight line Graph 6 – Inverted U. This is not a linear relationship
  164. 164. Word of Caution Lesson learnt : Correlation doesn’t imply causation Growing Population of Human Beings Growing Population of ants
  165. 165. Correlation The correlation coefficient r measures the strength of linear relationships; −1 ≤ r ≤ +1. When a relationship exists, the variables are said to be correlated. Perfect negative relationship: r = −1.0. No linear correlation: r = 0. Perfect positive relationship: r = +1.0. r² measures the percent of variation in Y explained by the linear relationship of X and Y.
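As a quick illustration of computing r and r², here is a plain-Python sketch of the Pearson formula; the sample data is invented for the example:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Invented data with a clear upward trend
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
r = pearson_r(x, y)
print(round(r, 3), round(r * r, 3))  # r near +1 (strong positive); r^2 = share of variation explained
```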
  166. 166. Regression Regression Equation: Y = mX + C, where C = predicted value of Y when X = 0, and m = slope of the line, i.e. the change in Y per unit change in X. The fitted line Y = mX + C is the line of best fit through the X-Y scatter.
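The slope m and intercept C of the line of best fit come from the usual least-squares formulas. A minimal sketch, with invented data chosen to lie exactly on Y = 2X + 1:

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept c for Y = mX + C."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    c = my - m * mx  # the fitted line passes through the point of means
    return m, c

m, c = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # points lie exactly on Y = 2X + 1
print(m, c)
```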
  167. 167. Forecasting and optimization are complex Come on! It can’t go wrong every time...
  168. 168. Work out session – Complete the project till choosing hypothesis & Minitab session
  169. 169. Quantify the Opportunity WHAT: Quantifying the Opportunity is the Six Sigma Team revisiting the benefits to be accrued from the project. At this point the index comprises both hard & soft gains (with a $ figure attached to hard gains); no cost factors are involved yet. WHY: To provide the financial rationale for continuing the project, and to gauge the allowable spending on the specific solution(s) that will be identified in Improve. It is a key milestone to be covered before the Analyze tollgate. WHEN: Usually after the verification of the vital few root causes. Quantify The Opportunity: done in the Analyze phase; not for a specific solution; considers only gross numbers; covers the entire process as the scope. Cost/Benefit Analysis: done in the Improve stage; calculated for a specific solution; talks about net gains; scope is a portion of the process.
  170. 170. Key Learning Points and Summary
  171. 171. Improve Fix the process (es)
  172. 172. Improve GENERATE/SELECT SOLUTIONS REFINE SOLUTION FMEA Error Proofing Brainstorming DOE Criteria Matrix NGT Pilot planning Verification of Results TEST SOLUTION Idea Screening Tools Cost Benefit Analysis JUSTIFY SOLUTION TRIZ Deliverables: Solution Design Developed Solution Tested on a small Scale Cost Benefit Analysis
  173. 173. Improve - Steps Creative Approach Refine the Idea Test the new Idea Analytical Approach Justify $ Select Best Idea Brainstorming Data Analysis (DOE) Filter Error Proofing / FMEA Pilot
  174. 174. Design of Experiments We will: Understand the strategy behind Design of Experiments (DOE) Create a design matrix Code factors Understand the limitations of OFAT experiments Interpret interactions
  175. 175. Why Learn about DOE? Properly designed experiments will improve: efficiency in gathering information; planning; use of resources; predictive knowledge of the process; the ability to optimize the process response; control of input costs; and the capability of meeting customer CTQs.
  176. 176. What is DOE? A DOE is a planned set of tests on the response variable(s) (KPOVs) with one or more inputs (factors) (PIVs) each at two or more settings (levels) which will: Determine if any factor is significant Define prediction equations Allow efficient optimization Direct activity to rapid process improvements Create significant events for analysis
  177. 177. DOE Terminology Response (Y, KPOV): the process output linked to the customer CTQ Factor (X, PIV): uncontrolled or controlled variable whose influence is being studied Level: setting of a factor (+, -, 1, -1, hi, lo, alpha, numeric) Treatment Combination (run): setting of all factors to obtain a response Replicate: number of times a treatment combination is run (usually randomized) Repeat: non-randomized replicate Inference Space: operating range of factors under study
  178. 178. DOE Objectives Learning the most from as few runs as possible; efficiency is the objective of DOE Identifying which factors affect mean, variation, both or none Screening a large number of factors down to the vital few Modeling the process with a prediction equation: Optimizing the factor levels for desired response Validating the results through confirmation
  179. 179. Strategy of Experimentation Planning Phase Define the problem Set the objective Select the response (s), Ys Select the factors (s), Xs Select the factor levels Select the DOE design Determine sample size Run the experiment Execution Phase Collect data Analyze data Validate results Evaluate conclusions Revise SOPs Train affected workforce Document results Consider next experiment
  180. 180. Response Relationship to Other Tools The response (Y) traces back to earlier tools: the Process Map (inputs: Power On, Bulb life, Instructor Training, Computer Interface; outputs: Projector has bright light, Projector is quiet, Colors correct), the Cause & Effect Diagram for Bright Light (branches: Measurement, Person, Machine, Method, Environment; causes: Room Brightness, Instructions, Light Meter, Instructor, Power On, Computer Settings, Bulb), and the Cause & Effect Matrix rating each process input against Bright Light, Quiet and Color Correct by importance to the customer (totals: Power on 119, Bulb Life 95, Instructor 23, Computer 95).
  181. 181. Factor Relationship to Other Tools The candidate factors (Xs) come from the same tools: the Process Map (inputs: Power On, Bulb life, Instructor Training, Computer Interface), the Cause & Effect Diagram for Bright Light, and the Cause & Effect Matrix; the highest-scoring process inputs (Power on 119, Bulb Life 95, Computer 95) are the leading candidates to study as factors.
  182. 182. Experimental Design Considerations Experimental design types, ordered by objective: for screening many KPIVs at lower cost and lower knowledge, OFAT (One Factor at a Time), Fractional Designs and the 2k Full Factorial; for modeling and optimizing fewer KPIVs at higher cost and higher knowledge, the 2k Full Factorial with center points/replication, Full Factorial (replication) and RSM (Response Surface Method).
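A 2^k full factorial design matrix in coded units is straightforward to generate. A small sketch using Python's itertools (the function name `full_factorial` is illustrative, not from the slides):

```python
from itertools import product

def full_factorial(k):
    """All 2^k treatment combinations of a two-level design, in coded -1/+1 units."""
    return list(product([-1, 1], repeat=k))

runs = full_factorial(3)
print(len(runs))  # 8 runs for 3 factors, from (-1, -1, -1) up to (1, 1, 1)
```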
  183. 183. Factors and the Process DOE can be applied to both business and industrial processes. Business process example – Y: Accounts Receivable Hold; factors X1: Price, X2: PO, X3: Terms. Manufacturing process example – Y: Scrap Reduction; factors X1: Temp, X2: Press, X3: Time.
  184. 184. Coding Three Factors Low (−1): Press = 2, Temp = 10, Resin = 56. High (+1): Press = 10, Temp = 125, Resin = 3560. The design cube spans press, temp and resin each from −1 to +1. Coding convention often uses “+” for high and “-” for low.
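The coding convention on this slide maps each factor's real range onto the −1…+1 scale. A minimal sketch of that transformation, using the Press factor's low/high settings from the slide (the function name `code` is illustrative):

```python
def code(value, low, high):
    """Map a real factor setting onto the coded -1..+1 scale."""
    center = (high + low) / 2
    half_range = (high - low) / 2
    return (value - center) / half_range

# Press runs from 2 (low) to 10 (high), as on the slide
print(code(2, 2, 10), code(6, 2, 10), code(10, 2, 10))  # -1.0 0.0 1.0
```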
  185. 185. TRIZ – Theory of Inventive Problem Solving A theory of problem solving based on the recognition of the power of psychological inertia and the importance of having a process to move problem solving activities beyond this inertia by identifying contradictions (barriers to an ideal solution) and providing specific tools and strategies for eliminating them Genrikh Altshuller (developer of TRIZ) screened thousands of patents looking for inventive problems and how they were solved. Only 40,000 had inventive solutions; the rest were straightforward improvements. Altshuller categorized these patents by identifying the problem-solving process that had been used repeatedly. He found that the problems had been solved using one of forty inventive principles.
  186. 186. TRIZ – Key Concepts Overcoming Psychological Inertia Psychological Inertia is the sum total of an individual’s personal bias, previous experience and technological knowledge. When challenged with a problem, the solution path will be constrained by the individual’s psychological inertia. Psychological inertia predisposes not only the solution path, but the types of solutions as well. Psychological inertia defines the boundaries of the solution space; if the ideal solution for a problem demands “out-of-the-box” thinking, psychological inertia confines the thought process to the conventional box.
  187. 187. TRIZ – Key Concepts Inventive Principles are rules for system transformations. Each principle is intended to eliminate a single contradiction or several contradictions. Also, these principles can be used simply to stimulate, or initiate, the thought process to create the ideal solution to a problem. They can be used for inspiration in conceptual design, directly utilized in the case-based design or used in various forms of analogical reasoning. Two of the forty principles are listed in next page with accompanying service examples. Apply Selected Inventive Principles To Your Contradiction/ Problem To Develop Innovative Solutions
  188. 188. TRIZ – Key Concepts
  189. 189. Brainstorming Techniques Structured Brainstorming Smooth Flow of Ideas Everyone has an opportunity to express opinions Efficient process in terms of idea generation Less Creative in approach Unstructured Brainstorming Smooth Random Flow of Ideas All opinions may not be captured Lesser number of ideas generated Effective process for creative ideas
  190. 190. Brainstorming Techniques Structured Brainstorming: This is a process in which the ideas are generated in a systematic method by moving from one person to another and obtaining an idea. The person who does not wish to provide an idea, passes. The process is continued until each participant passes and the facilitator determines no fresh ideas are being generated. As with all forms of brainstorming, the ideas are then taken up for discussion. At the end discard any ideas that are similar and debate on each idea to come up with the finalized list of possible solutions Unstructured Brainstorming: In this form of brainstorming, ideas are generated randomly and there is no need to pass. The solutions are discussed freely and can be proposed at any time. This type of brainstorming process is effective when the representation of each of the diverse groups involved in the discussion is large. However, the facilitator should be careful that even in this randomized discussion with large representations, ideas may not be captured completely
  191. 191. Brainstorming Principles “ Must be” for all Brainstorming forms Allow Free Flow of Ideas Create Heterogeneous Groups Focus on Critical Xs & CTQs Avoid Large Groups (6 - 10 optimum) Prevent Biases (Group/Function/Subject) Manage Time Scribe all Details Use Voting Techniques
  192. 192. Channeling / Anti Solution Channeling: a structured approach to brainstorm on a specific category of solutions available to the team. Anti Solution: a structured approach to brainstorm on the opposite of the objectives of the discussion.
  193. 193. Analogy / Brain writing Analogy: brainstorm on ideas on a related topic to unblock the thought process. Brain Writing: build on ideas written on a sheet of paper randomly distributed amidst the group.
  194. 194. Brainstorming Techniques Channeling: This form of brainstorming entails channeling the thoughts of the participants, similar to the discussion in a Fishbone diagram. The discussion revolves around a specific stream/section of solutions. E.g. while discussing methods to improve the sales of soaps, potential solutions can be channeled into improvements related to the packaging, fragrance, cutting of costs or the distribution system for the product. Antisolution: Here the objective of the team is reversed to exactly the opposite of what is to be determined. E.g. in an effort to improve the training program, the team can debate the ways in which the training program can be worsened; the solutions are the opposite of what is determined during the discussion. Analogy: This form encompasses a discussion on a process/product that is similar to the one being improved. E.g. instead of discussing ways to improve the productivity of call center associates for an insurance customer service call center, the team can discuss the productivity of associates in the customer service call center for Automobile Leasing; make associations at the end of the discussion. Brain writing: Written ideas are exchanged. Each person who receives a written idea tries to expand on it or adds a totally new idea to it. The pieces of paper are rotated & a collection of ideas or solutions is then debated & prioritized. This is used when the team members are unknown to each other.
  195. 195. Solution Selection Pay Off Matrix: plot solutions by Effort (L/H) against Benefits (L/H); solutions high on benefits are kept for further discussion. Screening against “Must Be”: compliance, policies & regulations; customer CTQs; business CTQs. N/3 Voting: generate/update the list of solutions; combine all similar choices with consensus; allow members to choose 1/3 of the list as choices; tally votes for each choice; rationalize/justify/reject solutions. Criteria Based Matrix: establish criteria (e.g. ease of use, inter-site availability, preparation of charts, real-time information, auto escalation of unresolved issues, filters & regulated access), assign weights, score each solution (Soln. 1, Soln. 2, …) and compute total scores.
  196. 196. Solution Selection Pay-off Matrix: After the possible solutions are generated, they can be assigned to the quadrants of the pay-off matrix. Ideas which are low on benefits are rejected. The discussion then focuses on the high effort/high benefit quadrant, and these solutions are rationalized for implementation. Some of the solutions in this quadrant are so prohibitively expensive that they can be eliminated without further analysis; even so, caution is needed when eliminating potential solutions from this quadrant. Screening against “Must be” criteria: Company policies, local laws & social considerations are extremely important before proceeding further with any solution. Finally, the extent to which the customer CTQs of the project are satisfied is a key consideration for selection (use the Kano model to evaluate the expected extent of satisfaction generated). N/3 Voting: The key consideration in this form is that all rejected solutions should be justified.
  197. 197. Criteria Based Matrix Where to use CBM More than a few criteria for solution selection Solutions scoring differently under each criteria The weight for each criteria differs How to use CBM Identify Criteria and assign weightage The Team members to give votes to each idea The votes are then multiplied with the weights of the criteria established for selection The total scores obtained at the bottom of the solution are used to select the same
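The CBM arithmetic described above (votes multiplied by the criteria weights, then totaled per solution) can be sketched as follows; the weights, votes and solution names here are invented for illustration:

```python
def cbm_scores(weights, votes):
    """Weighted total per solution; votes maps solution name -> score per criterion."""
    return {name: sum(w * v for w, v in zip(weights, vs)) for name, vs in votes.items()}

# Hypothetical weights for three criteria (e.g. ease of use, availability, cost)
weights = [5, 3, 2]
votes = {"Soln 1": [4, 3, 5], "Soln 2": [3, 5, 2]}
totals = cbm_scores(weights, votes)
best = max(totals, key=totals.get)  # the highest total score is selected
print(totals, best)
```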
  198. 198. N G T: Nominal Group Technique What is the best method to reduce cost per transaction? Simplify Application process & consolidate Sites
  199. 199. Error Proofing Error-free designs or processes eliminate the possibility of setting the Xs beyond the limits, or warn operators before the Xs move outside the limit so that preventive action can be taken. Error proofing can also be used in conjunction with Risk Management or SPC. The opportunity for error needs to be minimized or eliminated. Get it right during process design…!
  200. 200. F M E A Definition Failure Mode & Effects Analysis is a tool to identify the relevant importance of possible failure of the process/solution & establish clear mitigation strategy associated with the risks Principle of FMEA Impact of any failure on a process is not only severity of the failure but also the frequency of occurrence & the ability of the control mechanism to detect the failure A Team activity with the subject matter experts…!
  201. 201. F M E A Objective: Addressing rational concerns that have a significant effect on the performance of the proposed solution, and evolving a consensus on the recommended corrective actions and procedures to follow. Suggestions for completing the activity in time: It is best to address concerns from a process perspective rather than turn this into a business contingency planning activity covering events that would affect the entire business (e.g. earthquakes, fires, natural calamities, geo-political exigencies, power back-ups etc.). Each individual SME/functional expert positions the argument; more often than not the suggestions/action plans come from him/her or through a judicious discussion within the group. The process owner takes the final call on the impact, except when the issue is related to a Legal or Compliance matter.
  202. 202. F M E A FMEA is carried out on the Process/Product being introduced or redesigned. Calculate the Risk Priority Number by multiplying the scores associated with Severity, Occurrence & Detection. Revisit frequently to update: FMEA is a live document.
  203. 203. Calculating RPN RPN = Severity x Occurrence x Detection. The more severe the effect, the higher the rating. Severity Scale (10 = bad, 1 = good) – A failure could: 10 Injure a customer or employee; 9 Be illegal; 8 Render product or service unfit for use; 7 Cause extreme customer dissatisfaction; 6 Result in partial malfunction; 5 Cause a loss of performance which is likely to result in a complaint; 4 Cause minor performance loss; 3 Cause a minor nuisance, but be overcome with no performance loss; 2 Be unnoticed and have only minor effect on performance; 1 Be unnoticed and not affect the performance.
  204. 204. Calculating RPN RPN = Severity x Occurrence x Detection. The more often a failure occurs, the higher the rating. Occurrence Scale (rating – time period – probability): 10 – more than once per day – > 30%; 9 – once every 3-4 days – ≤ 30%; 8 – once per week – ≤ 5%; 7 – once per month – ≤ 1%; 6 – once every 3 months – ≤ .03%; 5 – once every 6 months – ≤ 1 per 10,000; 4 – once per year – ≤ 6 per 100,000; 3 – once every 1-3 years – ≤ 6 per million; 2 – once every 3-6 years – ≤ 3 per 10 million; 1 – once every 6-100 years – ≤ 2 per billion.
  205. 205. Calculating RPN RPN = Severity x Occurrence x Detection. The lower the ability to detect, the higher the rating. Detection Scale: 10 Defect caused by failure is not detectable; 9 Occasional units are checked for defect; 8 Units are systematically sampled and inspected; 7 All units are manually inspected; 6 Units are manually inspected with mistake-proofing modifications; 5 Process is monitored (SPC) and manually inspected; 4 SPC is used with an immediate reaction to out-of-control conditions; 3 SPC as above with 100% inspection surrounding out-of-control conditions; 2 All units are automatically inspected; 1 Defect is obvious and can be kept from affecting the customer.
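Putting the three scales together, the RPN arithmetic is a simple product. A minimal sketch (the example ratings are invented; each input uses the 1-10 FMEA scales from the preceding slides):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number; each rating must be on the 1-10 FMEA scale."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA ratings run from 1 to 10")
    return severity * occurrence * detection

# Hypothetical failure mode: severity 8, occurrence 6, detection 4
print(rpn(8, 6, 4))  # 192 -- address the failure modes with the highest RPN first
```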
  206. 206. Testing for Solutions Why Test Solutions Confirm to potential solution Obtain feedback and buy-in for proposal Opportunity for refining solution How to Test Solution Small scale experimentation-Piloting Modeling-Physical Models Simulation- Computer models
  207. 207. Testing Essentials Create a data collection plan & monitor activities for data consistency & stability. Verify that the process has improved: process capability, and hypothesis testing to evaluate the statistical significance of the difference between the old and the improved process. Why collect data? What to collect? How to collect? Ensure consistency & stability. Ensure a robust measurement plan for testing the solution.
  208. 208. Justify Solution Tangible Benefits Savings Incremental Revenue Interest Cost & Income Cash Flow Reduced Workforce Intangible Benefits Customer Retention Improved VOE/VOC/VOS Ease of Work Better Governance Prepare Net Financial Gains & supplement with Intangible gains and Obtain Business & Finance Leader approvals before proceeding
  209. 209. C B A Soft Gains Trainings customized to the need of the employees Calendar & Duration of the training according to Organizations need Benefits of Training In-house vs. Employing a Vendor
  210. 210. Key Learning Points and Summary
  211. 211. Control Sustaining the benefits
  212. 212. Control CLOSE PROJECT PROCESS CONTROL PLAN Documentation Communication of Learning Project Closure Process Management Chart Process Documentation Selection of Control Charts Interpreting Control Charts Response Plan IMPLEMENT SOLUTION Stakeholder Analysis Implementation elements 1 2 3 Deliverables: Process Control Developed Fully Implemented Solution Communication of Project
  213. 213. Why Control? To hold the gains of the DMAIC project, thereby ensuring that the efforts of the project team are not written off. Over time, improvements decay without process control and are sustained with it.
  214. 214. Why Control? Process Control is essential to prevent the loss of gains over a period of time Detect the out of control state of processes & determine the appropriate actions Control System incorporates Risk Management, SPC, Measurement & Audit Plans, Response plans, Documentation & Ownership
  215. 215. What To Control? At the barest minimum, measure the CTQs. Strive to measure the critical Xs to provide chances to make corrections. Install the appropriate data collection plan. Audit appropriately. Early warning systems prevent $$ impact (rework reduction). Monitoring points: input measures (Xs), process measures, output measures.
  216. 216. Process Management System Documentation & Standardization: the What, Who, Where of the process; deployment flow chart; activity details; key measures; standards. Monitoring: the What, Who, When, How of data collection & measurements. Response plan: the What, Who, When, How of action upon process failure; procedures for adjustment, containment and improvement. Ongoing Auditing: new special cause variations identified.
  217. 217. Process Management Charts The Plan For Doing The Work (Documentation): deployment flowchart, key process and output measures, detail on key tasks. Checking The Work (Monitoring): monitoring standards, method for recording data. The Response To Special Causes (Response Plan): containment, procedure for process adjustment, procedure for system improvement.
  218. 218. Documentation Documentation is an activity that ensures that the knowledge gained by the project team is: implemented successfully; retained by the business; used to standardize the business processes; available so future resources can be trained easily; shared and translated across the organization; institutionalized.
  219. 219. SPC Control Charts Monitor Changes in the Xs & detect changes that are due to Special Causes Used when the Xs cannot be mistake proofed or need to be controlled for inherent variations
  220. 220. Control Charts Select the process and the measures to be charted. Establish the data collection & sampling plan. Calculate the statistics. Plot the control charts (2 charts for continuous data & 1 for discrete data).
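For continuous data in subgroups, the two charts are X-bar and R, with limits built from R-bar and standard Shewhart constants. A sketch assuming subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114 are the standard table values for n = 5; the measurement data is invented):

```python
A2, D3, D4 = 0.577, 0.0, 2.114  # Shewhart constants for subgroup size n = 5

def xbar_r_limits(subgroups):
    """(LCL, centre, UCL) for the X-bar and R charts from subgroups of size 5."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = sum(xbars) / len(xbars)  # grand average
    rbar = sum(ranges) / len(ranges)   # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }

# Invented measurements: two subgroups of five readings each
data = [[9.9, 10.1, 10.0, 10.2, 9.8], [10.0, 10.3, 9.9, 10.1, 10.0]]
limits = xbar_r_limits(data)
print(limits["xbar"], limits["r"])
```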
  221. 221. Interpretation of Variation How you interpret variation matters. If the true variation is from common causes but you react as if it were special, you commit Mistake 1: tampering, which increases variation; the right focus is systematic process change. If the true variation is from special causes but you treat it as common, you commit Mistake 2: under-reacting (missed prevention); instead, investigate special causes for possible quick fixes.
  222. 222. Selecting Control Charts
  223. 223. Work out session – Q & A
  224. 224. Selecting Control Charts 1. A team from Marketing is tracking response rates (yes, no) on telemarketing calls. Each day they randomly sample 100 responses from their outgoing calls. The number of positive responses is plotted on the chart. 2. The technical support area is monitoring average response time on help requests. Each hour 5 calls are sampled. 3. A team is monitoring the number of applications received each day with incomplete fields. All applications are being audited. Each day 200-250 applications are received. 4. A team is monitoring the cycle time for the underwriters’ loan decisions. The data comes in slowly; that is, one loan is completed daily. 5. A team is tracking the number of fields left blank on an application. Each day a sample of 100 applications is audited. 6. Another team is doing the same thing as the team in example 5 (counting the number of fields left blank on an application), only they are looking at all the applications that come in daily. Each day 50-100 applications are turned in. 7. The team is tracking the number of abandoned calls to a customer call center in a shift. Each shift takes from 200-400 calls.
  225. 225. Types of Control Charts p-Chart c-Chart u-Chart np-Chart
  226. 226. Types of Control Charts Example: X-bar/R chart for the size of a steel rod (requirement 600 +/- 3 mm). X-bar chart (means of subgroups): X-bar = 600.2, +3.0SL = 602.4, -3.0SL = 598.1. R chart (range within subgroups): R-bar = 3.720, +3.0SL = 7.866, -3.0SL = 0.000.