
Analyze Tollgate


Lean Six Sigma project phase three.

Published in: Education, Business


  1. Lean Six Sigma Improving FTX/STX2 Tank Draw Quality SFC Henry, Don H. II Project Initiation Date: 31/03/08 Analyze Tollgate Date: 03/07/08
  2. Agenda <ul><li>Project Charter and Measure Phase Review </li></ul><ul><li>Critical X’s </li></ul><ul><li>Potential Root Causes Affecting Critical X’s </li></ul><ul><li>Reducing the List of Potential Root Causes </li></ul><ul><li>Root Cause Analysis (Qualitative) </li></ul><ul><li>Impact of Root Causes on Key Outputs (Y) </li></ul><ul><li>Prioritized Root Causes </li></ul><ul><li>Analyze Summary </li></ul><ul><li>Lessons Learned </li></ul><ul><li>Barriers/Issues </li></ul><ul><li>Next Steps </li></ul><ul><li>Storyboard </li></ul>
  3. Analyze – Executive Summary <ul><li>Improve tank maintenance quality by giving the 1/16 soldiers more time to perform maintenance during the draw. </li></ul><ul><li>The project starts at the FTX/STX2 T-6 IPR and ends when the tanks are ready for HETT transport. This project is contained within the Fort Knox Garrison and can transfer to other training support missions on Fort Knox. </li></ul><ul><li>The pain is felt in both lost training time and tank maintenance quality. </li></ul><ul><li>Soldiers fail to perform a quality PMCS due to a lack of time, training, and command emphasis. </li></ul>
  4. Project Charter Review <ul><li>Scope: This process begins with the T-6 IPR and ends when 1/16 loads the tanks on HETTs. </li></ul><ul><li>Goal: Improve tank draw quality </li></ul>Problem/Goal Statement Tollgate Review Schedule Business Impact Core Team <ul><li>State financial impact of project </li></ul><ul><ul><li>Expenses: none </li></ul></ul><ul><ul><li>Investments: none </li></ul></ul><ul><ul><li>Revenues: potential time savings of 819 hours per year </li></ul></ul><ul><li>Non-quantifiable benefits are increased tank maintenance quality, soldier morale, maintenance fault tracking, and less training time lost. </li></ul><ul><li>PES Name MAJ Mackey, Andre </li></ul><ul><li>PS Name MAJ Mackey, Andre </li></ul><ul><li>DD Name LTC Naething, Robert </li></ul><ul><li>GB/BB Name SFC Henry, Don </li></ul><ul><li>MBB Name Nathan Sprague </li></ul><ul><li>Core Team Role % Contrib. LSS Training </li></ul><ul><li>CW2 Warren SME 20% none </li></ul><ul><li>MAJ Aydelott SME 20% none </li></ul><ul><li>MAJ Mackey SME 20% none </li></ul><ul><li>SSG Jones SME 10% none </li></ul><ul><li>CW4 Lucy SME 10% none </li></ul><ul><li>SFC Henry BB 100% BB </li></ul>Tollgate Scheduled Revised Complete Define: 04/30/08 - 04/29/08 Measure: 05/14/08 04/06/08 04/06/08 Analyze: 06/13/08 07/13/08 XX/XX/08 Improve: 07/18/08 08/13/08 XX/XX/08 Control: 08/23/08 09/13/08 XX/XX/08 <ul><li>Reduce rework during tank draw from 90% to 45%, per FTX/STX2, by 1 October 2008. </li></ul><ul><li>Improve 5988-E fault tracking during tank draw from 10% to 85%, per FTX/STX2, by 1 October 2008. </li></ul><ul><li>Improve tank bumper number accuracy from 10% to 90%, during the T-2 preparation week, by 1 October 2008. </li></ul>Problem Statement: Soldiers of 1/16 express dissatisfaction with the Unit Maintenance Activity’s (UMA) M1-series tank quality prior to mission support. Currently, 90% of the tanks drawn require maintenance for mission readiness. 
Approximately 10% of the faults listed on the 5988-Es completed by soldiers are tracked by the UMA. Lastly, tank bumper number accuracy during T-2 is currently at 10%, which causes excess work in the last days of the mission support draw.
  5. Baseline Data <ul><li>The current tank draw process has a non-normal distribution </li></ul><ul><li>The mean time to draw one tank is .56 hours (34 minutes) </li></ul><ul><li>The tank draw times range from .25 hours (15 minutes) to 2 hours (120 minutes), and the standard deviation is .4 hours (24 minutes) </li></ul><ul><li>The mean proportion of 5988-Es updated by UMA is .1, or 10% </li></ul>The average tank draw time is 34 minutes +/- 24 minutes.
  6. Baseline Data Cont. <ul><li>33% of tanks presented to draw are not ready for issue. </li></ul><ul><li>10% of 5988-E faults annotated by soldiers during tank draw are updated by UMA clerks. </li></ul><ul><li>67% of the tank bumper numbers presented to 1/16 at T-2 by UMA are actually drawn for mission support. </li></ul>These numbers take into account vehicles presented to draw but never actually drawn or PMCSed; they represent what was actually given, PMCS’ed, and drawn.
  7. Critical X’s: Cause and Effect Matrix <ul><li>Key process output variables, weighted by customer importance: accurate tank list (10), 5988-E (9), QA/QC (2), DA Form 2062 (6), Dispatch (8). </li></ul><ul><li>Process steps / key process input variables, with ratings against each output, total (rating x importance), rank, and scaled rating: </li></ul><ul><ul><li>1. T-1 RATSS: 1, 9, 9, 9, 1 (total 171, rank 2, rating 7.533) </li></ul></ul><ul><ul><li>2. PMCS Technical Manual: 9, 1, 1, 9, 9 (total 227, rank 1, rating 10) </li></ul></ul><ul><ul><li>3. QA/QC UMA inspector: 9, 1, 1, 3, 3 (total 143, rank 3, rating 6.3) </li></ul></ul><ul><ul><li>4. Tank sign over DA 2062: 9, 1, 1, 1, 3 (total 131, rank 4, rating 5.771) </li></ul></ul><ul><ul><li>5. Tank dispatch 5987-E: 9, 1, 1, 1, 1 (total 115, rank 5, rating 5.066) </li></ul></ul>
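The Total column in the matrix above can be reproduced as a weighted sum: each input's rating against an output is multiplied by that output's customer-importance weight, and the products are summed. A minimal Python sketch using the weights and ratings from the slide:

```python
# Customer-importance weights for the five key outputs
# (accurate tank list, 5988-E, QA/QC, DA Form 2062, Dispatch).
weights = [10, 9, 2, 6, 8]

# Ratings of each process step / KPIV against those outputs (from the slide).
kpivs = {
    "T-1 RATSS":              [1, 9, 9, 9, 1],
    "PMCS Technical Manual":  [9, 1, 1, 9, 9],
    "QA/QC UMA inspector":    [9, 1, 1, 3, 3],
    "tank sign over DA 2062": [9, 1, 1, 1, 3],
    "tank dispatch 5987-E":   [9, 1, 1, 1, 1],
}

# Total = sum of rating x weight; rank by descending total.
totals = {name: sum(r * w for r, w in zip(ratings, weights))
          for name, ratings in kpivs.items()}
ranked = sorted(totals, key=totals.get, reverse=True)

print(totals)  # PMCS Technical Manual scores highest (227)
```

The scaled Rating column on the slide is just each total divided by the top total (227) and multiplied by 10.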
  8. Potential Root Causes: C & E Diagram (Man, Machine, Material, Method) Effect: the tank draw takes too long. <ul><li>Personnel spread thinly across multiple tasks </li></ul><ul><li>Shortage of UMA maintenance personnel </li></ul><ul><li>Deadlines, AOAP, and the service schedule affect the number of tanks available </li></ul><ul><li>Tanks already in use by other units/missions </li></ul><ul><li>BII draw uses excessive people and excess time </li></ul><ul><li>RATSS request is not referenced by UMA to assign an accurate bumper number list </li></ul><ul><li>Tanks are PMCS’d, QA/QC’d, and dispatched </li></ul><ul><li>Excessive delays from lack of UMA personnel </li></ul><ul><li>5988-E not updated by UMA </li></ul>
  9. Potential Root Causes: FMEA <ul><li>Columns: process step/input, potential failure mode, potential failure effect, severity (SEV), potential cause, occurrence (OCC), current controls, detection (DET), and RPN. </li></ul><ul><li>T-1 bumper number list: not accurate; excessive delays (SEV 7); lack of organization (OCC 7); no controls (DET 7); RPN 343 </li></ul><ul><li>PMCS: not updated; rework (SEV 7); lack of personnel (OCC 6); Army policy (DET 5); RPN 210 </li></ul><ul><li>QA/QC: not timely; rework (SEV 4); lack of maintenance (OCC 4); EXSOP/Army policy (DET 4); RPN 64 </li></ul><ul><li>Tank sign over: already issued; rework (SEV 7); lack of organization (OCC 2); Army policy/EXSOP (DET 2); RPN 28 </li></ul><ul><li>Tank dispatch: does not go wrong; no problems (SEV 1); no problems (OCC 1); EXSOP (DET 5); RPN 5 </li></ul>
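The RPN column follows the standard FMEA formula, RPN = Severity x Occurrence x Detection, and the highest RPN gets attention first. A quick Python check of the slide's numbers:

```python
# (process step, severity, occurrence, detection) as listed in the FMEA above
rows = [
    ("T-1 bumper number list", 7, 7, 7),
    ("PMCS",                   7, 6, 5),
    ("QA/QC",                  4, 4, 4),
    ("tank sign over",         7, 2, 2),
    ("tank dispatch",          1, 1, 5),
]

# RPN = severity x occurrence x detection; higher RPN = higher priority.
rpn = {step: s * o * d for step, s, o, d in rows}
worst = max(rpn, key=rpn.get)

print(rpn)  # the T-1 bumper number list has the highest RPN (343)
```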
  10. Reducing List of Root Causes: Pareto Analysis Trackable causes contained over 90% of the defects. Our project will focus on tracking vehicle maintenance status.
  11. Root Cause Analysis: Non-Value-Add Analysis <ul><li>Process flow: the unit notifies UMA of the number of tanks needed; UMA retrieves the information from the RATSS system; the maintenance leader issues the bumper number list to the soldier; the soldier conducts PMCS, completes the 5988-E, and turns it in to the maintenance leader; the maintenance leader checks the 5988-E and verifies faults/makes repairs if needed; if the vehicle passes QA/QC, the soldier receives the signed QA/QC sheet; the vehicle is signed over to the soldier on a hand receipt and dispatched. </li></ul><ul><li>Average delays in the flow: 2 hours, 90 minutes, and 15 minutes. </li></ul><ul><li>Non-value-add (NVA) time is shown in dark blue in the original swimlane; total delay time is 3.75 hours. </li></ul>
  12. Root Cause Analysis: Histogram The outlier was a vehicle issued that was actually NMC and required 90 minutes to repair. The vehicle that required 60 minutes was actually dispatched to another unit. Two of the five that required 45 minutes of work were deadlined, with a third needing a QA/QC from UMA. In total, 5.25 hours were spent doing rework that is non-value-added.
  13. One-Way ANOVA of Time and Defects <ul><li>The data are distributed non-normally, with an outlier shown here </li></ul><ul><li>The variance in the data is constant, and there are no systematic effects due to collection order or time. </li></ul>
  14. One-Way ANOVA of Time and Defects Data <ul><li>Source DF SS MS F P </li></ul><ul><li>DEFECT 3 4915 1638 5.04 0.012 </li></ul><ul><li>Error 16 5199 325 </li></ul><ul><li>Total 19 10114 </li></ul><ul><li>S = 18.03 R-Sq = 48.59% R-Sq(adj) = 38.95% </li></ul><ul><li>Individual 95% CIs For Mean Based on Pooled StDev </li></ul><ul><li>Level N Mean StDev -------+---------+---------+---------+-- </li></ul><ul><li>D 4 63.75 37.50 (-------*------) </li></ul><ul><li>I 1 60.00 * (--------------*--------------) </li></ul><ul><li>N 14 26.79 8.68 (---*---) </li></ul><ul><li>Q 1 45.00 * (--------------*--------------) </li></ul><ul><li>-------+---------+---------+---------+-- </li></ul><ul><li>25 50 75 100 </li></ul><ul><li>Pooled StDev = 18.03 </li></ul>The P value of 0.012 is statistically significant, so we reject the null hypothesis; the R-squared value of 48.59% indicates that defect category explains nearly half of the variation in tank draw times.
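The ANOVA table above can be reproduced by hand from the raw TIME/DEFECT observations listed on the data slide. A pure-Python sketch of the one-way ANOVA computation (no statistics library required):

```python
# Raw draw times (minutes) by defect class, from the data slide:
# N = no defect, D = deadlined, Q = needs QA/QC, I = issued already.
groups = {
    "N": [15, 15, 15, 15, 30, 30, 30, 30, 30, 30, 30, 30, 30, 45],
    "D": [45, 45, 45, 120],
    "Q": [45],
    "I": [60],
}

all_times = [t for g in groups.values() for t in g]
grand_mean = sum(all_times) / len(all_times)

# Between-group (DEFECT) and within-group (Error) sums of squares.
ss_defect = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                for g in groups.values())
ss_error = sum((t - sum(g) / len(g)) ** 2
               for g in groups.values() for t in g)

df_defect = len(groups) - 1               # 3
df_error = len(all_times) - len(groups)   # 16

f_stat = (ss_defect / df_defect) / (ss_error / df_error)
r_sq = ss_defect / (ss_defect + ss_error)
print(round(f_stat, 2), round(100 * r_sq, 2))  # 5.04 48.59
```

This matches the Minitab output: F = 5.04 and R-Sq = 48.59%.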
  15. Mood’s Median Test <ul><li>The medians may tell a more complete story: the outlier falsely inflates the averages, while this median-based test is insensitive to outliers. </li></ul><ul><li>Based on the P value, two or more medians are significantly different, and we reject the null hypothesis </li></ul>Mood Median Test: TIME versus DEFECT Mood median test for TIME Chi-Square = 13.37 DF = 1 P = 0.000 Individual 95.0% CIs DEFECT N<= N> Median Q3-Q1 -----+---------+---------+---------+- D 0 4 45 56 *------------------------) I 0 1 60 Not Used N 13 1 30 15 (----* Q 0 1 45 Not Used -----+---------+---------+---------+- 30 60 90 120 Overall median = 30 * NOTE * Levels with < 6 observations have confidence < 95.0%
  16. Tukey’s Pairwise Comparison One-way ANOVA: TIME versus DEFECT Tukey 95% Simultaneous Confidence Intervals All Pairwise Comparisons among Levels of DEFECT Individual confidence level = 98.87% DEFECT = D subtracted from: DEFECT Lower Center Upper --------+---------+---------+---------+- I -61.47 -3.75 53.97 (----------*-----------) N -66.23 -36.96 -7.70 (-----*----) Q -76.47 -18.75 38.97 (----------*-----------) --------+---------+---------+---------+- -50 0 50 100 DEFECT = I subtracted from: DEFECT Lower Center Upper --------+---------+---------+---------+- N -86.65 -33.21 20.22 (---------*----------) Q -88.01 -15.00 58.01 (--------------*--------------) --------+---------+---------+---------+- -50 0 50 100 DEFECT = N subtracted from: DEFECT Lower Center Upper --------+---------+---------+---------+- Q -35.22 18.21 71.65 (----------*---------) --------+---------+---------+---------+- -50 0 50 100 Statistically significant comparisons are shown in red. Legend: I = issued already, N = no defects, Q = needs QA/QC, D = deadlined. Only the deadlined (D) vs. no-defect (N) interval excludes zero, so deadlined tanks and tanks with no defects are statistically significantly different in draw time.
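The one interval above that excludes zero (N minus D) can be checked with the Tukey-Kramer formula: difference of group means plus or minus q · s · sqrt((1/n1 + 1/n2)/2), where s is the pooled standard deviation and q is the studentized-range critical value. A sketch using the group statistics from the ANOVA output; note that the q value is a standard table lookup, not from the slide:

```python
import math

# Group sizes and means from the one-way ANOVA output.
n_d, mean_d = 4, 63.75    # D = deadlined
n_n, mean_n = 14, 26.79   # N = no defects
s_pooled = 18.03          # pooled StDev from the ANOVA table
q_crit = 4.05             # studentized range q(0.05; k=4, df=16), table value

# Tukey-Kramer simultaneous confidence interval for (N - D).
diff = mean_n - mean_d
margin = q_crit * s_pooled * math.sqrt((1 / n_d + 1 / n_n) / 2)
lower, upper = diff - margin, diff + margin

# The interval excludes zero, so D and N differ significantly,
# matching the Minitab interval (-66.23, -7.70) to within rounding.
print(round(lower, 2), round(upper, 2))
```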
  17. Current Process Capability <ul><li>The average issue time was 36 minutes </li></ul><ul><li>The target issue time was 30 minutes </li></ul><ul><li>The variability of the process is greater than the specification limits allow (Cpk < 1.33) </li></ul><ul><li>The process is not meeting customer expectations </li></ul>
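Cpk measures how many "3-sigma units" separate the process mean from the nearer specification limit: Cpk = min((USL - mean)/3σ, (mean - LSL)/3σ), with 1.33 the usual threshold for a capable process. The slide does not state the spec limits, so the limits and standard deviation below are hypothetical placeholders; only the formula is the point:

```python
def cpk(mean, stdev, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer spec limit, expressed in units of 3 standard deviations."""
    return min((usl - mean) / (3 * stdev), (mean - lsl) / (3 * stdev))

# Hypothetical spec limits for illustration only -- the slide reports a
# 36-minute average and Cpk < 1.33 but does not list USL/LSL or sigma.
value = cpk(mean=36, stdev=6, lsl=15, usl=45)
print(value)  # 0.5 -> well below 1.33, so the process is not capable
```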
  18. One-Way ANOVA Boxplot <ul><li>Deadlined tanks were the largest time wasters, with a mean time of 63 min. </li></ul><ul><li>Deadlined tanks also showed the largest range of values </li></ul><ul><li>The tank that was already issued had a mean time of 60 min. </li></ul>
  19. A Different View <ul><li>The same data classified as tanks that are ready “R” and not ready “NR” </li></ul><ul><li>The total NR time is 340 minutes </li></ul><ul><li>The total ready time is 480 minutes </li></ul><ul><li>Defects make up roughly 40% of the time spent on the tank draw! </li></ul>
  20. DELETE ME after tollgate review N = no defect, D = deadlined, Q = needs QA/QC, I = issued already <ul><li>DEFECT TIME </li></ul><ul><li>N 15 </li></ul><ul><li>N 15 </li></ul><ul><li>N 15 </li></ul><ul><li>N 15 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>N 30 </li></ul><ul><li>D 45 </li></ul><ul><li>D 45 </li></ul><ul><li>Q 45 </li></ul><ul><li>D 45 </li></ul><ul><li>N 45 </li></ul><ul><li>I 60 </li></ul><ul><li>D 120 </li></ul>Data used in Minitab for the calculations; the D, Q, and I levels were later recoded as defects to create comparisons of defects vs. no defects.
  21. Impact of Root Causes on Y
  22. Prioritized Root Causes (Control: in team’s control = 9, in team’s sphere of influence = 3, out of team’s control = 1. Impact: high = 9, medium = 3, low = 1. Score = Control x Impact.) <ul><li>Effect: rework; root cause: inaccurate bumper numbers; hypothesis: accurate bumper numbers will increase throughput; control 3 x impact 9 = score 27; priority 2 </li></ul><ul><li>Effect: poor tank maintenance; root cause: PMCS not completed correctly; hypothesis: correctly performed PMCS will improve tank draw quality; control 9 x impact 9 = score 81; priority 1 </li></ul><ul><li>Effect: poor tank maintenance; root cause: 5988-Es are not regularly updated; hypothesis: regularly updated 5988-Es will improve tank draw quality; control 3 x impact 9 = score 27; priority 3 </li></ul><ul><li>Effect: rework; root cause: tanks are issued that are not ready for issue; hypothesis: rework will be reduced if the tanks are ready for issue at the time they are to be issued to units; control 3 x impact 3 = score 9; priority 4 </li></ul>
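The Score column above is simply Control x Impact, and sorting descending by score gives the Priority of Effort order. A short Python check:

```python
# (root cause, control score, impact score) from the prioritization table
causes = [
    ("PMCS not completed correctly",    9, 9),
    ("Inaccurate bumper numbers",       3, 9),
    ("5988-Es not regularly updated",   3, 9),
    ("Tanks issued that are not ready", 3, 3),
]

# Score = control x impact; higher score = earlier priority of effort.
scored = sorted(((c * i, name) for name, c, i in causes), reverse=True)
print(scored)  # top priority: PMCS not completed correctly (score 81)
```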
  23. Analyze Summary Impact of Root Causes: Hypothesis Tests Tools Used Reducing List of Root Causes Prioritized Root Causes / Effects <ul><li>Root cause #1: No visual tracking method </li></ul><ul><ul><li>Effect: inaccurate bumper numbers </li></ul></ul><ul><li>Root cause #2: 5988-Es not completed correctly </li></ul><ul><ul><li>Effect: poor tank maintenance </li></ul></ul><ul><li>Root cause #3: 5988-Es are not updated regularly </li></ul><ul><ul><li>Effect: poor tank maintenance </li></ul></ul><ul><li>Root cause #4: Tanks issued that are not ready for issue </li></ul><ul><ul><li>Effect: rework </li></ul></ul><ul><li>Value Add Analysis </li></ul><ul><li>Pareto Plot </li></ul><ul><li>Histogram </li></ul><ul><li>One-Way ANOVA </li></ul><ul><li>C&E Matrix </li></ul><ul><li>Cause & Effect Diagram </li></ul><ul><li>FMEA </li></ul><ul><li>Process Capability </li></ul>
  24. Lessons Learned <ul><li>Application of Lean Six Sigma Tools </li></ul><ul><li>Communications </li></ul><ul><li>Team building </li></ul><ul><li>Organizational activities </li></ul><ul><li>Other </li></ul>
  25. Barriers/Issues/Project Action Log <ul><li>Resources </li></ul><ul><li>Unexpected delays </li></ul><ul><li>Team or organizational issues </li></ul><ul><li>Updated risk analysis and mitigation plan </li></ul><ul><li>Revised project scope </li></ul>Lean Six Sigma Project Action Log (last revised 10/15/2007). Columns: number, description/recommendation, status (open/closed/hold), due date, revised due date, responsibility, comments/resolution. <ul><li>Item 1: Leave of critical team member; due 14 May, revised due date 4 May; status, responsibility, and resolution not recorded. </li></ul><ul><li>Items 2 through 5: blank. </li></ul>
  26. Next Steps <ul><li>Outline activities for Improve Phase </li></ul><ul><li>Planned Lean Six Sigma Tool use </li></ul><ul><li>Barrier/risk mitigation activities </li></ul>
  27. Analyze Storyboard <ul><li>Define: Project Charter. Timeline: T-6 IPR; T2T RATSS; T-5 T2T; T-4 T2T; T-3 T2T; T-2 IPR, vehicle bumper #s given to unit; T-1 tank draw, HETT, 5988-E update. Problem: poor tank quality at issue. Goal: improve tank quality at issue, reduce rework, improve fault tracking. Non-quantifiable benefits: morale, tank maintenance, fault tracking. </li></ul><ul><li>Measure: BII draw measured; tank draw measured; 5988-E updates measured; RIE; baselines collected. </li></ul><ul><li>Analyze: critical X’s identified; potential root causes identified; root causes prioritized. </li></ul>
  28. Sign Off <ul><li>I concur that the Analyze phase was successfully completed on 03/07/08. </li></ul><ul><li>I concur the project is ready to proceed to the next phase: Improve </li></ul>CW4 David Lucy Resource Manager/Finance COL Leopoldo Quintas Deployment Director SFC Don H. Henry II Black Belt Nathan Sprague Master Black Belt MAJ Andre L. Mackey Sponsor / Process Owner
  29. Analyze Tollgate Checklist <ul><li>Has the team examined the process and identified potential bottlenecks, disconnects, and redundancies that could contribute to the problem statement? </li></ul><ul><li>Has the team analyzed data about the process and its performance to help stratify the problem, understand reasons for variation in the process, and generate hypotheses as to the root causes of the current process performance? </li></ul><ul><li>Has an evaluation been done to determine whether the problem can be solved without a fundamental ‘white paper’ recreation of the process? Has the decision been confirmed with the Project Sponsor? </li></ul><ul><li>Has the team investigated and validated (or invalidated) the root cause hypotheses generated earlier, to gain confidence that the “vital few” root causes have been uncovered? </li></ul><ul><li>Does the team understand why the problem (the Quality, Cycle Time, or Cost Efficiency issue identified in the Problem Statement) is being seen? </li></ul><ul><li>Has the team been able to identify any additional ‘Quick Wins’? </li></ul><ul><li>Have ‘learnings’ to date required modification of the Project Charter? If so, have these changes been approved by the Project Sponsor and the Key Stakeholders? </li></ul><ul><li>Have any new risks to project success been identified, added to the Risk Mitigation Plan, and a mitigation strategy put in place? </li></ul>Has the team identified the key factors (critical X’s) that have the biggest impact on process performance? Have they validated the root causes? 
<ul><li>Deliverables: </li></ul><ul><li>List of Potential Root causes </li></ul><ul><li>Prioritized List of Validated Root Causes </li></ul><ul><li>Additional “Quick Wins”, if applicable </li></ul><ul><li>Refined Charter, as necessary </li></ul><ul><li>Updated Risk Mitigation Plan </li></ul><ul><li>Green Belt/Black Belt Actions: </li></ul><ul><li>Deliverables Uploaded in PowerSteering </li></ul><ul><li>Deliverables Inserted into the Project “Notebook” (see Deployment Director) </li></ul>Tollgate Review Stop