Evidence-Based Maintenance: How to Evaluate the Effectiveness of your Maintenance Strategies


Binseng Wang, ScD, CCE – Vice President, Performance Management & Regulatory Compliance, ARAMARK Healthcare’s Clinical Technology Services

Clinical engineering (CE) professionals have realized for some time that the “preventive maintenance” (PM) they have performed for many years no longer prevents failures, although some safety and performance inspections (SPIs) can help detect hidden and potential failures that affect patient safety. To help CE professionals decide whether to continue performing scheduled maintenance (SM), a systematic method for determining maintenance effectiveness has been developed. This method uses a small set of codes to classify failures found during repairs and SM (PMs and SPIs). Analyzing the failure patterns and their effects on patients and users allows CE professionals to compare the effectiveness of different maintenance strategies and justify changes in strategy, such as decreasing SM, deploying statistical sampling, or even eliminating SM.



  1. Evidence‐Based Maintenance: How to Evaluate the Effectiveness of your Maintenance Strategies. Binseng Wang, Clinical Technology Services, May 5, 2011
  2. What is your definition of PM?
     • Preventive Maintenance (or Preventative Maintenance)
     • Predictive Maintenance
     • Planned Maintenance or Proactive Maintenance
     • Percussive Maintenance: the fine art of whacking the crap out of an electronic device (or anything else) to get it to work again. (Manny Roman, DITEC Ink)
     • Percussive Management: the fine art of managing people with 2"x4" boards (or whatever else heavy is handy) but not killing them, aka waterboarding. [Censored by HS & HR…]
  3. How do you currently decide on PM?
     • The OEM said to do it
     • The Joint Commission said to do it (100% for life support & less for non‐life support)
     • Our state licensing code (or CMS rules) requires 100% PM on everything
     • Even a single injury or death would be unacceptable ‐> total, absolute safety
     • That is what and how we have always done it for the last 20‐30 years! Remember the roast beef!
  4. Good News and Bad News
     • Good News
       – No significant changes to TJC Med Equip Mgmt standards from 2010
     • Even Better News
       – CMS accepted TJC standards in lieu of "according to OEM recommendations"
     • Bad News
       – Both CMS and TJC are going to scrutinize maintenance programs (strategies) more carefully
       – How do you prove your non‐OEM maintenance strategy is not shortchanging patient safety?!
  5. Table of Contents
     • Introduction
       – How do you convince surveyors that your maintenance program is effective?
     • Evidence‐Based Maintenance (shown as a Plan‐Do‐Check‐Act cycle)
       – Maintenance planning (plan)
       – Maintenance implementation (do)
       – Maintenance monitoring (check)
       – Maintenance improvement (act)
     • Discussion and Conclusions
       – Implementation lessons
       – Conclusions
  6. Acknowledgement
     • The data presented here were collected by dozens of BMETs at hospitals managed by ARAMARK Healthcare under the leadership of the following Technology Managers:
       – Jim Fedele
       – Len Barnett
       – Tim Huffman, Steve Zellers
       – Bob Pridgen, Bob Wakefield, Allan Williams
       – Chad Granade
       – Bobby Stephenson
       – Dana Lesueur
       – Steve Cunningham
       – Bob Helfrich
       – Scott Newman
       – Jared Koslosky
  7. References
     • B. Wang, E. Furst, T. Cohen, O.R. Keil, M. Ridgway, R. Stiefel, Medical Equipment Management Strategies, Biomed Instrum & Techn, May/June 2006, 40:233‐237
     • B. Wang, Evidence‐Based Maintenance, 24x7 magazine, April 2007
     • B. Wang, Evidence‐Based Medical Equipment Maintenance Management, in L. Atles (ed.), A Practicum for Biomedical Technology & Management Issues, Kendall‐Hunt, 2008
     • M. Ridgway, Optimizing Our PM Programs, Biomed Instrum & Techn, May/June 2009, 244‐254
     • M. Ridgway, L.R. Atles & A. Subhan, Reducing Equipment Downtime: A New Line of Attack, J Clin Eng, 34:200‐204, 2009
  8. Related Publications
     • Wang B, Fedele J, Pridgen B, Rui T, Barnett L, Granade C, Helfrich R, Stephenson B, Lesueur D, Huffman T, Wakefield JR, Hertzler LW & Poplin B. Evidence‐Based Maintenance: I ‐ Measuring maintenance effectiveness with failure codes, J Clin Eng, July‐Sept 2010, 35:132‐144
     • Wang et al. Evidence‐Based Maintenance: II ‐ Comparing maintenance strategies using failure codes, J Clin Eng, Oct‐Dec 2010, 35:223‐230
     • Wang et al. Evidence‐Based Maintenance: III ‐ Enhancing patient safety using failure code analysis, J Clin Eng, Apr‐June 2011, 36:72‐84
  9. How do you convince surveyors that your maintenance program is effective?
     • Adopted "risk"‐based inclusion criteria
       – Good intentions (plans) do not guarantee good outcomes
     • PM completion per TJC requirements
       – Most "PMs" do not prevent failures but only find failures that already occurred. Process ≠ outcome.
     • Fast repair turnaround time
       – Depending on mission criticality and the availability of back‐ups, some failures and turnaround times are NOT acceptable to users
     • Repeat work orders < certain threshold
       – Reasonable threshold depends on the type of failure
     • Failed PMs < certain threshold
       – idem
  11. Table of Contents (repeated as a section divider; see slide 5)
  14. Maintenance Monitoring (check)
      • Process Measures – Do the right thing right!
        – SPI/PM completion rates (TJC)
        – Maintenance logs (CMS)
        – Repair call response or turnaround time
        (Did you earn your diploma by day-dreaming every day in class, i.e., perfect attendance?)
      • Outcome/Effectiveness Measures (evidence)
        – Uptime
        – Global failure rate
        – Patient incidents (including "near misses")
        – Failure codes
        – Repeated repairs
        – Others: MTBF, customer satisfaction, etc.
      (Wang et al., CE Benchmarking, JCE, Jan-Mar 2008)
  15. Data from the aviation industry (1968) [figure]
  16. Maintenance Categories [figure: failure rate vs. time, the six failure patterns A–F]
      • Proactive maintenance: tasks undertaken before a failure occurs to prevent the equipment from failing. Proactive maintenance must be technically feasible and worth doing. Typically useful for failure patterns A, B and C.
      • Reactive ("default") maintenance: actions undertaken after a failure has occurred (to restore the equipment to original performance standards). Typically useful for failure patterns D, E and F.
  17. Failure Codes – Equipment Failures
      Maintenance type: scheduled maintenance (SM), including inspection, calibration, and preventive maintenance
      • EF – Evident failure, i.e., a problem that can be detected (but was not reported) by the user without running any special tests or using specialized test/measurement equipment.
      • HF – Hidden failure, i.e., a problem that could not be detected by the user unless running a special test or using specialized test/measurement equipment.
      • PF – Potential failure, i.e., a failure that is either about to occur or in the process of occurring but has not yet caused the equipment to stop working or problems to patients or users.
      • NPF – No problem found.
  18. Failure Codes – Equipment Failures
      Maintenance type: corrective maintenance (CM), including repairs performed for failures detected during SM
      • UPF – Unpreventable failure, evident to user, typically caused by normal wear and tear but unpredictable.
      • USE – Failure induced by use, e.g., abuse, abnormal wear & tear, accident, or environment issues. Does NOT include use error (typically no equipment failure).
      • PPF – Preventable and predictable failure, evident to user.
      • SIF – Service-induced failure, i.e., failure induced by corrective or scheduled maintenance that was not properly completed, or a part that was replaced and had premature failure ("infant mortality").
      • CND – Cannot duplicate. Includes use errors. Same as NPF.
      • FFPM – Failure found during PM (to avoid duplication of codes).
  19. Failure Codes – Peripheral Failures
      Maintenance type: CM or SM
      • BATT – Battery failure, i.e., battery(ies) failed before the scheduled replacement time.
      • ACC – Accessory (excluding batteries) failure evident to user, typically caused by normal wear and tear.
      • NET – Failure in or caused by the network, while the equipment itself is working without problems. Applicable only to networked equipment.
      NOTE: Any resemblance to prior works by A. Subhan, P. Thorburn, and M. Ridgway is NOT mere coincidence. (A consolidated lookup of these codes is sketched below.)
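The three code tables above fit naturally into one small lookup structure. A minimal Python sketch: the descriptions are abbreviated from the slides, and the variable name and layout are illustrative only, not any particular CMMS API.

```python
# The three failure-code tables above, consolidated into one lookup.
FAILURE_CODES = {
    "SM": {   # found during scheduled maintenance (inspection, calibration, PM)
        "EF":  "evident failure: detectable by the user, but not reported",
        "HF":  "hidden failure: needs special tests or test equipment",
        "PF":  "potential failure: about to occur, not yet affecting use",
        "NPF": "no problem found",
    },
    "CM": {   # found during corrective maintenance (repairs)
        "UPF":  "unpreventable failure: normal wear and tear, unpredictable",
        "USE":  "use-induced failure: abuse, accident, or environment",
        "PPF":  "preventable and predictable failure",
        "SIF":  "service-induced failure, incl. infant mortality of parts",
        "CND":  "cannot duplicate (includes use errors); same as NPF",
        "FFPM": "failure found during PM",
    },
    "SM or CM": {  # peripheral failures
        "BATT": "battery failed before its scheduled replacement",
        "ACC":  "accessory failure (excluding batteries)",
        "NET":  "network failure; the equipment itself works",
    },
}
```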
  20. Failure Codes – Data Collection
      Hospital | #Staffed Beds | #Equipment | Teaching Nature | Starting Date | #Work Orders
      A        | 161   | 5,200  | Non‐Teaching   | 9/1/08  | 12,892
      B        | 256   | 2,800  | Non‐Teaching   | 3/1/09  | 6,265
      C        | 360   | 4,500  | Non‐Teaching   | 4/1/09  | 9,205
      D        | 415   | 6,800  | Non‐Teaching   | 10/1/08 | 18,201
      E        | 586   | 9,200  | Minor Teaching | 11/1/09 | 12,733
      F        | 169   | 3,200  | Major Teaching | 11/1/09 | 5,414
      G        | 159   | 3,300  | Minor Teaching | 11/1/09 | 5,396
      H        | 193   | 2,400  | Non‐Teaching   | 2/1/10  | 3,402
      I        | 439   | 6,600  | Minor Teaching | 8/1/08  | 17,391
      J        | 335   | 5,300  | Non‐Teaching   | 1/1/08  | 18,293
      K        | 169   | 3,000  | Minor Teaching | 11/1/09 | 5,616
      L        | 318   | 5,500  | Minor Teaching | 8/1/08  | 14,762
      M        | 370   | 4,700  | Non‐Teaching   | 3/1/09  | 7,087
      TOTAL    | 3,930 | 62,500 |                |         | 136,657
  21. Failure Codes Data – Single equipment type from a single hospital
      • 24 consecutive months of SM data
      [Bar chart: estimated probability for each SM, Single Channel Infusion Pumps – SM only (Hospital D, 316 units); codes NPF, ACC, BATT, EF, HF, PF]
      Remember the Law of Large Numbers!
  22. Failure Codes Data – Single equipment type from a single hospital
      • 24 consecutive months of CM data
      [Bar chart: estimated probability for each CM, Single Channel Infusion Pumps – CM only (Hospital D, 316 units); codes CND, UPF, ACC, BATT, USE, SIF, PPF]
      Remember the Law of Large Numbers!
  23. Annual Failure Probability (AFP)
      AFP is the probability of finding a particular class of failure (e.g., HF) during a year, calculated as below (see the sketch that follows):
      • SM failure codes (EF, PF & HF):
        – #codes / #SMs completed
      • CM failure codes (UPF, USE, PPF & SIF):
        – (#codes / #CMs completed) * ETFR, where ETFR = #CMs/year / #units (equipment type failure rate)
      • ACC & BATT:
        – Combine SM and CM probabilities as calculated above
      • No Fail(ure):
        – No Fail = 1 – sum(all other failure probabilities)
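As a reading aid, here is a minimal Python sketch of the AFP arithmetic on this slide. The function names and all example counts are hypothetical, not taken from the published data.

```python
# Minimal sketch of the AFP arithmetic from slide 23.

def afp_sm(code_count: int, sms_completed: int) -> float:
    """AFP for an SM failure code (EF, PF, HF): #codes / #SMs completed."""
    return code_count / sms_completed

def afp_cm(code_count: int, cms_completed: int,
           cms_per_year: float, units: int) -> float:
    """AFP for a CM failure code (UPF, USE, PPF, SIF):
    (#codes / #CMs completed) * ETFR, with ETFR = #CMs/year / #units."""
    etfr = cms_per_year / units  # equipment type failure rate
    return (code_count / cms_completed) * etfr

# Hypothetical counts for a 316-unit infusion pump fleet (size as on slide 21):
hf = afp_sm(code_count=12, sms_completed=600)
upf = afp_cm(code_count=40, cms_completed=250, cms_per_year=125, units=316)
print(f"AFP(HF) = {hf:.1%}, AFP(UPF) = {upf:.1%}")  # 2.0% and 6.3%
# "No Fail" is then 1 minus the sum of all the other failure probabilities.
```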
  24. Failure Codes Data – Single equipment type from a single hospital
      • Combining SM & CM data ‐> Annual Failure Probability (AFP)
      [Bar chart: estimated AFP per unit, Single Channel Infusion Pumps (Hospital D, 316 units); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset zooming in on SIF, HF, PF, PPF]
  25. Failure Codes Data – Single equipment type from a single hospital
      • Comparing AFP from 2 consecutive years
      [Bar chart: estimated AFP per unit, Year 1 vs. Year 2, Single Channel Infusion Pumps (Hospital D, 316 units)]
  26. Failure Codes Data – Single equipment type from a single hospital
      [Bar chart: estimated AFP per unit, Vital Signs Monitor (Hospital A, 174 units)]
  27. Failure Codes Data – Single equipment type from a single hospital
      [Bar chart: estimated AFP per unit, Portable Patient Monitors (Hospital C, 170 units)]
  28. Failure Codes Data – Single equipment type from multiple hospitals
      [Bar chart: estimated AFP per unit, General Purpose Electrosurgical Unit (ESU), hospitals A–M with per-hospital unit counts, plus mean]
  29. Failure Codes Data – Single equipment type from multiple hospitals
      [Bar chart: estimated AFP per unit, Electronic Thermometer, hospitals C–M with per-hospital unit counts, plus mean]
  30. Failure Codes Data – Single equipment type from multiple hospitals
      [Bar chart: estimated AFP per unit, Battery-Powered Mon/Pace/Defibrillator, hospitals A–M with per-hospital unit counts, plus mean]
  31. Using Failure Codes Data
      • Analyses performed in two ways:
        A. Comparing data obtained using different maintenance strategies within each equipment class ‐> determine the effectiveness of maintenance strategies
        B. Considering all data for each class of equipment (regardless of maintenance strategy adopted) ‐> evaluate the effectiveness of CE activities, comparing current activities (SPI/PM, repairs, etc.) versus potential activities (i.e., impact of CE on equipment failures)
  32. A. Maintenance Strategies Comparison
      Two ways to compare maintenance strategies:
      • Data from different sites (lateral comparisons)
        – Advantage: no need to wait for data collection (assuming the same failure codes are adopted)
        – Disadvantage: there could be differences in brand/model and/or accessories, user care, etc.
      • Data from the same site (longitudinal studies)
        – Advantage: no differences in brand/model and/or accessories, user care, etc.
        – Disadvantage: need to wait for data collection
  33. (Lateral) Comparison of Maintenance Strategies
      • Types of maintenance strategies adopted at different sites:
        – F3 ‐ Fixed schedule full service or inspection every 3 months
        – F6 ‐ Fixed schedule full service or inspection every 6 months
        – F12 ‐ Fixed schedule full service or inspection every 12 months
        – Samp ‐ Statistical sampling
        – R/R ‐ Repair or replace
  34. Battery‐powered defibrillator/monitor/pacemaker
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, F3 (80 units) vs. F6 (327 units)]
  35. Vital Signs Monitor
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, Samp (147), F12 (655), R/R (71)]
  36. Pulse Oximeters
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, Samp (149), F12 (464), R/R (206)]
  37. Sequential & Intermittent Compression Devices
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, Samp (278), F12 (722)]
  38. Single‐channel infusion pumps
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, Samp (542), F12 (1150)]
  39. Radiant Infant Warmers
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, F6 (69), F12 (91), Samp (19)]
  40. Electronic Thermometers
      • Any detectable differences?
      [Bar chart: estimated AFP per unit by strategy, F12 (231), R/R (1862)]
  41. Answer to Surveyor Question
      • How do you prove your non‐OEM maintenance strategy is not shortchanging patient safety?!
      • Compare AFPs between "according to OEM recommendations" and "my maintenance strategy" (see the sketch below):
        – No difference (difference < SD): I should be allowed to use "my maintenance strategy"
        – Difference found: change maintenance strategy and monitor again => Maintenance Improvement
      • In general, statistical sampling is preferable to Repair/Replace ("run to failure"), as you can monitor trends instead of waiting for annual reviews.
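A small sketch of the comparison rule on this slide, assuming per-code AFPs and a standard deviation estimated across hospitals (as on slides 28 to 30); all numbers below are placeholders, not measured data.

```python
# Sketch of the rule above: a non-OEM strategy is considered acceptable when
# its AFP differs from the reference ("per OEM") AFP by less than one SD.

def same_effectiveness(afp_mine: float, afp_ref: float, sd: float) -> bool:
    """True when the AFP difference is within one SD (no detectable difference)."""
    return abs(afp_mine - afp_ref) < sd

# Hypothetical hidden-failure AFPs for one equipment type:
if same_effectiveness(afp_mine=0.021, afp_ref=0.018, sd=0.010):
    print("No difference: keep the non-OEM strategy and keep monitoring.")
else:
    print("Difference found: change strategy and monitor again (improvement).")
```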
  42. Table of Contents (repeated as a section divider; see slide 5)
  43. Maintenance Improvement (act)
      • Maintenance revision & continual improvement
        – Inventory classification revision
        – SM frequency revision
        – Work instruction (tasks) revision
      • …while continuing to monitor effectiveness (evidence) and efficiency using:
        – Uptime
        – Failure rate
        – Patient incidents (including "near misses")
        – Failure codes
        – Others: MTBF, customer satisfaction, etc.
        – Financial indicators
  44. B. Evaluation of CE Activities – Grouping of failure codes by CE action (a grouping sketch follows)
      Failure Code | CE Responsibility / Action     | Action Class
      NPF          | none (none or review)          |
      UPF          | advise Purchasing              | FUTURE
      ACC          | guide users and Purchasing     | INDIRECT
      BATT         | guide users and Purchasing     | INDIRECT
      NET          | work with IT                   | INDIRECT
      USE          | guide users and Facilities     | INDIRECT
      EF           | guide users                    | INDIRECT
      SIF          | educate staff and advise OEMs  | DIRECT
      HF           | review SM program              | DIRECT
      PF           | review SM program              | DIRECT
      PPF          | review SM program              | DIRECT
      (ALL = FUTURE + INDIRECT + DIRECT)
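A short Python sketch of this grouping, summing per-code AFPs into the Future/Indirect/Direct classes. The class map follows the table above; the individual per-code AFP values are hypothetical, chosen only so that the class totals match the vital-signs-monitor slices on slide 46.

```python
# Map each failure code to its CE action class, then sum per-code AFPs
# into the Future/Indirect/Direct slices reported on slide 46.

ACTION_CLASS = {
    "UPF": "future",
    "ACC": "indirect", "BATT": "indirect", "NET": "indirect",
    "USE": "indirect", "EF": "indirect",
    "SIF": "direct", "HF": "direct", "PF": "direct", "PPF": "direct",
}

def group_afp(afp_by_code: dict[str, float]) -> dict[str, float]:
    groups = {"future": 0.0, "indirect": 0.0, "direct": 0.0}
    for code, afp in afp_by_code.items():
        groups[ACTION_CLASS[code]] += afp
    groups["no_failure"] = 1.0 - sum(groups.values())  # NPF/CND fall out here
    return groups

slices = group_afp({"UPF": 0.16, "ACC": 0.20, "BATT": 0.10, "NET": 0.0,
                    "USE": 0.12, "EF": 0.05, "SIF": 0.005, "HF": 0.01,
                    "PF": 0.005, "PPF": 0.0})
print({g: f"{v:.0%}" for g, v in slices.items()})
# -> future 16%, indirect 47%, direct 2%, no_failure 35%
```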
  45. Battery‐powered defibrillator/monitor/pacemaker
      [Bar chart: estimated AFP per unit, Battery-Powered Mon/Pace/Defibrillator, with the codes grouped into CE future, CE indirect, and CE direct]
  46. Failure Code Grouping Results
      • Battery-Powered Mon/Pace/Defibrillator: Direct 2%, Indirect 28%, Future 9%, No Failure 61%
      • Vital Signs Monitors: Direct 2%, Indirect 47%, Future 16%, No Failure 35%
      • Pulse Oximeters: Direct 1%, Indirect 22%, Future 6%, No Failure 71%
      • Single-Channel Infusion Pumps: Direct 3%, Indirect 56%, Future 24%, No Failure 17%
  47. Using the Risk‐Management Approach to Determine Impact
      • Risk is defined as "the combination of the probability of occurrence of harm and the severity of that harm." (ISO/IEC Guide 51:1999 and ISO 14971:2007)
      • Calculated risk = probability * severity [of harm]
      • The "risk-based criteria" should actually be called "severity-based criteria," due to the lack of probability!
  48. Estimation of Risk
      • Estimation of the probability of harm
        – A very exaggerated estimate of the probability is the AFP (because it ignores other protective mechanisms)
      • Estimation of the severity of harm
        – The severity is assigned between 0% and 100%, depending on the impact on the patient (no harm ‐ death)
      [Figure adapted from Reason (2000), Duke Univ. MC, patientsafetyed.duhs.duke.edu/module_e/swiss_cheese.html]
  49. Fennigkoh & Smith Model – Equipment Types Analyzed
      Equipment Type                  | #Hospitals | #Units | #WOs | Function | "Risk" | Maintenance | EM
      Anesthesia machine              | 7 | 152  | 767  | 10 | 5 | 5 | 20
      Neonatal ventilator             | 3 | 28   | 79   | 10 | 5 | 5 | 20
      Portable ventilator             | 3 | 60   | 226  | 10 | 5 | 5 | 20
      Volume ventilator               | 3 | 50   | 180  | 10 | 5 | 5 | 20
      Batt-pow mon/pace/defibrillator | 7 | 407  | 1567 | 10 | 5 | 4 | 19
      PCA pump                        | 7 | 430  | 700  | 9  | 5 | 4 | 18
      Syringe infusion pump           | 5 | 251  | 438  | 9  | 4 | 4 | 17
      Multi-channel infusion pump     | 5 | 256  | 498  | 9  | 4 | 4 | 17
      Single-channel infusion pump    | 6 | 1692 | 4175 | 9  | 4 | 4 | 17
      ESU, general purpose            | 7 | 164  | 411  | 9  | 4 | 3 | 16
      Blood warmer, circ. fluid       | 4 | 56   | 212  | 9  | 3 | 3 | 15
      Enteral feeding pump            | 8 | 301  | 488  | 8  | 4 | 3 | 15
      Physiological monitoring system | 5 | 286  | 280  | 7  | 4 | 3 | 14
      Ultrasound scanner, generic     | 5 | 59   | 245  | 6  | 3 | 5 | 14
      Seq & interm compression dev    | 7 | 1000 | 1287 | 8  | 4 | 2 | 14
      Vital signs monitor             | 7 | 872  | 1921 | 6  | 3 | 3 | 12
      Pulse oximeter                  | 6 | 818  | 840  | 6  | 3 | 2 | 11
      NIBP monitor                    | 6 | 223  | 403  | 6  | 3 | 2 | 11
      Infant scale                    | 8 | 159  | 175  | 2  | 3 | 2 | 7
      Infant warmer                   | 7 | 179  | 448  | 2  | 3 | 2 | 7
      Blanket warmer                  | 6 | 157  | 164  | 2  | 1 | 2 | 5
      Patient scale, floor model      | 6 | 314  | 330  | 2  | 1 | 1 | 4
  50. Estimated Annual Failure Probability
      Equipment Type                  | FUTURE | INDIRECT | DIRECT | ALL   | F&S EM
      Neonatal ventilator             | 23.6%  | 16.9%    | 11.1%  | 51.6% | 20
      Physiological monitoring system | 13.1%  | 22.7%    | 9.3%   | 45.1% | 14
      Volume ventilator               | 43.4%  | 18.7%    | 9.0%   | 71.1% | 20
      Blood warmer, circ. fluid       | 1.2%   | 5.6%     | 5.8%   | 12.6% | 15
      Anesthesia machine              | 29.0%  | 25.7%    | 5.3%   | 60.0% | 20
      Portable ventilator             | 27.0%  | 31.9%    | 5.3%   | 64.2% | 20
      Single-channel infusion pump    | 24.4%  | 55.6%    | 2.7%   | 82.7% | 17
      Syringe infusion pump           | 12.4%  | 11.4%    | 2.7%   | 26.5% | 17
      PCA pump                        | 11.8%  | 17.8%    | 2.4%   | 32.0% | 18
      Vital signs monitor             | 15.8%  | 47.0%    | 2.2%   | 65.0% | 12
      Ultrasound scanner, generic     | 28.3%  | 14.7%    | 2.0%   | 45.0% | 14
      ESU, general purpose            | 12.7%  | 8.1%     | 2.0%   | 22.8% | 16
      Batt-pow mon/pace/defibrillator | 8.6%   | 28.3%    | 1.9%   | 38.9% | 19
      Infant warmer                   | 19.1%  | 9.5%     | 1.8%   | 30.4% | 7
      NIBP monitor                    | 24.3%  | 47.2%    | 1.8%   | 73.2% | 11
      Infant scale                    | 4.2%   | 18.8%    | 1.8%   | 24.8% | 7
      Enteral feeding pump            | 8.6%   | 16.3%    | 1.5%   | 26.4% | 15
      Pulse oximeter                  | 5.7%   | 22.3%    | 1.5%   | 29.5% | 11
      Blanket warmer                  | 18.5%  | 7.6%     | 1.3%   | 27.4% | 5
      Patient scale, floor model      | 7.6%   | 17.8%    | 1.1%   | 26.4% | 4
      Seq & interm compression dev    | 14.1%  | 18.6%    | 0.5%   | 33.2% | 14
      Multi-channel infusion pump     | 14.7%  | 26.0%    | 0.4%   | 41.1% | 17
      Mean                            | 16.7%  | 22.2%    | 3.3%   | 42.3% |
      Standard deviation              | 10.0%  | 13.2%    | 3.0%   | 19.4% |
  51. Calculated Annual Risk (a worked check of one row follows)
      Equipment Type                  | Severity | FUTURE | INDIRECT | DIRECT | ALL | F&S EM
      Volume ventilator               | 100 | 43 | 19 | 9  | 71 | 20
      Portable ventilator             | 100 | 27 | 32 | 5  | 64 | 20
      Anesthesia machine              | 100 | 29 | 26 | 5  | 60 | 20
      Neonatal ventilator             | 100 | 24 | 17 | 11 | 52 | 20
      Single-channel infusion pump    | 60  | 15 | 33 | 2  | 50 | 17
      Batt-pow mon/pace/defibrillator | 90  | 8  | 25 | 2  | 35 | 19
      Physiological monitoring system | 70  | 9  | 16 | 7  | 32 | 14
      NIBP monitor                    | 40  | 10 | 19 | 1  | 29 | 11
      PCA pump                        | 90  | 11 | 16 | 2  | 29 | 18
      Multi-channel infusion pump     | 70  | 10 | 18 | 0  | 29 | 17
      Vital signs monitor             | 40  | 6  | 19 | 1  | 26 | 12
      Ultrasound scanner, generic     | 50  | 14 | 7  | 1  | 23 | 14
      Syringe infusion pump           | 80  | 10 | 9  | 2  | 21 | 17
      Infant scale                    | 80  | 3  | 15 | 1  | 20 | 7
      Infant warmer                   | 50  | 10 | 5  | 1  | 15 | 7
      Pulse oximeter                  | 50  | 3  | 11 | 1  | 15 | 11
      ESU, general purpose            | 60  | 8  | 5  | 1  | 14 | 16
      Enteral feeding pump            | 40  | 3  | 7  | 1  | 11 | 15
      Seq & interm compression dev    | 30  | 4  | 6  | 0  | 10 | 14
      Blanket warmer                  | 30  | 6  | 2  | 0  | 8  | 5
      Blood warmer, circ. fluid       | 50  | 1  | 3  | 3  | 6  | 15
      Patient scale, floor model      | 20  | 2  | 4  | 0  | 5  | 4
      Mean                            |     | 11.6 | 14.2 | 2.6 | 28.3 |
      Standard deviation              |     | 10.5 | 9.3  | 3.0 | 19.5 |
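One row of this table can be reproduced directly from slide 50, since calculated risk is just the AFP (probability) times the assigned severity. A minimal sketch, with the values copied from the single-channel infusion pump rows:

```python
# Worked check of one row of the risk table above.
severity = 0.60                                              # 60% severity
afp = {"future": 0.244, "indirect": 0.556, "direct": 0.027}  # slide 50 row

risk = {group: round(p * severity * 100) for group, p in afp.items()}
print(risk)  # {'future': 15, 'indirect': 33, 'direct': 2} -- matches the table
```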
  52. Mean Values of Probability & Risks
      • Why are you chasing the smallest slices if there are "low‐hanging fruits" (larger slices) out there?
      • Mean AFP for 22 equipment types: Direct 3%, Indirect 22%, Future 16%, No Failure 59%
      • Mean annual risk for 22 equipment types: Direct 2.6, Indirect 14.2, Future 11.6
  53. Performance Improvement – NOT just maintenance improvement
      • Direct group
        – Failure types: service-induced failures (SIF); failures not evident to (hidden from) users (HF); deteriorations in progress that are likely to become failures – potential failures (PF); preventable and predictable failures (PPF)
        – Actions: review and revise the maintenance program, e.g., increase frequency, add new tasks, and change strategy.
      • Indirect group
        – Failure types: accessory failures (ACC); battery failures (BATT); network failures (NET); failures induced by abuse, accidents, or environment issues (USE); failures evident to users but not reported (EF)
        – Actions: provide training to users, feedback to purchasing, and assistance to facility managers in reducing power line issues, water and air quality, HVAC, humidity control, etc.
      • Future group
        – Failure types: unpreventable failures (UPF)
        – Actions: improve selection in future acquisitions, favoring more reliable products and standardization.
  54. CE Impact Analysis ‐ Conclusions
      • CE impact is reaching its limits, i.e., significant investments of resources are needed for small gains in reducing risks.
      • However, much higher impact (reduction of risks) can be achieved by broadening the horizon and helping users, Facilities, and Purchasing ‐> i.e., CE should NOT focus solely on what it can do alone (i.e., SM).
      • The NIBP monitor example shows that the old myth of zero (negligible) "PM yield" needs to be abandoned. We need to consider the frequency and the severity of all failures (ALL risk), not just those managed by CE.
      • In essence: reach out of your comfort zone (maintenance) to bring more impact to patient care/risk using your expertise!
  55. Table of Contents (repeated as a section divider; see slide 5)
  56. Implementation Lessons (aka how we made it work)
      • Put failure codes at the top of selectable choices (e.g., by adding numbers to the front of the codes so they "float" to the top: 1NPF).
      • Encourage staff to discuss questionable codes and HF with their manager to ensure coding accuracy.
      • Monthly verification and corrections (sketched below):
        – Missing codes (work orders without codes)
        – Logically wrong codes (e.g., HF in repairs)
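A minimal sketch of that monthly verification pass, assuming each work order carries a maintenance type ("SM" or "CM") and a failure-code field; the field and function names are illustrative, not from any particular CMMS.

```python
# Flag work orders with missing or logically wrong failure codes.
SM_ONLY = {"EF", "HF", "PF", "NPF"}                     # legal only on SM
CM_ONLY = {"UPF", "USE", "PPF", "SIF", "CND", "FFPM"}   # legal only on repairs
# BATT, ACC, and NET are legal on both, so they pass through unflagged.

def audit(work_orders: list[dict]) -> list[str]:
    problems = []
    for wo in work_orders:
        code = wo.get("failure_code")
        if not code:
            problems.append(f"WO {wo['id']}: missing failure code")
        elif wo["type"] == "CM" and code in SM_ONLY:
            problems.append(f"WO {wo['id']}: logically wrong code {code} on a repair")
        elif wo["type"] == "SM" and code in CM_ONLY:
            problems.append(f"WO {wo['id']}: logically wrong code {code} on an SM")
    return problems

print(audit([{"id": 101, "type": "CM", "failure_code": "HF"},
             {"id": 102, "type": "SM", "failure_code": None}]))
```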
  57. Conclusions
      • Clinical Engineering must evolve together with healthcare
        – Follow the progress of medical equipment design and manufacturing (TJC's 10-year root-cause analysis (RCA) of sentinel events indicates most of them are due to use errors and communication problems)
        – Incorporate the mission‐criticality concept
        – Adopt the separation of risk and maintenance needs (high risk ≠ high maintenance, but low incidence of failed SM ≠ no SM needed)
        – Learn from the Reliability‐Centered Maintenance (RCM) experience accumulated in industrial maintenance (without fully adopting it)
        – Progress from subjective, intuitive craftsmanship to scientific, evidence‐based engineering
  58. Conclusions (cont.)
      • Refocus resources from "scheduled maintenance" (SM: SPIs and PMs) to higher‐impact tasks, e.g., use-error tracking, "self‐identified" failures and repairs ("rounding"), user training, and working with Facilities and Purchasing.
      • It is always a balancing act:
        – Needs (mission, safety, revenue, etc.)
        – Re$ource$ (human, technical, financial, etc.)
        (That's why it is engineering: find the best "balanced" solution.)
  59. Bottom Line (Plan‐Do‐Check‐Act)
      • Evidence‐Based Maintenance (EBMaint) allows us to prove to CMS and TJC that we are NOT shortchanging patient safety when we deviate from OEM recommendations (effectiveness).
      • EBMaint allows us to move beyond complying with CMS requirements and TJC standards to enhance user satisfaction and patient safety.
      • EBMaint motivates us to continually review and improve equipment maintenance strategies.
      • EBMaint also helps to prove to healthcare organizations that we are using their limited resources in the most productive manner (efficiency).
  60. THANK YOU!
      • Please contact us if you have any questions or suggestions.
      Binseng Wang, ScD, CCE, fAIMBE, fACCE
      Vice President, Performance Mgmt & Regulatory Compliance
      Telephone: 704‐948‐5729
      Email: wang‐binseng@aramark.com
