
Vital QMS Process Validation Statistics - OMTEC 2018


According to 21 CFR Part 820, medical device manufacturers are required to validate their processes and to monitor and control process parameters. The guideline on Quality Management Systems does not specify how this is accomplished; only that “a process is established that can consistently conform to requirements” and that “studies are conducted demonstrating” this. Thorough process development, optimization and control using appropriate statistical methods and tools are recommended for demonstrating that your process is both stable and capable. This session will demonstrate ways to efficiently and effectively apply recommended statistical methods and tools to process validation, with no statistical expertise needed. Using realistic process data, participants will learn how to apply tools, interpret results and draw meaningful conclusions throughout Installation Qualification (IQ), Operational Qualification (OQ) and Performance Qualification (PQ).


Vital QMS Process Validation Statistics - OMTEC 2018

  1. 1. Vital QMS Process Validation Statistics W. Heath Rushing Principal Consultant 206-369-5541 Heath.Rushing@adsurgo.com
  2. 2. 3 Contents 1. Introduction 2. Overview 3. Application of Statistical Methods: - Installation Qualification (IQ) - Operational Qualification (OQ) - Performance Qualification (PQ)
  3. 3. 4 Why are you here? According to the Quality System Regulation (QSR), “Where appropriate, each manufacturer shall establish and maintain procedures for identifying valid statistical techniques required for establishing, controlling, and verifying the acceptability of process capability and product characteristics.” Although there are many statistical methods that may be applied to satisfy this portion of the QSR, there are some commonly accepted methods that all companies can and should be using to develop acceptance criteria, to ensure accurate and precise measurement systems, to fully characterize manufacturing processes, to monitor and control process results, and to select an appropriate number of samples.
  4. 4. Overview
  5. 5. 6 Statistical Techniques “Valid in-process specifications for such characteristics shall be consistent with drug product final specifications and shall be derived from previous acceptable process average and process variability estimates where possible and determined by the application of suitable statistical procedures where appropriate.” 21 CFR 211.110 (b) “Where appropriate, each manufacturer shall establish and maintain procedures for identifying valid statistical techniques required for establishing, controlling, and verifying the acceptability of process capability and product characteristics.” 21 CFR 820.250 (a)
  6. 6. 7 GHTF Process Validation Guidance for Medical Device Manufacturers 0. Introduction 1. Purpose and scope 2. Definitions 3. Processes that should be validated 4. Statistical methods and tools for process validation – Appendix A 5. Conduct of a validation – Getting started, Protocol Development, IQ, OQ, PQ, Final report 6. Maintain a state of validation – Monitor and Control and Revalidation 7. Use of historical data in process validation 8. Summary of activities Annexes: A. Statistical methods and tools for process validation B. Example validation
  7. 7. 8 Statistical Methods and Tools for Process Validation Listed in GHTF Guidance, Annex A Acceptance Sampling Plan Analysis of Means Analysis of Variance Capability Study Challenge Test Component Swapping Study Control Chart Design of Experiments Dual Response Approach to Robust Design Failure Modes and Effects Analysis Fault Tree Analysis Gauge R&R Study Mistake Proofing Methods Multi-variable Control Chart Response Surface Study Robust Design Methods Robust Tolerance Analysis Screening Experiment Taguchi Methods Tolerance Analysis Variance Components Analysis
  8. 8. 9 Applying Statistical Methods Throughout Process Validation Installation Qualification: sample size calculations, hypothesis testing, data intervals, MSA. Operational Qualification: Ishikawa diagram, FMEA, DOE, RSM, SPC, process capability, robust design methods. Performance Qualification: SPC, process capability, FMEA.
  9. 9. 10 Applying Statistical Methods Throughout Process Validation Installation Qualification: sample size calculations, hypothesis testing, data intervals, MSA. Operational Qualification: Ishikawa diagram, FMEA, DOE, RSM, SPC, process capability, robust design methods. Performance Qualification: SPC, process capability, FMEA.
  10. 10. Application of Statistical Methods in IQ
  11. 11. 12 Installation Qualification “Installation Qualification (IQ): establishing by objective evidence that all key aspects of the process equipment and ancillary system installation adhere to the manufacturer’s approved specification and that the recommendations of the supplier of the equipment are suitably considered.” GHTF Guidance on Process Validation
  12. 12. 13 Installation Qualification “Each medical device manufacturer is ultimately responsible for evaluating, challenging, and testing the equipment and deciding whether the equipment is suitable for use in the manufacture of a specific device(s).” GHTF Guidance on Process Validation
  13. 13. 14 IQ for Heat Sealer A new heat sealer will be installed, checked, and calibrated. This installation qualification will ensure the average exhaust of pressurized air in the clean room does not exceed the requirement of 14 psi. Also, the heat sealer contains a device which measures seal strength. As part of the IQ, ensure the device provides accurate and precise measurements of seal strength. Tools: confidence interval for the true mean (one-sided), one-sample t-test (one-tailed), sample size considerations, two-sample t-test and equivalence (comparability).
  14. 14. 15 Point Estimators You are using a sample from a larger population to estimate the mean, variance, and standard deviation; you use these estimators to describe your sample. Because you are estimating the true (population) parameter with a single value, this is called a point estimator. If instead you used a range of values to estimate the true parameter, this range is called an interval estimator.
  15. 15. 16 Confidence Intervals Confidence intervals treat the mean as the point estimate and account for the variability associated with that point estimate with a margin of error.
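For reference, the standard t-based interval behind this idea (the formula itself is not reproduced in the transcript) can be written as:

```latex
% Two-sided (1 - alpha) confidence interval for the true mean,
% and the one-sided upper bound used later for the 14 psi requirement.
\bar{x} \pm t_{\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}}
\qquad\qquad
\mu < \bar{x} + t_{\alpha,\,n-1}\,\frac{s}{\sqrt{n}}
```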
  16. 16. 17 Hypothesis Testing The null hypothesis (H0) is the statement you assume about the population parameter; usually, it states that there is no difference. The alternative hypothesis (Ha) is the statement you seek to prove about the population parameter; usually, it states that there is a difference.
  17. 17. 18 Hypothesis Testing Does the seal strength equal 6.5? Is the seal strength the same for Supplier A and B? Is the seal strength the same for each size pouch (small, medium, large)? Does the supplier effect depend on the size of the pouch? Do different levels of time, temperature, pressure, and rate affect the seal strength?
  18. 18. 19 One-sample t-test H0: µ ≥ 14 (the true mean is greater than or equal to 14) Ha: µ < 14 (the true mean is less than 14) α = 0.05, 95% confidence t-stat = p-value =
  19. 19. 20 Types of Errors Did you make the right decision? The probability of a Type I error is α. The probability of a Type II error is β. The power of the test is 1 - β. Decision table: if H0 is true and you conclude H0, the decision is correct; if H0 is true and you conclude Ha, you have made a Type I error; if Ha is true and you conclude H0, you have made a Type II error; if Ha is true and you conclude Ha, the decision is correct.
  20. 20. 21 Power Power is the ability to detect differences that actually exist. Power depends on: • Sample size (n) • α • Difference to detect (δ) or effect size • Standard deviation (σ)
  21. 21. 22 Using an alpha level of 0.05, a standard deviation of 1.0, a difference to detect of 0.5, and a power of (at least) 80%, determine an appropriate sample size. Power and Sample Size 22
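A sketch of this sample-size exercise in Python (the session itself uses JMP; this assumes the statsmodels package is available and uses the inputs stated on the slide):

```python
# Sketch: sample size for a one-sample t-test with alpha = 0.05, sigma = 1.0,
# difference to detect = 0.5, and power of at least 0.80.
import math

from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(
    effect_size=0.5 / 1.0,   # delta / sigma
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"n = {n:.1f} -> use {math.ceil(n)} samples")  # roughly 34 for these inputs
```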
  22. 22. 23 One-sample t-test Using the randomly generated data, determine if the true average exhaust of pressurized air in the clean room is less than the requirement of 14 psi. H0: µ ≥ 14 (the true mean is greater than or equal to 14) Ha: µ < 14 (the true mean is less than 14) α = 0.05, 95% confidence t-stat = p-value = One-sided (95%) confidence interval: (µ < ) Conclusion:
  23. 23. 24 Using the randomly generated data, determine if the true average exhaust of pressurized air in the clean room is less than the requirement of 14 psi. One sample t-test 24
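A minimal Python sketch of this test (the presentation uses JMP); the `exhaust` values are hypothetical stand-ins for the randomly generated data, and the `alternative` keyword assumes SciPy 1.6 or later:

```python
# Sketch: one-sided, one-sample t-test of mean exhaust pressure against 14 psi.
import numpy as np
from scipy import stats

exhaust = np.array([13.2, 13.8, 13.5, 13.9, 13.4, 13.6, 13.1, 13.7])  # hypothetical

# H0: mu >= 14  vs.  Ha: mu < 14
t_stat, p_value = stats.ttest_1samp(exhaust, popmean=14, alternative="less")

# One-sided 95% upper confidence bound for the true mean
n = len(exhaust)
upper = exhaust.mean() + stats.t.ppf(0.95, df=n - 1) * exhaust.std(ddof=1) / np.sqrt(n)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}, mu < {upper:.2f} at 95% confidence")
```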
  24. 24. 25 Two-Sample t Test H0: µA = µB The means are equal. Ha: µA ≠ µB The means are different. α = 0.05, 95% confidence t stat = p-value = Output (t Test, Reactor B - Reactor A, assuming equal variances): Difference = -16.272, Std Err Dif = 1.858, Lower CL Dif = -20.079, Upper CL Dif = -12.465, Confidence = 0.95; t Ratio = -8.75575, DF = 28, Prob > |t| < .0001*, Prob > t = 1.0000, Prob < t < .0001*
  25. 25. 26 Two-Sample t Test H0: µA = µB The means are equal. Ha: µA ≠ µB The means are different. α = 0.05, 95% confidence t stat = -8.756 p-value = <0.0001 Output (t Test, Reactor B - Reactor A, assuming equal variances): Difference = -16.272, Std Err Dif = 1.858, Lower CL Dif = -20.079, Upper CL Dif = -12.465, Confidence = 0.95; t Ratio = -8.75575, DF = 28, Prob > |t| < .0001*, Prob > t = 1.0000, Prob < t < .0001*
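The same comparison can be sketched in Python; the samples below are hypothetical, and the slide's JMP report above remains the worked example:

```python
# Sketch: pooled-variance two-sample t-test, analogous to the JMP
# "Assuming equal variances" report shown above.
import numpy as np
from scipy import stats

reactor_a = np.array([52.1, 50.8, 53.0, 51.6, 52.4, 51.9])  # hypothetical
reactor_b = np.array([35.9, 34.7, 36.8, 35.1, 36.2, 35.5])  # hypothetical

t_stat, p_value = stats.ttest_ind(reactor_b, reactor_a, equal_var=True)
diff = reactor_b.mean() - reactor_a.mean()
print(f"difference = {diff:.3f}, t = {t_stat:.3f}, two-sided p = {p_value:.4g}")
```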
  26. 26. 27 Equivalence Testing The t test can conclude only that two sample means are different. It cannot be used to show that the means are the same. An equivalence test reverses the null and alternative hypotheses from the t test. If the result of an equivalence test is significant, then the conclusion is that the two means are practically equivalent.
  27. 27. 28 Equivalence Testing H0: |µA − µB| > δ The means differ by more than δ. HA: |µA − µB| ≤ δ The means differ by at most δ. α = 0.05, 95% confidence An equivalence test is performed by forming a confidence interval around the difference in sample means. If this confidence interval is entirely contained within a user-selected interval (−δ, δ), then equivalence is concluded. • Check whether the 90% CI formed around xA − xB is contained within the interval (−δ, δ). • A test size of α constructs a (1 − 2α) confidence interval because two different comparisons are being performed (against the lower and upper sides of the CI). • The selection of δ is subjective and depends on subject-matter expertise.
  28. 28. 29 Equivalence Margin Selection of the equivalence criteria (δ) is the key to the outcome of similarity. Reference: Tsong, Yi, and OB CMC Analytical Biosimilar Method Development Team (Meiyu Shen, Cassie Xiaoyu Dong). 2015. Development of Statistical Approaches for Analytical Biosimilarity Evaluation [PowerPoint]. DIA/FDA Statistics Forum.
  29. 29. 30 Using the randomly generated data, determine if two products are comparable (practically equivalent). Two Sample t-test and Equivalence 30
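A hedged sketch of the two one-sided tests (TOST) approach in Python; the samples and the margin δ = 0.5 are placeholders, and statsmodels' `ttost_ind` stands in for the JMP equivalence platform used in the session:

```python
# Sketch: equivalence (TOST) test with margin delta = 0.5; lot_a / lot_b are
# hypothetical seal-strength samples.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

lot_a = np.array([6.4, 6.6, 6.5, 6.7, 6.3, 6.5])
lot_b = np.array([6.5, 6.4, 6.6, 6.5, 6.6, 6.4])
delta = 0.5

p_value, lower_test, upper_test = ttost_ind(lot_a, lot_b, -delta, delta, usevar="pooled")
print(f"TOST p-value = {p_value:.4f}")  # p < 0.05 supports practical equivalence
```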
  30. 30. Application of Statistical Methods in OQ
  31. 31. 32 Operational Qualification “Operational Qualification (OQ): establishing by objective evidence process control limits and action levels which result in product that meets all predetermined requirements.” GHTF Guidance on Process Validation
  32. 32. 33 Operational Qualification “In this phase the process parameters should be challenged to assure that they will result in a product that meets all defined requirements under all anticipated conditions of manufacturing, i.e., worst case testing. During routine production and process control, it is desirable to measure process parameters and/or product characteristics to allow for the adjustment of the manufacturing process at various action level(s) and maintain a state of control. These action levels should be evaluated, established and documented during process validation to determine the robustness of the process and ability to avoid approaching ‘worst case conditions.’ ” GHTF Guidance on Process Validation
  33. 33. 34 GHTF Process Validation Guidance for Medical Device Manufacturers [Considerations include] “Potential failure modes, action levels and worst case conditions (Failure Modes and Effects Analysis, Fault Tree Analysis)” “The use of statistically valid techniques such as screening experiments to establish key process parameters and statistically designed experiments to optimize the process can be used during this phase.”
  34. 34. 35 OQ for Heat Sealer First, determine potential key process parameters. Next, evaluate the stability of these parameters and determine levels for screening experiments. Then conduct both a screening experiment to set initial optimal conditions and a response surface study to center the process and determine initial process capability. Lastly, determine the sensitivity of the process to variations in these key process parameters and establish process capability (Cpk > 1.0). Tools: cause-and-effect diagrams and FMEA, SPC, screening experiment, response surface study, process capability.
  35. 35. 36 Factors using Ishikawa The first step to establishing key process parameters is to brainstorm which process parameters (factors) may ‘cause’ an ‘effect’ on seal strength. A key quality tool to accomplish this is a cause-and-effect diagram (also known as an Ishikawa or fishbone diagram).
  36. 36. 37 Factors using FMEA The next step is to prioritize which process parameters/factors to include in your experiments. Both Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis can be used to accomplish this.
  37. 37. 38 FMEA and FTA “An FMEA is a systematic analysis of the potential failure modes. It includes the identification of possible failure modes, determination of the potential causes and consequences and an analysis of the associated risk…FMEA can be performed on both the product and the process. Typically, an FMEA is performed at the component level, starting with potential failures and then tracing up to the consequences. This is a bottoms up approach. A variation is a Fault Tree Analysis, which starts with possible consequences and traces down to the potential causes.” GHTF Guidance on Process Validation
  38. 38. 39 FMEA During FMEA brainstorming sessions, the following ratings for Severity (Sev), Probability of Occurrence (Occ), and the Probability of Detection (Det) are determined. The Risk Priority Number (RPN) is computed as: RPN = Sev * Occ * Det. Example rows (Item/Function: Platen): Potential Failure Mode: Platen too hot; Potential Effect: Seal Strength too low (Sev 10); Potential Causes: Temp setting too high (Occ 5, Det 1), Platen defective (Occ 3, Det 3). Potential Failure Mode: Platen too cool; Potential Effect: Seal Strength too low (Sev 10); Potential Causes: Temp setting too high (Occ 5, Det 1), Platen defective (Occ 3, Det 3).
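A small sketch of the RPN arithmetic for the platen rows above, using the ratings from the table; the data structure is purely illustrative, not a prescribed FMEA format:

```python
# Sketch: RPN = Severity x Occurrence x Detection for each listed cause.
causes = [
    ("Platen too hot", "Temp setting too high", 10, 5, 1),
    ("Platen too hot", "Platen defective",      10, 3, 3),
]

for mode, cause, sev, occ, det in causes:
    print(f"{mode} / {cause}: RPN = {sev * occ * det}")  # 50 and 90
```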
  39. 39. 40 Control Charts “Control charts are used to detect changes in the process. A sample, typically consisting of 5 consecutive units, is selected periodically. The average and range of each sample is calculated and plotted. The plot of the averages is used to determine if the process average changes. The plot of the ranges is used to determine if the process variation changes. To aid in determining if a change has occurred, control limits are calculated and added to the plots. The control limits represent the maximum amount that the average or range should vary if the process does not change. A point outside the control limits indicates the process has changed. When a change is identified by the control chart, an investigation should be made as to the cause of the change. Control charts help identify key input variables causing the process to shift and aid in reduction of the variation. Control charts are also used as part of a capability study to demonstrate that the process is stable or consistent.” - GHTF Guidance on Process Validation
  40. 40. 41 Common Control Charts Variables charts: XBar, R, I, MR. Attribute charts: p, np, c, u.
  41. 41. 42 XBar and R Chart
  42. 42. 43 I & MR chart
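A minimal sketch of how I-MR control limits are typically computed; the constants 2.66 (3/d2) and 3.267 (D4) are the standard values for a moving range of two, and the measurements are hypothetical:

```python
# Sketch: individuals (I) and moving-range (MR) chart limits.
import numpy as np

x = np.array([6.4, 6.6, 6.5, 6.8, 6.3, 6.5, 6.7, 6.4, 6.6, 6.5])  # hypothetical

mr = np.abs(np.diff(x))     # moving ranges of consecutive points
mr_bar = mr.mean()
x_bar = x.mean()

i_limits = (x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar)  # 3 / d2, d2 = 1.128
mr_ucl = 3.267 * mr_bar                                    # D4 for n = 2

print(f"I chart limits:  {i_limits[0]:.2f} to {i_limits[1]:.2f}")
print(f"MR chart limits: 0 to {mr_ucl:.2f}")
```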
  43. 43. 44 Nelson Control Rules *Taken from JMP 8.0.2 documentation.
  44. 44. 45 Design of Experiments (DOE) “The term designed experiment is a general term that encompasses screening experiments, response surface studies, and analysis of variance. In general, a designed experiment involves purposely changing one or more inputs and measuring the resulting effect on one or more outputs.” GHTF Guidance on Process Validation
  45. 45. 46 DOE for 2-Level Process Parameters DOE allows you to detect the significance of main effects as well as their interactions. Runs (Time, Press, Time*Press, Seal Strength): (-1, -1, +1, 5.7), (+1, -1, -1, 6.3), (-1, +1, -1, 7.0), (+1, +1, +1, 7.5). (Figure: square plot of the design with Time and Press axes.)
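Working the arithmetic for those four runs, each effect is the average response at the +1 level minus the average at the -1 level; a short Python sketch:

```python
# Sketch: main effects and interaction for the 2x2 design shown on the slide.
import numpy as np

time_ = np.array([-1, +1, -1, +1])
press = np.array([-1, -1, +1, +1])
seal  = np.array([5.7, 6.3, 7.0, 7.5])

def effect(sign, y):
    # average response at +1 minus average response at -1
    return y[sign == +1].mean() - y[sign == -1].mean()

print(f"Time effect:       {effect(time_, seal):+.2f}")          # +0.55
print(f"Pressure effect:   {effect(press, seal):+.2f}")          # +1.25
print(f"Time*Press effect: {effect(time_ * press, seal):+.2f}")  # -0.05
```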
  46. 46. 47 DOE for 2-Level Process Parameters The benefits of designed experiments increase as more key process parameters are added to the design. Add Temperature to the design. (Figure: design cube with Time, Temperature, and Pressure axes.)
  47. 47. 48 DOE for 2-Level Process Parameters The benefits of designed experiments increase as more key process parameters are added to the design. Add Rate to the design. (Figure: two design cubes with Time, Temperature, and Pressure axes, one at each level of Rate.)
  48. 48. 49 Screening Experiment “A screening experiment is a special type of designed experiment whose primary purpose is to identify the key input variables. Screening experiments are also referred to as fractional factorial experiments...” GHTF Guidance on Process Validation
  49. 49. 50 Screening Experiment Fractional factorial experiments give up information about some or all interactions in favor of examining more parameters. For the heat sealer, we may want to know whether Time, Temperature, Pressure, or Rate has the largest effect on Seal Strength. A 2^4 full-factorial design will have 16 runs; a half-fraction (2^(4-1)) factorial will have 8 runs. Runs (Time, Temp, Press, Rate): (-1, -1, -1, -1), (+1, -1, -1, +1), (-1, +1, -1, +1), (+1, +1, -1, -1), (-1, -1, +1, +1), (+1, -1, +1, -1), (-1, +1, +1, -1), (+1, +1, +1, +1).
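A sketch of how such an 8-run half fraction can be constructed in Python; treating Rate as the generated column (Rate = Time x Temp x Press) reproduces the runs listed above, but that assignment is an assumption rather than something stated on the slide:

```python
# Sketch: 2^(4-1) half-fraction design, defining relation I = Time*Temp*Press*Rate.
from itertools import product

print("Time Temp Press Rate")
for press, temp, time_ in product((-1, +1), repeat=3):
    rate = time_ * temp * press          # generated fourth factor
    print(f"{time_:>4} {temp:>4} {press:>5} {rate:>4}")
```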
  50. 50. 51 This demonstration illustrates how to design and analyze a screening design with Seal Strength as the response and Time, Temperature, Pressure, and Rate as the factors. For Seal Strength, Match a Target of 6.5 (specification is 5.5 – 7.5). Screening Designs 51
  51. 51. 52 Response Surface Study “A response surface study is a special type of designed experiment whose purpose is to model the relationship between the key input variables and the outputs. Performing a response surface study involves running the process at different settings for the inputs, called trials, and measuring the resulting outputs. An equation can be fit to the data to model the effect of the inputs on the outputs. This equation can then be used to find optimal targets...To ensure that only key input variables are included in the study, a screening experiment is frequently performed first.” GHTF Guidance on Process Validation
  52. 52. 53 Central Composite Design A Central Composite Design (CCD) is a widely used response surface design. It adds axial runs to the initial factorial design, so each factor in the design has 5 levels. Each added experimental run has one factor at its axial value and all other factors at 0. (Figure: design cube with axial points on the Time, Temp, and Press axes.)
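A sketch of how the CCD run list can be generated in coded units; the axial distance (alpha = 1.682, the rotatable value for three factors) and the three center runs are assumptions, since the slide does not state them:

```python
# Sketch: central composite design for Time, Temp, Press in coded units.
from itertools import product

import numpy as np

k = 3
alpha = 1.682                                                # assumed axial distance

factorial = np.array(list(product((-1.0, 1.0), repeat=k)))   # 8 cube points
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])   # 6 axial points
center = np.zeros((3, k))                                    # assumed 3 center runs

design = np.vstack([factorial, axial, center])
print(design)   # each row is one run: (Time, Temp, Press)
```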
  53. 53. 54 This demonstration illustrates how to design and analyze a CCD. What are your optimal settings to match a Seal Strength of 6.5? Run confirmation runs at the process settings to determine Installation process capability. Response Surface Design 54
  54. 54. 55 Capability Study “Capability studies are performed to evaluate the ability of a process to consistently meet a specification. A capability study is performed by selecting a small number of units periodically over time. Each period of time is called a subgroup. For each subgroup, the average and range is calculated. The averages and ranges are plotted over time using a control chart to determine if the process is stable or consistent over time. If so, the samples are then combined to determine whether the process is adequately centered and the variation is sufficiently small. This is accomplished by calculating capability indexes. The most commonly used capability indices are Cp and Cpk. If acceptable values are obtained, the process consistently produces product that meets the specification limits. Capability studies are frequently used towards the end of validation to demonstrate that the outputs consistently meet the specifications.” GHTF Guidance on Process Validation
  55. 55. 56 Is the Process Capable? The most commonly used capability index is Cpk. Example: (Figure: process distribution relative to LSL and USL.)
  56. 56. 57 Is the Process Capable? The most commonly used capability index is Cpk. Example: (Figure: process distribution relative to LSL and USL.)
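The transcript drops the formulas from these slides; the standard definitions are:

```latex
C_p = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma},
\qquad
C_{pk} = \min\!\left(\frac{\mathrm{USL} - \mu}{3\sigma},\ \frac{\mu - \mathrm{LSL}}{3\sigma}\right)
```

Cp reflects only the spread of the process against the specification width, while Cpk also penalizes a process mean that is off center.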
  57. 57. 58 Process Capability (Figure: two process distributions within LSL and USL, one with Cpk = 1.0 and one with Cpk = 2.0.) When Cpk = 1, 27/10,000 results will fall outside of the specification. When Cpk = 2, 2/1,000,000,000 results will fall outside of the specification.
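Those two rates follow from a centered normal process, where Cpk = 1 places the specification limits at 3 sigma from the mean and Cpk = 2 places them at 6 sigma; a quick check in Python:

```python
# Sketch: fraction of a centered normal process outside +/- (3 * Cpk) sigma limits.
from scipy.stats import norm

for cpk in (1.0, 2.0):
    z = 3 * cpk
    out_of_spec = 2 * norm.sf(z)   # both tails
    print(f"Cpk = {cpk}: {out_of_spec:.2e} outside spec")
# Cpk = 1.0 -> ~2.7e-03 (about 27 per 10,000)
# Cpk = 2.0 -> ~2.0e-09 (about 2 per 1,000,000,000)
```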
  58. 58. 59 This demonstration illustrates the use of confirmation runs to determine if your process is capable (Cpk > 1.0). Process Capability 59
  59. 59. 60 This demonstration illustrates the use of confirmation runs to determine if your process is capable at: a. Optimal settings b. All low settings c. All high settings Determine the Cpk for each of the three settings. Process Capability 60
  60. 60. Application of Statistical Methods in PQ
  61. 61. 62 Performance Qualification “Performance Qualification (PQ): establishing by objective evidence that the process, under anticipated conditions, consistently produces a product which meets all predetermined requirements.” GHTF Guidance on Process Validation
  62. 62. 63 Performance Qualification “In this phase the key objective is to demonstrate the process will consistently produce acceptable product under normal operating conditions.” GHTF Guidance on Process Validation
  63. 63. 64 PQ for Heat Sealer In IQ, we ensured the heat sealer was installed correctly. In OQ, we conducted tests to ensure the seal strength would meet the pre-determined specifications under all manufacturing conditions. In PQ, we want to demonstrate process consistency under normal operating conditions. To accomplish this, we need to test seal strength for an extended period of time and determine whether our process is stable and capable. We would also like to evaluate whether our process is centered, that is, how our process average compares to the target. Tools: SPC, process capability.
  64. 64. 65 This demonstration illustrates the use of process control and capability during PQ. Process Control & Capability 65
  65. 65. Adsurgo provides direct engagement consulting services and training workshops focused on the use of analytics. Our passion is for solving interesting, challenging, and meaningful problems in collaborative, team-based engagements with our clients. W. Heath Rushing Principal Consultant 206-369-5541 Heath.Rushing@adsurgo.com
