Introduction to Design of Experiments by Teck Nam Ang (University of Malaya)
This set of slides explains in a simple manner the purpose of experiment, various strategies of experiment, how to plan and design experiment, and the handling of experimental data.
The document discusses two examples of early experiments involving design of experiments principles:
1) During World War II, scientists simplified the complex factors involved in explosive detonation down to a cylinder expansion test with standardized materials and measurements to better study the fundamental mechanisms.
2) In 1747, James Lind conducted a controlled experiment providing different dietary supplements to groups of scurvy patients, finding those given oranges and lemons recovered while others did not, identifying citrus as a cure for scurvy. However, the experiment was not fully randomized and some treatments lacked scientific basis.
This document discusses Quality by Design (QbD) principles and Design of Experiments (DOE) methodology. It explains that QbD aims to design quality into products and processes through an understanding of key factors and their interactions. DOE provides a systematic approach to determine these factors and optimize conditions through carefully designed experiments. Common DOE steps include screening experiments to identify important factors, followed by optimization experiments to determine optimal levels and robustness testing to ensure consistent performance under variations.
This document discusses statistical process control (SPC) techniques for managing quality. It covers various SPC methods including error detection, error prevention, and process control systems. The benefits of SPC include controlling processes, predicting behavior, avoiding waste, and achieving defect prevention. Key SPC tools include data collection, summarization using charts, histograms, and control charts to monitor processes and detect issues. The document also discusses process capability, measurement of variation, and using frequency distributions and histograms to analyze process capability.
Statistical process control is defined as the use of statistical techniques to control a process or production method. It is used in manufacturing or production processes to measure how consistently a product performs according to its design specifications.
This document discusses statistical process control (SPC), which uses statistical methods to monitor and control processes to improve quality. SPC aims to ensure processes operate efficiently and produce specification-conforming products with less waste. Key SPC tools include control charts, histograms, cause-and-effect diagrams and check sheets. Control charts in particular plot process data over time to identify changes or variability. SPC provides benefits like reduced waste, lower costs, improved customer satisfaction and early problem detection and prevention.
Factorial design, full factorial design, fractional factorial design by Sayed Shakil Ahmed
This document discusses factorial designs and their application in formulation. It defines factorial experiments as those involving two or more factors, each with different levels. Full factorial designs involve every combination of all factors and levels, while fractional factorial designs examine multiple factors efficiently with fewer runs. Applications mentioned include formulation and processing, clinical chemistry, and studying the effects of factors like disintegrant and lubricant concentration on tablet dissolution. IVIVC and BCS classification are also discussed in relation to predicting oral absorption of immediate release formulations.
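A full factorial design of the kind described can be enumerated in a few lines. The sketch below uses hypothetical factor names and levels loosely inspired by the tablet example; none of the specific values come from the source document:

```python
from itertools import product

# Hypothetical factors and levels for a tablet formulation study
factors = {
    "disintegrant_pct": [2, 4],          # low / high levels
    "lubricant_pct": [0.5, 1.0],
    "compression_force_kN": [10, 15, 20],
}

# A full factorial design runs every combination of every level:
# 2 * 2 * 3 = 12 experimental runs
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 12
```

A fractional factorial design would instead select a carefully chosen subset of these 12 runs, trading resolution of higher-order interactions for fewer experiments.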
- Response surface methodology (RSM) uses statistical techniques to model and analyze problems with response variables influenced by multiple independent variables. The goal is to optimize the response.
- RSM has been used since the 1930s and was reviewed in landmark papers in 1966 and 1976. It is commonly used in industries, agriculture, medicine, and other fields to optimize processes and products.
- There are two main experimental strategies in RSM - first-order models to initially evaluate relationships between factors and responses, and second-order models to account for curvature and find optimal points if curvature is present.
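The second-order strategy can be sketched with a small face-centred composite design and an ordinary least-squares fit. The response values below are synthetic, generated from an assumed quadratic surface with no noise, so the fit recovers the coefficients exactly; this is an illustration of the model form, not data from the source:

```python
import numpy as np

# Face-centred central composite design in coded units (alpha = 1):
# 4 factorial points, 4 axial points, 2 centre replicates
pts = np.array([(-1, -1), (-1, 1), (1, -1), (1, 1),
                (-1, 0), (1, 0), (0, -1), (0, 1),
                (0, 0), (0, 0)], dtype=float)
x1, x2 = pts[:, 0], pts[:, 1]

# Synthetic response from an assumed true quadratic surface (no noise)
y = 90 + 2*x1 + 3*x2 + 1*x1*x2 - 4*x1**2 - 5*x2**2

# Second-order model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef recovers [90, 2, 3, 1, -4, -5] up to rounding
```

If the quadratic terms were negligible, a first-order model (the first four columns of `X`) would suffice; significant curvature is what motivates the second-order fit.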
This document discusses quality by design (QbD) approaches for biopharmaceutical development. QbD focuses on designing quality into the product and process based on an understanding of critical quality attributes and critical process parameters. Key aspects of QbD include identifying critical attributes and parameters, using tools like design of experiments to understand their impact, defining a design space, and ensuring robustness through continuous monitoring and improvement. Statistical tools and multidisciplinary teams are important for successful QbD implementation.
Statistical process control (SPC) involves using statistical methods to monitor and control processes to ensure they produce conforming products. Variation exists in all processes, and SPC helps determine when variation is normal versus when it requires correction. Key SPC tools include control charts, which graph process data over time to identify special causes of variation that need to be addressed. Process capability analysis also examines whether a process can meet specifications under natural variation. Together these tools help processes run at full potential with minimal waste.
The document describes using design of experiments (DoE) to optimize a pharmaceutical formulation. Two factors, the ratio of ingredients A:B and compressional force, were selected as independent variables. Response variables like disintegration time, hardness, and dissolution were measured for 9 formulations designed using a central composite design. Analysis of variance showed the compressional force factor significantly affected particle size index (PDI), while the ratio had little effect. The final equation related PDI to the factors in coded units. Point and interval predictions were also presented.
Control charts are graphs used to monitor quality during manufacturing. They allow issues to be identified and addressed early to maintain consistent product quality. Key aspects of control charts include:
- Plotting statistics like the mean or range of sample measurements over time
- Using statistical limits to identify processes that are in or out of control
- Interpreting patterns in the charts to determine if corrective action is needed
Control charts enable manufacturers to efficiently produce uniform products by catching problems early and avoiding unnecessary adjustments to processes that are performing normally.
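The trial limits described above can be sketched numerically. This is a minimal X-bar chart calculation using the standard A2 constant for subgroups of four; the tablet-weight data are hypothetical:

```python
import numpy as np

# Hypothetical subgroups: 8 samples of 4 tablet weights (mg)
subgroups = np.array([
    [200.1, 199.8, 200.4, 199.9],
    [199.7, 200.2, 200.0, 200.3],
    [200.5, 199.9, 200.1, 199.6],
    [200.0, 200.2, 199.8, 200.1],
    [199.9, 200.4, 200.0, 199.7],
    [200.2, 199.8, 200.3, 200.0],
    [199.6, 200.1, 199.9, 200.2],
    [200.3, 200.0, 199.8, 200.1],
])

xbar = subgroups.mean(axis=1)                         # subgroup means
rng = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges

A2 = 0.729  # standard X-bar chart constant for subgroup size n = 4
grand_mean, rbar = xbar.mean(), rng.mean()
ucl = grand_mean + A2 * rbar   # upper control limit
lcl = grand_mean - A2 * rbar   # lower control limit

# Points outside the limits suggest a special cause worth investigating
out_of_control = (xbar > ucl) | (xbar < lcl)
```

Plotting `xbar` against `ucl` and `lcl` over time gives the familiar chart; a point outside the limits signals a special cause, while points inside reflect normal common-cause variation that should not be chased with process adjustments.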
This document discusses Process Analytical Technology (PAT). It begins with an introduction to PAT, defining it as a system to design, analyze, and control manufacturing through timely measurements of critical quality attributes. It then discusses how PAT works by selecting a suitable PAT system and identifying critical process parameters. It highlights some key benefits of PAT such as improving process understanding and control, enhancing safety, and reducing variation. The document also provides examples of common PAT applications and discusses regulatory guidance around implementing PAT from agencies like the FDA.
This document discusses control charts for variables. It begins by defining variation and its sources in manufacturing processes. It then introduces control charts, explaining that they are used to visualize variations in process central tendency and determine if a process is stable and predictable. The document provides detailed instructions for establishing control charts, including selecting a quality characteristic, rational subgroups, collecting data, determining trial control limits, and revising the charts over time. It describes different types of control charts for variables and attributes. The overall purpose of control charts is to improve quality, determine process capability, and make decisions about production and specifications.
The document discusses process capability and defines key terms related to process capability. It provides the standard formula for process capability using 6 sigma and explains how process capability is compared to specification limits. It then discusses different process capability indices including Cp, Cpk, and Cpm. It explains how these indices measure both potential and actual process capability. The document also discusses limitations of the Cp index and the use of Cpk to address process centering. It describes how to calculate confidence intervals for process capability ratios and discusses some key process performance metrics.
This presentation provides an overview of response surface methodology (RSM). RSM is a statistical technique used to build models and optimize responses based on the relationships between several input variables. The presentation covers the basics of RSM, including its introduction, methodology, examples, and applications. Key topics discussed are modeling responses with polynomial equations, designing experiments, and using RSM to find an optimal response by varying input variables.
ICH Guideline Q9 - Quality Risk Management by muna_ali
A presentation on the ICH guideline Q9 (Quality Risk Management). It discusses the basic risk management procedure, the list of recognized risk management tools, and their role in the pharmaceutical industry.
Medicinal products must comply with their approved specifications before they are released onto the market. Compliance with release specifications can be demonstrated by performing a complete set of tests on the active substance and/or finished product, according to the approved specifications. Under certain conditions, an alternative strategy to systematic end-product testing is possible. So far this concept has mainly been applied to sterility testing of terminally sterilised products and has become associated with parametric release applications. Recent guidelines adopted in the ICH context (ICH Q8, Q9 and Q10) have made it possible to apply a similar release decision process to tests other than sterility; this approach has been called Real Time Release Testing (RTRT).
RTRT is a system of release that gives assurance that the product is of intended quality, based on information collected during the manufacturing process, on product knowledge, and on process understanding and control. RTRT recognises that under specific circumstances an appropriate combination of process controls (critical process parameters) together with pre-defined material attributes may provide greater assurance of product quality than end-product testing, and can as such be an integral part of the control strategy. The RTRT principle is already authorised for use as an optional alternative to routine sterility testing of products terminally sterilised in their final container, i.e. parametric release. Enhanced product knowledge and process understanding, the use of quality risk management principles, and the application of an appropriate pharmaceutical quality system, as defined within ICH Q8, Q9 and Q10, provide the platform for establishing RTRT mechanisms for other applications, for new products as well as established marketed products. Release of a product can combine an RTRT approach for certain critical quality attributes (CQAs) with a more conventional evaluation for other CQAs (partial RTRT).
This presentation deals with the concepts of Real Time Release Testing. It was compiled from material freely available from the FDA, ICH, EMEA and other free resources on the world wide web.
Statistical process control involves using statistical tools to monitor production processes and ensure quality. Descriptive statistics describe quality characteristics, while statistical process control uses techniques like control charts to determine if a process is producing products within a predetermined range. Control charts monitor processes over time, with samples plotted against control limits. If samples fall outside limits, it suggests the process is out of control. There are different types of control charts for variables that can be measured and attributes that can be counted. Monitoring processes with control charts helps distinguish common from assignable causes of variation.
Control charts and statistical process control (SPC) allow companies to monitor processes, detect issues, and enact improvements. Control charts display process data over time and help identify when processes are behaving unusually due to "special causes." SPC uses statistics to set control limits on charts and determine whether a process is in or out of statistical control. Implementing control charts involves selecting processes and variables to measure, collecting baseline data to create charts, training operators, and continuously monitoring and improving processes.
Statistical process control (SPC) is a method that uses statistical techniques to monitor processes and ensure they operate efficiently. Key tools in SPC include control charts, which graph process data over time and establish upper and lower control limits to detect assignable causes of variation. Control charts come in two main types: variables charts, which monitor quantitative measurements like weight or temperature, and attributes charts, which count defects. The advantages of SPC include increased stability, predictability, and the ability to detect the effects of process improvement efforts. SPC has various applications in pharmaceutical manufacturing for monitoring characteristics like drug potency, fill weight, and microbial counts.
This document provides guidance on calculating and interpreting the process capability index Cpk. It defines Cpk as a ratio that compares the specification tolerance to the process variation expressed in terms of standard deviations. It explains how to calculate Cpk and discusses factors that influence Cpk values such as sample size, process centering, and measurement uncertainty. The document also provides examples of the expected defective parts per million that correspond to different Cpk values and factors to consider when improving Cpk, such as machine, tooling, workholding, and workpiece variables.
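The Cp and Cpk calculations described can be sketched directly; this minimal version assumes approximately normal data, and the fill-weight measurements and specification limits are hypothetical:

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Potential (Cp) and actual (Cpk) capability from sample data.

    Cp compares the specification tolerance to 6 standard deviations;
    Cpk additionally penalises an off-centre process by using the
    distance from the mean to the nearest specification limit.
    """
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical fill-weight measurements (g), specification 99.0-101.0 g
weights = [100.1, 99.8, 100.3, 99.9, 100.0, 100.2, 99.7, 100.1, 100.0, 99.9]
cp, cpk = cp_cpk(weights, lsl=99.0, usl=101.0)
```

For a perfectly centred process Cpk equals Cp; as the mean drifts toward either limit, Cpk falls below Cp, which is why Cpk is the preferred index for judging actual capability.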
The document provides an overview of design of experiments (DOE) and factorial experiments. It defines key terms like factors, levels, treatments, responses, and noise. It explains the objectives of conducting experiments and the different types of experiments. It provides examples of 2-factor and 3-factor factorial experiments and how to analyze them. It discusses the principles of replication, randomization, and blocking. Finally, it demonstrates how to set up and analyze a general full factorial design with factors having more than two levels.
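The 2-factor analysis described above can be illustrated with a minimal 2^2 effect calculation. The run labels follow the standard coded (-1/+1) notation; the yield values are hypothetical:

```python
# Hypothetical yields for the four runs of a 2^2 factorial,
# keyed by coded levels (A, B)
runs = {(-1, -1): 28.0, (1, -1): 36.0, (-1, 1): 18.0, (1, 1): 31.0}

# Main effect of A: average response at A = +1 minus average at A = -1
effect_A = ((runs[(1, -1)] + runs[(1, 1)]) - (runs[(-1, -1)] + runs[(-1, 1)])) / 2
# Main effect of B, analogously
effect_B = ((runs[(-1, 1)] + runs[(1, 1)]) - (runs[(-1, -1)] + runs[(1, -1)])) / 2
# AB interaction: difference between the two diagonal averages
effect_AB = ((runs[(1, 1)] + runs[(-1, -1)]) - (runs[(1, -1)] + runs[(-1, 1)])) / 2

print(effect_A, effect_B, effect_AB)  # 10.5 -7.5 2.5
```

With replication, these effect estimates feed into an ANOVA to judge which are statistically significant; randomization and blocking protect the estimates from systematic bias.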
Dear All, I have prepared this presentation to give a better understanding of Statistical Process Control (SPC). It is a very informative presentation, covering the history of SPC, the basics of SPC, the PDCA approach, the benefits of SPC, and the application of the 7 QC tools for problem-solving. You can follow this technique in your day-to-day business work to solve problems. Thank you.
The document discusses the steps for conducting a response surface methodology (RSM) experiment using central composite design (CCD). It involves determining independent and dependent variables, selecting an appropriate CCD, conducting the experiment runs according to the design, analyzing the data using statistical methods to develop a mathematical model and check its adequacy, and using the model to optimize responses. Key aspects of RSM and CCD covered include developing the design, analyzing results through ANOVA and regression, and checking model validity.
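The CCD construction underlying those steps can be sketched as follows. The function name is ours, and the run counts follow the common textbook construction (factorial points, axial points at rotatable alpha, plus centre replicates):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Generate a central composite design for k factors in coded units."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatability criterion: alpha = (2^k)^(1/4)
    # 2^k factorial (corner) points
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    # 2k axial (star) points, one pair per factor
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    # Replicated centre points give a pure-error estimate
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = central_composite(2)
print(len(design))  # 4 factorial + 4 axial + 4 centre = 12 runs
```

Running the experiment at these points, fitting a second-order model, and checking adequacy via ANOVA and residual diagnostics are the subsequent steps the document lists.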
Selecting experimental variables for response surface modeling by Seppo Karrila
This document discusses selecting variables and designing experiments for response surface modeling. It recommends beginning with identifying all possible factors, then controlling some and selecting others to experiment with using a design with three levels per variable, like a Box-Behnken design. Response surface modeling fits a quadratic surface to the experimental results rather than optimizing one variable at a time, which can miss the true optimal conditions. The goal is to approximate the maximum or minimum response across variable levels through designed experiments and modeling rather than sequential optimization of individual factors.
This document discusses quality by design (QbD) approaches for biopharmaceutical development. QbD focuses on designing quality into the product and process based on an understanding of critical quality attributes and critical process parameters. Key aspects of QbD include identifying critical attributes and parameters, using tools like design of experiments to understand their impact, defining a design space, and ensuring robustness through continuous monitoring and improvement. Statistical tools and multidisciplinary teams are important for successful QbD implementation.
Statistical process control (SPC) involves using statistical methods to monitor and control processes to ensure they produce conforming products. Variation exists in all processes, and SPC helps determine when variation is normal versus requiring correction. Key SPC tools include control charts, which graph process data over time to identify special causes of variation needing addressing. Process capability analysis also examines whether a process can meet specifications under natural variation. Together these tools help processes run at full potential with minimal waste.
The document describes using design of experiments (DoE) to optimize a pharmaceutical formulation. Two factors, the ratio of ingredients A:B and compressional force, were selected as independent variables. Response variables like disintegration time, hardness, and dissolution were measured for 9 formulations designed using a central composite design. Analysis of variance showed the compressional force factor significantly affected particle size index (PDI), while the ratio had little effect. The final equation related PDI to the factors in coded units. Point and interval predictions were also presented.
Control charts are graphs used to monitor quality during manufacturing. They allow issues to be identified and addressed early to maintain consistent product quality. Key aspects of control charts include:
- Plotting statistics like the mean or range of sample measurements over time
- Using statistical limits to identify processes that are in or out of control
- Interpreting patterns in the charts to determine if corrective action is needed
Control charts enable manufacturers to efficiently produce uniform products by catching problems early and avoiding unnecessary adjustments to processes that are performing normally.
This document discusses Process Analytical Technology (PAT). It begins with an introduction to PAT, defining it as a system to design, analyze, and control manufacturing through timely measurements of critical quality attributes. It then discusses how PAT works by selecting a suitable PAT system and identifying critical process parameters. It highlights some key benefits of PAT such as improving process understanding and control, enhancing safety, and reducing variation. The document also provides examples of common PAT applications and discusses regulatory guidance around implementing PAT from agencies like the FDA.
This document discusses control charts for variables. It begins by defining variation and its sources in manufacturing processes. It then introduces control charts, explaining that they are used to visualize variations in process central tendency and determine if a process is stable and predictable. The document provides detailed instructions for establishing control charts, including selecting a quality characteristic, rational subgroups, collecting data, determining trial control limits, and revising the charts over time. It describes different types of control charts for variables and attributes. The overall purpose of control charts is to improve quality, determine process capability, and make decisions about production and specifications.
The document discusses process capability and defines key terms related to process capability. It provides the standard formula for process capability using 6 sigma and explains how process capability is compared to specification limits. It then discusses different process capability indices including Cp, Cpk, and Cpm. It explains how these indices measure both potential and actual process capability. The document also discusses limitations of the Cp index and the use of Cpk to address process centering. It describes how to calculate confidence intervals for process capability ratios and discusses some key process performance metrics.
This presentation provides an overview of response surface methodology (RSM). RSM is a statistical technique used to build models and optimize responses based on the relationships between several input variables. The presentation covers the basics of RSM, including its introduction, methodology, examples, and applications. Key topics discussed are modeling responses with polynomial equations, designing experiments, and using RSM to find an optimal response by varying input variables.
ICH Guideline Q9 - Quality Risk Managementmuna_ali
A presentation of the ICH guideline Q9 (Quality Risk Management). It discusses the basic risk management procedure, list of recognized risk management tools and its role in pharmaceutical industry.
Medicinal products must comply with their approved specifications before they are released into the market. Compliance with release specifications can be demonstrated by performing a complete set of tests on the active substance and/or finished product, according to the approved specifications. Under certain conditions, an alternative strategy to systematic end product testing is possible. So far this concept has been mainly applied to sterility testing of terminally sterilised products and has become associated with parametric release applications. Recent guidelines adopted in the ICH context (ICH Q8, Q9 and Q10) have made it possible to apply a similar release decision process to tests other than sterility, this approach has been called Real Time Release Testing (RTRT).
RTRT is a system of release that gives assurance that the product is of intended quality, based on the information collected during the manufacturing process, through product knowledge and on process understanding and control. RTRT recognises that under specific circumstances an appropriate combination of process controls (critical process parameters) together with pre-defined material attributes may provide greater assurance of product quality than end-product testing and the context as such be an integral part of the control strategy. The RTRT principle is already authorised for use as an optional alternative to routine sterility testing of products terminally sterilised in their final container i.e. parametric release. Enhanced product knowledge and process understanding, the use of quality risk management principles and the application of an appropriate pharmaceutical quality system, as defined within ICH Q8,Q9 and Q10 provide the platform for establishing RTRT mechanisms for other applications, for new products as well as established marketed products. Release of a product can be a combination of a RTR approach for certain critical quality attributes (CQAs) and a more conventional evaluation for other CQAs (partial RTR).
This presentation deals with the concepts of Real Time Release Testing. This presentation was compiled from material freely available from FDA , ICH , EMEA and other free resources on the world wide web.
Statistical process control involves using statistical tools to monitor production processes and ensure quality. Descriptive statistics describe quality characteristics, while statistical process control uses techniques like control charts to determine if a process is producing products within a predetermined range. Control charts monitor processes over time, with samples plotted against control limits. If samples fall outside limits, it suggests the process is out of control. There are different types of control charts for variables that can be measured and attributes that can be counted. Monitoring processes with control charts helps distinguish common from assignable causes of variation.
Control charts and statistical process control (SPC) allow companies to monitor processes, detect issues, and enact improvements. Control charts display process data over time and help identify when processes are behaving unusually due to "special causes." SPC uses statistics to set control limits on charts and determine whether a process is in or out of statistical control. Implementing control charts involves selecting processes and variables to measure, collecting baseline data to create charts, training operators, and continuously monitoring and improving processes.
Statistical process control (SPC) is a method that uses statistical methods to monitor processes and ensure they operate efficiently. Key tools in SPC include control charts, which graph process data over time and establish upper and lower control limits to detect assignable causes of variation. Control charts come in two main types - variables charts that monitor quantitative measurements like weight or temperature, and attributes charts that count defects. The advantages of SPC include increased stability, predictability, and ability to detect attempts to improve processes. SPC has various applications in pharmaceutical manufacturing for monitoring characteristics like drug potency, fill weight, and microbial counts.
This document provides guidance on calculating and interpreting the process capability index Cpk. It defines Cpk as a ratio that compares the specification tolerance to the process variation expressed in terms of standard deviations. It explains how to calculate Cpk and discusses factors that influence Cpk values such as sample size, process centering, and measurement uncertainty. The document also provides examples of the expected defective parts per million that correspond to different Cpk values and factors to consider when improving Cpk, such as machine, tooling, workholding, and workpiece variables.
The document provides an overview of design of experiments (DOE) and factorial experiments. It defines key terms like factors, levels, treatments, responses, and noise. It explains the objectives of conducting experiments and the different types of experiments. It provides examples of 2-factor and 3-factor factorial experiments and how to analyze them. It discusses the principles of replication, randomization, and blocking. Finally, it demonstrates how to set up and analyze a general full factorial design with factors having more than two levels.
Dear All, I have prepared this presentation to get a better understanding of Statistical Process Control (SPC). This is a very informative presentation and giving information about the History of SPC, the basics of SPC, the PDCA approach, the Benefits of SPC, application of 7-QC tools for problem-solving. You can follow this technique in your day to day business working to solve the problems. Thanking you.
The document discusses the steps for conducting a response surface methodology (RSM) experiment using central composite design (CCD). It involves determining independent and dependent variables, selecting an appropriate CCD, conducting the experiment runs according to the design, analyzing the data using statistical methods to develop a mathematical model and check its adequacy, and using the model to optimize responses. Key aspects of RSM and CCD covered include developing the design, analyzing results through ANOVA and regression, and checking model validity.
Selecting experimental variables for response surface modeling - Seppo Karrila
This document discusses selecting variables and designing experiments for response surface modeling. It recommends beginning with identifying all possible factors, then controlling some and selecting others to experiment with using a design with three levels per variable, like a Box-Behnken design. Response surface modeling fits a quadratic surface to the experimental results rather than optimizing one variable at a time, which can miss the true optimal conditions. The goal is to approximate the maximum or minimum response across variable levels through designed experiments and modeling rather than sequential optimization of individual factors.
Experiment Design - strategy and markets determine what you should test - Firmhouse
Slides from our talk at Leanconf 2016 in Manchester. Get in touch if you have any questions regarding our talk or this topic. The Experiment Design guide can be found here: http://bit.ly/experiment-design
Psychology research methods aim to test hypotheses scientifically. Researchers must account for biases that skew logic, such as hindsight bias, overconfidence, and the Barnum effect. There are two main research methods: experimental and correlational. Experimental research manipulates an independent variable to determine its causal effect on a dependent variable, while controlling for confounding variables through random assignment and double-blind procedures. Correlational research observes relationships between variables without manipulating them. Proper research requires following ethical guidelines to protect human and animal subjects.
1) The document describes a design experiment conducted in China that engaged over 300 active participants and 5,000 passive participants without them understanding the overall goal.
2) The experiment involved tasks like designing a flag, purchasing red mats from 100 different shops across China, and assembling the mats into a large design.
3) Only Chinese services could be used and the process was reviewed after each task while maintaining transparency without revealing the overall goal.
This presentation is about a lecture I gave within the "Green Lab" course of the Computer Science master program, of the Vrije Universiteit Amsterdam.
http://www.ivanomalavolta.com
This document discusses common mistakes made when designing and measuring experiments. It provides templates and guidance for running smart, measurable experiments with clear learning goals, hypotheses, metrics, and plans. Key mistakes highlighted include not defining a clear learning goal, assuming a hypothesis is true without testing it, mismeasuring or misinterpreting experimental metrics, and failing to conduct retrospectives to learn from experiments. Templates are provided to help structure experiments and ensure mistakes are avoided.
How Teaching UX is One Giant Participatory Design Experiment - Tricia Okin
Talk given at the UX Antwerp Meet Up - 05-26-2015
Teaching UX (and anything in general) is a large exercise in understanding how others learn and building empathy towards them. You’re teaching people ranging from 24-year-olds a few years out of university all the way up to 50-something lawyers. You have to be able to make people comfortable enough to acknowledge what they don’t know and not be ashamed of learning. We’ll go through the fun part of teaching and the dark side while translating teaching methods into ways to help your clients understand your design process.
Mobile Apps quality - a tale about energy, performance, and users’ perception - Ivano Malavolta
This document summarizes an experiment conducted to assess the impact of service workers on the energy efficiency of progressive web apps (PWAs) under different conditions. Seven popular PWAs were tested on high-end and low-end Android devices over WiFi and 2G networks with and without service workers enabled. The results showed that service workers did not significantly influence energy consumption, but network conditions and device type did, with WiFi and high-end devices using less energy. PWA-specific factors also impacted energy usage to a smaller degree. A manual review of the service worker code found variation in complexity and caching strategies but no clear patterns relating to energy efficiency.
FDA’s emphasis on quality by design began with the recognition that increased testing does not improve product quality (this has long been recognized in other industries). For quality to increase, it must be built into the product, which requires understanding how formulation and manufacturing process variables influence product quality. Quality by Design (QbD) is a systematic approach to pharmaceutical development that begins with predefined objectives and emphasizes product and process understanding and process control, based on sound science and quality risk management.
This presentation - Part VI in the series- deals with the concepts of Design of Experiments. This presentation was compiled from material freely available from FDA , ICH , EMEA and other free resources on the world wide web.
This document provides instructions on how to conduct a valid experiment. It explains that a good experiment tests one variable at a time and is fair and unbiased. It also states that a good experiment is valid by testing the hypothesis and involves repeated trials to reduce errors. It defines the three types of variables - the independent variable that is intentionally changed, the dependent variable that is observed and measured, and the controlled variables that stay the same. It provides examples of experiments and identifies the independent, dependent, and controlled variables.
The scientific method involves formulating a question, conducting research and observations, developing a hypothesis, performing experiments, analyzing results, and reporting conclusions. It uses a systematic process of experimentation to test hypotheses. An example is provided of a student investigating how the amount of sugar affects bread rising by varying sugar amounts in bread recipes and measuring the size of the finished loaves.
1. The document describes an analysis of response surface methodology to model two responses (Y1 and Y2) based on three factors (A, B, C).
2. A central composite design with 17 runs was used to collect data on the responses across varying levels of the factors. Response surface regressions were then used to model each response as a function of the factors and their interactions.
3. For response Y1, the regression identified factors B and C as significant, while for response Y2, factors C and B*B were found to be significant based on a 95% confidence level. Contour and surface plots of the responses are also presented.
This document summarizes key concepts in building multiple regression models, including:
1) Analyzing nonlinear variables, qualitative variables, and building and evaluating regression models.
2) Transforming variables to improve model fit, including using indicator variables for qualitative data.
3) Common model building techniques like stepwise regression, forward selection, and backward elimination.
Final project for my vibrations class was to perform modal analysis of an object of our choosing and to compare our results against FEA software simulations.
This document contains data and calculations related to linear regression analysis. It includes regression equations, calculations of mean and standard deviation, and use of Cramer's rule to determine regression coefficients from sample data. Regression lines are fitted to several data sets to determine the relationships between variables.
The document describes the design of a discrete-time decoupling control system for a two-input, two-output system. Channel 1 uses an output feedback controller with a settling time of 2 seconds and 10% overshoot, while channel 2 uses a state variable feedback controller with a state observer and has a settling time of 4 seconds and 0% overshoot. The controller designs, including transfer function models and filter/controller calculations, are shown. Simulation results demonstrating the decoupled control of each channel are also presented.
The document provides worked solutions for calculating various statistical measures - arithmetic mean, median, mode, harmonic mean, and geometric mean - for 5 sets of data, applying the relevant formula for each measure.
This document discusses monitoring distributed high performance computing systems. It describes using the Nudnik infrastructure monitoring tool to collect metrics from systems, parse the metrics, and take actions. Nudnik can collect baseline metrics with small latencies, load test metrics under CPU, memory, disk and network stress, and introduce chaos by setting failure percentages or response latencies randomly. It reports metrics to databases and services like InfluxDB, Elasticsearch and Prometheus.
This document summarizes novel statistical methods for genetic association studies, including those that account for population structure. It describes methods for detecting gene-gene interactions and inferring copy number variations. For interactions, it proposes using graphics processing units to efficiently search large model spaces. For copy number analysis, it presents a hidden Markov model approach to deconvolve tumor profiles from normal cell contamination. Speedups of over 100x were achieved by parallelizing the model training on a GPU.
This document discusses statistical process control and control charts. It defines the goals of control charts as collecting and visually presenting data to see when trends or out-of-control points occur. Process control charts graph sample data over time and show the process average and upper and lower control limits. Attribute control charts indicate whether points are in or out of tolerance, while variables charts measure attributes like length, weight or temperature over time. Examples are provided to illustrate p-charts, R-charts and X-bar charts using hotel luggage delivery time data.
This document discusses multicollinearity, beginning with definitions and the case of perfect multicollinearity. It then examines the case of near or imperfect multicollinearity using data on the demand for widgets. There is high multicollinearity between the price and income variables, resulting in unstable coefficient estimates with large standard errors and insignificant t-statistics. The document outlines methods to detect multicollinearity such as high R-squared but insignificant variables, high pairwise correlations, auxiliary regressions, and variance inflation factors. It provides an example using data on chicken demand.
This document summarizes the time series analysis and forecasting model for Turkey's Consumer Price Index (CPI). It presents the results of stationarity tests, correlation analyses, and unit root tests on the CPI and other variables. It then estimates a linear regression model using first differences of the log-transformed time series to forecast CPI. Diagnostic tests show the model has no autocorrelation or heteroskedasticity and the residuals are normally distributed. The model provides statistically significant short-term forecasts of inflation.
It is a short project on the Boston Housing dataset available in R. It shows the variables in the dataset and its interdependencies. A Regression Model is created taking some of the most dependent variables and adjusted to make a best possible fit. Lastly, the variances are analysed and adjusted.
Structural analysis II by moment-distribution CE 313, turja deb mitun id 13010... - Turja Deb
The document summarizes the solution to determining the reactions and drawing the shear and bending moment diagrams for a beam using the moment distribution method. Key steps include: 1) calculating the stiffness factors and distribution factors for each joint; 2) using these factors to calculate the fixed end moments in a moment distribution table; 3) iteratively solving the table to determine the internal moments at each joint; and 4) using the internal moments to calculate the reactions at each support and plot the shear and bending moment diagrams.
Push And Pull Production Systems (Chap 7 Ppt) - 3abooodi
The document discusses push and pull production control systems. A push system like MRP initiates production based on forecasts, while a pull system like JIT initiates production based on current demand. MRP involves gathering demand data, determining planned order releases using explosion calculus, and developing shop floor schedules. Lot sizing algorithms aim to balance setup and holding costs. Capacity constraints and improvement steps can further optimize production planning.
I presented these slides at a meeting of ACM data mining group. I discuss using data mining to improve performance of an existing trading system. The presentation was video taped. You can see the video at:
http://fora.tv/2009/05/13/Michael_Bowles_Neural_Nets_and_Rule-Based_Trading_Systems
if you have any questions or comments contact me: mike@mbowles.com or
http://www.linkedin.com/in/mikebowles
The document summarizes a hybrid binary coded genetic algorithm (HBGA-C) for constrained optimization problems. It discusses how HBGA-C uses techniques like tournament selection, single point crossover, bitwise mutation, and quadratic approximation to find optimal solutions while satisfying constraints. The summary compares the performance of HBGA-C to a standard binary coded genetic algorithm (BGA-C) on test problems, finding that HBGA-C generally performs better in terms of success rate, function evaluations, standard deviation, and objective function values, though it may take more time. In conclusion, HBGA-C is presented as superior to BGA-C for constrained optimization.
This document describes the results of a factorial design experiment with 3 factors (Fill Rate, Ramp Rate, Suck Back) each at 2 levels. It provides details of the experimental design such as the number of runs and replicates. It presents the effects and coefficients from the factorial regression analysis and identifies Suck Back as having the largest effect on the response (Means of repeats). Residual plots indicate a good model fit except for one outlier. Reduced models removing non-significant terms like Ramp Rate are also examined.
This document describes a regression analysis conducted on data containing 97 observations of PSA levels and 7 predictor variables. Initially, a full regression model was fit using the first 65 observations. Diagnostic plots of the residuals showed some lack of randomness, indicating a need for transformation. A Box-Cox transformation with lambda=0.5 was applied to the response variable before refitting the model. The transformed model will be validated using the remaining 32 observations to select the best regression model for predicting PSA levels from this data.
5. Advantages of response surface design
• Help you better understand and optimize your response.
• Used to refine models after you have determined the important factors using factorial designs.
6. Central composite design (CCD)
Factorial points: estimate main effects and interactions
Axial points: estimate the pure quadratic terms
Center points: estimate pure error
→ Builds a quadratic response surface
→ Resolves both main effects and interactions
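The three point types above can be generated programmatically. A minimal sketch in Python working in coded units (the function name and the rotatable-alpha default are illustrative assumptions, not from the slides):

```python
import itertools

def central_composite(k, alpha=None, n_center=1):
    """Coded-unit CCD for k factors: 2^k factorial points,
    2k axial points at distance alpha, plus center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # common choice for a rotatable design
    factorial = [tuple(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a          # one factor at +/- alpha, others at center
            axial.append(tuple(point))
    center = [tuple([0.0] * k)] * n_center
    return factorial + axial + center

# Two factors: 4 factorial + 4 axial + 1 center = 9 runs
runs = central_composite(2)
```

The coded points would then be mapped back to the physical factor ranges before running the experiment.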
14.
• Model is fixed: algebra equation, time-consuming
• Model reduction: SAS regression, interaction effect, saves time
Modified mixture design methods
[Figure: ternary plots of the Binary Design (A) and Ternary Design (B) for the IBr/Cl mixture system, with axes scaled from 0 to 1]
15. Advantages of mixture design
• Designs for these experiments are useful because many product design and development activities in industrial situations involve formulations or mixtures.
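A common way to lay out such mixture experiments is a {q, m} simplex-lattice, in which each of q components takes proportions 0, 1/m, ..., 1 and every blend sums to 1. A small illustrative sketch (the function name is an assumption; the slides do not give code for this):

```python
import itertools

def simplex_lattice(q, m):
    """All q-component blends whose proportions are multiples of 1/m
    and sum to exactly 1 (the {q, m} simplex-lattice design)."""
    return [tuple(c / m for c in combo)
            for combo in itertools.product(range(m + 1), repeat=q)
            if sum(combo) == m]

# {3, 2} lattice: the 3 pure blends plus the 3 binary 50/50 blends = 6 points
points = simplex_lattice(3, 2)
```

The defining constraint — components summing to one — is exactly what distinguishes mixture designs from ordinary factorial designs.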
23. Contour plot - e.g. PCE

% Build the composition matrices: each fraction runs from 0 to 1 in steps of 0.001
A = tril(meshgrid(0:0.001:1));
B = tril(meshgrid(1:-0.001:0)');
C = tril(1 - A - B);
% Map the mixture fractions (A, B, C) onto ternary-diagram (x, y) coordinates
x = tril(0.5.*(1 + C - B));
y = tril((3^0.5)*0.5.*A);
% Response model (PCE) obtained from the SAS regression
z = 5.57799.*A + 4.53861.*B + 6.53751.*C - 5.83005.*A.*B ...
    + 64.07944.*A.*B.*C + 11.75880.*B.*C.*(B - C);
[C,h] = contourf(x, y, z, ...
    [1,2,3,4,5,6,6.5,7,7.2,7.4,7.6], 'LineWidth', 1);
axis([0,1,0,1]);
clabel(C, h, 'manual', 'fontsize', 15);
hold on
% Dotted reference lines of the ternary grid
plot([0.375,0.625],[0.6495,0.6495],'k:');
plot([0.25,0.75],[0.433,0.433],'k:');
plot([0.375,0.75],[0.6495,0],'k:');
plot([0.25,0.5],[0.433,0],'k:');
plot([0.125,0.25],[0.2165,0],'k:');
plot([0.125,0.875],[0.2165,0.2165],'k:');
plot([0.25,0.625],[0,0.6495],'k:');
plot([0.50,0.75],[0,0.433],'k:');
plot([0.75,0.875],[0,0.2165],'k:');
29. Factors and levels for the 2^3 Full Factorial Design

Factor                               Level (+)   Level (-)
A, P3HT:PCBM concentration (wt%)     2.5         1.5
B, rpm                               600         1000
C, time (s)                          60          40

Full Factorial Design
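The eight runs of this 2^3 design can be enumerated directly from the factor table. A sketch in Python using the levels above (the dictionary key names are illustrative):

```python
import itertools

# Factor levels taken from the table above (+ level listed first)
levels = {
    "A: P3HT:PCBM conc. (wt%)": {"+": 2.5, "-": 1.5},
    "B: rpm":                   {"+": 600, "-": 1000},
    "C: time (s)":              {"+": 60,  "-": 40},
}

# 2^3 = 8 treatment combinations, one per sign pattern
runs = [{name: levels[name][sign] for name, sign in zip(levels, signs)}
        for signs in itertools.product("+-", repeat=len(levels))]
```

In practice these eight runs would be replicated and executed in randomized order, following the replication and randomization principles covered earlier in the deck.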