This document discusses response surface methodology (RSM) and how it can be used to optimize a response with multiple factors. RSM combines different design of experiments (DOE) techniques to map the relationship between factors and responses. It involves performing an initial experiment, following the slope of steepest ascent to find an optimal region, and then using a central composite design within that region to estimate quadratic effects and locate the true optimum. The document provides a chemical engineering example to illustrate how RSM would be applied step-by-step to maximize chemical yield based on reaction time and temperature.
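The steepest-ascent step described above can be sketched in a few lines. This is an illustrative example with invented yield numbers (not the document's data): fit a first-order model to a 2^2 factorial in coded units, then step along the fitted gradient toward higher predicted yield.

```python
# Steepest-ascent sketch for RSM (coded units; data invented for illustration).
# Runs: (x1 = time, x2 = temperature, observed yield)
runs = [(-1, -1, 39.3), (1, -1, 40.9), (-1, 1, 40.0), (1, 1, 41.5)]

# For an orthogonal 2^2 design the least-squares coefficients reduce to averages:
n = len(runs)
b0 = sum(y for _, _, y in runs) / n
b1 = sum(x1 * y for x1, _, y in runs) / n   # effect of time (per coded unit)
b2 = sum(x2 * y for _, x2, y in runs) / n   # effect of temperature

# Path of steepest ascent: move in proportion to (b1, b2).
# Here we take steps of one coded unit in x1 and b2/b1 units in x2.
path = [(i, i * b2 / b1) for i in range(1, 4)]
print(b1, b2)
print(path)
```

Experiments would then be run along `path` until the yield stops improving, at which point a central composite design around that region estimates the quadratic surface.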
Advanced DOE with Minitab (presentation in Costa Rica), by Blackberry&Cross
This document describes using a split-plot design for a wind tunnel experiment to optimize the aerodynamic performance of a racecar. The experiment had 4 factors, with 2 that were hard-to-change (front and rear ride heights) and 2 that were easy-to-change (yaw angle and grill cover). A split-plot design was used to reduce the total time needed, collecting data from 45 runs over 10 hours instead of 36 runs over 30 hours. The analysis accounted for two sources of error and showed several significant factors for improving downforce and reducing drag.
Heuristic search techniques use heuristics or rules of thumb to help find approximate solutions faster when classic problem-solving methods are too slow or cannot solve a problem. Some common heuristic search techniques described in the document include hill climbing, simulated annealing, A* search, and best-first search. Heuristics help guide the search process by evaluating information at each step and choosing which path or branch to follow next based on ranking alternatives. While heuristic methods may not guarantee an optimal solution, they can help solve problems more efficiently than uninformed search techniques.
Optimum engineering design - Day 6. Classical optimization methods, by SantiagoGarridoBulln
The document discusses various optimization methods for engineering design problems, including direct search methods, the Nelder-Mead algorithm, and simulated annealing. Direct search methods like Nelder-Mead, particle swarm optimization, and genetic algorithms solve optimization problems without gradients by evaluating the objective function. The Nelder-Mead algorithm finds the minimum within a simplex by reflecting, expanding, contracting, or shrinking it in each iteration. Simulated annealing models the physical annealing process and accepts worse solutions probabilistically to avoid local optima.
This document discusses various heuristic search techniques, including generate-and-test, hill climbing, best first search, and simulated annealing. Generate-and-test involves generating possible solutions and testing them until a solution is found. Hill climbing iteratively improves the current state by moving in the direction of increased heuristic value until no better state can be found or a goal is reached. Best first search expands the most promising node first based on heuristic evaluation. Simulated annealing is based on hill climbing but allows moves to worse states probabilistically to escape local maxima.
This document discusses various heuristic search techniques used in artificial intelligence. It begins by defining heuristics as techniques that find approximate solutions faster than classic methods when exact solutions are not possible or not feasible due to time or memory constraints. It then describes heuristic search, hill climbing, simulated annealing, A* search, and best-first search. Hill climbing is presented as an example heuristic technique that evaluates neighboring states to move toward an optimal solution. The document also discusses problems that can occur with hill climbing like getting stuck in local maxima.
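The simulated-annealing idea described in these summaries, accepting worse moves with a temperature-dependent probability to escape local maxima, can be sketched on a toy objective. Everything here (the objective, cooling schedule, and parameters) is invented for illustration:

```python
import math
import random

# Toy objective to maximize: f peaks at x = 3 on the integer range 0..6.
def f(x):
    return -(x - 3) ** 2

def simulated_annealing(start, steps=2000, t0=5.0, cooling=0.995, seed=42):
    rng = random.Random(seed)
    x = start
    best = x
    t = t0
    for _ in range(steps):
        # Random neighboring state, clamped to the search range
        neighbor = min(6, max(0, x + rng.choice([-1, 1])))
        delta = f(neighbor) - f(x)
        # Always accept improvements; accept worse moves with prob exp(delta/T),
        # which shrinks as the temperature cools
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = neighbor
        if f(x) > f(best):
            best = x
        t *= cooling
    return best

print(simulated_annealing(0))
```

Plain hill climbing would also solve this convex toy problem; the probabilistic acceptance only pays off on objectives with multiple local maxima.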
The reduced model is better than the full model based on the following criteria:
1. The reduced model has a higher R-squared and adjusted R-squared value indicating it fits the data better.
2. The predicted R-squared of the reduced model is closer to the adjusted R-squared indicating it has better predictive power.
3. The PRESS value which indicates prediction accuracy is lower for the reduced model.
4. The lack-of-fit test is not significant for the reduced model, indicating it fits the data as well as the full quadratic model without the extra terms.
Therefore, the reduced model is more statistically significant and has better predictive ability than the full quadratic model based on these criteria.
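Two of the criteria above, PRESS and predicted R-squared, can be computed directly from leave-one-out residuals. A sketch for a simple one-predictor regression, using invented data rather than the models compared above:

```python
# PRESS and predicted R-squared for simple linear regression y = b0 + b1*x.
# The leave-one-out prediction error at point i equals e_i / (1 - h_ii),
# where e_i is the ordinary residual and h_ii the leverage.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]   # illustrative data

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
b0 = ybar - b1 * xbar

press = 0.0
for x, y in zip(xs, ys):
    e = y - (b0 + b1 * x)                 # ordinary residual
    h = 1.0 / n + (x - xbar) ** 2 / sxx   # leverage of this point
    press += (e / (1.0 - h)) ** 2         # squared leave-one-out residual

sst = sum((y - ybar) ** 2 for y in ys)
r2_pred = 1.0 - press / sst               # predicted R-squared
print(round(r2_pred, 3))
```

A lower PRESS (and hence a predicted R-squared closer to the adjusted R-squared) is the signal, used above, that a model predicts new observations well rather than merely fitting the ones it was trained on.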
- Response surface methodology (RSM) uses statistical techniques to model and analyze problems with response variables influenced by multiple independent variables. The goal is to optimize the response.
- RSM has been used since the 1930s and was reviewed in landmark papers in 1966 and 1976. It is commonly used in industries, agriculture, medicine, and other fields to optimize processes and products.
- There are two main experimental strategies in RSM - first-order models to initially evaluate relationships between factors and responses, and second-order models to account for curvature and find optimal points if curvature is present.
- Response surface methodology (RSM) is a statistical technique used to optimize processes and develop new products. It was developed in the 1950s to improve chemical processes.
- RSM uses experimental designs and mathematical/statistical techniques to model and analyze the relationship between inputs and outputs or responses. The goal is to optimize the response by selecting the best setting of each input variable.
- Common RSM methods include steepest ascent/descent, central composite design, and Box-Behnken design. They are used to estimate coefficients in a polynomial regression model and determine optimal settings for the inputs.
IB Internal Assessment Guide1. Your overall IB mark (the one s.docx, by wilcockiris
IB Internal Assessment Guide
1. Your overall IB mark (the one sent to universities after the IB test) in any IB science course is based upon two kinds of assessments or grades:
· External Assessment: Your score on the end-of-course exam (76% of total IB mark)
· Internal Assessment: Your performance on in-class laboratory work (24% of total IB mark)
2. Internal Assessment is a collection of work completed by the student during the course of the IB class. Each candidate must keep all investigations carried out and summarize them on form 4/PSOW (Group 4 -- Experimental Science, Practical Scheme of Work). HL students are required to demonstrate 60 hours of laboratory/field investigations over the two years of the course. SL students are required to demonstrate 40 hours over 1 year.
3. Your laboratory work and report write-ups will be assessed (that means ‘graded’) using very strict IB criteria. All IB science teachers worldwide must use the same criteria and apply them in the same way, which is quite a challenge! To ensure that everyone is following the rules and applying the criteria correctly, schools must send samples of graded student lab reports to IB for monitoring. If a teacher is marking too hard or too soft, the marks that teacher awarded will be adjusted accordingly.
4. All IB lab reports are graded using up to five IB Internal Assessment Criteria. They are:
· Design (D)
· Data Collection and Processing (DCP)
· Conclusion & Evaluation (CE)
· Manipulative Skills (MS)
· Personal Skills (PS)
5. Each of these criteria is further divided into three parts called ‘Aspects’. When I grade your lab report, I will determine whether you met each aspect completely, partially, or not at all (c, p or n). This will then determine what total mark you earn on that section of your lab report. You can earn up to 6 points for each section—a “complete” is worth 2 points, a “partial” is worth 1 point, and a “not at all” is worth 0 points toward that total score. I will maintain careful records of the experiments we do and what marks each student achieved. Additionally, you will maintain all your graded lab work in a 3-ring binder.
6. Wow, this looks like an awful lot of work for lab reports! But you must keep in mind some very important points:
· You will not have to write a full lab report (using all the criteria) for every lab!! In fact, most of the labs we do will focus on only one or two of the criteria, so you will only write up these parts. Only a few labs will assess each of the first three criteria.
· Your overall mark (0-6) in each criterion is not an average of all your labs. Instead, it is a summary mark that reflects your level of achievement by the end of the course. So don’t worry if you get some low scores initially. They won’t count against you in your IB internal assessment grade as long as you steadily improve. There is plenty of time to learn and improve as the course goes on!
7. In addition to this very strict .
The document discusses various methods for optimizing fermentation media to maximize product yield, including:
1. Classical methods that vary one variable at a time require many experiments as more variables are added.
2. Statistical methods like Plackett-Burman design, response surface methodology, and central composite design require fewer experiments while analyzing interactions between multiple variables. They are better for industrial optimization.
3. The optimization process identifies the most important media components and conditions, then determines their optimal concentrations/levels to maximize biomass or desired product concentration through experimental design and statistical analysis.
1_Introduction.pdf1 Dynamics and Control Topic .docx, by eugeniadean34240
1_Introduction.pdf
Dynamics and Control
Topic 1: Introduction
Tentative Lecture Schedule
What is Control?
Think of some “control” examples:
• From your study
• From industry
• From real-life
Commonly used terms in “control”?
Open Loop Control vs. Closed Loop Control?
Open Loop Control:
• There is no feedback
• Calibration is the key!
• Can be sensitive to disturbances
Open Loop Control:
• Objective: make bread toasted
• Control action: set timer
• Control result (performance)?
• How to improve the performance?
• Use temp sensor
• Use microprocessor and actuator to control the spring
• Worth it?
Feedback
Closed Loop Control:
• There is a feedback (sensor)
• Compare actual behavior with desired behavior
• Make corrections based on the error
• The sensor is the key element of a feedback control
• Designing a proper control algorithm is the focus of this subject!
Room Temp Control: Open or Closed?
• Feedback?
• Control objective?
• Controller?
• System/plant?
• Disturbance?
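The closed-loop idea behind the room-temperature question can be sketched numerically. This is a minimal proportional-control simulation with made-up plant constants (gain, heat-leak rate, outside temperature), not taken from the slides:

```python
# Closed-loop room-temperature sketch: each step measures the temperature
# (feedback), computes the error against the setpoint, and applies heat in
# proportion to that error. Plant constants are invented for illustration.
def simulate(setpoint=21.0, t0=15.0, kp=0.8, leak=0.1, steps=200, dt=1.0):
    temp = t0
    for _ in range(steps):
        error = setpoint - temp              # desired minus measured behavior
        heat = kp * error                    # controller: proportional action
        # Plant: room warms with heat input, loses heat toward 10 C outside
        temp += dt * (heat - leak * (temp - 10.0))
    return temp

print(round(simulate(), 2))
```

Note that the loop settles below the 21-degree setpoint: proportional-only control leaves a steady-state offset, which is one motivation for the richer control algorithms this subject goes on to design.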
Applications of Feedback Control
• Manufacturing
Applications of Feedback Control
• Robotics
UNDERWATER ROBOT
Applications of Feedback Control
• Aerospace and Astronautics
Applications of Feedback Control: BIG DOG in Action
BOSTON DYNAMICS
Control System = Sensing + Actuation + Control + Plant
BIG DOG CONTROL: Block Diagram
[Block diagram, condensed from slide residue: the desired walking speed is the reference; foot trajectory planning, together with a state machine tracking leg state, gait coordination mechanisms, and influences to/from other legs, produces virtual leg coordinates; a PD servo generates virtual leg forces, which virtual-leg blocks (IK = inverse kinematics, FK = forward kinematics, VM = virtual model) convert into joint torques for the robot; joint angles and velocities are fed back, and the environment acts as the disturbance.]
Exercise:
• Use Block Diagram to represent this driving system
• Indicate: sensor, actuator, control, plant, reference & disturbance
[Answer diagram, condensed from slide residue: the eye is the sensor, the brain the controller, the hand and foot the actuators, and the car the plant; desired direction and speed form the reference, actual direction and speed are fed back, the difference (error) drives the controller, and disturbances act on the car.]
General Block Diagram Representation
[Diagram, condensed from slide residue: the reference is compared with the sensed response to form an error; the error drives the control block, which commands actuation of the plant; a sensor measures the plant's response and closes the loop.]
Feedback Control Concept
Not limited to engineering systems only!
Human Grasping Motion
Feedback Control Concept…cont’d
Not limited to engineering systems only!
e.g. Taking Dynamics & Control…
Two Main Criteria of Good Feedback Control
• Acceptable Dynamic Responses
• Stability
Crash of SAAB JAS-39 due to instability
Systematic Control Design Process
Goa.
Using Monte Carlo Simulation in Project Estimates by Akram Najjar
The PMI Lebanon is glad to announce that Akram Najjar is the speaker for a lecture titled “Using Monte Carlo Simulation in Project Estimates”, delivered on Thursday, 28 July 2016.
Lecture Outline
* Why are single point estimates unreliable and what is the alternative?
* What are distributions and how do we extract random samples from them (using Excel)? Two costing examples.
* How to set up a Monte Carlo Simulation model in a spreadsheet?
* Two PM examples (in detail)
* How to statistically analyze the thousands of runs to reach reliable estimates?
Lecture Objectives
* A Project Manager usually knows how certain parameters (such as durations, resource rates, or quantities) behave, but can almost never define reliable single-point estimates for them; as a result, many projects fail due to unreliable estimates. The alternative is for the PM to use his or her knowledge of how specific parameters behave statistically. For example, the PM may know that one task's duration follows the bell-shaped curve, while another is uniformly distributed (flat variation), triangular, or Beta-PERT. The PM can then use Monte Carlo Simulation (MCS) to arrive at statistically significant and robust results.
MCS relies on two processes. Process 1 develops a spreadsheet model that calculates the critical path, the total cost, etc.; the calculation is set up in a single row (or run), which is then duplicated a large number of times (thousands). Process 2 inserts Excel functions into each of the parameters (durations, costs); in each run, these functions draw a sample from a statistical distribution that properly describes the behavior of that parameter. For example, if a specific duration follows a Normal (bell) distribution with average A and standard deviation S, the model generates a different value for that duration in each run, conforming to the bell curve defined by A and S.
Each of these thousands of runs provides the PM with a different “simulation” of the duration, the total cost, etc. By statistically analyzing the thousands of results, the PM can arrive at a robust and reliable estimate. Proprietary add-ons for Monte Carlo simulation in Microsoft Project are available; however, it is easy, free, and more flexible to use native Microsoft functions to carry out the full simulation. The talk covered all the steps needed for such simulations, giving several examples.
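The two-process recipe above can be sketched in plain Python instead of a spreadsheet. The task names, distributions, and parameters below are invented for illustration; each "run" samples every task duration from its distribution, and the collection of totals is then analyzed statistically:

```python
import random
import statistics

# One simulation run: sample each task duration from its distribution.
# Three sequential tasks with made-up parameters (days).
def one_run(rng):
    a = rng.gauss(10, 2)           # task A: Normal(mean=10, sd=2)
    b = rng.triangular(4, 12, 6)   # task B: Triangular(low=4, high=12, mode=6)
    c = rng.uniform(3, 7)          # task C: Uniform(3, 7)
    return a + b + c               # tasks in sequence: total project duration

rng = random.Random(0)
totals = [one_run(rng) for _ in range(10_000)]   # thousands of runs

# Analyze the runs: a robust estimate is a distribution, not a single number.
mean = statistics.mean(totals)
p90 = sorted(totals)[int(0.9 * len(totals))]     # 90th-percentile duration
print(round(mean, 1), round(p90, 1))
```

The 90th percentile is the kind of figure a PM can commit to ("90% of simulated projects finished by then"), which a single-point estimate cannot provide.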
ICML2017 best paper (Understanding black box predictions via influence functi..., by Antosny
This document introduces influence functions, which can be used to explain black-box model predictions by analyzing how predictions would change based on small modifications to the training data. It provides background on Taylor series and Newton's method. Influence functions are defined based on how a model's parameters and test loss would change if a single training point was upweighted. Efficient calculation methods are discussed, as are extensions to non-differentiable losses and non-convex models. Potential use cases include understanding model behavior, identifying and fixing mislabeled data, and generating adversarial training examples.
This document discusses various optimization techniques, including classical optimization, statistical design of experiments, simulation and search methods. Classical optimization uses calculus to find the maximum or minimum of a function with one or two variables. Statistical design of experiments is a structured method to determine relationships between factors and responses using techniques like factorial designs. Simulation and search methods do not require differentiability, and include methods like steepest ascent, response surface methodology, and contour plots to find optimal values of responses.
Calculator-Techniques for engineering.pptx, by Soleil50
This document provides instructions for using calculator techniques in a statistics course. It discusses using the shift solve and calc functions to find values and roots of expressions. It also covers using the equation solver, complex, and statistics modes to solve systems of equations, operate on complex numbers, and perform statistical calculations like regression. Examples are provided for each technique explained.
The document describes a case study involving optimizing a catapult to hit targets within a specified range. A team is tasked with developing a process to reliably hit targets from 5-12 feet away within 6 inches of accuracy. The team conducts experiments to identify key factors (stop pin position, draw back angle, front tension pin) affecting the distance and variation. A full factorial design of experiments is used to determine the relationship between factors and the distance response. The analysis results in an equation to predict distance based on factor settings. Based on minimizing variation, the recommended settings are a stop pin of 2, front tension pin of 2, and a draw back angle that satisfies the equation to hit the 60 inch target distance.
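A full factorial analysis like the catapult study's can be sketched compactly. The design below is a 2^3 factorial in coded -1/+1 levels with the case study's factor names, but the response values are invented for illustration, not the team's data:

```python
import itertools

# 2^3 full factorial: every combination of three factors at two coded levels.
factors = ["stop_pin", "draw_back_angle", "front_tension_pin"]
design = list(itertools.product([-1, 1], repeat=3))   # all 8 runs
response = [45, 62, 50, 70, 48, 66, 55, 76]           # distances (invented)

# Main effect of each factor: mean response at +1 minus mean response at -1.
effects = {}
for j, name in enumerate(factors):
    hi = [y for x, y in zip(design, response) if x[j] == 1]
    lo = [y for x, y in zip(design, response) if x[j] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

print(effects)
```

The largest effect identifies the factor that moves distance the most; halving each effect gives the regression coefficient in a coded-units prediction equation like the one the team derived.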
This document discusses fundamentals of programming including iteration, while loops, for loops, and common loop patterns. It provides examples of using while loops to iterate until a condition is met, using break and continue statements to control loop execution, and using for loops to iterate over lists. It also demonstrates common loop patterns such as counting items, summing values, and finding the maximum/minimum value.
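The common loop patterns that summary mentions (counting items, summing values, finding the maximum) can be shown in one pass over a small made-up list:

```python
# Three classic loop patterns in a single for loop over a list.
values = [3, 41, 12, 9, 74, 15]   # illustrative data

count = 0
total = 0
largest = None
for v in values:
    count += 1                          # counting items
    total += v                          # summing values
    if largest is None or v > largest:  # tracking the maximum so far
        largest = v

print(count, total, largest)
```

The `None` sentinel lets the maximum pattern work without assuming anything about the list's contents; the symmetric minimum pattern just flips the comparison.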
Cointegration and error correction models are used to analyze the relationship between non-stationary time series variables. The Dickey-Fuller test determines if variables contain a unit root and are non-stationary. If two non-stationary variables have a stationary linear combination, they are cointegrated, indicating a long-run equilibrium relationship. An error correction model represents the short-run dynamic adjustment between cointegrated variables back to their long-run equilibrium when shocked.
Regression takes a group of random variables, thought to be predicting Y, and tries to find a mathematical relationship between them. This relationship is typically in the form of a straight line (linear regression) that best approximates all the individual data points.
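The best-fit straight line has a closed-form solution for one predictor. A sketch with invented points, using the normal equations for slope and intercept:

```python
# Least-squares line y = a + b*x through a set of points (data invented).
pts = [(1, 2.0), (2, 2.9), (3, 4.2), (4, 4.9), (5, 6.1)]

n = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts)
sxy = sum(x * y for x, y in pts)

# Normal-equation solution: minimizes the sum of squared vertical distances
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
a = (sy - b * sx) / n                           # intercept
print(round(a, 3), round(b, 3))
```

Minimizing squared vertical distances is what "best approximates all the individual data points" means formally; with more predictors the same idea generalizes to multiple regression.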
ICPSR - Complex Systems Models in the Social Sciences - Lab Session 7, 8 - Pr..., by Daniel Katz
This document provides instructions for using BehaviorSpace, a tool in NetLogo that automates running models multiple times while systematically varying parameters. It discusses setting up an experiment in BehaviorSpace to test different combinations of the blue fertility rate, red fertility rate, and carrying capacity in the Simple Birth Rates model. While BehaviorSpace can test the parameter space much faster than a human, fully exploring all possible combinations for this three-variable model would take over a year to run due to the large number of combinations. Limitations of BehaviorSpace and options for addressing them are discussed.
This document provides guidance on how to write up a chemistry experiment or project. It outlines the key sections that should be included such as developing a research question, describing the methodology and procedure, collecting and recording data, analyzing results through calculations and/or graphs, and stating conclusions. Safety considerations and identifying sources of error are also important aspects of the write up. The document uses examples of investigating the rate of a reaction to illustrate how to label variables, construct tables and graphs, and discuss findings. Proper formatting of references is also addressed.
The document describes work optimizing HIV intervention programs. It discusses:
- HIV prevalence and burden globally and in sub-Saharan Africa
- Common HIV interventions like ART, VMMC, PrEP, and their costs
- The motivation to optimize intervention spending to minimize HIV's effects within a given budget
- Using an epidemic model (EMOD) to simulate populations over time and evaluate intervention parameters and their costs and effects in DALYs
- Developing a framework to find the optimal intervention allocation that minimizes DALYs for a given budget by evaluating parameters with EMOD and surrogate models.
Quantitative Forecasting Techniques in SCM, by Yountek1
The document discusses quantitative forecasting techniques, including moving average forecasts and exponential smoothing. It explains the direct procedure and cross-validation procedure for building and evaluating forecasting models. As an example, it demonstrates how to generate forecasts using simple exponential smoothing, including initializing the model, calculating values recursively using the smoothing formula, selecting the alpha parameter, and producing a one-period ahead forecast.
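The recursive smoothing calculation that summary describes fits in a few lines. The demand numbers and the alpha value are made up; the initialization (seeding the level with the first observation) is one common choice:

```python
# Simple exponential smoothing: S_t = alpha*y_t + (1 - alpha)*S_{t-1}.
# The final smoothed level is the one-period-ahead forecast.
demand = [100, 104, 99, 107, 111, 108]   # illustrative observations
alpha = 0.3                              # smoothing parameter (0..1)

level = demand[0]                        # initialize with the first observation
for y in demand[1:]:
    level = alpha * y + (1 - alpha) * level   # recursive update

forecast = level                         # forecast for the next period
print(round(forecast, 2))
```

In a cross-validation procedure, alpha would be selected by repeating this recursion for several candidate values and keeping the one with the smallest out-of-sample forecast error.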
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
* Two PM examples (in detail)
* How to statistically analyze the thousands of runs to reach reliable estimates?
Lecture Objectives
* A Project Manager usually knows how certain parameters (such as duration, resource rates or quantities) behave. However, the PM can almost never define reliable single point estimates for these parameters. The result: many projects fail due to unreliable estimates. The alternative? The PM has to use his/her knowledge of how specific parameters behave statistically. For example, the PM knows that a specific task’s duration is distributed according to the bell shaped curve OR that another is uniformly distributed (flat variation), or triangular, or Beta-PERT, etc. The PM can then use Monte Carlo Simulation (MCS) to arrive at statistically significant and robust results. Monte Carlo Simulation (MCS) is a technique that relies on two processes. Process 1 aims at developing a spreadsheet model that calculates the critical path or the total cost, etc. The calculation is setup in a single row (or Run). This row is then duplicated a large number of times (thousands). Process 2 aims at inserting Excel functions in each of the parameters (durations, costs). In each row (or Run), such functions will provide a sample drawn from a statistical distribution that properly describes the behavior of that parameter. For example, a specific duration follows a Normal (Bell) distribution with an Average A and a Standard Deviation S. The model will then generate for each run and for that duration a different value that conforms with the bell shaped curve as defined (A and S). Each of these thousands of runs will provide the PM with a different “simulation” of the duration or the total cost, etc. By statistically analyzing the thousands of results, the PM can arrive at a robust and reliable estimate. Proprietary Add On’s for Monte Carlo Simulation in Microsoft Project are available. However, it is easy, free and more flexible to use native Microsoft functions to carry out the full simulation. The talk covered all the steps needed for such simulations giving several examples
ICML2017 best paper (Understanding black box predictions via influence functi...Antosny
This document introduces influence functions, which can be used to explain black-box model predictions by analyzing how predictions would change based on small modifications to the training data. It provides background on Taylor series and Newton's method. Influence functions are defined based on how a model's parameters and test loss would change if a single training point was upweighted. Efficient calculation methods are discussed, as are extensions to non-differentiable losses and non-convex models. Potential use cases include understanding model behavior, identifying and fixing mislabeled data, and generating adversarial training examples.
This document discusses various optimization techniques, including classical optimization, statistical design of experiments, simulation and search methods. Classical optimization uses calculus to find the maximum or minimum of a function with one or two variables. Statistical design of experiments is a structured method to determine relationships between factors and responses using techniques like factorial designs. Simulation and search methods do not require differentiability, and include methods like steepest ascent, response surface methodology, and contour plots to find optimal values of responses.
Calculator-Techniques for engineering.pptxSoleil50
This document provides instructions for using calculator techniques in a statistics course. It discusses using the shift solve and calc functions to find values and roots of expressions. It also covers using the equation solver, complex, and statistics modes to solve systems of equations, operate on complex numbers, and perform statistical calculations like regression. Examples are provided for each technique explained.
The document describes a case study involving optimizing a catapult to hit targets within a specified range. A team is tasked with developing a process to reliably hit targets from 5-12 feet away within 6 inches of accuracy. The team conducts experiments to identify key factors (stop pin position, draw back angle, front tension pin) affecting the distance and variation. A full factorial design of experiments is used to determine the relationship between factors and the distance response. The analysis results in an equation to predict distance based on factor settings. Based on minimizing variation, the recommended settings are a stop pin of 2, front tension pin of 2, and a draw back angle that satisfies the equation to hit the 60 inch target distance.
This document discusses fundamentals of programming including iteration, while loops, for loops, and common loop patterns. It provides examples of using while loops to iterate until a condition is met, using break and continue statements to control loop execution, and using for loops to iterate over lists. It also demonstrates common loop patterns such as counting items, summing values, and finding the maximum/minimum value.
Cointegration and error correction models are used to analyze the relationship between non-stationary time series variables. The Dickey-Fuller test determines if variables contain a unit root and are non-stationary. If two non-stationary variables have a stationary linear combination, they are cointegrated, indicating a long-run equilibrium relationship. An error correction model represents the short-run dynamic adjustment between cointegrated variables back to their long-run equilibrium when shocked.
Regression takes a group of random variables, thought to be predicting Y, and tries to find a mathematical relationship between them. This relationship is typically in the form of a straight line (linear regression) that best approximates all the individual data points.
ICPSR - Complex Systems Models in the Social Sciences - Lab Session 7, 8 - Pr...Daniel Katz
This document provides instructions for using BehaviorSpace, a tool in NetLogo that automates running models multiple times while systematically varying parameters. It discusses setting up an experiment in BehaviorSpace to test different combinations of the blue fertility rate, red fertility rate, and carrying capacity in the Simple Birth Rates model. While BehaviorSpace can test the parameter space much faster than a human, fully exploring all possible combinations for this three-variable model would take over a year to run due to the large number of combinations. Limitations of BehaviorSpace and options for addressing them are discussed.
This document provides guidance on how to write up a chemistry experiment or project. It outlines the key sections that should be included such as developing a research question, describing the methodology and procedure, collecting and recording data, analyzing results through calculations and/or graphs, and stating conclusions. Safety considerations and identifying sources of error are also important aspects of the write up. The document uses examples of investigating the rate of a reaction to illustrate how to label variables, construct tables and graphs, and discuss findings. Proper formatting of references is also addressed.
The document describes work optimizing HIV intervention programs. It discusses:
- HIV prevalence and burden globally and in sub-Saharan Africa
- Common HIV interventions like ART, VMMC, PrEP, and their costs
- The motivation to optimize intervention spending to minimize HIV's effects within a given budget
- Using an epidemic model (EMOD) to simulate populations over time and evaluate intervention parameters and their costs and effects in DALYs
- Developing a framework to find the optimal intervention allocation that minimizes DALYs for a given budget by evaluating parameters with EMOD and surrogate models.
Quantitative Forecasting Techniques in SCMYountek1
The document discusses quantitative forecasting techniques, including moving average forecasts and exponential smoothing. It explains the direct procedure and cross-validation procedure for building and evaluating forecasting models. As an example, it demonstrates how to generate forecasts using simple exponential smoothing, including initializing the model, calculating values recursively using the smoothing formula, selecting the alpha parameter, and producing a one-period ahead forecast.
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentation for upcoming exams.
We connect Students who have an understanding of course material with Students who need help.
Benefits:-
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer developed notes that break down lecture and study material in a way that they can understand
# Students can earn better grades, save time and study effectively
Our Vision & Mission – Simplifying Students Life
Our Belief – “The great breakthrough in your life comes when you realize it, that you can learn anything you need to learn; to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
Similar to Ch19_Response_Surface_Methodology.pptx (20)
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
3. Real DOE: Combination of Methods
• Rare that a DOE will consist of a single design and one ANOVA table
giving a useful answer.
• A good DOE will produce more questions as well as answers.
• More likely:
• Screening Experiment to reduce a large list of factors to a few
• Experiments to identify optimal settings for the important factors
• Repeat these experiments as needed to further optimize
• Confirmation Experiments to validate the optimal settings.
4. Response Surface Methods
• Response Surface Methodology (RSM) is a
powerful tool that combines various methods
learned in the course so far.
• Gives the researcher the ability to model
complicated relationships between many
potential factors (X’s) and predict an
optimal response (Y)
• Can even predict multiple responses (Y’s)!
5. Response Surface Methods Steps
1. Evaluate current or known region (operating
conditions)
2. Find and follow slope of Steepest Ascent for
optimum
3. Explore region of optimum for optimal
response
4. Repeat if best region is further still
Cannot be done as a single experiment. Must be a
sequence of experiments, each getting closer
to the ideal.
Picture it: on a mountain trying to
find the peak!!
6. Finding the top of a mountain
• Imagine: we are on a large
mountain and have to reach the
summit.
• No rocky cliffs or trees or
dangerous animals obstructing us
• Extremely foggy, so can only see
about 10 meters around us at a
time.
• Best bet to find the top: start
climbing in the steepest direction
possible.
• Keep doing this and we will reach the top
eventually
7. Optimal Response = Mountain Summit
• Similar concept applies to finding optimal
response with complicated system
• Get bearings on current location, and identify a
slope
• Follow that slope until hitting some sort of a
peak
• Get bearings on that location to see if summit is
nearby.
• Keep climbing on another slope if necessary.
• Note: This could also be applied to finding a
minimum response or even to coming as close
as possible to a set target
8. Center Points
• A standard 2k experiment is useful for estimating Main Effects and
Interactions.
• Cannot estimate Quadratic effects.
• Cannot easily estimate responses in between the 2 chosen factor
levels.
• Center Points added to a 2k experiment will allow for quadratic
estimates.
• Will allow estimates of responses inside the 2 factor levels.
• Will allow for an estimate of our old friend, “naturally occurring
variation” (process error)
9. Center Points
• Add on to experiment multiple experimental runs where
a=b=c=…= 0
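The two extra estimates that replicated center points provide can be sketched numerically. The yields below are made up for illustration (the deck does not show raw data): the curvature check compares the factorial average to the center-point average, and the replicated centers give a pure-error estimate of the process variation.

```python
import statistics

# Hypothetical data: 4 corner runs of a 2^2 factorial plus 5 center runs
corner_yields = [39.3, 40.9, 40.0, 41.5]
center_yields = [40.3, 40.5, 40.7, 40.2, 40.6]

# Overall curvature check: if the factorial average differs from the
# center average, a quadratic effect is present somewhere.
curvature = statistics.mean(corner_yields) - statistics.mean(center_yields)

# Replicated center points also give a pure-error estimate of the
# "naturally occurring variation" (process error).
pure_error_sd = statistics.stdev(center_yields)

print(f"curvature estimate: {curvature:.3f}")
print(f"pure-error SD:      {pure_error_sd:.3f}")
```

A curvature estimate near zero (relative to the pure-error SD) suggests no quadratic effect in this region, which is what the first experiment below finds.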
11. Chemical Engineering Example
• A chemical process has two factors that can easily be controlled
(reaction time and reaction temperature)
• Current Reaction Time is 35 minutes (X1)
• Current Reaction Temperature is 155 degrees F (X2)
• Various factors exist that could be improved, but Yield is most
important right now. Currently around 40.
• Need to find the maximum possible Yield by altering Time and Temp.
12. Initial Experiment: Chem Engineering
Example
• Run 5 center points at the current setting (35, 155)
• Corner points at +/- 5 time and +/- 5 temp
• Time = (30, 40), Temp = (150, 160)
• Can estimate Temp effect, Time Effect,
Interaction, and overall Quadratic (not
individual quadratics)
30
160
150
40
35, 155
14. First Experiment ANOVA (Regression)
• Main Time and Main Temp Effects
significant!
• Overall model has significance, no
real interaction or quadratic effect.
• Could be a slope happening here,
but no minimum or maximum.
• Note: This is set up in Minitab as a
simple 2 Factor Factorial design with
5 center points. Do not use
“Response Surface” platform … yet.
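The coded coefficients that this regression produces can be sketched by hand. The corner responses below are hypothetical, chosen to reproduce the 0.775 and 0.325 coefficients used on the steepest-ascent slides that follow:

```python
# Hypothetical corner responses as (x1 = coded Time, x2 = coded Temp, yield)
runs = [(-1, -1, 39.3), (1, -1, 40.9), (-1, 1, 40.0), (1, 1, 41.5)]

# In coded (-1, +1) units, each main-effect coefficient is the average
# of response * factor sign over the factorial runs.
b1 = sum(y * x1 for x1, x2, y in runs) / len(runs)   # Time coefficient
b2 = sum(y * x2 for x1, x2, y in runs) / len(runs)   # Temp coefficient
print(f"b1 (Time) = {b1:.3f}, b2 (Temp) = {b2:.3f}")
```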
15. First Experiment: Contour
• Graphically, we appear to be
in the middle of a slope.
• Like on the mountain, can
approach Summit if we climb
up!
• This slope is called slope of
Steepest Ascent.
17. Steepest Ascent: Which “Slope” is Steepest?
• Should take experimental
samples along “slope” within
Temp/Time space, as Yield will
increase.
• Analyze simple model on X1,
X2 (-1, 1) with no interactions
or quadratics.
18. Determining “Slope of Steepest Ascent”
1. Determine overall slope
within “X” space (coded
values)
2. Convert into actual units
3. Run experimental runs
moving upward along slope
19. Step 1: Determine Slope (Coded)
• Use regression coefficients to
determine slope.
• Slope of best line can be
found by dividing coefficients
• Slope=0.325/0.775=0.42
• Steepest Ascent will start at
(0, 0) and go up 0.42 in X2 for
every 1 unit of X1
20. Slope of Steepest Ascent (or Descent)
• General formula (use if more than 2 X factors)
• Coded slope compared to other terms
Δxᵢ = (βᵢ / βⱼ) · Δxⱼ = (0.325 / 0.775) · 1 = 0.42
21. Step 2: Convert to Actual Units
• Reminder: 1 unit of X1 (Time) = 5 minutes
• Reminder: 1 unit of X2 (Temperature) = 5 degrees
• Since the slope Δx₂ = 0.42:
For every 5-minute increase in Time (1 coded unit), Temp increases by:
0.42 × 5 ≈ 2 degrees
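The steps above (coded slope, conversion to actual units, runs up the slope) can be sketched in one pass. The coefficient-to-factor assignment (0.775 for Time, 0.325 for Temp) is inferred from the slope calculation on the earlier slide:

```python
# Fitted main-effect coefficients (coded units), as on the earlier slide
b_time, b_temp = 0.775, 0.325

# Take 1 coded unit of Time per step; Temp moves proportionally
step_time_coded = 1.0
step_temp_coded = (b_temp / b_time) * step_time_coded   # ~0.42

# Convert coded steps to actual units: 1 coded unit = 5 min, 5 degrees F
time0, temp0 = 35.0, 155.0
unit_time, unit_temp = 5.0, 5.0

path = []
for k in range(6):   # the origin plus 5 steps up the slope
    t = time0 + k * step_time_coded * unit_time
    T = temp0 + k * step_temp_coded * unit_temp
    path.append((round(t, 1), round(T, 1)))

for t, T in path:
    print(f"Time = {t:5.1f} min, Temp = {T:6.1f} F")
```

Each pair is a candidate experimental run along the path of steepest ascent; runs continue until yield stops improving.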
23. 3. Following the Slope
• Like climbing straight up a mountain
• Go up until you hit a peak.
• The actual highest point on the mountain
may be there, or it may be near there.
24. Is This the Optimal Value?
• Do we believe this is the best we can do?
• (Yield = 80.3, Time = 85, Temp = 175)
• Can further optimize in this area.
25. Peak, or maybe Relative Peak
(Sketch: the relative peak reached along the slope may not be the true summit — still need to find the summit)
27. Benefit of Following Slope
• Initial Experiment
identifies a slope for
improvement
• Following slope
identifies potential
optimal region.
• Massive experiments
over entire
Time/Temp space:
inefficient!
(Contour sketch over Time 30–110 and Temp 150–180: the Previously Known Region at lower left, the Potentially High Yield Region near (85, 175))
28. Next Step: Further Optimization?
• Simple Factorial with center points
again.
• NOTE: X1 and X2 have different
meanings now.
• Centered around (Time, Temp) =
(85, 175)
• Are we on a slope? Are we near a
peak?
• A peak would be present if there is a
quadratic effect.
29. Factorial Results
• Quadratic and Interaction are
significant.
• Signs we could be close to a “peak” of
the desired Yield variable.
• Cannot really estimate the optimal
values yet, since we only have an estimate of
the overall quadratic, not each individual
X.
30. Central Composite Design
• Need to estimate Main
Effects, Interactions, and
Quadratic Effects
(curvature) for each Factor
• Can add to the previous
Factorial design with
addition of Axial Points.
• Axial Points: +/- 1.414
• 1.414 = √2
31. Axial Points
• All points are an equal distance from the Center
(square root of 2)
• Leads to “Rotatability” and stability of the
design.
• Allows for estimates of individual curvature
effects.
• Central Composite is a very common type of
Response Surface Design
33. Factorial Plus Axial Points (plus other
responses)
• Want to maximize yield and
also understand ideal Viscosity
and Molecular Weight.
34. Response Surface in MTB (Create RSM Table)
1. Select “Stat” Menu at top.
2. Select “DOE” platform.
3. Choose “Response Surface”
4. Choose “Create Response Surface Design”
1. Choose “Central Composite” for Type of Design.
2. This design has 2 continuous factors, and 0 categorical
factors.
3. Click “Designs…”
4. Click “Model” to select model specifics.
35. Response Surface in MTB
(Create RSM Table)
1. Under “Create Response Surface Designs”:
2. Number of Center Points is 5
3. Default Alpha (1.414) is correct.
4. No blocking in this case, only 1 replicate
5. Click “OK”
Will create basic data table with 2x2 factorial, 5
center points, and 4 axial points at 1.414.
Same as Factorial data set, but with Axial
points added in.
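The run list Minitab generates here can be sketched directly (a minimal construction in coded units, not Minitab's actual run order): 4 factorial corners, 5 center points, and 4 axial points at alpha = 1.414.

```python
import math

alpha = round(math.sqrt(2), 3)   # 1.414, the axial distance for 2 factors

corners = [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # 2x2 factorial points
centers = [(0, 0)] * 5                           # 5 center points
axials = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]

design = corners + centers + axials              # the full CCD: 13 runs
print(f"{len(design)} runs total")
for x1, x2 in design:
    print(f"x1 = {x1:6.3f}, x2 = {x2:6.3f}")
```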
36. RSM Analysis in MTB
1. Select “Stat” Menu at top.
2. Select “DOE” platform.
3. Choose “Response Surface”
4. Choose “Analyze Response Surface Design”
1. Select the “Yield” column as the “Response”
2. Select “Terms” to set model parameters.
37. RSM Analysis in MTB
1. Selection of “Full quadratic” under “Include
the following terms” will set analysis for all
Main Effects, Quadratics, and Interactions.
2. Click “OK”
3. Click “OK” from Analysis window to run
analysis.
38. RSM Analysis in MTB
• Model shows strong statistical significance for
Quadratic and Interaction.
• Indicates a relative maximum is within this range.
• 3D Relationship can be determined with
Regression Equation.
• Maximum is point where partial derivatives of all
factors = 0.
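The "partial derivatives = 0" step can be sketched numerically. The coefficients below are illustrative stand-ins (not taken from the slides), chosen to be of the right shape for a fitted full quadratic in two coded factors:

```python
# Fitted full quadratic (illustrative coefficients, coded units):
#   yhat = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
b1, b2 = 0.995, 0.515
b11, b22, b12 = -1.376, -1.001, 0.250

# Stationary point: set both partial derivatives to zero:
#   dY/dx1 = b1 + 2*b11*x1 + b12*x2 = 0
#   dY/dx2 = b2 + b12*x1 + 2*b22*x2 = 0
# Solve the resulting 2x2 linear system with Cramer's rule.
a11, a12, c1 = 2 * b11, b12, -b1
a21, a22, c2 = b12, 2 * b22, -b2
det = a11 * a22 - a12 * a21
x1_star = (c1 * a22 - a12 * c2) / det
x2_star = (a11 * c2 - c1 * a21) / det
print(f"stationary point (coded): x1 = {x1_star:.3f}, x2 = {x2_star:.3f}")
```

Decoding back to actual units then follows the slides: Time = 85 + 5·x1*, Temp = 175 + 5·x2*.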
40. Note: Need for Contour Examination
• Could be “peak” or
maximum
• Could be a relative
maximum or “saddle
point”
41. Maximum Yield
• Maximum Yield predicted at point
where coded Time = 0.3857, coded
Temp = 0.3
• Reminder, when (A, B) = (0, 0) then
(Time, Temp) = (85, 175) and +/- 1
in coded variables is +/- 5 in real
variables.
42. Maximum Yield
• Ideal Time = 85 + 0.3857*(5) = 86.9
• Ideal Temp = 175 + 0.3*(5) = 176.5
• Is this the best? Could be. Would
need to run confirmation runs to
determine.
44. Multiple Responses Example
• Using this data, Chemical
Yield is most important.
Data also gathered on
Viscosity and Molecular
Weight.
• Goals:
• Maximize Yield
• Viscosity as close as
possible to 65
• Minimize Molecular Weight.
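The way an optimizer trades off these three goals can be sketched with Derringer-style desirability functions: each response is mapped to a 0-to-1 score, and the composite is their geometric mean. The acceptable limits below are hypothetical, not from the slides.

```python
def d_maximize(y, lo, hi):
    """0 below lo, 1 above hi, linear in between (goal: maximize)."""
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

def d_target(y, lo, target, hi):
    """1 at the target, falling linearly to 0 at lo and hi (goal: target)."""
    if y <= lo or y >= hi:
        return 0.0
    if y <= target:
        return (y - lo) / (target - lo)
    return (hi - y) / (hi - target)

def d_minimize(y, lo, hi):
    """1 below lo, 0 above hi, linear in between (goal: minimize)."""
    return min(max((hi - y) / (hi - lo), 0.0), 1.0)

# Predicted responses at one candidate setting (hypothetical values):
d1 = d_maximize(78.5, lo=60.0, hi=80.0)              # Yield
d2 = d_target(66.0, lo=55.0, target=65.0, hi=75.0)   # Viscosity
d3 = d_minimize(3400.0, lo=3000.0, hi=4000.0)        # Molecular Weight

# Composite desirability: geometric mean of the individual scores
D = (d1 * d2 * d3) ** (1.0 / 3.0)
print(f"d_yield={d1:.3f}, d_visc={d2:.3f}, d_mw={d3:.3f}, D={D:.3f}")
```

The optimizer searches the input space for the setting with the highest composite D; weighting the outputs (as the “Setup” dialog allows) corresponds to using a weighted geometric mean.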
46. Response Surface Optimization in MTB
1. Select “Stat” Menu at top.
2. Select “DOE” platform.
3. Choose “Response Surface”
4. Choose “Response Optimizer”
1. Select goals for the responses (options are to Maximize,
Minimize, or hit a Target).
2. “Options” can, if needed, set constraints on the inputs.
3. “Setup” can, if needed, weight the importance of the Outputs.
4. Click “OK”
47. Response Optimization
• Will show how Responses will change relative
to Inputs.
• Desirability: Measure of combined strength of
optimal settings.
• Can go to “Interactive Mode”
48. Interactive Response Optimizer
• Can move red lines around to
see how these will interact.
• Ideally, this is a long, collaborative
process.
• Selection of ideal settings
should take into account many
perspectives and considerations.
49. Types of Designs to Fit a Response Surface
• Various designs exist to estimate the Response Surface.
• Some combination of
• Values at +/- 1
• Center Points
• Axial Points
51. Central Composite Design
• Corner Points (A, B, C, … at +1 or -1)
• Center Points (A=B=C= … = 0)
• Axial Points (At +/- 𝛼)
• Is “Rotatable”, meaning all points are at the same
distance from the center.
• Gives consistent variance of estimates.
52. Box-Behnken Design
• 3 level design
• No corner points
• Center Points
• Points on “side” of cube
• All points at consistent
radius
• Useful if corner or axial
points fall at settings that are
physically impossible.
53. Mixture Experiments
• An alloy is being created using some combination of lead and tin.
• It is possible to have between 40% and 60% lead (Pb) in the alloy,
which also means between 40% and 60% tin (Sn).
• Does this initial factorial experiment work?
(Sketch: a factorial square over Pb 40–60% and Sn 40–60% — but Pb and Sn must sum to 100%, so two of the four corners are infeasible)
54. Mixture Experiments
• Mixture levels are constrained: the sum of the percentages must add up to 100%.
• Increasing one level will lower the other levels.
0 ≤ 𝑥𝑖 ≤ 1, i = 1, 2, … , p
𝑥1 + 𝑥2 + ⋯ + 𝑥𝑝 = 1
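A simplex-lattice design for these constraints can be sketched as follows: every combination of component proportions drawn from {0, 1/m, ..., 1} that sums to 1. For 3 components and m = 2 this produces the pure blends plus the 50/50 binary blends.

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(p, m):
    """All p-component blends on a {p, m} simplex lattice."""
    levels = [Fraction(i, m) for i in range(m + 1)]
    # Keep only the level combinations whose proportions sum to exactly 1
    return [pt for pt in product(levels, repeat=p) if sum(pt) == 1]

design = simplex_lattice(p=3, m=2)   # 3 pure blends + 3 binary 50/50 blends
for pt in design:
    print(tuple(float(x) for x in pt))
```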
56. Mixture Experiment Example
• 3 chemical components are combined
into a fiber to be spun into a yarn.
• Polyethylene (x1)
• Polystyrene (x2)
• Poly-propylene (x3)
• Response is yarn elongation strength.
• Simplex design chosen.
• Either pure blends or half-and-half (50/50) binary blends.
57. Experiment Results
• Statistical tests for interactions are
prioritized over main effects (since at
heart, mixture designs test blending interactions).
• Ideal setting:
X1 = 20%
X2 = 0%
X3 = 80%