- 1. Design and Analysis of Computer Experiments
  Nathan Soderborg, DFSS Master Black Belt, Ford Motor Co.
  WCBF DFSS Conference Workshop, Feb 9, 2009
  Outline: Background: Six Sigma Context; Foundation: Useful Computer Models; Deterministic vs. Probabilistic Approaches; Monte Carlo Simulation; Design of Computer Experiments; Analysis of Computer Experiments; Case Studies
- 2. Background: Six Sigma Context
  Design for Six Sigma is a scientific product development (PD) approach that leverages Six Sigma culture and a means to re-instill rigorous deductive and inductive reasoning in PD processes. The Ford DCOV phases:
  Define CTS's: definition of objective engineering metrics with targets correlated to customer needs and desires.
  Characterize System: characterization of product performance using transfer functions to assess risks.
  Optimize Product/Process: optimization of designs through transfer function knowledge and identification of countermeasures to avoid potential failure modes.
  Verify Results: verification that designs perform to targets and that countermeasures eliminate potential failure modes.
- 3. Definition of a Transfer Function
  A mathematical model that relates an output measure Y to input variables (x's): Y = F(y1, ..., yn), y1 = f(x1, ..., xn), etc. Why "transfer" function? ("Function" or "equation" would suffice.) For purposes of today's discussion, transfer functions are computer models.
  Where Transfer Functions Come From (listed in increasing degree of approximation):
  Deduction, using first principles to characterize system physics, geometry, or material properties:
  Physics equations that describe function, e.g., V = IR, f = ma, f = kx, k.e. = (1/2)mv^2.
  Finite element and other analytic models, e.g., computer models not expressible in closed-form equations.
  Geometric descriptions of parts and systems, e.g., equations from schematics based on reverse engineering, lumped mass models, drawings and prints; variation/tolerance stack-up.
  Induction, analyzing experimental, empirical data:
  Directed experimentation, e.g., a response surface or multivariate regression equation from a DOE using analytic models or hardware.
  Analysis of existing data, e.g., regression to enhance informed observations.
- 4. What Transfer Functions Are Used For
  In early phases of a project, a typical goal is to develop or improve transfer functions that correlate customer needs to objective metrics and provide a formula for system output y based on input x's. In later phases, a typical goal is to exploit those transfer functions to identify optimal robust designs, i.e., achieve performance that is on target, with minimal variability, at affordable cost. This requires probabilistic capability and analysis, i.e., being able to represent the output of the model as a probability distribution.
  Foundation: Useful Computer Models. "All models are wrong; some are useful." --George Box
- 5. Characteristics of a Good Model
  Fits data: a deductive, first-principles-based model should fit data collected from physical tests; an inductive, statistical model should fit the data sample used to construct it.
  Predicts well: predicts responses well at points not included in the data sample or regions of space used to construct the model; interpolates well; extrapolates well. (Did we do the modeling right?)
  Parsimonious (conceptually): is the simplest of competing models that adequately predicts a phenomenon. Note: introducing more terms in a model may improve fit but over-complicate the model (and impair prediction).
  Parsimonious (from a business perspective): incurs reasonable development cost compared to the knowledge and results expected, and containable computation costs.
- 6. Characteristics of a Good Model (continued)
  Interpretable: correctly applies and represents physics, geometry, and material properties; provides engineering insight and answers the desired questions; contains terms that are fundamental (e.g., from dimensional analysis); has a clear purpose and boundaries (the domain can be small and still useful). (Did we model the right things?)
  P-Diagram. In engineering, computer models should help us simulate or predict performance under real-world conditions, so we would like to account for variability in build, environment, and usage (aka noise). A high-level framework for this is the Parameter Diagram (see Phadke, Davis): the signal xS and control factors xC drive the system, noise factors xN perturb it, and the ideal function is y = f(xS, xC, xN), with error states/failure modes as unintended outputs.
- 7. Challenge of Representing Noise in Models
  Models based on first principles will include factors from physics, such as loads and energy transfer, properties of materials, and dimensions and geometries. Often the particular noise factors we identify are not factors in our model, but are there "surrogates"? Try to understand and estimate the effect of variability in noise factors on the factors included in the model.
  Typical noise factor types: manufacturing variation, deterioration over time, customer usage/duty cycles, external environment, system interactions. Typical model factor types: load/energy transfer, material properties, geometry and dimensions. Translate the effects of variation in the former into variation in the latter.
  Deterministic vs. Probabilistic Approaches
- 8. Levels of Design Refinement
  Trial and error: hand calculations, physical tests as needed, learning from experience.
  Planned physical experimentation (DOE): empirical learning, statistical analysis.
  Analytic modeling (deterministic): computer calculations, "what-if" scenarios.
  Analytic robust design (stochastic/probabilistic): designed experiments, single- and multiple-objective optimization.
  (Looking for a new design concept? That requires a different set of tools.)
  Deterministic Analysis. Inputs are nominal or worst-case values of dimensions, materials, loads, etc.; the computer model produces a point estimate of performance or life, plus a safety factor or design margin. Input examples: gages, Young's modulus, cylinder pressure. Model examples: finite element analysis, regression equation, numerical model. Output examples: deflection, life, voltage.
- 9. Deterministic Analysis
  Two designs can show the same safety margin between mean stress and mean strength. Which design is more reliable? The safety margin alone cannot say.
  Probabilistic Analysis. The interference region between the stress and strength distributions defines the probability of failure; this determines reliability. Design 1: smaller safety factor, higher reliability. Design 2: larger safety factor, lower reliability. A design with a larger safety factor may have lower reliability, depending upon stress and strength variability.
- 10. Probabilistic Analysis & Optimization
  Inputs: for a given nominal, sample the assumed distribution around the nominal (dimensions, material properties, loads, usage, manufacturing, etc.), and iterate over multiple nominal values.
  Outputs: performance variability at the nominal (dispersion, local sensitivity, reliability assessment) and performance variability across multiple designs (global sensitivity, robust design direction, robust design optimization).
  Probabilistic Optimization Example: Engine Block Mfg. Fixture. Objective: find the fixture design that minimizes deflection (smaller is better), accounting for manufacturing variation. Design variables: locator positions (4), clamp positions (4), clamp force. A plot of deflection vs. clamp position compares (1) optimization without variability against (2) optimization including variability, showing the range of response variability for each.
- 11. Challenges to Probabilistic Design
  Statistical distributions for input factors may be unknown and costly to ascertain; the data that is available may be imprecise; and the organization may lack statistical expertise or training, or have difficulty dealing with results that include uncertainty. All of this is OK! The goal should not be to predict reliability precisely; rather, the goal is to make and demonstrate improvement. Learn by using data from similar processes when available, try a variety of assumptions to convey a range of possible risks, and use analyses to make comparisons instead of absolute predictions.
  Monte Carlo Simulation
- 12. Monte Carlo Simulation
  Given a transfer function/computer model y = f(x1, x2, ..., xd), a PDF for each input xi, and a limit on y:
  1. Assign a probability distribution to each input variable xi.
     a. Generate a "random" instance of each xi from its distribution.
     b. Calculate and record the value of y by substituting the generated instances into the transfer function.
  2. Repeat steps a and b many times (e.g., 100 to 1,000,000).
  3. Calculate y statistics, e.g., mean, standard deviation, histogram.
  4. Estimate the success or failure probability based on targets/limits.
  Example: Door "Drop-off". Performance variable: door drop-off. Model: finite element analysis. Design variables: number of missing welds; materials and gages of the door, hinge, reinforcement, and hinge pillar; center of gravity location; trim weight. Design requirement: drop-off < 1.5 mm. Goal of the study: check whether the drop-off requirement is met when variations in the design variables are considered, and explore opportunities for design improvement or cost reduction.
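The four numbered steps above can be sketched in a few lines of Python. The transfer function, input distributions, and limit below are illustrative stand-ins, not the door model from the slides.

```python
import random
import statistics

# Step 1: a transfer function y = f(x1, x2); this linear form is hypothetical.
def transfer_function(x1, x2):
    return x1 + 2.0 * x2

random.seed(1)
N = 10_000          # step 2: number of Monte Carlo repetitions
LIMIT = 30.0        # step 4: hypothetical upper spec limit on y

ys = []
for _ in range(N):
    x1 = random.gauss(10.0, 1.0)   # step 1a: assumed Normal(10, 1)
    x2 = random.gauss(8.0, 0.5)    # step 1a: assumed Normal(8, 0.5)
    ys.append(transfer_function(x1, x2))  # step 1b

# Step 3: summary statistics of y.
mean_y = statistics.fmean(ys)
std_y = statistics.stdev(ys)

# Step 4: estimate the failure probability against the limit.
p_fail = sum(y > LIMIT for y in ys) / N
```

With a linear model and normal inputs, the y-distribution is itself normal, so the simulated mean and standard deviation can be checked against the analytic values (26 and sqrt(2) here).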
- 13. Example: Door "Drop-off" Conclusions
  The design meets the drop-off requirement even when variations in gages, material, trim weight, and center of gravity are present: the 99th percentile of the drop-off distribution is 0.9560 mm. The door hinge reinforcement is the most dominant factor for controlling door drop-off. It may be possible to reduce cost by downgaging the door hinge reinforcement from 2.4 mm to 2.0 mm (it must be demonstrated that fatigue requirements can still be met). Contribution to variability: door reinforcement gage 37%, trim weight 14%, center of gravity 12%, hinge pillar reinforcement gage 11%.
  Example: Vehicle Vibration. Problem: an irritating vibration phenomenon. Response: seat track shake. Model: vibration analysis tool. Design variables: stiffness of over 30 bushings; stiffness and/or damping of over 20 engine mounts; over 20 others, including characteristics of struts, structural mounts, the subframe, subframe mounts, etc.
- 14. Example: Vehicle Vibration (continued)
  The main effects plot for Shake@58 vs. Engmnt1 stiffness (over the range 180.0 to 1107.0) compares the baseline design with the robust design, and Mount Type A with Mount Type B.
  Software for Monte Carlo Simulation. Dimensional variation analysis tools such as VSA® employ Monte Carlo simulation (MCS). Minitab® facilitates random number generation that can be used for MCS. Several Excel-based tools are for sale; the most widespread is Crystal Ball®, which provides a custom interface for MCS in Excel: it allows the user to identify cells as "assumptions" (x-variables) and "forecasts" (y-variables), and it includes automatic generation of y-histograms, real-time updating with the simulation, and optional optimization routines. Excel's built-in random number generation can suffice when supplemental software is not available.
- 15. Monte Carlo Simulation in Excel
  Without supplemental software: generate "random" numbers for the x's, calculate y-values with an Excel formula, and use the data analysis and histogram tools to characterize the y-distribution. Select Tools/Data Analysis (available from the Analysis ToolPak add-in).
  Random Number Generation in Excel. Number of variables: the number of xi's (columns). Number of random numbers: the number of instances of each xi (rows). Distributions (PDFs): e.g., Uniform, Normal, Bernoulli, Binomial, Poisson, Patterned, Discrete. Seed: the same seed repeats the same set of pseudo-random numbers. Output: the worksheet and cell range where the numbers are stored.
- 16. Additional Distributions
  If the desired distribution is not an automatic selection in Excel (but its inverse CDF can be coded as a function): for each xi, generate a set of uniformly distributed random numbers between 0 and 1; substitute each of these numbers into the inverse CDF of xi to obtain a set distributed as xi; calculate the response y for each element in this new set; and create a histogram of the response values y and calculate statistics.
  Door Latch Example. A door latch design and production team developed mathematical equations for key customer outputs (outside release effort, outside travel) using the part drawings and applying principles of trigonometry and elementary physics. These equations were coded into an Excel spreadsheet. The team had production data (capability, mean, standard deviation) available for the input variables in the equations: part dimensions, part edge curvature and geometry, spring forces, etc.
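The inverse-CDF recipe above is language-independent; here is a minimal sketch in Python. The exponential distribution (whose inverse CDF has a closed form) and the response y = 3x + 1 are illustrative choices, not from the slides.

```python
import math
import random

# Inverse transform sampling: push Uniform(0,1) draws through the inverse CDF.
# For Exponential(lam): F(x) = 1 - exp(-lam*x), so F^-1(u) = -ln(1 - u)/lam.
def inv_cdf_exponential(u, lam=2.0):
    return -math.log(1.0 - u) / lam

random.seed(7)
N = 50_000
xs = [inv_cdf_exponential(random.random()) for _ in range(N)]  # distributed as Exp(2)
ys = [3.0 * x + 1.0 for x in xs]   # hypothetical response y = f(x)

mean_x = sum(xs) / N   # should approach E[x] = 1/lam = 0.5
mean_y = sum(ys) / N   # should approach 3*0.5 + 1 = 2.5
```

The same pattern works for any distribution whose inverse CDF can be coded as a function, which is exactly the condition the slide states.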
- 17. Door Latch Example: Spreadsheet Model
  Transfer function equation (example): y = D4*(SQRT(Z4*Z4+AA4*AA4)/SQRT(AB4*AB4+AC4*AC4)), with each factor's data (nominal, spread) stored in the worksheet.
  Door Latch Example: Simulation & Results. Each input variable (d1 through d6) occupies a column holding a separate distribution, e.g., nominals d1 = 15.8, d2 = 25.3, d3 = 19.04, d4 = 16.2, d5 = 17.95, d6 = 1.68, with calculated outside travel 6.171921. Generate 1,000 rows of instances; calculate y (outside travel) for each row using the transfer function; draw a histogram of outside travel against the LSL and USL; and estimate the percentage of product outside the specs based on the variation assumptions for the x's. (The limits shown are examples only.)
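The spreadsheet recipe (columns of sampled inputs, a row-wise transfer function, then summary statistics) translates directly to code. In this sketch the mapping of Excel cells (D4, Z4, AA4, ...) to latch dimensions, and the 0.06 spread, are guesses for illustration only, so the computed travel will not reproduce the slide's value of 6.171921; the real worksheet defines the mapping.

```python
import math
import random

# The Excel formula y = D4*(SQRT(Z4^2+AA4^2)/SQRT(AB4^2+AC4^2)) with named
# variables. Assumed mapping (hypothetical): D4->d1, Z4->d2, AA4->d3,
# AB4->d4, AC4->d5.
def travel(d1, d2, d3, d4, d5):
    return d1 * (math.hypot(d2, d3) / math.hypot(d4, d5))

random.seed(3)
nominals = (15.8, 25.3, 19.04, 16.2, 17.95)  # nominal values from the slide
SPREAD = 0.06                                 # assumed standard deviation

# "Generate 1000 rows": sample each dimension around its nominal, compute y.
rows = [[random.gauss(nom, SPREAD) for nom in nominals] for _ in range(1000)]
ys = [travel(*row) for row in rows]
mean_y = sum(ys) / len(ys)
```

From here, the fraction of ys outside hypothetical LSL/USL limits would be estimated just as in the Monte Carlo procedure on slide 12.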
- 18. Crystal Ball Example (screenshot of the Crystal Ball interface)
  Design of Computer Experiments
- 19. Motivation
  If you already have a computer model, why do designed experiments to create a model of the model? First, to make design decisions faster and cheaper: some models are computationally intensive, time-consuming, and expensive to set up and run, and robust design analysis needs a probabilistic approach that requires many runs; expensive models can be replaced with approximations (metamodels) for carrying out Monte Carlo simulation, robust design, multi-objective optimization, etc. Second, to gain insight into the original model: often the model cannot be expressed explicitly (it is a "black box"), e.g., finite element analysis, and a metamodel can be used to efficiently understand effects, interactions, and sensitivities.
  A Flow for Analytic Robust Design. Develop and document system understanding: understand functions/failures, draw the P-diagram (noise factors, signal, response, control factors). Design a computer experiment: sample for uniformity and orthogonality (runs 1 through n over factors x1 through x40, say). Run the experiment: evaluate the model at each sample point. Develop a response surface model: apply advanced regression and other methods. Then analyze sensitivities (find important factors), optimize for robustness (select a robust design), and assess reliability (quantify risk).
- 20. Computer-based Experimentation
  The move toward analytic robust design, along with ever-increasing computing power, has fueled the development of a new field of study over the past few decades: Design and Analysis of Computer Experiments (DACE). Early computer experimenters realized that traditional experimental designs were sometimes inadequate or inefficient compared to alternatives. In addition, certain non-parametric techniques for fitting the data may offer more useful models than polynomial regression.
  Physical vs. Computer Experiments. Physical: responses are stochastic (involve random error); replication helps improve the precision of results; some inputs are unknown; randomization is recommended; blocking nuisance factors may help. Computer: responses are deterministic (no random error); replication has no value; all inputs are known; randomization has no value; blocking is irrelevant.
- 21. Physical vs. Computer Experiments (continued)
  Physical: experiment logistics can be resource-intensive; minimizing the number of runs is generally desirable; parameter adjustment requires physical work; the setup is usually available only for a short time period (e.g., an interruption of production). Computer: experiment logistics often require fewer resources; a relatively large number of runs may be feasible; parameter adjustments take place in software; the setup can be "saved" and returned to, so a sequential approach is more feasible.
  Physical: logistical requirements limit sampling to 2 or 3 levels per variable, so models are typically limited to linear or quadratic, and the typical design is a standard orthogonal array, e.g., a full or fractional factorial, or response surface methods. Computer: relative logistical ease allows sampling each variable over many levels; multiple-level sampling allows high-order nonlinear models; flexible alternatives to standard arrays are available, e.g., Latin hypercube, uniform designs, etc. (close to orthogonal).
- 22. Desirable Computer Experiment Properties
  Balanced: each factor has an equal number of runs at each level, which weights levels equally in estimating effects (the number of runs will be a multiple of the number of levels of each factor).
  Captures response nonlinearity if present: two levels for a factor allow modeling of linear effects; modeling higher-order nonlinearity requires more levels per factor.
  Exhibits good projective properties: projections onto significant factor subspaces include no "pseudo-replicates", avoid significant point clustering, and maximize information related to significant factor behavior.
  Orthogonal or close to orthogonal: the correlation between factors is zero or close to zero (column orthogonality), which allows the effects of factors to be distinguished and estimated cleanly.
  Fills the design space: sample points are spread throughout the design space as evenly or uniformly as possible, which helps model the full range of design behavior without any assumptions about factor importance and improves interpolation capability for building a good metamodel. How "space filling" a design is can be measured by various criteria; in practice, seek designs that have relatively good orthogonality and good space-filling properties.
- 23. Computer Experiment Design: Example Strategies
  Traditional approaches: orthogonal arrays, response surface methods. Space-filling designs based on sampling: random sample, Latin hypercube sample. Space-filling designs based on optimizing various criteria: management of minimum and maximum distances between points; minimum "discrepancy", or departure from uniformity; maximum "entropy", or unpredictability. Low-discrepancy (quasi-Monte Carlo) sequences.
  Latin Hypercube Sampling. Latin hypercubes are extensions of Latin squares to higher dimensions. An NxN Latin square has the property that each of N symbols appears exactly once in each row and exactly once in each column. Latin hypercube sampling divides each dimension of the design space into N intervals; a set of N points is selected so that when the set is projected onto any dimension, exactly one point falls in each of the intervals for that dimension. (Kind of like Sudoku!)
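The one-point-per-interval construction described above can be implemented in a few lines: split each dimension into n intervals and use an independent random permutation per dimension to assign exactly one point to each interval. This is a minimal sketch, not any particular software package's sampler.

```python
import random

# A minimal Latin hypercube sampler in [0,1]^d.
def latin_hypercube(n, d, rng):
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                      # random interval order for this dimension
        # one point placed uniformly inside each of the n intervals
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

rng = random.Random(42)
design = latin_hypercube(9, 2, rng)

# The defining property: projecting onto any dimension gives exactly one
# point per interval [k/n, (k+1)/n).
for j in range(2):
    assert sorted(int(pt[j] * 9) for pt in design) == list(range(9))
```

With n = 9 and d = 2 this reproduces the 9-run Latin hypercube situation compared against the 9-run factorial on the next slide: 9 distinct levels per variable, no replicated projections.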
- 24. Latin Hypercube/Factorial Comparison
  In a 9-run, 3-level factorial (typical of physical experiments), if x2 is not significant there are essentially 3 repeat points for each level of x1: "pseudo-replicates". There are 3 replicates for each variable projection, large regions are left unsampled, and with 3 levels per variable, at most quadratic effects can be captured. In a 9-run Latin hypercube (feasible in computer experimentation), there are no replicates for variable projections, and with 9 levels for each variable, higher-order nonlinear effects can be captured.
  Latin Hypercube Example: a 4-factor, 11-level LH design with each factor scaled to [-1, 1]. Matrix plots show the 2-dimensional projections, and the pairwise Pearson correlations are near zero (magnitudes from 0.009 to 0.145, all p-values above 0.67), confirming near-orthogonality.
- 25. Uniform Designs
  A uniform design is a sample of points that minimizes some measure of discrepancy, where discrepancy is a metric quantifying "how far" the points are from being uniformly distributed. Uniform designs allow different numbers of levels for each factor. An existing design can be "optimized" for uniformity, e.g., a subset of a full factorial or an initial Latin hypercube refined for uniformity.
  Uniform Mixed-Level Design Example: a 4-factor, 12-run, mixed-level design (a subset of a full factorial design) with 3 levels for x1, 4 for x2, 4 for x3, and 6 for x4. The pairwise Pearson correlations are near zero (largest magnitude about 0.13, all p-values above 0.67), and the 2-dimensional projections are close to uniform.
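To make "discrepancy" concrete, the one-dimensional case has a simple closed form: for sorted points x_(1) <= ... <= x_(n) in [0,1), the star discrepancy is max over i of max(i/n - x_(i), x_(i) - (i-1)/n). The sketch below (the point sets are illustrative) shows that midpoints of n equal intervals achieve the minimum 1/(2n), while a clustered set scores much worse.

```python
# Star discrepancy of a 1-D point set in [0,1), via the closed form for
# sorted points: D* = max_i max(i/n - x_(i), x_(i) - (i-1)/n).
def star_discrepancy_1d(points):
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - xs[i], xs[i] - i / n) for i in range(n))

n = 10
midpoints = [(2 * i + 1) / (2 * n) for i in range(n)]  # most uniform: D* = 1/(2n)
clustered = [0.05 * i for i in range(n)]               # crowded near 0

d_mid = star_discrepancy_1d(midpoints)   # 0.05
d_clu = star_discrepancy_1d(clustered)   # 0.55
```

Uniform-design software minimizes multi-dimensional analogues of this quantity over candidate design matrices.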
- 26. Low Discrepancy Sequences
  A sequential approach to identifying experimental points, useful when experiments can proceed sequentially, especially if the computer model is slow: while waiting for the model to generate the next output, the analyst can do preliminary work to decide whether the results are accurate enough. The sequences are based on Monte Carlo approaches to space-filling sequences used for integration, and they may be used as a substitute for sampling from a uniform probability distribution (quasi-Monte Carlo). Some sequences are specifically designed to have low discrepancy. Roughly speaking, the discrepancy of a sequence is low if the number of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average in the case of a uniform distribution (Wikipedia).
  Low Discrepancy Sequence Examples. Sequences in the literature include the Sobol, Hammersley, and Halton sequences. Comparing 100 Monte Carlo samples with 100 Halton samples, the Monte Carlo sample shows "open" spaces and point clustering, while the Halton sample fills the space more evenly.
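Of the sequences named above, the Halton sequence is the easiest to sketch: the k-th coordinate of point i is the radical inverse of i in the k-th prime base, so points can be generated one at a time, matching the sequential use case described on the slide.

```python
# Radical inverse: reflect the base-b digits of i about the radix point,
# e.g., i=1 in base 2 -> 0.1 (binary) = 0.5.
def radical_inverse(i, base):
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

# First n points of the Halton sequence in [0,1)^d, one prime base per
# dimension; index 0 is skipped because it maps to the origin.
def halton(n, bases=(2, 3)):
    return [tuple(radical_inverse(i, b) for b in bases) for i in range(1, n + 1)]

pts = halton(100)   # the 100-point Halton sample of the slide's comparison
```

Plotting `pts` against 100 pseudo-random points reproduces the slide's contrast: the Halton sample avoids the clustering and "open" spaces of plain Monte Carlo.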
- 27. Design of Computer Experiment Summary
  Latin hypercube: computationally inexpensive to generate; allows a large number of runs and factors sampled at many levels; good projective properties on low-dimensional subspaces; available in many software sources. However, the number of levels equals the number of runs, which can be a big constraint.
  Uniform designs: design matrices with good orthogonality and projective properties can be refined to improve uniformity; the algorithms apply to any number of levels and factors per level; not as common in software as the Latin hypercube (JMP?); the computation required to optimize designs grows with the number of runs, factors, and levels, and can consume some time for big designs.
  Low discrepancy sequences: provide a sequence of points that fills space close to uniformly; allow sequential experimentation; typically the number of levels is the same as the number of runs; can be used in place of "random" sequences when more uniform sampling is desired; slowly becoming available in commercial software, and code can be downloaded from various websites.
- 28. Analysis of Computer Experiments
  The flow for analytic robust design, revisited, with the analysis steps highlighted: develop and document system understanding (functions/failures, P-diagram); design a computer experiment (sample for uniformity and orthogonality); run the experiment (evaluate the model at each sample point); then, for the analysis, develop a response surface model (apply advanced regression and other methods), analyze sensitivities (find important factors), optimize for robustness (select a robust design), and assess reliability (quantify risk).
- 29. Generating Response Surfaces
  A traditional approach is to treat the computer model as an unknown transfer function f: assume that the transfer function has a particular form (e.g., polynomial, trigonometric function, etc.), then find the coefficients β that provide the "best fit" of a function of the assumed form to the response data, i.e., y = f(β, x). Responses from physical experiments will not match the output of the generated function exactly, due to experimental measurement error, differences between the assumed form of the function and the true form, the absence of some influential factors from the experiment, etc.
  Interpolation. With computer experiments, however, it would be desirable for the experimental responses to match the output of the generated function exactly: computer experiments are not subject to experimental error (responses reflect the true output of the analytical model), and all input factors are known. The challenge is to generate a metamodel that both matches the response data and predicts well the response values at points not used to construct the metamodel. This is an interpolation problem: a specific case of curve fitting in which the function must pass exactly through the data points.
- 30. Interpolation Examples
  In the plane, 2 points determine a unique line, 3 points determine a unique 2nd-order polynomial, and so on. However, if data subject to experimental error is interpolated assuming a polynomial functional form, the result can be severe "over-fitting": 2 points fit by a 1st-order polynomial and 3 points fit by a 2nd-order polynomial are exact fits, but 6 points subject to experimental error fit by a 6th-order polynomial oscillate wildly; a better predictor is a best-fit line.
  Metamodel Building. For any given true model there are many metamodels. To find a metamodel with good prediction capability, often the best approach is to try both "best fit" and interpolation methods and combinations of them, then choose a final model based on validation studies. Generally, the metamodel is constructed as a linear combination of elements from a set of building-block functions called basis functions, i.e., f^(x) = sum_{j=0..M} β_j B_j(x) = β_0 B_0(x) + β_1 B_1(x) + ... + β_M B_M(x), where the β_j are the coefficients and the B_j are the basis functions.
- 31. Types of Basis Functions
  Polynomials, splines, and Fourier functions are most powerful for low-dimension input variables (the number of terms grows exponentially with dimension), and their results are interpretable in terms of familiar functions. Wavelets, radial basis functions, kriging functions, neural networks, etc., may be more natural for high-dimension input variables, though their results may be difficult to interpret in terms of familiar functions.
  Basis Function Examples. Polynomials (up to 2nd order): B_0(x) = 1 (constant); B_1(x) = x_1, ..., B_d(x) = x_d (1st-order terms); B_{d+1}(x) = x_1^2, ..., B_{2d}(x) = x_d^2 (2nd-order terms); B_{2d+1}(x) = x_1 x_2, ..., B_{2d+d(d-1)/2}(x) = x_{d-1} x_d (interaction terms). Fourier basis (1 dimension, over [0,1]): B_0(x) = 1, B_1(x) = cos(2πx), B_2(x) = sin(2πx), ..., B_{2k-1}(x) = cos(2kπx), B_{2k}(x) = sin(2kπx), ...
- 32. Metamodel Building (continued)
  For interpolation, find and select sufficiently many (M+1) basis functions so that y = Bβ can be solved for β, where y = (y_1, ..., y_n) is the response vector, B is the matrix of basis functions evaluated at the n sample points (B_ij = B_j(x_i)), and β = (β_0, ..., β_M) is the coefficient vector. For "best fit", find basis functions and the β^ that minimizes ||y - Bβ||^2 = sum_{i=1..n} (y_i - sum_{j=0..M} β_j B_j(x_i))^2, i.e., β^ = (B^T B)^{-1} B^T y, the least-squares estimator.
  Example: Splines, MARS (Multivariate Adaptive Regression Splines). An automated, adaptive regression method developed by Prof. Jerome Friedman of Stanford University in the early 1990s and available in commercial software from Salford Systems. Basis functions are built from piecewise-linear "hockey stick" functions of the form (x - κ)_+ = x - κ if x > κ, 0 otherwise, and (κ - x)_+ = κ - x if x < κ, 0 otherwise, where κ is called a "knot".
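The "best fit" recipe above, build B with B_ij = B_j(x_i) and solve the normal equations (B^T B)β = B^T y, can be sketched in plain Python. The quadratic basis and the five sample points are illustrative; the data below lie exactly on y = 1 + x + x^2, so the estimator should recover coefficients (1, 1, 1).

```python
# Gaussian (Gauss-Jordan) elimination with partial pivoting to solve A v = b.
def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]   # B0, B1, B2

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 7.0, 13.0, 21.0]    # exactly y = 1 + x + x^2

# B[i][j] = B_j(x_i); then the least-squares estimator from the normal equations.
B = [[bj(x) for bj in basis] for x in xs]
BtB = [[sum(B[i][a] * B[i][b] for i in range(len(xs))) for b in range(3)] for a in range(3)]
Bty = [sum(B[i][a] * ys[i] for i in range(len(xs))) for a in range(3)]
beta = solve(BtB, Bty)
```

Swapping in a different `basis` list (e.g., the Fourier terms from the previous slide) changes the metamodel family without touching the fitting code, which is the point of the basis-function formulation.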
- 33. Example: MARS Model Components
  A fitted MARS model is a sum of components, e.g., y = y1 + y2 + y3 + y4 with y1 = 5.02196, y2 = 0.238230*[x1 - 6.035000], y3 = -0.977209*[5.100000 - x1], and y4 = -1.227350*[x1 - 5.100000]. Notation: [.] is also used to denote (.)_+.
  Example: Gaussian Stochastic Kriging. Proposed by Matheron for modeling spatial data in geostatistics (1963) and systematically introduced to computer experiments by Mitchell (1989). Uses continuous basis functions of the form exp(-sum_{j=1..d} θ_j (x_j - x_ij)^2), where x_ij is the jth dimension of the ith sample point, and the θ_j are estimated in the process of generating the response surface.
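The fitted MARS model above is simple enough to evaluate directly; the coefficients and knots below are the ones printed on the slide, and [u] is the hinge function (u)_+ = max(u, 0). At the knot x1 = 5.1 every hinge vanishes, so the model returns the constant component 5.02196.

```python
# Hinge ("hockey stick") function (u)+ = max(u, 0).
def hinge(u):
    return u if u > 0 else 0.0

# The slide's fitted MARS model: y = y1 + y2 + y3 + y4.
def mars_model(x1):
    y1 = 5.02196
    y2 = 0.238230 * hinge(x1 - 6.035000)
    y3 = -0.977209 * hinge(5.100000 - x1)
    y4 = -1.227350 * hinge(x1 - 5.100000)
    return y1 + y2 + y3 + y4

value_at_knot = mars_model(5.1)   # all hinges are zero here
```

Because each hinge is zero on one side of its knot, the model is piecewise linear with slope changes only at 5.1 and 6.035, which is what the component plots on the slide show.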
- 34. Example: Gaussian Stochastic Kriging
[Figure: GSK response surface in one variable, interpolating 4 data points]

Comparison of Different Methods
The true function contains a reasonable amount of nonlinearity:
y = (30 + x1·SIN(x1)) · (4 + EXP(−x2))
[Figure: contour and 3D surface plots of the actual function, x1, x2 in [0,5]]
Using the same sampling strategy (30 points, Latin hypercube sampling), compare the fits to the true function of different modeling methods:
•Polynomial Regression, RSM
•Polynomial Regression, stepwise
•MARS
•Kriging
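The comparison setup (the true function plus a 30-point Latin hypercube design) can be sketched as follows; this is a basic one-point-per-stratum LHS, not necessarily the exact design used on the slides:

```python
import numpy as np

def true_function(x1, x2):
    """The nonlinear test surface from the slide."""
    return (30 + x1 * np.sin(x1)) * (4 + np.exp(-x2))

def latin_hypercube(n, d, low=0.0, high=5.0, seed=0):
    """Basic Latin hypercube sample: each dimension is split into n equal
    strata and each stratum receives exactly one point, with the stratum
    order shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)                   # shuffle stratum order
        samples[:, j] = (strata + rng.random(n)) / n  # one point per stratum
    return low + (high - low) * samples

X = latin_hypercube(30, 2)               # 30 runs over [0,5] x [0,5]
y = true_function(X[:, 0], X[:, 1])      # training data for the metamodels
```

The LHS property that matters here is that each input variable takes 30 distinct levels spread over [0,5], which is exactly what the following slides describe.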
- 35. Polynomial Regression: RSM Fit
Actual function: y = (30 + x1·SIN(x1)) · (4 + EXP(−x2))
Function y sampled with LH DOE, 30 runs; i.e., x1, x2 have 30 levels in [0,5]
2D Response Surface Method (Minitab):
y = 142.93 + 9.56·x1 − 13.51·x2 − 2.87·x1² + 1.81·x2²
[Figure: contour plots of the actual function vs. the RSM fit]

Polynomial Regression: Stepwise Fit
2D Stepwise Regression Method (Minitab):
y = 144.9 + 14.35·x1 − 5.3·x1² + 0.326·x1³ − 23.2·x2 + 6.7·x2² − 0.65·x2³
[Figure: contour plots of the actual function vs. the stepwise fit]
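Since both fitted polynomials are printed in full, they can be evaluated directly against the true function; a quick sketch (coefficients transcribed from the slide, grid points my own choice):

```python
import numpy as np

def true_function(x1, x2):
    return (30 + x1 * np.sin(x1)) * (4 + np.exp(-x2))

def rsm_fit(x1, x2):
    """Full quadratic RSM fit, coefficients as printed on the slide."""
    return 142.93 + 9.56 * x1 - 13.51 * x2 - 2.87 * x1**2 + 1.81 * x2**2

def stepwise_fit(x1, x2):
    """Cubic stepwise fit, coefficients as printed on the slide."""
    return (144.9 + 14.35 * x1 - 5.3 * x1**2 + 0.326 * x1**3
            - 23.2 * x2 + 6.7 * x2**2 - 0.65 * x2**3)

# Worst-case absolute error of each fit on a coarse grid over [0,5] x [0,5]
pts = [(a, b) for a in (0.5, 2.5, 4.5) for b in (0.5, 2.5, 4.5)]
rsm_err = max(abs(rsm_fit(a, b) - true_function(a, b)) for a, b in pts)
step_err = max(abs(stepwise_fit(a, b) - true_function(a, b)) for a, b in pts)
```

Comparing these errors is the numerical counterpart of comparing the contour plots by eye: neither low-order polynomial can track the x1·sin(x1) ripple exactly.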
- 36. MARS Fit
2D MARS Prediction (Ford Encore software):
y = 134.05 − 11.88·(x1 − 2.24)+ − 4.76·(2.24 − x1)+ − 1.50·(x2 − 1.55)+ + 13.74·(1.55 − x2)+
[Figure: contour plots of the actual function vs. the MARS fit]

Gaussian Stochastic Kriging Fit
2D GSK Prediction (Ford Encore software)
[Figure: contour plots of the actual function vs. the GSK fit]
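The 2D MARS prediction is again a closed-form expression, so it can be coded directly (coefficients and knots transcribed from the slide):

```python
def hinge(u):
    """(u)+ = max(u, 0)."""
    return max(u, 0.0)

def mars_fit(x1, x2):
    """2D MARS prediction with the slide's coefficients and knots
    (knot 2.24 for x1, knot 1.55 for x2)."""
    return (134.05
            - 11.88 * hinge(x1 - 2.24)
            - 4.76 * hinge(2.24 - x1)
            - 1.50 * hinge(x2 - 1.55)
            + 13.74 * hinge(1.55 - x2))
```

At the knot point (2.24, 1.55) every hinge term vanishes, so the prediction there is the intercept, 134.05. No analogous closed form is shown for the GSK fit: kriging predictions are weighted combinations of the Gaussian basis functions centered at all 30 sample points.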
- 37. MARS/Kriging Comparison
MARS Strengths
•Non-parametric: no assumption of an underlying model required
•Well suited for high-dimensional problems; good for data mining
•Reasonably low computational demand
MARS Limitations
•While often useful for understanding general trends, models sometimes do not accurately capture local behavior
GSK Strengths
•Interpolates the data
GSK Limitations
•Relatively computationally demanding
•Can over-fit data

Case Studies
- 38. Case Study: Piston Slap
Problem/Opportunity
•Piston slap is an unwanted engine noise that results from piston secondary motion
•A combination of transient forces and certain piston clearances can result in
 – Lateral movement of the piston within the cylinder
 – Rotation of the piston about the piston pin
•This can cause the piston to impact the cylinder wall at regular intervals
•The design team has developed a CAE model that predicts piston secondary motion, so this phenomenon can be explored analytically
Goals
•Achieve minimal piston friction and minimal piston noise simultaneously
•Reduce customer complaints
REFERENCE: SAE Paper 2003-01-0148, "Robust Piston Design and Optimization Using Piston Secondary Motion Analysis" ("Deep Dive")

Case Study: Steering Wheel Nibble
Problem/Opportunity
•The steering system is highly coupled:
 – Desire efficiency from steering wheel to road wheel
 – Desire inefficiency from road wheel to steering wheel
•Steering wheel nibble (undesired tangential oscillation between 10 and 20 Hz) is a potential failure mode, typically in autos with rack-and-pinion steering
•It is a result of a chassis system response to wheel-end excitations
•Excitations can result in up to 0.05 mm of steering rack displacement, which gets amplified into undesired steering wheel oscillations of up to 0.2°
Goals
•Use any and all approaches to avoid the nibble failure mode
•Focus on making the designs less sensitive to noise
REFERENCE: SAE Paper 2005-01-1399, "Using Computer Aided Engineering to Find and Avoid the Steering Wheel 'Nibble' Failure Mode" ("Deep Dive")
- 39. Case Study: Side Impact Design Criteria
Problem/Opportunity
•IIHS Side Impact Evaluation: a new test mode (side impact)
•Develop guidelines to specify minimum targets
•Improve program efficiency by providing vehicle content guidelines
•Currently a variety of vehicle-specific solutions are being developed
•Measures to be balanced with existing regulatory and company requirements
Goals
•Develop transfer functions
•Develop design guidelines
REFERENCE: SAE Paper 2005-01-0291, "Model of IIHS Side Impact Torso Response Measures using Transfer Function Equations" ("Deep Dive")

Case Study: Hybrid Electric Vehicle Motor
Problem/Opportunity
•Ford's Hybrid Electric Escape is the company's first production Hybrid Electric Vehicle (HEV)
•The power split transmission incorporates new technologies, including high-power permanent magnet motors and torque control
Goals
•Ensure that the Escape's electric motor meets targets based on comparable gas engine performance for
 – Torque accuracy
 – Power loss
 – Traction motor noise, vibration, harshness
("Deep Dive")
- 40. Bibliography (1 of 2)
T. Davis, "Science, engineering, and statistics," Applied Stochastic Models in Business and Industry, Vol. 22, Issue 5-6, pp. 401-430, 2006.
K.-T. Fang, R. Li, and A. Sudjianto, Design and Modeling for Computer Experiments, Chapman & Hall/CRC, New York, 2006.
I. Farooq, J. Pinkerton, N. Soderborg, et al., "Model of IIHS Side Impact Torso Response Measures using Transfer Function Equations," SAE World Congress, April 11-14, 2005, SAE-2005-01-0291.
R. Hoffman, A. Sudjianto, X. Du, and J. Stout, "Robust Piston Design and Optimization Using Piston Secondary Motion Analysis," SAE World Congress, March 3-6, 2003, SAE-2003-01-0148.
J. Lee, et al., "An Approach to Robust Design Employing Computer Experiments," Proceedings of DETC '01, ASME Design Automation Conference, Sept. 9-12, 2001, Pittsburgh, PA, DETC2001/DAC-21095.
T. Santner, B. Williams, and W. Notz, The Design and Analysis of Computer Experiments, Springer-Verlag, New York, 2003.
T. Simpson, "Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle," NASA/CR-1998-206935.
N. Soderborg, "Challenges and Approaches to Design for Six Sigma in the Automotive Industry," SAE World Congress, April 11-14, 2005, SAE-2005-01-1211.

Bibliography (2 of 2)
N. Soderborg, "Design for Six Sigma at Ford," Six Sigma Forum Magazine, November 2004, pp. 15-22.
N. Soderborg, "Applications and Challenges in Probabilistic and Robust Design Based on Computer Modeling," Invited Talk, Proceedings of the American Statistical Association Section on Physical and Engineering Sciences, 1999 Spring Research Conference on Statistics in Industry and Technology, June 2, 1999, Minneapolis, MN, pp. 207-212.
R. Thomas, N. Soderborg, and S. Borders, "Using CAE to Find and Avoid Failure Modes: A Steering Wheel 'Nibble' Case Study," SAE World Congress, April 11-14, 2005, SAE-2005-01-1399.
G. Wang and S. Shan, "Review of Metamodeling Techniques in Support of Engineering Design Optimization," Transactions of the ASME, Vol. 129, Apr. 2007, pp. 370-380.
S. Wang, "Reliability & Robustness Engineering Using Computer Experiments (AR&R)," 2000 Spring Research Conference on Statistics in Industry and Technology, Seattle, WA, June 26, 2000.
G. Wiggs, "Design for Six Sigma (DFSS): The First 10 Years at GE," SAE 2008 Application of Lean and Six Sigma for the Automotive Industry Conference, Dec. 2-3, 2008.
