Sensitivity analysis is the study of how uncertainty in the inputs of a mathematical model propagates to uncertainty in the model's outputs. It is useful for understanding relationships between inputs and outputs, identifying important inputs, and reducing uncertainty. Sensitivity analysis typically involves running the model many times while varying inputs, and calculating sensitivity measures from the resulting outputs to determine which inputs most influence uncertainty in the outputs. Common methods include variance-based approaches and screening methods.
What Does Sensitivity Analysis Mean?
Sensitivity analysis is a technique used to determine how different values of an independent variable will impact a particular dependent variable under a given set of assumptions. The technique is applied within specific boundaries that depend on one or more input variables, for example the effect that changes in interest rates have on a bond's price.
Sensitivity analysis is a way to predict the outcome of a decision if a situation turns out to be different from the key prediction(s).
- What is sensitivity analysis in project risk management?
- Example of sensitivity analysis
- Types of sensitivity analysis
- Advantages and disadvantages
Sensitivity analysis
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or
system (numerical or otherwise) can be apportioned to different sources of uncertainty in its
inputs.[1]
A related practice is uncertainty analysis, which has a greater focus on uncertainty
quantification and propagation of uncertainty. Ideally, uncertainty and sensitivity analysis should
be run in tandem.
Sensitivity analysis can be useful for a range of purposes,[2]
including
Testing the robustness of the results of a model or system in the presence of uncertainty.
Increased understanding of the relationships between input and output variables in a system
or model.
Uncertainty reduction: identifying model inputs that cause significant uncertainty in the output
and should therefore be the focus of attention if the robustness is to be increased (perhaps
by further research).
Searching for errors in the model (by encountering unexpected relationships between inputs
and outputs).
Model simplification – fixing model inputs that have no effect on the output, or identifying and
removing redundant parts of the model structure.
Enhancing communication from modelers to decision makers (e.g. by making
recommendations more credible, understandable, compelling or persuasive).
Finding regions in the space of input factors for which the model output is either maximum or
minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).
When calibrating models with a large number of parameters, a primary sensitivity test can
ease the calibration stage by focusing on the sensitive parameters. Not knowing the
sensitivity of parameters can result in time being uselessly spent on non-sensitive ones.[3]
Taking an example from economics, in any budgeting process there are always variables that
are uncertain. Future tax rates, interest rates, inflation rates, headcount, operating expenses and
other variables may not be known with great precision. Sensitivity analysis answers the question,
"if these variables deviate from expectations, what will the effect be (on the business, model,
system, or whatever is being analyzed), and which variables are causing the largest deviations?"
Overview
A mathematical model is defined by a series of equations, input variables and parameters aimed
at characterizing some process under investigation. Some examples might be a climate model,
an economic model, or a finite element model in engineering. Increasingly, such models are
highly complex, and as a result their input/output relationships may be poorly understood. In such
cases, the model can be viewed as a black box, i.e. the output is an opaque function of its inputs.
Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors
of measurement, absence of information and poor or partial understanding of the driving forces
and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of
the model. Further, models may have to cope with the natural intrinsic variability of the system
(aleatory), such as the occurrence of stochastic events.[4]
Good modeling practice requires that the modeler provides an evaluation of the confidence in the
model. This requires, first, a quantification of the uncertainty in any model results (uncertainty
analysis); and second, an evaluation of how much each input is contributing to the output
uncertainty. Sensitivity analysis addresses the second of these issues (although uncertainty
analysis is usually a necessary precursor), performing the role of ordering by importance the
strength and relevance of the inputs in determining the variation in the output.[1]
In models involving many input variables, sensitivity analysis is an essential ingredient of model
building and quality assurance. National and international agencies involved in impact
assessment studies have included sections devoted to sensitivity analysis in their guidelines.
Examples are the European Commission (see e.g. the guidelines for impact assessment), the
White House Office of Management and Budget, the Intergovernmental Panel on Climate
Change and US Environmental Protection Agency's modelling guidelines.
Settings and Constraints
The choice of method of sensitivity analysis is typically dictated by a number of problem
constraints or settings. Some of the most common are
Computational expense: Sensitivity analysis is almost always performed by running the
model a (possibly large) number of times, i.e. a sampling-based approach.[5]
This can be a
significant problem when,
A single run of the model takes a significant amount of time (minutes, hours or longer).
This is not unusual with very complex models.
The model has a large number of uncertain inputs. Sensitivity analysis is essentially the
exploration of the multidimensional input space, which grows exponentially in size with
the number of inputs. See the curse of dimensionality.
Computational expense is a problem in many practical sensitivity analyses. Some
methods of reducing computational expense include the use of emulators (for large
models), and screening methods (for reducing the dimensionality of the problem).
Another method is to use an event-based sensitivity analysis method for variable
selection for time-constrained applications.[6]
This is an input variable selection method
that assembles together information about the trace of the changes in system inputs and
outputs using sensitivity analysis to produce an input/output trigger/event matrix that is
designed to map the relationships between input data as causes that trigger events and
the output data that describes the actual events. The cause-effect relationship between
the causes of state change (i.e. the input variables) and the effects (the system output
parameters) determines which set of inputs has a genuine impact on a given output. The
method has a clear advantage over analytical and computational IVS methods, since it tries to
understand and interpret system state change in the shortest possible time with minimum
computational overhead.[6][7]
Correlated inputs: Most common sensitivity analysis methods
assume independence between model inputs, but sometimes inputs can be strongly
correlated. This is still an immature field of research and definitive methods have yet to
be established.
Nonlinearity: Some sensitivity analysis approaches, such as those based on linear
regression, can inaccurately measure sensitivity when the model response
is nonlinear with respect to its inputs. In such cases, variance-based measures are more
appropriate.
Model interactions: Interactions occur when the perturbation of two or more
inputs simultaneously causes variation in the output greater than that of varying each of
the inputs alone. Such interactions are present in any model that is non-additive, but will
be neglected by methods such as scatterplots and one-at-a-time perturbations.[8]
The
effect of interactions can be measured by the total-order sensitivity index.
Multiple outputs: Virtually all sensitivity analysis methods consider a
single univariate model output, yet many models output a large number of possibly
spatially or time-dependent data. Note that this does not preclude the possibility of
performing different sensitivity analyses for each output of interest. However, for models
in which the outputs are correlated, the sensitivity measures can be hard to interpret.
Given data: While in many cases the practitioner has access to the model, in some
instances a sensitivity analysis must be performed with "given data", i.e. where the
sample points (the values of the model inputs for each run) cannot be chosen by the
analyst. This may occur when a sensitivity analysis has to be performed retrospectively,
perhaps using data from an optimisation or uncertainty analysis, or when data comes
from a discrete source.[9]
Core methodology
[Figure: Ideal scheme of a possibly sampling-based sensitivity analysis. Uncertainty arising from different sources (errors in the data, parameter estimation procedure, alternative model structures) is propagated through the model for uncertainty analysis, and its relative importance is quantified via sensitivity analysis.]
[Figure: Sampling-based sensitivity analysis by scatterplots. Y (vertical axis) is a function of four factors. The points in the four scatterplots are the same but sorted by Z1, Z2, Z3 and Z4 in turn; the abscissa differs for each plot: (−5, +5) for Z1, (−8, +8) for Z2, (−10, +10) for Z3 and Z4. Z4 is most important in influencing Y, as it imparts more 'shape' on Y.]
There are a large number of approaches to performing a sensitivity analysis, many of which
have been developed to address one or more of the constraints discussed above.[1]
They are
also distinguished by the type of sensitivity measure, be it based on (for example) variance
decompositions, partial derivatives or elementary effects. In general, however, most
procedures adhere to the following outline:
1. Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note
that this can be difficult and many methods exist to elicit uncertainty distributions
from subjective data.[10]
2. Identify the model output to be analysed (the target of interest should ideally have a
direct relation to the problem tackled by the model).
3. Run the model a number of times using some design of experiments,[11]
dictated by
the method of choice and the input uncertainty.
4. Using the resulting model outputs, calculate the sensitivity measures of interest.
In some cases this procedure will be repeated, for example in high-dimensional problems
where the user has to screen out unimportant variables before performing a full sensitivity
analysis.
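As an illustration of this outline, the following minimal Python sketch walks through the four steps on a made-up three-input toy model; the model, ranges and sample size are purely illustrative assumptions, and a simple correlation coefficient stands in for a more formal sensitivity measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: quantify input uncertainty (here assumed to be uniform ranges for three inputs).
n = 10_000
x1 = rng.uniform(-1.0, 1.0, n)
x2 = rng.uniform(-1.0, 1.0, n)
x3 = rng.uniform(-1.0, 1.0, n)

# Step 2: identify the output to be analysed (a scalar toy model stands in for the real one).
def model(x1, x2, x3):
    return x1 + 2.0 * x2**2 + 0.1 * x3

# Step 3: run the model over the sampled design of experiments.
y = model(x1, x2, x3)

# Step 4: calculate sensitivity measures from the resulting outputs
# (here simple correlation coefficients between each input and the output).
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"corr({name}, y) = {r:+.3f}")
```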
This section discusses various types of "core methods", distinguished by the various
sensitivity measures that are calculated (note that some of these categories "overlap"
somewhat). The following section focuses on alternative ways of obtaining these measures,
under the constraints of the problem.
One-at-a-time (OAT/OFAT)
One of the simplest and most common approaches is that of changing one-factor-at-a-time
(OFAT or OAT), to see what effect this produces on the output.[12][13][14]
OAT customarily
involves
Moving one input variable, keeping others at their baseline (nominal) values, then,
Returning the variable to its nominal value, then repeating for each of the other inputs in
the same way.
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial
derivatives or linear regression. This appears a logical approach as any change observed in
the output will unambiguously be due to the single variable changed. Furthermore, by
changing one variable at a time, one can keep all other variables fixed to their central or
baseline values. This increases the comparability of the results (all ‘effects’ are computed
with reference to the same central point in space) and minimizes the chances of computer
programme crashes, which are more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modellers for practical reasons: in case of model
failure under OAT analysis, the modeller immediately knows which input factor is
responsible for the failure.[8]
Despite its simplicity however, this approach does not fully explore the input space, since it
does not take into account the simultaneous variation of input variables. This means that the
OAT approach cannot detect the presence of interactions between input variables.[15]
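As a minimal sketch of the OAT idea, the Python snippet below perturbs each input in turn from a common baseline and records the resulting change in the output; the toy model, baseline values and step size are illustrative assumptions, not taken from the text:

```python
import numpy as np

def model(x):
    # Illustrative toy model; in practice this is the simulator under study.
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

baseline = np.array([1.0, 1.0, 1.0])   # nominal (baseline) values of the inputs
step = 0.1                             # size of the one-at-a-time perturbation

y0 = model(baseline)
for i in range(baseline.size):
    x = baseline.copy()
    x[i] += step                       # move one input, keep all others at baseline
    print(f"OAT effect of input {i}: {model(x) - y0:+.4f}")
```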
Local methods
Local methods involve taking the partial derivative of the output Y with respect to an input
factor Xi:
(∂Y/∂Xi)_{X0} ,
where the subscript X0
indicates that the derivative is taken at some fixed point in the
space of the input (hence the 'local' in the name of the class). Adjoint modelling[16][17]
and
Automated Differentiation[18]
are methods in this class. Similar to OAT/OFAT, local
methods do not attempt to fully explore the input space, since they examine small
perturbations, typically one variable at a time.
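Where adjoint modelling or automatic differentiation is not available, a simple (and rougher) way to obtain such local measures is a finite-difference approximation of the partial derivatives at the chosen point; the sketch below assumes an illustrative toy model and step size:

```python
import numpy as np

def model(x):
    # Illustrative toy model standing in for the real simulator.
    return np.sin(x[0]) + 7.0 * np.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * np.sin(x[0])

def local_sensitivities(f, x0, h=1e-6):
    """Approximate the partial derivatives dY/dXi at the fixed point x0 by central differences."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.empty_like(x0)
    for i in range(x0.size):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2.0 * h)
    return grad

print(local_sensitivities(model, [1.0, 1.0, 1.0]))
```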
Scatter plots
A simple but useful tool is to plot scatter plots of the output variable against individual
input variables, after (randomly) sampling the model over its input distributions. The
advantage of this approach is that it can also deal with "given data", i.e. a set of
arbitrarily-placed data points, and gives a direct visual indication of sensitivity.
Quantitative measures can also be drawn, for example by measuring
the correlation between Y and Xi, or even by estimating variance-based measures
by nonlinear regression.[9]
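A minimal sketch of this approach, assuming an illustrative toy model and uniform input distributions (matplotlib is used only for plotting):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Sample the (assumed) input distributions and evaluate an illustrative toy model.
X = rng.uniform(-np.pi, np.pi, size=(1000, 3))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

# One scatter plot of the output against each input gives a direct visual indication of sensitivity.
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for i, ax in enumerate(axes):
    ax.scatter(X[:, i], y, s=5, alpha=0.4)
    ax.set_xlabel(f"X{i + 1}")
axes[0].set_ylabel("Y")
plt.tight_layout()
plt.show()
```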
Regression analysis
Regression analysis, in the context of sensitivity analysis, involves fitting a linear
regression to the model response and using standardized regression coefficients as
direct measures of sensitivity. The regression is required to be linear with respect to the
data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because
otherwise it is difficult to interpret the standardised coefficients. This method is therefore
most suitable when the model response is in fact linear; linearity can be confirmed, for
instance, if the coefficient of determination is large. The advantages of regression
analysis are that it is simple and has a low computational cost.
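The following sketch illustrates the idea on a roughly linear toy model: a linear regression is fitted to Monte Carlo samples and the coefficients are rescaled into standardized regression coefficients, with the coefficient of determination reported as a check on the linearity assumption. The model and input distributions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sample of the (assumed) input distributions and a roughly linear toy model.
X = rng.normal(size=(5000, 3))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=5000)

# Fit a linear regression y ~ X and rescale the coefficients to standardized form.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
src = coef[1:] * X.std(axis=0) / y.std()     # standardized regression coefficients (SRCs)

print("SRCs:", np.round(src, 3))
print("R^2 :", round(1 - np.var(y - A @ coef) / np.var(y), 3))  # check the linearity assumption
```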
Variance-based methods
Main article: Variance-based sensitivity analysis
Variance-based methods[19][20][21]
are a class of probabilistic approaches which quantify the
input and output uncertainties as probability distributions, and decompose the output
variance into parts attributable to input variables and combinations of variables. The
sensitivity of the output to an input variable is therefore measured by the amount of
variance in the output caused by that input. These can be expressed as conditional
expectations, i.e. considering a model Y=f(X) for X={X1, X2, ... Xk}, a measure of
sensitivity of the ith variable Xi is given as

Var_{Xi}( E_{X~i}( Y | Xi ) ),
where "Var" and "E" denote the variance and expected value operators respectively,
and X~i denotes the set of all input variables except Xi. This expression essentially
measures the contribution Xi alone to the uncertainty (variance) in Y (averaged over
variations in other variables), and is known as the first-order sensitivity index or main
effect index. Importantly, it does not measure the uncertainty caused by interactions
with other variables. A further measure, known as the total effect index, gives the
total variance in Y caused by Xi and its interactions with any of the other input
variables. Both quantities are typically standardised by dividing by Var(Y).
Variance-based methods allow full exploration of the input space, accounting for
interactions, and nonlinear responses. For these reasons they are widely used when
it is feasible to calculate them. Typically this calculation involves the use of Monte
Carlo methods, but since this can involve many thousands of model runs, other
methods (such as emulators) can be used to reduce computational expense when
necessary. Note that full variance decompositions are only meaningful when the
input factors are independent from one another.[22]
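As a concrete (though not the only) way to compute these measures, the sketch below uses a common Monte Carlo "pick-freeze" scheme, with a Saltelli-style estimator for the first-order index and the Jansen estimator for the total effect index, applied to the Ishigami test function; the function, ranges and sample size are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(X):
    # Ishigami test function, a common benchmark for variance-based sensitivity analysis.
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

k, N = 3, 100_000
A = rng.uniform(-np.pi, np.pi, size=(N, k))       # two independent input samples
B = rng.uniform(-np.pi, np.pi, size=(N, k))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # "pick-freeze": column i taken from B
    fABi = model(ABi)
    Si = np.mean(fB * (fABi - fA)) / var_y         # first-order (main effect) index
    STi = 0.5 * np.mean((fA - fABi) ** 2) / var_y  # total effect index (Jansen estimator)
    print(f"X{i + 1}: S = {Si:.3f}, ST = {STi:.3f}")
```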
Screening
Screening is a particular instance of a sampling-based method. The objective here is
rather to identify which input variables are contributing significantly to the output
uncertainty in high-dimensionality models, rather than exactly quantifying sensitivity
(i.e. in terms of variance). Screening tends to have a relatively low computational
cost when compared to other approaches, and can be used in a preliminary analysis
to weed out uninfluential variables before applying a more informative analysis to the
remaining set. One of the most commonly used screening methods is the elementary
effects method.[23][24]
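As a rough sketch of the elementary effects idea, the snippet below uses a simplified radial one-at-a-time design (not the full Morris trajectory construction) on a toy model whose inputs are assumed to be scaled to [0, 1]; the mean of the absolute elementary effects indicates overall importance, while their standard deviation flags nonlinearity or interactions:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Toy model; inputs assumed scaled to [0, 1].
    return x[0] + 4.0 * x[1] ** 2 + 0.05 * x[2] + x[0] * x[3]

k, r, delta = 4, 50, 0.2
ee = np.zeros((r, k))

# Radial one-at-a-time design: for each of r random base points, perturb each input once.
for j in range(r):
    base = rng.uniform(0.0, 1.0 - delta, size=k)
    y0 = model(base)
    for i in range(k):
        x = base.copy()
        x[i] += delta
        ee[j, i] = (model(x) - y0) / delta      # elementary effect of input i

mu_star = np.abs(ee).mean(axis=0)               # overall importance
sigma = ee.std(axis=0)                          # nonlinearity / interaction indicator
for i in range(k):
    print(f"X{i + 1}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
```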
Alternative Methods
A number of methods have been developed to overcome some of the constraints
discussed above, which would otherwise make the estimation of sensitivity
measures infeasible (most often due to computational expense). Generally, these
methods focus on efficiently calculating variance-based measures of sensitivity.
Emulators
Emulators (also known as metamodels, surrogate models or response surfaces)
are data-modelling/machine learning approaches that involve building a relatively
simple mathematical function, known as an emulator, that approximates the
input/output behaviour of the model itself.[25]
In other words, it is the concept of
"modelling a model" (hence the name "metamodel"). The idea is that, although
computer models may be a very complex series of equations that can take a long
time to solve, they can always be regarded as a function of their inputs Y=f(X). By
running the model at a number of points in the input space, it may be possible to fit a
much simpler emulator η(X), such that η(X)≈f(X) to within an acceptable margin of
error. Then, sensitivity measures can be calculated from the emulator (either with
Monte Carlo or analytically), which will have a negligible additional computational
cost. Importantly, the number of model runs required to fit the emulator can be
orders of magnitude less than the number of runs required to directly estimate the
sensitivity measures from the model.[26]
Clearly the crux of an emulator approach is to find an η (emulator) that is a
sufficiently close approximation to the model f. This requires the following steps,
1. Sampling (running) the model at a number of points in its input space. This
requires a sample design.
2. Selecting a type of emulator (mathematical function) to use.
3. "Training" the emulator using the sample data from the model – this
generally involves adjusting the emulator parameters until the emulator
mimics the true model as well as possible.
Sampling the model can often be done with low-discrepancy sequences, such as
the Sobol sequence or Latin hypercube sampling, although random designs can also
be used, at the loss of some efficiency. The selection of the emulator type and the
training are intrinsically linked, since the training method will be dependent on the
class of emulator. Some types of emulators that have been used successfully for
sensitivity analysis include,
Gaussian processes[26]
(also known as kriging), where any combination of
output points is assumed to be distributed as a multivariate Gaussian
distribution. Recently, "treed" Gaussian processes have been used to deal
with heteroscedastic and discontinuous responses.[27][28]
Random forests,[25]
in which a large number of decision trees are trained, and the
result averaged.
Gradient boosting,[25]
where a succession of simple regressions are used to
weight data points to sequentially reduce error.
Polynomial chaos expansions,[29]
which use orthogonal polynomials to
approximate the response surface.
Smoothing splines,[30]
normally used in conjunction with HDMR truncations (see
below).
The use of an emulator introduces a machine learning problem, which can be
difficult if the response of the model is highly nonlinear. In all cases it is useful to
check the accuracy of the emulator, for example using cross-validation.
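A deliberately crude illustration of the emulator workflow is sketched below: a quadratic polynomial surrogate (far simpler than the Gaussian-process or boosting emulators mentioned above) is fitted by least squares to a modest number of runs of a stand-in "expensive" model, and its accuracy is checked on held-out points before it would be trusted for sensitivity calculations. The model, design size and basis are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def expensive_model(X):
    # Stand-in for a costly simulator; only a few runs of this are assumed affordable.
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

def features(X):
    # Simple quadratic basis: 1, xi, xi*xj (a very crude emulator family).
    cols = [np.ones(len(X))]
    k = X.shape[1]
    for i in range(k):
        cols.append(X[:, i])
    for i in range(k):
        for j in range(i, k):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

# 1. Sample (run) the expensive model at a modest number of design points.
X_train = rng.uniform(-np.pi, np.pi, size=(200, 3))
y_train = expensive_model(X_train)

# 2.-3. Select an emulator family and "train" it (here: a least-squares fit of the basis).
coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)
emulator = lambda X: features(X) @ coef

# Check emulator accuracy on held-out points before trusting any sensitivity measure.
X_test = rng.uniform(-np.pi, np.pi, size=(1000, 3))
err = emulator(X_test) - expensive_model(X_test)
print("emulator RMSE:", round(float(np.sqrt(np.mean(err ** 2))), 3))

# The cheap emulator can then be evaluated very many times for Monte Carlo sensitivity analysis.
```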
High-Dimensional Model Representations (HDMR)
A high-dimensional model representation (HDMR)[31][32]
(the term is due to H.
Rabitz[33]
) is essentially an emulator approach, which involves decomposing the
function output into a linear combination of input terms and interactions of increasing
dimensionality. The HDMR approach exploits the fact that the model can usually be
well-approximated by neglecting higher-order interactions (second or third-order and
above). The terms in the truncated series can then each be approximated by e.g.
polynomials or splines, and the response expressed as the sum of the main
effects and interactions up to the truncation order. From this perspective, HDMRs
can be seen as emulators which neglect high-order interactions; the advantage
being that they are able to emulate models with higher dimensionality than full-order
emulators.
Fourier Amplitude Sensitivity Test (FAST)
Main article: Fourier amplitude sensitivity testing
The Fourier Amplitude Sensitivity Test (FAST) uses the Fourier series to represent a
multivariate function (the model) in the frequency domain, using a single frequency
variable. Therefore, the integrals required to calculate sensitivity indices become
univariate, resulting in computational savings.
Other
Methods based on Monte Carlo filtering.[34][35]
These are also sampling-based and the
objective here is to identify regions in the space of the input factors corresponding to
particular values (e.g. high or low) of the output.
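A minimal sketch of Monte Carlo filtering, assuming an illustrative model, which uses a two-sample Kolmogorov-Smirnov test to compare, for each input, the sub-sample that produces "high" outputs against the rest:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

# Monte Carlo filtering: split the input sample according to whether the output is "high".
X = rng.uniform(-np.pi, np.pi, size=(20_000, 3))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
high = y > np.quantile(y, 0.9)                      # region of interest in the output

for i in range(X.shape[1]):
    # If the input distributions differ between the two groups, Xi drives the high outputs.
    stat, p = ks_2samp(X[high, i], X[~high, i])
    print(f"X{i + 1}: KS statistic = {stat:.3f}, p-value = {p:.1e}")
```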
Other issues
Assumptions vs. inferences
In uncertainty and sensitivity analysis there is a crucial trade-off between how
scrupulous an analyst is in exploring the input assumptions and how wide the
resulting inference may be. The point is well illustrated by the econometrician
Edward E. Leamer (1990):[36]
I have proposed a form of organized sensitivity analysis that I call ‘global
sensitivity analysis’ in which a neighborhood of alternative assumptions is
selected and the corresponding interval of inferences is identified.
Conclusions are judged to be sturdy only if the neighborhood of assumptions
is wide enough to be credible and the corresponding interval of inferences is
narrow enough to be useful.
Note Leamer’s emphasis is on the need for 'credibility' in the selection of
assumptions. The easiest way to invalidate a model is to demonstrate that it is
fragile with respect to the uncertainty in the assumptions or to show that its
assumptions have not been taken 'wide enough'. The same concept is expressed by
Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be
suppressed lest outputs become indeterminate.[37]
Pitfalls and difficulties
Some common difficulties in sensitivity analysis include
Too many model inputs to analyse. Screening can be used to reduce
dimensionality.
The model takes too long to run. Emulators (including HDMR) can reduce the
number of model runs needed.
There is not enough information to build probability distributions for the inputs.
Probability distributions can be constructed from expert elicitation, although even
then it may be hard to build distributions with great confidence. The subjectivity
of the probability distributions or ranges will strongly affect the sensitivity
analysis.
Unclear purpose of the analysis. Different statistical tests and measures are
applied to the problem and different factor rankings are obtained. The test
should instead be tailored to the purpose of the analysis, e.g. one uses Monte
Carlo filtering if one is interested in which factors are most responsible for
generating high/low values of the output.
Too many model outputs are considered. This may be acceptable for quality
assurance of sub-models but should be avoided when presenting the results of
the overall analysis.
Piecewise sensitivity. This is when one performs sensitivity analysis on one sub-
model at a time. This approach is non-conservative as it might overlook
interactions among factors in different sub-models (Type II error).
Applications
Some examples of sensitivity analyses performed in various disciplines follow here.
Environmental
Environmental computer models are increasingly used in a wide variety of studies
and applications. For example, global climate models are used for both short-
term weather forecasts and long-term climate change. Moreover, computer models
are increasingly used for environmental decision-making at a local scale, for
example for assessing the impact of a waste water treatment plant on a river flow, or
for assessing the behavior and life-length of bio-filters for contaminated waste water.
In both cases sensitivity analysis may help to understand the contribution of the
various sources of uncertainty to the model output uncertainty and the system
performance in general. In these cases, depending on model complexity, different
sampling strategies may be advisable and traditional sensitivity indices have to be
generalized to cover multiple model outputs,[38]
heteroskedastic effects and
correlated inputs.[7]
Business
In a decision problem, the analyst may want to identify cost drivers as well as other
quantities for which we need to acquire better knowledge in order to make an
informed decision. On the other hand, some quantities have no influence on the
predictions, so that we can save resources at no loss in accuracy by relaxing some
of the conditions. See Corporate finance: Quantifying uncertainty. In addition to the
general motivations listed above, sensitivity analysis can help in a variety of other
circumstances specific to business:
To identify critical assumptions or compare alternative model structures
To guide future data collections
To optimize the tolerance of manufactured parts in terms of the uncertainty in
the parameters
To optimize resource allocation
However, there are also some problems associated with sensitivity analysis in the
business context:
Variables are often interdependent (correlated), which makes examining each
variable individually unrealistic; e.g. changing one factor, such as sales volume,
will most likely affect other factors, such as the selling price.
Often the assumptions upon which the analysis is based are made by using past
experience/data which may not hold in the future.
Assigning a maximum and minimum (or optimistic and pessimistic) value is open
to subjective interpretation. For instance one person's 'optimistic' forecast may
be more conservative than that of another person performing a different part of
the analysis. This sort of subjectivity can adversely affect the accuracy and
overall objectivity of the analysis.
Social Sciences
Examples of research-led sensitivity analyses include studies of the gender wage gap
in Chile[39]
and of water sector interventions in Nigeria.
In modern econometrics the use of sensitivity analysis to anticipate criticism is the
subject of one of the ten commandments of applied econometrics (from Kennedy,
2007[40]
):
Thou shall confess in the presence of sensitivity. Corollary: Thou shall
anticipate criticism [•••] When reporting a sensitivity analysis, researchers
should explain fully their specification search so that the readers can judge
for themselves how the results may have been affected. This is basically an
‘honesty is the best policy’ approach, advocated by Leamer, (1978[41]
).
Sensitivity analysis can also be used in model-based policy assessment
studies.[42]
Sensitivity analysis can be used to assess the robustness of composite
indicators,[43]
also known as indices, such as the Environmental Performance Index.
Chemistry
Sensitivity Analysis is common in many areas of physics and chemistry.[44]
With the accumulation of knowledge about kinetic mechanisms under investigation
and with the advance of power of modern computing technologies, detailed complex
kinetic models are increasingly used as predictive tools and as aids for
understanding the underlying phenomena. A kinetic model is usually described by a
set of differential equations representing the concentration-time relationship.
Sensitivity analysis has been proven to be a powerful tool to investigate a complex
kinetic model.[45][46][47]
Kinetic parameters are frequently determined from experimental data via nonlinear
estimation. Sensitivity analysis can be used for optimal experimental design, e.g.
determining initial conditions, measurement positions, and sampling time, to
generate informative data which are critical to estimation accuracy. A great number
of parameters in a complex model can be candidates for estimation but not all are
estimable.[47]
Sensitivity analysis can be used to identify the influential parameters
which can be determined from available data while screening out the unimportant
ones. Sensitivity analysis can also be used to identify the redundant species and
reactions allowing model reduction.
Engineering
Modern engineering design makes extensive use of computer models to test
designs before they are manufactured. Sensitivity analysis allows designers to
assess the effects and sources of uncertainties, in the interest of building robust
models. Sensitivity analyses have for example been performed in biomechanical
models,[48]
tunneling risk models,[49]
amongst others.
In meta-analysis
In a meta-analysis, a sensitivity analysis tests whether the results are sensitive to
restrictions on the data included. Common examples are large trials only, higher-
quality trials only, and more recent trials only. If the results are consistent, this provides
stronger evidence of an effect and of generalizability.[50]
Multi-criteria decision making
Sometimes a sensitivity analysis may reveal surprising insights about the subject of
interest. For instance, the field of multi-criteria decision making (MCDM) studies
(among other topics) the problem of how to select the best alternative among a
number of competing alternatives. This is an important task in decision making. In
such a setting each alternative is described in terms of a set of evaluative criteria.
These criteria are associated with weights of importance. Intuitively, one may think
that the larger the weight for a criterion is, the more critical that criterion should be.
However, this may not be the case. It is important to distinguish here the notion
of criticality from that of importance. By critical, we mean that a small change (as a
percentage) in the weight of a criterion may cause a significant change of the final
solution. It is possible for criteria with rather small weights of importance (i.e., ones that
are not so important in that respect) to be much more critical in a given situation than
ones with larger weights.[51][52]
That is, a sensitivity analysis may shed light on issues
not anticipated at the beginning of a study. This, in turn, may dramatically improve
the effectiveness of the initial study and assist in the successful implementation of
the final solution.
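A minimal sketch of such a weight-perturbation check for a weighted-sum MCDM model is given below; the decision matrix, weights and the ±10% perturbation are made-up illustrative values:

```python
import numpy as np

# Illustrative decision matrix: 3 alternatives scored against 4 criteria (rows = alternatives).
scores = np.array([
    [0.8, 0.6, 0.9, 0.3],
    [0.7, 0.7, 0.8, 0.9],
    [0.9, 0.5, 0.7, 0.6],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])       # criteria weights of importance

def best(w):
    # Weighted-sum ranking: the alternative with the highest weighted score wins.
    return int(np.argmax(scores @ (w / w.sum())))

baseline_best = best(weights)
for i in range(len(weights)):
    for rel_change in (-0.1, 0.1):             # perturb each weight by +/-10 percent
        w = weights.copy()
        w[i] *= 1.0 + rel_change
        if best(w) != baseline_best:
            print(f"criterion {i} is critical: a {rel_change:+.0%} weight change "
                  f"alters the best alternative")
```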
Time-critical decision making
Producing time-critical accurate knowledge about the state of a system (effect)
under computational and data acquisition (cause) constraints is a major challenge,
especially if the knowledge required is critical to the system operation where the
safety of operators or integrity of costly equipment is at stake, e.g., during
manufacturing or during environment substrate drilling. Understanding and
interpreting a chain of interrelated events, predicted or unpredicted, that may or may
not result in a specific state of the system, is the core challenge of this research.
Sensitivity analysis may be used to identify which set of input data signals has a
significant impact on the set of system state information (i.e. output). Through a
cause-effect analysis technique, sensitivity analysis can be used to support the filtering of
unsolicited data to reduce the communication and computational load on a
standard supervisory control and data acquisition system.[7]
Related concepts
Sensitivity analysis is closely related to uncertainty analysis; while the latter
studies the overall uncertainty in the conclusions of the study, sensitivity analysis
tries to identify what source of uncertainty weighs more on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field
of design of experiments. In a design of experiments, one studies the effect of some
process or intervention (the 'treatment') on some objects (the 'experimental units'). In
sensitivity analysis one looks at the effect of varying the inputs of a mathematical
model on the output of the model itself. In both disciplines one strives to obtain
information from the system with a minimum of physical or numerical experiments.