Sensitivity analysis
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or
system (numerical or otherwise) can be apportioned to different sources of uncertainty in its
inputs.[1]
A related practice is uncertainty analysis, which has a greater focus on uncertainty
quantification and propagation of uncertainty. Ideally, uncertainty and sensitivity analysis should
be run in tandem.
Sensitivity analysis can be useful for a range of purposes,[2]
including
• Testing the robustness of the results of a model or system in the presence of uncertainty.
• Increased understanding of the relationships between input and output variables in a system or model.
• Uncertainty reduction: identifying model inputs that cause significant uncertainty in the output and should therefore be the focus of attention if the robustness is to be increased (perhaps by further research).
• Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
• Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.
• Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
• Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).
• When calibrating models with a large number of parameters, a preliminary sensitivity test can ease the calibration stage by focusing on the sensitive parameters. Without knowing the sensitivity of parameters, time can be wasted on non-sensitive ones.[3]
Taking an example from economics: in any budgeting process there are always variables that are uncertain. Future tax rates, interest rates, inflation rates, headcount, operating expenses and other variables may not be known with great precision. Sensitivity analysis answers the question, "if these variables deviate from expectations, what will the effect be (on the business, model, system, or whatever is being analyzed), and which variables are causing the largest deviations?"
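To make this concrete, here is a minimal sketch in Python of such a one-at-a-time budgeting exercise; the model, variable names and figures are all hypothetical, invented for illustration:

```python
# Toy net-income model with uncertain budgeting assumptions (all figures made up).
def net_income(revenue=1_000_000, opex=600_000, headcount=10,
               salary=50_000, tax_rate=0.25):
    pretax = revenue - opex - headcount * salary
    return pretax * (1 - tax_rate)

base = net_income()
# Perturb each uncertain assumption one at a time and record the deviation.
scenarios = {"tax rate +5pp": dict(tax_rate=0.30),
             "opex +10%": dict(opex=660_000),
             "headcount +2": dict(headcount=12)}
for name, kwargs in scenarios.items():
    print(f"{name}: change in net income = {net_income(**kwargs) - base:+,.0f}")
```

Ranking the resulting deviations identifies which assumption the projection is most sensitive to.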
Overview
A mathematical model is defined by a series of equations, input variables and parameters aimed
at characterizing some process under investigation. Some examples might be a climate model,
an economic model, or a finite element model in engineering. Increasingly, such models are
highly complex, and as a result their input/output relationships may be poorly understood. In such
cases, the model can be viewed as a black box, i.e. the output is an opaque function of its inputs.
Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors
of measurement, absence of information and poor or partial understanding of the driving forces
and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of
the model. Further, models may have to cope with the natural intrinsic variability of the system
(aleatory), such as the occurrence of stochastic events.[4]
Good modeling practice requires that the modeler provides an evaluation of the confidence in the
model. This requires, first, a quantification of the uncertainty in any model results (uncertainty
analysis); and second, an evaluation of how much each input is contributing to the output
uncertainty. Sensitivity analysis addresses the second of these issues (although uncertainty
analysis is usually a necessary precursor), performing the role of ordering by importance the
strength and relevance of the inputs in determining the variation in the output.[1]
In models involving many input variables, sensitivity analysis is an essential ingredient of model
building and quality assurance. National and international agencies involved in impact
assessment studies have included sections devoted to sensitivity analysis in their guidelines.
Examples are the European Commission (see e.g. the guidelines for impact assessment), the
White House Office of Management and Budget, the Intergovernmental Panel on Climate
Change and US Environmental Protection Agency's modelling guidelines.
Settings and Constraints
The choice of method of sensitivity analysis is typically dictated by a number of problem
constraints or settings. Some of the most common are
• Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach.[5] This can be a significant problem when:
  • A single run of the model takes a significant amount of time (minutes, hours or longer). This is not unusual with very complex models.
  • The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. See the curse of dimensionality.
Computational expense is a problem in many practical sensitivity analyses. Some methods of reducing it include the use of emulators (for large models) and screening methods (for reducing the dimensionality of the problem). Another option is to use an event-based sensitivity analysis method for variable selection in time-constrained applications.[6] This is an input variable selection method that assembles information about the trace of changes in system inputs and outputs, using sensitivity analysis to produce an input/output trigger/event matrix designed to map the relationships between input data, as causes that trigger events, and the output data that describes the actual events. The cause-effect relationship between the causes of state change (i.e. input variables) and the effect (system output parameters) determines which set of inputs has a genuine impact on a given output. The method has a clear advantage over analytical and computational IVS methods, since it tries to understand and interpret system state change in the shortest possible time with minimum computational overhead.[6][7]
• Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. This is still an immature field of research and definitive methods have yet to be established.
• Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate.
• Model interactions: Interactions occur when the perturbation of two or more inputs simultaneously causes variation in the output greater than that of varying each of the inputs alone. Such interactions are present in any model that is non-additive, but will be neglected by methods such as scatterplots and one-at-a-time perturbations.[8] The effect of interactions can be measured by the total-order sensitivity index.
• Multiple outputs: Virtually all sensitivity analysis methods consider a single univariate model output, yet many models produce a large number of possibly spatially- or time-dependent outputs. Note that this does not preclude performing different sensitivity analyses for each output of interest. However, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret.
• Given data: While in many cases the practitioner has access to the model, in some instances a sensitivity analysis must be performed with "given data", i.e. where the sample points (the values of the model inputs for each run) cannot be chosen by the analyst. This may occur when a sensitivity analysis has to be performed retrospectively, perhaps using data from an optimisation or uncertainty analysis, or when data comes from a discrete source.[9]
Core methodology
[Figure] Ideal scheme of a possibly sampling-based sensitivity analysis. Uncertainty arising from different sources (errors in the data, the parameter estimation procedure, alternative model structures) is propagated through the model for uncertainty analysis, and its relative importance is quantified via sensitivity analysis.

[Figure] Sampling-based sensitivity analysis by scatterplots. Y (vertical axis) is a function of four factors. The points in the four scatterplots are always the same, though sorted differently, i.e. by Z1, Z2, Z3, Z4 in turn. Note that the abscissa is different for each plot: (−5, +5) for Z1, (−8, +8) for Z2, (−10, +10) for Z3 and Z4. Z4 is most important in influencing Y, as it imparts more 'shape' on Y.
There are a large number of approaches to performing a sensitivity analysis, many of which
have been developed to address one or more of the constraints discussed above.[1]
They are
also distinguished by the type of sensitivity measure, be it based on (for example) variance
decompositions, partial derivatives or elementary effects. In general, however, most
procedures adhere to the following outline:
1. Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note
that this can be difficult and many methods exist to elicit uncertainty distributions
from subjective data.[10]
2. Identify the model output to be analysed (the target of interest should ideally have a
direct relation to the problem tackled by the model).
3. Run the model a number of times using some design of experiments,[11]
dictated by
the method of choice and the input uncertainty.
4. Using the resulting model outputs, calculate the sensitivity measures of interest.
In some cases this procedure will be repeated, for example in high-dimensional problems
where the user has to screen out unimportant variables before performing a full sensitivity
analysis.
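As a minimal sketch of this four-step outline (not any specific method from the literature; the model and input distributions are purely illustrative), one can sample the inputs, run the model over the sampled design, and use rank correlations as a crude sensitivity measure:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 4000
# Step 1: quantify input uncertainty (here, one uniform and one normal input).
X = np.column_stack([rng.uniform(0.0, 1.0, n), rng.normal(5.0, 2.0, n)])
# Steps 2-3: choose the output and run the model over the sampled design.
Y = X[:, 0] ** 2 * X[:, 1]
# Step 4: compute a sensitivity measure of interest for each input.
for i in range(X.shape[1]):
    rho, _ = spearmanr(X[:, i], Y)
    print(f"X{i+1}: Spearman rank correlation with Y = {rho:.3f}")
```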
This section discusses various types of "core methods", distinguished by the various
sensitivity measures that are calculated (note that some of these categories "overlap"
somewhat). The following section focuses on alternative ways of obtaining these measures,
under the constraints of the problem.
One-at-a-time (OAT/OFAT)
One of the simplest and most common approaches is that of changing one-factor-at-a-time
(OFAT or OAT), to see what effect this produces on the output.[12][13][14] OAT customarily involves:
• moving one input variable, keeping others at their baseline (nominal) values, then
• returning the variable to its nominal value, then repeating for each of the other inputs in the same way.
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer programme crashes, which are more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modellers for practical reasons: in case of model failure under OAT analysis, the modeller immediately knows which input factor is responsible for the failure.[8]
Despite its simplicity, however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence of interactions between input variables.[15]
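A minimal sketch of an OAT experiment, assuming a hypothetical three-input model and ±10% perturbations around a baseline point:

```python
import numpy as np

def model(x):
    # Hypothetical model; only the relative output changes matter here.
    return x[0] * x[1] + x[2]

baseline = np.array([1.0, 2.0, 3.0])
y0 = model(baseline)

# Move one input at a time to its low/high value, keeping the others at baseline.
for i in range(len(baseline)):
    for label, factor in (("-10%", 0.9), ("+10%", 1.1)):
        x = baseline.copy()
        x[i] *= factor
        print(f"x{i+1} {label}: dY = {model(x) - y0:+.3f}")
```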
Local methods
Local methods involve taking the partial derivative of the output Y with respect to an input factor Xi:

$$ \left. \frac{\partial Y}{\partial X_i} \right|_{\mathbf{x}^0}, $$

where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling[16][17] and automated differentiation[18] are methods in this class. Similar to OAT/OFAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time.
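A sketch of a local sensitivity computation via central finite differences, approximating dY/dXi at a fixed base point x0 (the function and point are illustrative; for large models adjoint or automatic differentiation would be used instead):

```python
import numpy as np

def model(x):
    return x[0] ** 2 + np.sin(x[1]) + x[0] * x[2]  # illustrative model

def local_sensitivities(f, x0, h=1e-6):
    """Central-difference approximation of the gradient of f at x0."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = h
        grad[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * h)
    return grad

print(local_sensitivities(model, [1.0, 0.5, 2.0]))  # dY/dXi at the base point
```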
Scatter plots
A simple but useful tool is to plot scatter plots of the output variable against individual
input variables, after (randomly) sampling the model over its input distributions. The
advantage of this approach is that it can also deal with "given data", i.e. a set of
arbitrarily-placed data points, and gives a direct visual indication of sensitivity.
Quantitative measures can also be drawn, for example by measuring
the correlation between Y and Xi, or even by estimating variance-based measures
by nonlinear regression.[9]
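A sketch of this approach (model and input ranges invented for illustration), reproducing the idea behind the scatterplot figure above: sample the inputs, run the model, and plot Y against each factor:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=(2000, 4))
Y = Z[:, 3] ** 2 + 0.3 * Z[:, 0]   # illustrative model in which Z4 dominates

fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for i, ax in enumerate(axes):
    ax.scatter(Z[:, i], Y, s=2, alpha=0.3)
    ax.set_xlabel(f"Z{i + 1}")
axes[0].set_ylabel("Y")
plt.tight_layout()
plt.show()   # Z4 imparts visible 'shape' on Y; the others look like noise
```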
Regression analysis
Regression analysis, in the context of sensitivity analysis, involves fitting a linear
regression to the model response and using standardized regression coefficients as
direct measures of sensitivity. The regression is required to be linear with respect to the
data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because
otherwise it is difficult to interpret the standardised coefficients. This method is therefore
most suitable when the model response is in fact linear; linearity can be confirmed, for
instance, if the coefficient of determination is large. The advantages of regression
analysis are that it is simple and has a low computational cost.
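A sketch of standardized regression coefficients (SRCs) as sensitivity measures, assuming a roughly linear model response; the model and coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3)) * np.array([1.0, 2.0, 0.5])   # inputs with different spreads
Y = 3.0 * X[:, 0] + 1.0 * X[:, 1] - 4.0 * X[:, 2] + rng.normal(scale=0.1, size=n)

A = np.column_stack([np.ones(n), X])            # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)    # fit the linear regression
src = beta[1:] * X.std(axis=0) / Y.std()        # SRC_i = beta_i * sd(X_i) / sd(Y)
r2 = 1.0 - np.var(Y - A @ beta) / np.var(Y)     # high R^2 justifies the linear fit
print("SRCs:", src.round(3), " R^2:", round(r2, 3))
```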
Variance-based methods
Main article: Variance-based sensitivity analysis
Variance-based methods[19][20][21]
are a class of probabilistic approaches which quantify the
input and output uncertainties as probability distributions, and decompose the output
variance into parts attributable to input variables and combinations of variables. The
sensitivity of the output to an input variable is therefore measured by the amount of
variance in the output caused by that input. These can be expressed as conditional
expectations, i.e. considering a model Y = f(X) for X = {X1, X2, ..., Xk}, a measure of sensitivity of the ith variable Xi is given as

$$ \operatorname{Var}_{X_i}\left( E_{\mathbf{X}_{\sim i}}(Y \mid X_i) \right), $$

where "Var" and "E" denote the variance and expected value operators respectively, and X~i denotes the set of all input variables except Xi. This expression essentially measures the contribution of Xi alone to the uncertainty (variance) in Y (averaged over variations in other variables), and is known as the first-order sensitivity index or main effect index. Importantly, it does not measure the uncertainty caused by interactions with other variables. A further measure, known as the total effect index, gives the total variance in Y caused by Xi and its interactions with any of the other input variables. Both quantities are typically standardised by dividing by Var(Y).
Variance-based methods allow full exploration of the input space, accounting for
interactions, and nonlinear responses. For these reasons they are widely used when
it is feasible to calculate them. Typically this calculation involves the use of Monte
Carlo methods, but since this can involve many thousands of model runs, other
methods (such as emulators) can be used to reduce computational expense when
necessary. Note that full variance decompositions are only meaningful when the
input factors are independent from one another.[22]
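A numpy-only sketch of estimating first-order and total-order indices by Monte Carlo, using the pick-freeze (Saltelli-style) sampling scheme on an invented test model with an interaction; the estimator forms follow the commonly used Saltelli/Jansen formulas:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Illustrative model with an interaction between x1 and x2.
    return X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

k, n = 3, 100_000
A = rng.uniform(-1.0, 1.0, (n, k))   # two independent input sample matrices
B = rng.uniform(-1.0, 1.0, (n, k))
fA, fB = model(A), model(B)
var_Y = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "freeze" all columns except column i
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_Y          # first-order index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_Y    # total-order index
    print(f"x{i + 1}: S1 ~ {S1:.3f}, ST ~ {ST:.3f}")
```

Libraries such as SALib package these estimators, but the sketch shows the underlying mechanics.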
Screening
Screening is a particular instance of a sampling-based method. The objective here is to identify which input variables contribute significantly to the output uncertainty in high-dimensionality models, rather than to quantify sensitivity exactly (i.e. in terms of variance). Screening tends to have a relatively low computational cost when compared to other approaches, and can be used in a preliminary analysis to weed out uninfluential variables before applying a more informative analysis to the remaining set. One of the most commonly used screening methods is the elementary effects method.[23][24]
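A compact sketch in the spirit of the elementary effects method, using a simple radial one-at-a-time design over r random base points (the model, step size and design are illustrative simplifications of the full Morris method):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    return x[0] + 10.0 * x[1] ** 2 + 0.1 * x[2]   # illustrative model

k, r, delta = 3, 50, 0.25          # factors, repetitions, step size
EE = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, k)          # base point with room to step
    y0 = model(x)
    for i in range(k):
        xp = x.copy()
        xp[i] += delta                             # perturb one factor at a time
        EE[t, i] = (model(xp) - y0) / delta        # elementary effect

mu_star = np.abs(EE).mean(axis=0)   # mean |EE|: overall influence
sigma = EE.std(axis=0)              # spread: nonlinearity and/or interactions
print("mu* =", mu_star.round(3), " sigma =", sigma.round(3))
```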
Alternative Methods
A number of methods have been developed to overcome some of the constraints
discussed above, which would otherwise make the estimation of sensitivity
measures infeasible (most often due to computational expense). Generally, these
methods focus on efficiently calculating variance-based measures of sensitivity.
Emulators
Emulators (also known as metamodels, surrogate models or response surfaces)
are data-modelling/machine learning approaches that involve building a relatively
simple mathematical function, known as an emulator, that approximates the
input/output behaviour of the model itself.[25]
In other words, it is the concept of
"modelling a model" (hence the name "metamodel"). The idea is that, although
computer models may be a very complex series of equations that can take a long
time to solve, they can always be regarded as a function of their inputs Y=f(X). By
running the model at a number of points in the input space, it may be possible to fit a
much simpler emulator η(X), such that η(X)≈f(X) to within an acceptable margin of
error. Then, sensitivity measures can be calculated from the emulator (either with
Monte Carlo or analytically), which will have a negligible additional computational
cost. Importantly, the number of model runs required to fit the emulator can be
orders of magnitude less than the number of runs required to directly estimate the
sensitivity measures from the model.[26]
Clearly the crux of an emulator approach is to find an η (emulator) that is a
sufficiently close approximation to the model f. This requires the following steps,
1. Sampling (running) the model at a number of points in its input space. This
requires a sample design.
2. Selecting a type of emulator (mathematical function) to use.
3. "Training" the emulator using the sample data from the model – this
generally involves adjusting the emulator parameters until the emulator
mimics the true model as well as possible.
Sampling the model can often be done with low-discrepancy sequences, such as
the Sobol sequence or Latin hypercube sampling, although random designs can also
be used, at the loss of some efficiency. The selection of the emulator type and the
training are intrinsically linked, since the training method will be dependent on the
class of emulator. Some types of emulators that have been used successfully for
sensitivity analysis include:
• Gaussian processes[26] (also known as kriging), where any combination of output points is assumed to be distributed as a multivariate Gaussian distribution. Recently, "treed" Gaussian processes have been used to deal with heteroscedastic and discontinuous responses.[27][28]
• Random forests,[25] in which a large number of decision trees are trained and the results averaged.
• Gradient boosting,[25] where a succession of simple regressions is used to weight data points to sequentially reduce error.
• Polynomial chaos expansions,[29] which use orthogonal polynomials to approximate the response surface.
• Smoothing splines,[30] normally used in conjunction with HDMR truncations (see below).
The use of an emulator introduces a machine learning problem, which can be
difficult if the response of the model is highly nonlinear. In all cases it is useful to
check the accuracy of the emulator, for example using cross-validation.
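A sketch of the emulator workflow using a Gaussian-process surrogate from scikit-learn; the "expensive" model is a stand-in and the design sizes are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(X):
    # Stand-in for a slow simulator Y = f(X).
    return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, (40, 2))   # step 1: small sample design
y_train = expensive_model(X_train)

# Steps 2-3: choose an emulator type and train it on the model runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap Monte Carlo on the emulator instead of the expensive model.
X_big = rng.uniform(0.0, 1.0, (100_000, 2))
y_emul = gp.predict(X_big)
print("emulated output variance:", float(y_emul.var()))
```

Cross-validation on held-out model runs would be used to check the emulator's accuracy before trusting sensitivity measures computed from it.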
High-Dimensional Model Representations (HDMR)
A high-dimensional model representation (HDMR)[31][32]
(the term is due to H.
Rabitz[33]
) is essentially an emulator approach, which involves decomposing the
function output into a linear combination of input terms and interactions of increasing
dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second- or third-order and above). The terms in the truncated series can then each be approximated by, e.g., polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs
can be seen as emulators which neglect high-order interactions; the advantage
being that they are able to emulate models with higher dimensionality than full-order
emulators.
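A sketch of estimating the zeroth- and first-order HDMR components from random samples, approximating each main effect f_i(x_i) = E[Y | X_i = x_i] − f0 by binned conditional means (the model and binning scheme are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (50_000, 2))
Y = np.sin(2.0 * np.pi * X[:, 0]) + X[:, 1] ** 2   # illustrative model

f0 = Y.mean()                       # zeroth-order term
edges = np.linspace(0.0, 1.0, 21)   # 20 bins per input
for i in range(X.shape[1]):
    idx = np.digitize(X[:, i], edges) - 1
    # Main-effect component: conditional mean of Y minus the overall mean.
    fi = np.array([Y[idx == b].mean() for b in range(20)]) - f0
    print(f"f_{i + 1}(x) range: [{fi.min():.3f}, {fi.max():.3f}]")
```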
Fourier Amplitude Sensitivity Test (FAST)
Main article: Fourier amplitude sensitivity testing
The Fourier Amplitude Sensitivity Test (FAST) uses the Fourier series to represent a
multivariate function (the model) in the frequency domain, using a single frequency
variable. Therefore, the integrals required to calculate sensitivity indices become
univariate, resulting in computational savings.
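A sketch of the classic FAST recipe: drive all inputs along a space-filling search curve with distinct frequencies, then read off each factor's variance contribution from the Fourier coefficients at its frequency and harmonics. The model, frequency set and sample size are illustrative; real applications need frequency sets free of interferences up to the chosen harmonic order:

```python
import numpy as np

def fast_first_order(model, omegas, n=8193, M=4):
    """Rough first-order FAST indices for a model on [0, 1]^k."""
    s = 2.0 * np.pi * np.arange(n) / n - np.pi                # scalar search variable
    X = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi  # search curve in [0,1]^k
    Y = model(X)
    D = np.var(Y)                                             # total output variance
    S = []
    for w in omegas:
        Di = 0.0
        for p in range(1, M + 1):            # harmonics of this factor's frequency
            Ap = np.mean(Y * np.cos(p * w * s))
            Bp = np.mean(Y * np.sin(p * w * s))
            Di += 2.0 * (Ap * Ap + Bp * Bp)  # variance carried at frequency p*w
        S.append(Di / D)
    return np.array(S)

S = fast_first_order(lambda X: X[:, 0] + 4.0 * X[:, 1] ** 2,
                     omegas=np.array([11, 35]))
print("first-order indices ~", S.round(3))
```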
Other
Methods based on Monte Carlo filtering.[34][35]
These are also sampling-based and the
objective here is to identify regions in the space of the input factors corresponding to
particular values (e.g. high or low) of the output.
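A sketch of Monte Carlo filtering on an invented model: split the sample into "behavioural" and "non-behavioural" runs according to an output criterion, then compare the input distributions of the two groups factor by factor (here with a two-sample Kolmogorov-Smirnov statistic):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (20_000, 3))
Y = X[:, 0] * X[:, 2]                          # illustrative model
behavioural = Y > np.quantile(Y, 0.9)          # e.g. the "high output" region

for i in range(X.shape[1]):
    stat, _ = ks_2samp(X[behavioural, i], X[~behavioural, i])
    print(f"X{i + 1}: KS statistic = {stat:.3f}")   # large -> factor drives high Y
```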
Other issues
Assumptions vs. inferences
In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician
Edward E. Leamer (1990):[36]
I have proposed a form of organized sensitivity analysis that I call ‘global
sensitivity analysis’ in which a neighborhood of alternative assumptions is
selected and the corresponding interval of inferences is identified.
Conclusions are judged to be sturdy only if the neighborhood of assumptions
is wide enough to be credible and the corresponding interval of inferences is
narrow enough to be useful.
Note Leamer’s emphasis is on the need for 'credibility' in the selection of
assumptions. The easiest way to invalidate a model is to demonstrate that it is
fragile with respect to the uncertainty in the assumptions or to show that its
assumptions have not been taken 'wide enough'. The same concept is expressed by
Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be
suppressed lest outputs become indeterminate.[37]
Pitfalls and difficulties
Some common difficulties in sensitivity analysis include
• Too many model inputs to analyse. Screening can be used to reduce dimensionality.
• The model takes too long to run. Emulators (including HDMR) can reduce the number of model runs needed.
• There is not enough information to build probability distributions for the inputs. Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
• Unclear purpose of the analysis. Different statistical tests and measures are applied to the problem and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
• Too many model outputs are considered. This may be acceptable for quality assurance of sub-models but should be avoided when presenting the results of the overall analysis.
• Piecewise sensitivity. This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it might overlook interactions among factors in different sub-models (Type II error).
Applications
Some examples of sensitivity analyses performed in various disciplines follow here.
Environmental
Environmental computer models are increasingly used in a wide variety of studies and applications. For example, global climate models are used for both short-term weather forecasts and long-term climate change. Moreover, computer models are increasingly used for environmental decision-making at a local scale, for example for assessing the impact of a waste water treatment plant on a river flow, or for assessing the behavior and life-length of bio-filters for contaminated waste water. In both cases sensitivity analysis may help to understand the contribution of the various sources of uncertainty to the model output uncertainty and the system performance in general. In these cases, depending on model complexity, different sampling strategies may be advisable, and traditional sensitivity indices have to be generalized to cover multiple model outputs,[38] heteroskedastic effects and correlated inputs.[7]
Business
In a decision problem, the analyst may want to identify cost drivers as well as other quantities for which better knowledge is needed in order to make an informed decision. On the other hand, some quantities have no influence on the predictions, so that resources can be saved, at no loss in accuracy, by relaxing some of the conditions. See Corporate finance: Quantifying uncertainty. In addition to the general motivations listed above, sensitivity analysis can help in a variety of other circumstances specific to business:
• To identify critical assumptions or compare alternative model structures
• To guide future data collections
• To optimize the tolerance of manufactured parts in terms of the uncertainty in the parameters
• To optimize resource allocation
However, there are also some problems associated with sensitivity analysis in the business context:
• Variables are often interdependent (correlated), which makes examining each variable individually unrealistic; e.g. changing one factor, such as sales volume, will most likely affect other factors such as the selling price.
• Often the assumptions upon which the analysis is based are made using past experience/data which may not hold in the future.
• Assigning a maximum and minimum (or optimistic and pessimistic) value is open to subjective interpretation. For instance, one person's 'optimistic' forecast may be more conservative than that of another person performing a different part of the analysis. This sort of subjectivity can adversely affect the accuracy and overall objectivity of the analysis.
Social Sciences
Examples of research-led sensitivity analyses can be found in studies of the gender wage gap in Chile[39] and water sector interventions in Nigeria.
In modern econometrics the use of sensitivity analysis to anticipate criticism is the
subject of one of the ten commandments of applied econometrics (from Kennedy,
2007[40]
):
Thou shall confess in the presence of sensitivity. Corollary: Thou shall anticipate criticism [...] When reporting a sensitivity analysis, researchers should explain fully their specification search so that the readers can judge for themselves how the results may have been affected. This is basically an 'honesty is the best policy' approach, advocated by Leamer (1978[41]).
Sensitivity analysis can also be used in model-based policy assessment
studies.[42]
Sensitivity analysis can be used to assess the robustness of composite
indicators,[43]
also known as indices, such as the Environmental Performance Index.
Chemistry
Sensitivity analysis is common in many areas of physics and chemistry.[44]
With the accumulation of knowledge about the kinetic mechanisms under investigation, and with advances in the power of modern computing technologies, detailed complex kinetic models are increasingly used as predictive tools and as aids for understanding the underlying phenomena. A kinetic model is usually described by a set of differential equations representing the concentration-time relationship. Sensitivity analysis has proven to be a powerful tool for investigating complex kinetic models.[45][46][47]
Kinetic parameters are frequently determined from experimental data via nonlinear
estimation. Sensitivity analysis can be used for optimal experimental design, e.g.
determining initial conditions, measurement positions, and sampling time, to
generate informative data which are critical to estimation accuracy. A great number
of parameters in a complex model can be candidates for estimation but not all are
estimable.[47]
Sensitivity analysis can be used to identify the influential parameters
which can be determined from available data while screening out the unimportant
ones. Sensitivity analysis can also be used to identify the redundant species and
reactions allowing model reduction.
Engineering
Modern engineering design makes extensive use of computer models to test
designs before they are manufactured. Sensitivity analysis allows designers to
assess the effects and sources of uncertainties, in the interest of building robust
models. Sensitivity analyses have been performed, for example, in biomechanical models[48] and tunneling risk models,[49] amongst others.
In meta-analysis
In a meta-analysis, a sensitivity analysis tests whether the results are sensitive to restrictions on the data included. Common examples are large trials only, higher quality trials only, and more recent trials only. If results are consistent, this provides stronger evidence of an effect and of generalizability.[50]
Multi-criteria decision making
Sometimes a sensitivity analysis may reveal surprising insights about the subject of
interest. For instance, the field of multi-criteria decision making (MCDM) studies
(among other topics) the problem of how to select the best alternative among a
number of competing alternatives. This is an important task in decision making. In
such a setting each alternative is described in terms of a set of evaluative criteria.
These criteria are associated with weights of importance. Intuitively, one may think
that the larger the weight for a criterion is, the more critical that criterion should be.
However, this may not be the case. It is important here to distinguish the notion of criticality from that of importance. By critical, we mean that a small change (as a percentage) in a criterion's weight may cause a significant change in the final solution. It is possible for criteria with rather small weights of importance (i.e., ones that are not so important in that respect) to be much more critical in a given situation than ones with larger weights.[51][52] That is, a sensitivity analysis may shed light on issues not anticipated at the beginning of a study. This, in turn, may dramatically improve the effectiveness of the initial study and assist in the successful implementation of the final solution.
Time-critical decision making
Producing time-critical accurate knowledge about the state of a system (effect)
under computational and data acquisition (cause) constraints is a major challenge,
especially if the knowledge required is critical to the system operation where the
safety of operators or integrity of costly equipment is at stake, e.g., during
manufacturing or during environment substrate drilling. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific state of the system, is the core challenge of this research. Sensitivity analysis may be used to identify which set of input data signals has a significant impact on the set of system state information (i.e. output). Through a cause-effect analysis technique, sensitivity can be used to support the filtering of unsolicited data, reducing the communication and computational load on a standard supervisory control and data acquisition system.[7]
Related concepts
Sensitivity analysis is closely related to uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify which source of uncertainty weighs most on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field
of design of experiments. In a design of experiments, one studies the effect of some
process or intervention (the 'treatment') on some objects (the 'experimental units'). In
sensitivity analysis one looks at the effect of varying the inputs of a mathematical
model on the output of the model itself. In both disciplines one strives to obtain
information from the system with a minimum of physical or numerical experiments.
More Related Content

What's hot

Different types of distributions
Different types of distributionsDifferent types of distributions
Different types of distributions
RajaKrishnan M
 
Regression Analysis
Regression AnalysisRegression Analysis
Regression Analysis
Salim Azad
 
Optimization techniques
Optimization techniquesOptimization techniques
Optimization techniques
prashik shimpi
 
Design of Experiments
Design of ExperimentsDesign of Experiments
Estimation and confidence interval
Estimation and confidence intervalEstimation and confidence interval
Estimation and confidence interval
Homework Guru
 
SEE Sensitivity Analysis
SEE Sensitivity AnalysisSEE Sensitivity Analysis
SEE Sensitivity Analysis
umair khan
 
Resource Surface Methology
Resource Surface MethologyResource Surface Methology
Resource Surface Methology
PRATHAMESH REGE
 
Two factor factorial_design_pdf
Two factor factorial_design_pdfTwo factor factorial_design_pdf
Two factor factorial_design_pdf
Rione Drevale
 
STATISTICAL PARAMETERS
STATISTICAL  PARAMETERSSTATISTICAL  PARAMETERS
STATISTICAL PARAMETERS
Hasiful Arabi
 
Optimization techniques
Optimization techniques Optimization techniques
Optimization techniques
Dr. Raja Abhilash
 
Optimization techniques
Optimization  techniquesOptimization  techniques
Optimization techniques
biniyapatel
 
Simple & Multiple Regression Analysis
Simple & Multiple Regression AnalysisSimple & Multiple Regression Analysis
Simple & Multiple Regression Analysis
Shailendra Tomar
 
Factorial design M Pharm 1st Yr.
Factorial design M Pharm 1st Yr.Factorial design M Pharm 1st Yr.
Factorial design M Pharm 1st Yr.
Sanket Chordiya
 
Response surface method
Response surface methodResponse surface method
Response surface method
Irfan Hussain
 
Experimental designs
Experimental designsExperimental designs
Experimental designs
rx_sonali
 
{ANOVA} PPT-1.pptx
{ANOVA} PPT-1.pptx{ANOVA} PPT-1.pptx
{ANOVA} PPT-1.pptx
SNEHA AGRAWAL GUPTA
 
multiple regression
multiple regressionmultiple regression
multiple regression
Priya Sharma
 
PROCEDURE FOR TESTING HYPOTHESIS
PROCEDURE FOR   TESTING HYPOTHESIS PROCEDURE FOR   TESTING HYPOTHESIS
PROCEDURE FOR TESTING HYPOTHESIS
Sundar B N
 
POPULATION MODELLING.pptx
POPULATION MODELLING.pptxPOPULATION MODELLING.pptx
POPULATION MODELLING.pptx
ShamsElfalah
 

What's hot (20)

Different types of distributions
Different types of distributionsDifferent types of distributions
Different types of distributions
 
Regression Analysis
Regression AnalysisRegression Analysis
Regression Analysis
 
Optimization techniques
Optimization techniquesOptimization techniques
Optimization techniques
 
Design of Experiments
Design of ExperimentsDesign of Experiments
Design of Experiments
 
Regression analysis
Regression analysisRegression analysis
Regression analysis
 
Estimation and confidence interval
Estimation and confidence intervalEstimation and confidence interval
Estimation and confidence interval
 
SEE Sensitivity Analysis
SEE Sensitivity AnalysisSEE Sensitivity Analysis
SEE Sensitivity Analysis
 
Resource Surface Methology
Resource Surface MethologyResource Surface Methology
Resource Surface Methology
 
Two factor factorial_design_pdf
Two factor factorial_design_pdfTwo factor factorial_design_pdf
Two factor factorial_design_pdf
 
STATISTICAL PARAMETERS
STATISTICAL  PARAMETERSSTATISTICAL  PARAMETERS
STATISTICAL PARAMETERS
 
Optimization techniques
Optimization techniques Optimization techniques
Optimization techniques
 
Optimization techniques
Optimization  techniquesOptimization  techniques
Optimization techniques
 
Simple & Multiple Regression Analysis
Simple & Multiple Regression AnalysisSimple & Multiple Regression Analysis
Simple & Multiple Regression Analysis
 
Factorial design M Pharm 1st Yr.
Factorial design M Pharm 1st Yr.Factorial design M Pharm 1st Yr.
Factorial design M Pharm 1st Yr.
 
Response surface method
Response surface methodResponse surface method
Response surface method
 
Experimental designs
Experimental designsExperimental designs
Experimental designs
 
{ANOVA} PPT-1.pptx
{ANOVA} PPT-1.pptx{ANOVA} PPT-1.pptx
{ANOVA} PPT-1.pptx
 
multiple regression
multiple regressionmultiple regression
multiple regression
 
PROCEDURE FOR TESTING HYPOTHESIS
PROCEDURE FOR   TESTING HYPOTHESIS PROCEDURE FOR   TESTING HYPOTHESIS
PROCEDURE FOR TESTING HYPOTHESIS
 
POPULATION MODELLING.pptx
POPULATION MODELLING.pptxPOPULATION MODELLING.pptx
POPULATION MODELLING.pptx
 

Similar to Sensitivity analysis

Pertemuan 12 Model Sensitivity Analysis (1).pptx
Pertemuan 12 Model Sensitivity Analysis (1).pptxPertemuan 12 Model Sensitivity Analysis (1).pptx
Pertemuan 12 Model Sensitivity Analysis (1).pptx
PRASETIOARIWIBOWO
 
Guidelines to Understanding Design of Experiment and Reliability Prediction
Guidelines to Understanding Design of Experiment and Reliability PredictionGuidelines to Understanding Design of Experiment and Reliability Prediction
Guidelines to Understanding Design of Experiment and Reliability Prediction
ijsrd.com
 
Modeling & simulation in projects
Modeling & simulation in projectsModeling & simulation in projects
Modeling & simulation in projectsanki009
 
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
ijccmsjournal
 
computer application in pharmaceutical research
computer application in pharmaceutical researchcomputer application in pharmaceutical research
computer application in pharmaceutical research
SUJITHA MARY
 
Are we really including all relevant evidence
Are we really including all relevant evidence Are we really including all relevant evidence
Are we really including all relevant evidence
cheweb1
 
Modelling the expected loss of bodily injury claims using gradient boosting
Modelling the expected loss of bodily injury claims using gradient boostingModelling the expected loss of bodily injury claims using gradient boosting
Modelling the expected loss of bodily injury claims using gradient boosting
Gregg Barrett
 
MODELING & SIMULATION.docx
MODELING & SIMULATION.docxMODELING & SIMULATION.docx
MODELING & SIMULATION.docx
JAMEEL AHMED KHOSO
 
Introduction to Statistics and Probability:
Introduction to Statistics and Probability:Introduction to Statistics and Probability:
Introduction to Statistics and Probability:
Shrihari Shrihari
 
On Confidence Intervals Construction for Measurement System Capability Indica...
On Confidence Intervals Construction for Measurement System Capability Indica...On Confidence Intervals Construction for Measurement System Capability Indica...
On Confidence Intervals Construction for Measurement System Capability Indica...
IRJESJOURNAL
 
Introduction to modeling_and_simulation
Introduction to modeling_and_simulationIntroduction to modeling_and_simulation
Introduction to modeling_and_simulation
Aysun Duran
 
Introduction to modeling_and_simulation
Introduction to modeling_and_simulationIntroduction to modeling_and_simulation
Introduction to modeling_and_simulationmukmin91
 
Decentralized Data Fusion Algorithm using Factor Analysis Model
Decentralized Data Fusion Algorithm using Factor Analysis ModelDecentralized Data Fusion Algorithm using Factor Analysis Model
Decentralized Data Fusion Algorithm using Factor Analysis Model
Sayed Abulhasan Quadri
 
Determination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O RingsDetermination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O Rings
IRJET Journal
 
Determination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O RingsDetermination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O Rings
IRJET Journal
 
A Comparison of Traditional Simulation and MSAL (6-3-2015)
A Comparison of Traditional Simulation and MSAL (6-3-2015)A Comparison of Traditional Simulation and MSAL (6-3-2015)
A Comparison of Traditional Simulation and MSAL (6-3-2015)Bob Garrett
 
Episode 12 : Research Methodology ( Part 2 )
Episode 12 :  Research Methodology ( Part 2 )Episode 12 :  Research Methodology ( Part 2 )
Episode 12 : Research Methodology ( Part 2 )
SAJJAD KHUDHUR ABBAS
 
Sensitivity Analysis, Optimal Design, Population Modeling.pptx
Sensitivity Analysis, Optimal Design, Population Modeling.pptxSensitivity Analysis, Optimal Design, Population Modeling.pptx
Sensitivity Analysis, Optimal Design, Population Modeling.pptx
AditiChauhan701637
 
Episode 18 : Research Methodology ( Part 8 )
Episode 18 :  Research Methodology ( Part 8 )Episode 18 :  Research Methodology ( Part 8 )
Episode 18 : Research Methodology ( Part 8 )
SAJJAD KHUDHUR ABBAS
 
panel data.ppt
panel data.pptpanel data.ppt
panel data.ppt
VinayKhandelwal23
 

Similar to Sensitivity analysis (20)

Pertemuan 12 Model Sensitivity Analysis (1).pptx
Pertemuan 12 Model Sensitivity Analysis (1).pptxPertemuan 12 Model Sensitivity Analysis (1).pptx
Pertemuan 12 Model Sensitivity Analysis (1).pptx
 
Guidelines to Understanding Design of Experiment and Reliability Prediction
Guidelines to Understanding Design of Experiment and Reliability PredictionGuidelines to Understanding Design of Experiment and Reliability Prediction
Guidelines to Understanding Design of Experiment and Reliability Prediction
 
Modeling & simulation in projects
Modeling & simulation in projectsModeling & simulation in projects
Modeling & simulation in projects
 
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ...
 
computer application in pharmaceutical research
computer application in pharmaceutical researchcomputer application in pharmaceutical research
computer application in pharmaceutical research
 
Are we really including all relevant evidence
Are we really including all relevant evidence Are we really including all relevant evidence
Are we really including all relevant evidence
 
Modelling the expected loss of bodily injury claims using gradient boosting
Modelling the expected loss of bodily injury claims using gradient boostingModelling the expected loss of bodily injury claims using gradient boosting
Modelling the expected loss of bodily injury claims using gradient boosting
 
MODELING & SIMULATION.docx
MODELING & SIMULATION.docxMODELING & SIMULATION.docx
MODELING & SIMULATION.docx
 
Introduction to Statistics and Probability:
Introduction to Statistics and Probability:Introduction to Statistics and Probability:
Introduction to Statistics and Probability:
 
On Confidence Intervals Construction for Measurement System Capability Indica...
On Confidence Intervals Construction for Measurement System Capability Indica...On Confidence Intervals Construction for Measurement System Capability Indica...
On Confidence Intervals Construction for Measurement System Capability Indica...
 
Introduction to modeling_and_simulation
Introduction to modeling_and_simulationIntroduction to modeling_and_simulation
Introduction to modeling_and_simulation
 
Introduction to modeling_and_simulation
Introduction to modeling_and_simulationIntroduction to modeling_and_simulation
Introduction to modeling_and_simulation
 
Decentralized Data Fusion Algorithm using Factor Analysis Model
Decentralized Data Fusion Algorithm using Factor Analysis ModelDecentralized Data Fusion Algorithm using Factor Analysis Model
Decentralized Data Fusion Algorithm using Factor Analysis Model
 
Determination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O RingsDetermination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O Rings
 
Determination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O RingsDetermination of Optimum Parameters Affecting the Properties of O Rings
Determination of Optimum Parameters Affecting the Properties of O Rings
 
A Comparison of Traditional Simulation and MSAL (6-3-2015)
A Comparison of Traditional Simulation and MSAL (6-3-2015)A Comparison of Traditional Simulation and MSAL (6-3-2015)
A Comparison of Traditional Simulation and MSAL (6-3-2015)
 
Episode 12 : Research Methodology ( Part 2 )
Episode 12 :  Research Methodology ( Part 2 )Episode 12 :  Research Methodology ( Part 2 )
Episode 12 : Research Methodology ( Part 2 )
 
Sensitivity Analysis, Optimal Design, Population Modeling.pptx
Sensitivity Analysis, Optimal Design, Population Modeling.pptxSensitivity Analysis, Optimal Design, Population Modeling.pptx
Sensitivity Analysis, Optimal Design, Population Modeling.pptx
 
Episode 18 : Research Methodology ( Part 8 )
Episode 18 :  Research Methodology ( Part 8 )Episode 18 :  Research Methodology ( Part 8 )
Episode 18 : Research Methodology ( Part 8 )
 
panel data.ppt
panel data.pptpanel data.ppt
panel data.ppt
 

Recently uploaded

Cyber Sequrity.pptx is life of cyber security
Cyber Sequrity.pptx is life of cyber securityCyber Sequrity.pptx is life of cyber security
Cyber Sequrity.pptx is life of cyber security
perweeng31
 
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
kywwoyk
 
Drugs used in parkinsonism and other movement disorders.pptx
Drugs used in parkinsonism and other movement disorders.pptxDrugs used in parkinsonism and other movement disorders.pptx
Drugs used in parkinsonism and other movement disorders.pptx
ThalapathyVijay15
 
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
Amil baba
 
web-tech-lab-manual-final-abhas.pdf. Jer
web-tech-lab-manual-final-abhas.pdf. Jerweb-tech-lab-manual-final-abhas.pdf. Jer
web-tech-lab-manual-final-abhas.pdf. Jer
freshgammer09
 
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
kywwoyk
 
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
PinkySharma900491
 
F5 LTM TROUBLESHOOTING Guide latest.pptx
F5 LTM TROUBLESHOOTING Guide latest.pptxF5 LTM TROUBLESHOOTING Guide latest.pptx
F5 LTM TROUBLESHOOTING Guide latest.pptx
ArjunJain44
 
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
eemet
 

Recently uploaded (9)

Cyber Sequrity.pptx is life of cyber security
Cyber Sequrity.pptx is life of cyber securityCyber Sequrity.pptx is life of cyber security
Cyber Sequrity.pptx is life of cyber security
 
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
一比一原版UVM毕业证佛蒙特大学毕业证成绩单如何办理
 
Drugs used in parkinsonism and other movement disorders.pptx
Drugs used in parkinsonism and other movement disorders.pptxDrugs used in parkinsonism and other movement disorders.pptx
Drugs used in parkinsonism and other movement disorders.pptx
 
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
NO1 Uk Amil Baba In Lahore Kala Jadu In Lahore Best Amil In Lahore Amil In La...
 
web-tech-lab-manual-final-abhas.pdf. Jer
web-tech-lab-manual-final-abhas.pdf. Jerweb-tech-lab-manual-final-abhas.pdf. Jer
web-tech-lab-manual-final-abhas.pdf. Jer
 
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
 
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
MATHEMATICS BRIDGE COURSE (TEN DAYS PLANNER) (FOR CLASS XI STUDENTS GOING TO ...
 
F5 LTM TROUBLESHOOTING Guide latest.pptx
F5 LTM TROUBLESHOOTING Guide latest.pptxF5 LTM TROUBLESHOOTING Guide latest.pptx
F5 LTM TROUBLESHOOTING Guide latest.pptx
 
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
一比一原版SDSU毕业证圣地亚哥州立大学毕业证成绩单如何办理
 

Sensitivity analysis

  • 1. Sensitivity analysis Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs.[1] A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty. Ideally, uncertainty and sensitivity analysis should be run in tandem. Sensitivity analysis can be useful for a range of purposes,[2] including  Testing the robustness of the results of a model or system in the presence of uncertainty.  Increased understanding of the relationships between input and output variables in a system or model.  Uncertainty reduction: identifying model inputs that cause significant uncertainty in the output and should therefore be the focus of attention if the robustness is to be increased (perhaps by further research).  Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).  Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.  Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).  Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).  In case of calibrating models with large number of parameters, a primary sensitivity test can ease the calibration stage by focusing on the sensitive parameters. Not knowing the sensitivity of parameters can result in time being uselessly spent on non-sensitive ones.[3] Taking an example from economics, in any budgeting process there are always variables that are uncertain. Future taxrates, interest rates, inflation rates, headcount, operating expenses and other variables may not be known with great precision. Sensitivity analysis answers the question, "if these variables deviate from expectations, what will the effect be (on the business, model, system, or whatever is being analyzed), and which variables are causing the largest deviations?" Overview[edit] A mathematical model is defined by a series of equations, input variables and parameters aimed at characterizing some process under investigation. Some examples might be a climate model, an economic model, or a finite element model in engineering. Increasingly, such models are highly complex, and as a result their input/output relationships may be poorly understood. In such cases, the model can be viewed as a black box, i.e. the output is an opaque function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, absence of information and poor or partial understanding of the driving forces and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of the model. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of stochastic events.[4] Good modeling practice requires that the modeler provides an evaluation of the confidence in the model. This requires, first, a quantification of the uncertainty in any model results (uncertainty analysis); and second, an evaluation of how much each input is contributing to the output uncertainty. 
Sensitivity analysis addresses the second of these issues (although uncertainty analysis is usually a necessary precursor), performing the role of ordering by importance the strength and relevance of the inputs in determining the variation in the output.[1] In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance. National and international agencies involved in impact
  • 2. assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment), the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and US Environmental Protection Agency's modelling guidelines. Settings and Constraints[edit] The choice of method of sensitivity analysis is typically dictated by a number of problem constraints or settings. Some of the most common are  Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach.[5] This can be a significant problem when,  A single run of the model takes a significant amount of time (minutes, hours or longer). This is not unusual with very complex models.  The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. See the curse of dimensionality. Computational expense is a problem in many practical sensitivity analyses. Some methods of reducing computational expense include the use of emulators (for large models), and screening methods (for reducing the dimensionality of the problem). Another method is to use an event-based sensitivity analysis method for variable selection for time-constrained applications.[6] This is an input variable selection method that assembles together information about the trace of the changes in system inputs and outputs using sensitivity analysis to produce an input/output trigger/event matrixthat is designed to map the relationships between input data as causes that trigger events and the output data that describes the actual events. The cause-effect relationship between the causes of state change i.e. input variables and the effect system output parameters determines which set of inputs have a genuine impact on a given output. The method has a clear advantage over analytical and computational IVS method since it tries to understand and interpret system state change in the shortest possible time with minimum computational overhead.[6][7]  Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. This is still an immature field of research and definitive methods have yet to be established.  Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response isnonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate.  Model interactions: Interactions occur when the perturbation of two or more inputs simultaneously causes variation in the output greater than that of varying each of the inputs alone. Such interactions are present in any model that is non-additive, but will be neglected by methods such as scatterplots and one-at-a-time perturbations.[8] The effect of interactions can be measured by the total-order sensitivity index.  Multiple outputs: Virtually all sensitivity analysis methods consider a single univariate model output, yet many models output a large number of possibly spatially or time-dependent data. Note that this does not preclude the possibility of performing different sensitivity analyses for each output of interest. 
However, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret.  Given data: While in many cases the practitioner has access to the model, in some instances a sensitivity analysis must be performed with "given data", i.e. where the sample points (the values of the model inputs for each run) cannot be chosen by the analyst. This may occur when a sensitivity analysis has to be performed retrospectively,
Core methodology[edit]

[Figure: Ideal scheme of a (possibly sampling-based) sensitivity analysis. Uncertainty arising from different sources (errors in the data, the parameter estimation procedure, alternative model structures) is propagated through the model for uncertainty analysis, and its relative importance is quantified via sensitivity analysis.]

[Figure: Sampling-based sensitivity analysis by scatterplots. Y (vertical axis) is a function of four factors. The points in the four scatterplots are always the same, though sorted differently, i.e. by Z1, Z2, Z3 and Z4 in turn. Note that the abscissa is different for each plot: (−5, +5) for Z1, (−8, +8) for Z2, and (−10, +10) for Z3 and Z4. Z4 is most important in influencing Y, as it imparts more 'shape' on Y.]

There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above.[1] They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions, partial derivatives or elementary effects. In general, however, most procedures adhere to the following outline (a minimal sketch in code follows the list):
1. Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note that this can be difficult, and many methods exist to elicit uncertainty distributions from subjective data.[10]
2. Identify the model output to be analysed (the target of interest should ideally have a direct relation to the problem tackled by the model).
3. Run the model a number of times using some design of experiments,[11] dictated by the method of choice and the input uncertainty.
4. Using the resulting model outputs, calculate the sensitivity measures of interest.
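As a minimal sketch of this four-step outline, with the model f, the input ranges, the sample size and the (crude) correlation measure all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: quantify input uncertainty (three hypothetical inputs,
# each uniform on an assumed range).
lo = np.array([0.0, -1.0, 10.0])
hi = np.array([1.0,  1.0, 20.0])

# Step 2: the model output to analyse (a stand-in black-box model).
def f(x):
    return x[:, 0] * np.sin(x[:, 1]) + 0.01 * x[:, 2]

# Step 3: run the model over a (here purely random) design of experiments.
n = 10_000
X = lo + (hi - lo) * rng.random((n, 3))
Y = f(X)

# Step 4: compute a sensitivity measure of interest, here simply the
# correlation between each input and the output.
for i in range(3):
    r = np.corrcoef(X[:, i], Y)[0, 1]
    print(f"input {i}: correlation with output = {r:+.2f}")
```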
In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis.

This section discusses various types of "core methods", distinguished by the sensitivity measures that are calculated (note that some of these categories overlap somewhat). The following section focuses on alternative ways of obtaining these measures, under the constraints of the problem.

One-at-a-time (OAT/OFAT)[edit]

One of the simplest and most common approaches is that of changing one factor at a time (OFAT or OAT), to see what effect this produces on the output.[12][13][14] OAT customarily involves:
• moving one input variable, keeping the others at their baseline (nominal) values, then
• returning the variable to its nominal value, then repeating for each of the other inputs in the same way.
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed at their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in the space) and minimizes the chance of computer programme crashes, which are more likely when several input factors are changed simultaneously. OAT is frequently preferred by modellers for practical reasons: in case of model failure under OAT analysis, the modeller immediately knows which input factor is responsible for the failure.[8] Despite its simplicity, however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence of interactions between input variables.[15]
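A minimal OAT sketch follows; the model, the baseline point and the step size are all invented for the example. The same finite-difference ratios, taken with a small step, are also the simplest way to approximate the local derivatives discussed next:

```python
import numpy as np

# A stand-in model with an interaction between x0 and x1.
def f(x):
    return x[0] + 2.0 * x[1] + 5.0 * x[0] * x[1]

baseline = np.array([0.0, 0.0])   # nominal point
step = 0.1                         # perturbation size per input

for i in range(len(baseline)):
    x = baseline.copy()
    x[i] += step                   # move one input, hold the others
    effect = (f(x) - f(baseline)) / step
    print(f"OAT effect of input {i}: {effect:+.2f}")

# Both OAT effects miss the 5*x0*x1 interaction entirely: at the
# baseline (0, 0) that term contributes nothing to either perturbation.
```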
Local methods[edit]

Local methods involve taking the partial derivative of the output Y with respect to an input factor Xi:

(∂Y/∂Xi)|X0,

where the subscript X0 indicates that the derivative is taken at some fixed point in the space of the inputs (hence the 'local' in the name of the class). Adjoint modelling[16][17] and automated differentiation[18] are methods in this class. Similar to OAT/OFAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time.

Scatter plots[edit]

A simple but useful tool is to plot scatter plots of the output variable against individual input variables, after (randomly) sampling the model over its input distributions. The advantage of this approach is that it can also deal with "given data", i.e. a set of arbitrarily-placed data points, and it gives a direct visual indication of sensitivity. Quantitative measures can also be drawn, for example by measuring the correlation between Y and Xi, or even by estimating variance-based measures by nonlinear regression.[9]

Regression analysis[edit]

Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors), because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the coefficient of determination is large. The advantages of regression analysis are that it is simple and has a low computational cost.
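A minimal sketch of the regression approach, with the model and input distributions invented for illustration; the standardized coefficient of each input is its raw least-squares coefficient scaled by std(Xi)/std(Y):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical near-linear model with three independent inputs.
n = 5_000
X = rng.normal(size=(n, 3)) * np.array([1.0, 2.0, 0.5])
Y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=n)

# Ordinary least squares fit (with an intercept column).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Standardized regression coefficients: beta_i * std(X_i) / std(Y).
src = coef[1:] * X.std(axis=0) / Y.std()
print("standardized regression coefficients:", np.round(src, 3))
# For a linear model with independent inputs, the squared SRCs
# sum to approximately the R² of the fit (close to 1 here).
```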
Variance-based methods[edit]
Main article: Variance-based sensitivity analysis

Variance-based methods[19][20][21] are a class of probabilistic approaches which quantify the input and output uncertainties as probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input. These can be expressed as conditional expectations: considering a model Y = f(X) for X = {X1, X2, ..., Xk}, a measure of sensitivity of the ith variable Xi is given as

Var(E(Y | Xi)),

where "Var" and "E" denote the variance and expected value operators respectively; the inner expectation is taken over X~i, the set of all input variables except Xi, and the outer variance over Xi. This expression essentially measures the contribution of Xi alone to the uncertainty (variance) in Y, averaged over variations in the other variables, and is known as the first-order sensitivity index or main effect index. Importantly, it does not measure the uncertainty caused by interactions with other variables. A further measure, known as the total effect index, gives the total variance in Y caused by Xi and its interactions with any of the other input variables. Both quantities are typically standardised by dividing by Var(Y).

Variance-based methods allow full exploration of the input space, accounting for interactions and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of Monte Carlo methods, but since this can involve many thousands of model runs, other methods (such as emulators) can be used to reduce computational expense when necessary. Note that full variance decompositions are only meaningful when the input factors are independent from one another.[22]
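A brute-force sketch of the first-order index on the Ishigami function, a standard variance-based test case; in practice dedicated estimators (or libraries such as SALib) would replace the crude binning used here to approximate Var(E(Y|Xi)):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ishigami test function: inputs uniform on (-pi, pi).
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, k, bins = 200_000, 3, 100
X = rng.uniform(-np.pi, np.pi, size=(n, k))
Y = ishigami(X)

# Crude estimate of S_i = Var(E(Y|X_i)) / Var(Y): sort the sample by X_i,
# bin it, and take the variance of the per-bin means of Y.
for i in range(k):
    order = np.argsort(X[:, i])
    bin_means = Y[order].reshape(bins, -1).mean(axis=1)
    Si = bin_means.var() / Y.var()
    print(f"S{i+1} ≈ {Si:.2f}")
# Analytical values for the Ishigami function: S1≈0.31, S2≈0.44, S3=0.
```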
Screening[edit]

Screening is a particular instance of a sampling-based method. The objective here is to identify which input variables contribute significantly to the output uncertainty in high-dimensionality models, rather than to quantify sensitivity exactly (i.e. in terms of variance). Screening tends to have a relatively low computational cost when compared to other approaches, and can be used in a preliminary analysis to weed out uninfluential variables before applying a more informative analysis to the remaining set. One of the most commonly used screening methods is the elementary effect method.[23][24]

Alternative Methods[edit]

A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity.

Emulators[edit]

Emulators (also known as metamodels, surrogate models or response surfaces) are data-modelling/machine-learning approaches that involve building a relatively simple mathematical function, known as an emulator, that approximates the input/output behaviour of the model itself.[25] In other words, it is the concept of "modelling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs Y = f(X). By running the model at a number of points in the input space, it may be possible to fit a much simpler emulator η(X), such that η(X) ≈ f(X) to within an acceptable margin of error. Then, sensitivity measures can be calculated from the emulator (either with Monte Carlo or analytically) at negligible additional computational cost. Importantly, the number of model runs required to fit the emulator can be orders of magnitude less than the number of runs required to estimate the sensitivity measures directly from the model.[26]

Clearly, the crux of an emulator approach is to find an η (emulator) that is a sufficiently close approximation to the model f. This requires the following steps:
1. Sampling (running) the model at a number of points in its input space. This requires a sample design.
2. Selecting a type of emulator (mathematical function) to use.
3. "Training" the emulator using the sample data from the model. This generally involves adjusting the emulator parameters until the emulator mimics the true model as well as possible.
Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. The selection of the emulator type and the training are intrinsically linked, since the training method will depend on the class of emulator. Some types of emulators that have been used successfully for sensitivity analysis include:
• Gaussian processes[26] (also known as kriging), where any combination of output points is assumed to be distributed as a multivariate Gaussian distribution. Recently, "treed" Gaussian processes have been used to deal with heteroscedastic and discontinuous responses.[27][28]
• Random forests,[25] in which a large number of decision trees are trained and the result averaged.
• Gradient boosting,[25] where a succession of simple regressions are used to weight data points to sequentially reduce error.
• Polynomial chaos expansions,[29] which use orthogonal polynomials to approximate the response surface.
• Smoothing splines,[30] normally used in conjunction with HDMR truncations (see below).
The use of an emulator introduces a machine-learning problem, which can be difficult if the response of the model is highly nonlinear. In all cases it is useful to check the accuracy of the emulator, for example using cross-validation.

High-Dimensional Model Representations (HDMR)[edit]

A high-dimensional model representation (HDMR)[31][32] (the term is due to H. Rabitz[33]) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second- or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models of higher dimensionality than full-order emulators.

Fourier Amplitude Sensitivity Test (FAST)[edit]
Main article: Fourier amplitude sensitivity testing

The Fourier Amplitude Sensitivity Test (FAST) uses the Fourier series to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings.

Other[edit]

Methods based on Monte Carlo filtering.[34][35] These are also sampling-based, and the objective here is to identify regions in the space of the input factors corresponding to particular values (e.g. high or low) of the output.
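A minimal Monte Carlo filtering sketch, with the model and the output threshold invented for illustration: sample the inputs, split the runs into "behavioural" and "non-behavioural" according to the output, and compare the two input distributions, here with a two-sample Kolmogorov-Smirnov statistic:

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

rng = np.random.default_rng(3)

# Stand-in model: only x0 really controls whether the output is "high".
n = 20_000
X = rng.uniform(0.0, 1.0, size=(n, 3))
Y = X[:, 0] ** 2 + 0.05 * X[:, 1]

behavioural = Y > np.quantile(Y, 0.9)   # the output region of interest

# Inputs whose distribution differs strongly between the two groups
# are the ones driving the output into that region.
for i in range(3):
    stat, p = ks_2samp(X[behavioural, i], X[~behavioural, i])
    print(f"input {i}: KS statistic = {stat:.2f}")
```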
Other issues[edit]

Assumptions vs. inferences[edit]

In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer (1990):[36]

I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful.

Note Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions, or to show that its assumptions have not been taken 'wide' enough. The same concept is expressed by Jerome R. Ravetz, for whom bad modelling is when uncertainties in inputs must be suppressed lest outputs become indeterminate.[37]

Pitfalls and difficulties[edit]

Some common difficulties in sensitivity analysis include:
• Too many model inputs to analyse. Screening can be used to reduce dimensionality.
• The model takes too long to run. Emulators (including HDMR) can reduce the number of model runs needed.
• There is not enough information to build probability distributions for the inputs. Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
• Unclear purpose of the analysis. Different statistical tests and measures are applied to the problem, and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
• Too many model outputs are considered. This may be acceptable for the quality assurance of sub-models, but should be avoided when presenting the results of the overall analysis.
• Piecewise sensitivity. This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it might overlook interactions among factors in different sub-models (Type II error).

Applications[edit]

Some examples of sensitivity analyses performed in various disciplines follow here.

Environmental[edit]

Environmental computer models are increasingly used in a wide variety of studies and applications. For example, global climate models are used for both short-term weather forecasts and long-term climate change. Moreover, computer models are increasingly used for environmental decision-making at a local scale, for example for assessing the impact of a waste water treatment plant on a river flow, or for assessing the behavior and life-length of bio-filters for contaminated waste water. In both cases sensitivity analysis may help to understand the contribution of the various sources of uncertainty to the model output uncertainty and the system performance in general. In these cases, depending on model complexity, different sampling strategies may be advisable, and traditional sensitivity indices have to be generalized to cover multiple model outputs,[38] heteroskedastic effects and correlated inputs.[7]

Business[edit]

In a decision problem, the analyst may want to identify cost drivers as well as other quantities for which we need to acquire better knowledge in order to make an informed decision. On the other hand, some quantities have no influence on the predictions, so that we can save resources at no loss in accuracy by relaxing some of the conditions. See Corporate finance: Quantifying uncertainty. In addition to the general motivations listed above, sensitivity analysis can help in a variety of other circumstances specific to business (a minimal what-if sketch follows the list):
• To identify critical assumptions or compare alternative model structures
• To guide future data collections
• To optimize the tolerance of manufactured parts in terms of the uncertainty in the parameters
• To optimize resource allocation
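An illustrative what-if sketch of the kind that feeds a tornado diagram: the output swing of a profit model between pessimistic and optimistic values of each input. The model and all figures are invented for the example:

```python
# Illustrative what-if sketch: swing of annual profit between pessimistic
# and optimistic values of each input (all figures invented).
def profit(volume, price, unit_cost, overhead):
    return volume * (price - unit_cost) - overhead

base = dict(volume=10_000, price=25.0, unit_cost=15.0, overhead=50_000)
ranges = {                      # (pessimistic, optimistic) per input
    "volume":    (8_000, 12_000),
    "price":     (22.0, 27.0),
    "unit_cost": (17.0, 13.0),
    "overhead":  (60_000, 40_000),
}

# One-at-a-time swings: vary each input alone, holding the others at base.
for name, (pess, opt) in ranges.items():
    low = profit(**{**base, name: pess})
    high = profit(**{**base, name: opt})
    print(f"{name:9s}: profit from {low:>9,.0f} to {high:>9,.0f}")
```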
However, there are also some problems associated with sensitivity analysis in the business context:
• Variables are often interdependent (correlated), which makes examining each variable individually unrealistic. E.g. changing one factor, such as sales volume, will most likely affect other factors, such as the selling price.
• Often the assumptions upon which the analysis is based are made using past experience/data, which may not hold in the future.
• Assigning a maximum and minimum (or optimistic and pessimistic) value is open to subjective interpretation. For instance, one person's 'optimistic' forecast may be more conservative than that of another person performing a different part of the analysis. This sort of subjectivity can adversely affect the accuracy and overall objectivity of the analysis.

Social Sciences[edit]

Examples of research-led sensitivity analyses can be found on the gender wage gap in Chile[39] and on water sector interventions in Nigeria. In modern econometrics, the use of sensitivity analysis to anticipate criticism is the subject of one of the ten commandments of applied econometrics (from Kennedy, 2007[40]):

Thou shall confess in the presence of sensitivity. Corollary: Thou shall anticipate criticism [•••] When reporting a sensitivity analysis, researchers should explain fully their specification search so that the readers can judge for themselves how the results may have been affected. This is basically an 'honesty is the best policy' approach, advocated by Leamer (1978[41]).

Sensitivity analysis can also be used in model-based policy assessment studies,[42] and to assess the robustness of composite indicators,[43] also known as indices, such as the Environmental Performance Index.

Chemistry[edit]

Sensitivity analysis is common in many areas of physics and chemistry.[44] With the accumulation of knowledge about the kinetic mechanisms under investigation, and with the advance of modern computing power, detailed complex kinetic models are increasingly used as predictive tools and as aids for understanding the underlying phenomena. A kinetic model is usually described by a set of differential equations representing the concentration-time relationship. Sensitivity analysis has proven to be a powerful tool for investigating a complex kinetic model.[45][46][47]

Kinetic parameters are frequently determined from experimental data via nonlinear estimation. Sensitivity analysis can be used for optimal experimental design, e.g. determining initial conditions, measurement positions and sampling times, so as to generate informative data which are critical to estimation accuracy. A great number of parameters in a complex model can be candidates for estimation, but not all are estimable.[47] Sensitivity analysis can be used to identify the influential parameters which can be determined from available data, while screening out the unimportant ones. Sensitivity analysis can also be used to identify the redundant species and reactions, allowing model reduction.
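As a minimal sketch of parameter sensitivity for a kinetic model, with the reaction A → B and its rate constant k invented for illustration, the sensitivity d[A]/dk is approximated by a finite difference over two ODE solves:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy kinetic model: A -> B with rate constant k, so d[A]/dt = -k*[A].
def concentration_A(k, t_eval, A0=1.0):
    sol = solve_ivp(lambda t, y: -k * y, (0.0, t_eval[-1]), [A0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
    return sol.y[0]

t = np.linspace(1.0, 5.0, 5)
k, dk = 0.8, 1e-4

# Central finite-difference local sensitivity of [A](t) w.r.t. k.
sens = (concentration_A(k + dk, t) - concentration_A(k - dk, t)) / (2 * dk)
for ti, si in zip(t, sens):
    # Analytical check for this model: d[A]/dk = -t * A0 * exp(-k*t).
    print(f"t={ti:.0f}: d[A]/dk ≈ {si:+.3f} (exact {-ti * np.exp(-k * ti):+.3f})")
```

In a realistic mechanism with many reactions, the same finite-difference loop over each rate constant identifies which parameters the measured concentrations are actually informative about.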
Engineering[edit]

Modern engineering design makes extensive use of computer models to test designs before they are manufactured. Sensitivity analysis allows designers to assess the effects and sources of uncertainties, in the interest of building robust models. Sensitivity analyses have for example been performed in biomechanical models[48] and tunneling risk models,[49] amongst others.

In meta-analysis[edit]

In a meta-analysis, a sensitivity analysis tests whether the results are sensitive to restrictions on the data included. Common examples are restricting the analysis to large trials only, higher-quality trials only, or more recent trials only. If the results are consistent, this provides stronger evidence of an effect and of generalizability.[50]

Multi-criteria decision making[edit]

Sometimes a sensitivity analysis may reveal surprising insights about the subject of interest. For instance, the field of multi-criteria decision making (MCDM) studies (among other topics) the problem of how to select the best alternative among a number of competing alternatives. This is an important task in decision making. In such a setting each alternative is described in terms of a set of evaluative criteria, and these criteria are associated with weights of importance. Intuitively, one may think that the larger the weight for a criterion is, the more critical that criterion should be. However, this may not be the case. It is important to distinguish here the notion of criticality from that of importance. By critical, we mean a criterion for which a small change (as a percentage) in its weight may cause a significant change in the final solution. It is possible for criteria with rather small weights of importance (i.e., ones that are not so important in that respect) to be much more critical in a given situation than ones with larger weights.[51][52] That is, a sensitivity analysis may shed light on issues not anticipated at the beginning of a study. This, in turn, may dramatically improve the effectiveness of the initial study and assist in the successful implementation of the final solution.

Time-critical decision making[edit]

Producing time-critical, accurate knowledge about the state of a system (the effect) under computational and data-acquisition (cause) constraints is a major challenge, especially when the knowledge required is critical to system operation and the safety of operators or the integrity of costly equipment is at stake, e.g. during manufacturing or during environment substrate drilling. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific state of the system, is the core challenge of this research. Sensitivity analysis may be used to identify which set of input data signals has a significant impact on the set of system state information (i.e. the output). Through a cause-effect analysis technique, sensitivity analysis can be used to support the filtering of unsolicited data, reducing the communication and computational load on a standard supervisory control and data acquisition system.[7]

Related concepts[edit]

Sensitivity analysis is closely related to uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify which source of uncertainty weighs more on the study's conclusions.

The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself.
In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.