Error is the difference between the measured value and the true value, while uncertainty quantifies the doubt about a measurement result. Uncertainty arises from sources such as repeatability, reproducibility, stability, bias, drift, resolution, and the reference standard. We report uncertainty alongside a measurement, with a confidence level, to convey the range within which we believe the true value lies: for example, 20 cm ± 1 cm at 95% confidence, or more generally, measured result = best estimate ± uncertainty.
2. • Error is the difference between the measured value and the
‘true value’ of the thing being measured.
• The total error is a combination of both systematic error and
random error.
• Uncertainty is a quantification of the doubt about the
measurement result.
• Uncertainty characterizes the range of values within which the
true value is asserted to lie, with some level of confidence.
• Whenever possible we try to correct for any known errors: for
example, by applying corrections from calibration certificates.
• But any error whose value we do not know is a source of
uncertainty.
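The idea of correcting for known errors can be sketched in a few lines of Python. The reading and certificate correction below are made-up numbers, used only to illustrate the step:

```python
# Minimal sketch (hypothetical numbers): correct a reading using the
# correction stated on a calibration certificate. Any error whose value
# remains unknown after this step is a source of uncertainty.
reading = 100.40       # instrument reading, mm
correction = -0.20     # correction from the calibration certificate, mm
corrected = reading + correction
print(f"corrected value: {corrected:.2f} mm")
```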
3. Sources of Uncertainty in Measurement
– Repeatability
– Reproducibility
– Stability
– Bias
– Drift
– Resolution
– Reference Standard
– Reference Standard Stability
4. Uncertainty
• We are interested in uncertainty of measurement because we wish
to make good quality measurements and to understand the results.
• We may be making the measurements as part of a:
➢Calibration - where the uncertainty of measurement must be
reported on the certificate
➢Test - where the uncertainty of measurement is needed to
determine a pass or fail
➢Or to meet a tolerance
❖ where you need to know the uncertainty before you can
decide whether the tolerance is met
➢Or we may need to read and understand a calibration certificate
or a written specification for a test or measurement
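The tolerance case above can be sketched as a simple decision rule. This is one conservative choice (pass only when the whole uncertainty interval lies inside the limits); the function name and the numeric values are illustrative:

```python
# Hedged sketch: pass/fail against a tolerance, requiring the whole
# interval [value - U, value + U] to lie inside the tolerance limits.
# This is a conservative decision rule; numbers below are made up.
def within_tolerance(value, U, lower, upper):
    """Pass only if the full uncertainty interval is in-tolerance."""
    return lower <= value - U and value + U <= upper

print(within_tolerance(10.02, 0.03, 9.90, 10.10))  # whole interval inside limits
print(within_tolerance(10.08, 0.05, 9.90, 10.10))  # interval exceeds the upper limit
```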
5. Expressing uncertainty of measurement
• Uncertainty of measurement is the doubt that exists about the result
of any measurement
• Since there is always a margin of doubt about any measurement, we
need to ask ‘How big is the margin?’ and ‘How bad is the doubt?’
• Thus, two numbers are needed in order to quantify an uncertainty.
• One is the width of the margin, or interval.
• The other is a confidence level.
• Confidence states how sure we are that the ‘true value’ is within that
margin.
• For example, we might say that the length of a certain stick measures
20 centimetres plus or minus 1 centimetre, at the 95 percent
confidence level.
• This result could be written
20 cm ±1 cm, at a level of confidence of 95%
The statement says that we are 95 percent sure that the stick
is between 19 centimetres and 21 centimetres long.
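The two numbers that quantify an uncertainty, the width of the margin and the confidence level, can be sketched directly using the stick example above:

```python
# Sketch of the two numbers needed to quantify an uncertainty:
# the half-width of the margin (interval) and a confidence level.
best_estimate = 20.0   # cm, measured length of the stick
margin = 1.0           # cm, half-width of the interval
confidence = 95        # percent
lower, upper = best_estimate - margin, best_estimate + margin
print(f"{best_estimate:g} cm ± {margin:g} cm, at a level of confidence of {confidence}%")
print(f"we are {confidence}% sure the true length is between {lower:g} cm and {upper:g} cm")
```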
6. Normal or Gaussian distribution
• Population mean (all N readings):
      μ = (1/N) Σ_{i=1}^{N} x_i
• Sample mean (n readings drawn from the population):
      x̄ = (1/n) Σ_{i=1}^{n} x_i
• Estimated standard deviation of the sample:
      s = sqrt( Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) )
➢ The summation is divided by n − 1 rather than just n
➢ This is done to remove bias that results from
working with a sample rather than a full
population of readings
➢ s is the sample's estimate of the population
standard deviation σ (sigma)
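The sample mean and the n − 1 (Bessel-corrected) estimate of the standard deviation can be sketched as follows; the readings are made-up values:

```python
# Sketch of the sample statistics on a small made-up set of readings:
# the sample mean, then the estimated standard deviation s computed
# with the n - 1 divisor rather than n.
import math

readings = [19.8, 20.1, 20.3, 19.9, 20.4]
n = len(readings)
mean = sum(readings) / n                           # sample mean x-bar
squared_devs = sum((x - mean) ** 2 for x in readings)
s = math.sqrt(squared_devs / (n - 1))              # divide by n - 1, not n
print(f"mean = {mean:.2f}, s = {s:.3f}")
```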
7. Error and uncertainty
• Uncertainty and error are often used interchangeably in physics
experiments to refer to both the accuracy of a measurement and the
precision of a measurement.
• The accuracy of a measurement refers to how it compares to some ideal
value
• Precision refers to how much successive measurements of the same
quantity using the same apparatus differ from each other.
• When referring to accuracy we should use the term error.
• Typically we should discuss a relative error, described as a percentage of the
ideal value.
• When referring to precision we should use the term uncertainty
• But that said, it’s acceptable to use either error or uncertainty to describe
experimental errors, as long as it is made clear exactly what kind of error we
are referring to.
• Uncertainty is probably the better term to use in general.
• Experimental results are generally reported in the form:
Measured result = Best estimate ± uncertainty
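The distinction between accuracy (relative error against an ideal value) and the reported "best estimate ± uncertainty" form can be sketched as below; the measured value and its uncertainty are illustrative numbers, not real data:

```python
# Sketch: accuracy expressed as a relative error against an ideal value,
# and the result reported as "best estimate ± uncertainty".
# The measured value and uncertainty below are illustrative.
ideal = 9.81           # accepted value of g, m/s^2
measured = 9.70        # best estimate from the experiment
uncertainty = 0.05     # estimated uncertainty, m/s^2
relative_error = abs(measured - ideal) / ideal * 100  # percent of ideal
print(f"Measured result = {measured} ± {uncertainty} m/s^2")
print(f"Relative error  = {relative_error:.1f}% of the ideal value")
```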
8. • When repeated measurements give different results, we want to know
how widely spread the readings are. The spread of values tells us
something about the uncertainty of a measurement.
• By knowing how large this spread is, we can begin to judge the quality of
the measurement or the set of measurements
• The usual way to quantify spread is standard deviation.
• The standard deviation of a set of numbers tells us about how different
the individual readings typically are from the average of the set.
• The ‘true’ value for the standard deviation can only be found from a very
large (infinite) set of readings. From a moderate number of values, only
an estimate of the standard deviation can be found.
• The symbol s is usually used for the estimated standard deviation.
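The point that s from a moderate number of readings is only an estimate can be illustrated with two small made-up samples of the same quantity, which give different values of s:

```python
# Sketch: s estimated from a moderate number of readings is not the
# 'true' standard deviation; two samples of the same quantity will
# generally give different estimates (readings below are made up).
import statistics

sample_a = [20.1, 19.9, 20.3, 20.0, 19.8]
sample_b = [20.2, 20.0, 19.9, 20.1, 20.0]
s_a = statistics.stdev(sample_a)   # n - 1 divisor, i.e. the estimate s
s_b = statistics.stdev(sample_b)
print(f"s from sample A: {s_a:.3f}")
print(f"s from sample B: {s_b:.3f}")
```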