1. PRESENTATION ON:
THEORY OF ERRORS.
GUIDED BY: BHAIRAV THAKUR (ASSISTANT PROFESSOR, CIVIL DEPARTMENT, LIT)
PRESENTED BY: RIYA PRAJAPATI (150860106054)
LAXMI INSTITUTE OF TECHNOLOGY, SARIGAM
2. INTRODUCTION:
Measurements always contain errors.
Since quantities such as area, volume, elevation and
slope are computed from the measured quantities
through known relationships, the errors in the
measured quantities propagate into the calculated
quantities.
The errors in the measured quantities should be
eliminated or minimised before they are used for
computing other quantities.
3. TYPES OF ERRORS:
ERRORS CAN BE CLASSIFIED INTO THREE TYPES:
MISTAKES
SYSTEMATIC ERRORS
ACCIDENTAL ERRORS
4. TYPES OF ERRORS:
MISTAKES
• Mistakes are errors that arise from inexperience,
carelessness, or inattention on the part of the
observer.
SYSTEMATIC ERRORS
• A systematic error is an error that, under the same
conditions, will always be of the same magnitude and sign.
• It follows some physical or mathematical law.
• Also known as cumulative error.
ACCIDENTAL ERRORS
• Accidental errors are those which remain after mistakes
and systematic errors have been eliminated.
• They occur due to lack of perfection in the human eye.
• They do not follow any specified law.
5. THE LAWS OF ACCIDENTAL
ERRORS:
Accidental errors follow a definite law of
probability.
This law defines the occurrence of errors and can
be expressed in the form of an equation, which is
used to compute the probable value of a quantity.
6. MOST IMPORTANT FEATURES:
Small errors tend to be more probable than large
errors; that is, they are the most probable.
Positive and negative errors have equal chances
of occurring. The curve is thus symmetrical about
the mean error value.
Large errors occur infrequently and are therefore
improbable.
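A quick numerical check of these features, on the assumption (not stated on the slide) that accidental errors follow a normal distribution; the Python below is an illustrative sketch only:

```python
import numpy as np

# Assumption: model accidental errors as normally distributed.
rng = np.random.default_rng(seed=1)
errors = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Feature 1: small errors are more probable than large ones.
print("P(|e| < 0.5) =", round(np.mean(np.abs(errors) < 0.5), 3))
print("P(|e| > 2.0) =", round(np.mean(np.abs(errors) > 2.0), 3))

# Feature 2: positive and negative errors have equal chances.
print("fraction positive =", round(np.mean(errors > 0), 3))
```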
7. 1) PROBABLE ERROR (Es):
The probable error of a single measurement is
given by:
Es = ±0.6745 √(∑v² / (n − 1))
= ±0.6745 × (standard deviation)
where,
Es = probable error of a single observation,
v = difference between any single observation
and the mean of the series,
n = number of observations.
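A minimal Python sketch of this formula (the function name and the sample data are mine, for illustration only):

```python
import numpy as np

def probable_error_single(observations):
    """Es = ±0.6745 * sqrt(sum(v^2) / (n - 1)), with v the
    residual of each observation about the mean."""
    obs = np.asarray(observations, dtype=float)
    v = obs - obs.mean()
    n = obs.size
    return 0.6745 * np.sqrt(np.sum(v**2) / (n - 1))

# Five repeated measurements of the same length (metres).
print(probable_error_single([25.42, 25.45, 25.39, 25.44, 25.40]))
```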
8. 2) PROBABLE ERROR OF THE MEAN
(Em):
The probable error of the mean or average is given
by:
Em = ±0.6745 √(∑v² / (n(n − 1)))
= ±(Es / √n)
3) PROBABLE ERROR OF SUM:
When a measurement is the result of the sums
and differences of several (n) observations having
different probable errors E1, E2, …, En, the probable
error of the measurement is given by:
Esum = √(E1² + E2² + E3² + … + En²).
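Both formulas in a short Python sketch (names and sample values are illustrative, not from the slides):

```python
import numpy as np

def probable_error_mean(observations):
    """Em = ±0.6745 * sqrt(sum(v^2) / (n(n - 1))) = Es / sqrt(n)."""
    obs = np.asarray(observations, dtype=float)
    v = obs - obs.mean()
    n = obs.size
    return 0.6745 * np.sqrt(np.sum(v**2) / (n * (n - 1)))

def probable_error_sum(probable_errors):
    """Esum = sqrt(E1^2 + E2^2 + ... + En^2) for a measurement
    formed from sums and differences of n observations."""
    e = np.asarray(probable_errors, dtype=float)
    return np.sqrt(np.sum(e**2))

print(probable_error_mean([25.42, 25.45, 25.39, 25.44, 25.40]))
print(probable_error_sum([0.02, 0.03, 0.01]))  # three chained lines
```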
9. 4) MEAN SQUARE
ERROR (m.s.e.):
The mean square error is equal to the square root
of the arithmetic mean of the squares of the
individual errors.
m.s.e. = ±√((v1² + v2² + v3² + … + vn²) / n)
= ±√(∑v² / n).
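As a sketch (note the divisor n here, against n − 1 in the probable-error formula):

```python
import numpy as np

def mean_square_error(residuals):
    """m.s.e. = ±sqrt(sum(v^2) / n)."""
    v = np.asarray(residuals, dtype=float)
    return np.sqrt(np.sum(v**2) / v.size)

print(mean_square_error([0.02, -0.01, 0.03, -0.02]))
```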
10. LAWS OF WEIGHTS:
The weight of an observation is a number giving
an indication of its worth, trustworthiness, precision,
or the confidence placed upon the observed value.
Thus, if a certain observation is of weight 3, it
means that it is three times as reliable as
an observation of weight 1. When two quantities
are assumed to be equally reliable, the observed
values are said to be of equal weight.
11. THE LAWS OF WEIGHTS ARE AS
FOLLOWS:
The weight of the arithmetic mean of a number of
observations of unit weight is equal to the number
of observations.
The weight of the weighted arithmetic mean is
equal to the sum of the individual weights.
The weight of the algebraic sum of two or more
quantities is equal to the reciprocal of the sum of
the reciprocals of the individual weights.
If a quantity of a given weight is multiplied by a
factor, the weight of the result is obtained by
dividing its given weight by the square of that
factor.
12. If a quantity of a given weight is divided by a
factor, the weight of the result is obtained by
multiplying its given weight by the square of
that factor.
If an equation is multiplied by its own weight,
the weight of the resulting equation is equal to
the reciprocal of the weight of that equation.
The weight of an equation remains
unchanged if all the signs of the equation are
changed.
The weight of an equation remains unchanged
if the equation is added to or subtracted from
a constant.
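A small worked illustration of the multiplication and division rules above (the numbers are invented for the example):

```python
w = 4        # weight of an observed quantity x
k = 2        # an arbitrary factor

# Rule: multiplying x by k divides the weight by k^2.
weight_kx = w / k**2        # -> 1

# Rule: dividing x by k multiplies the weight by k^2.
weight_x_over_k = w * k**2  # -> 16

print(weight_kx, weight_x_over_k)
```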
13. THEORY OF LEAST SQUARES:
“THE MOST PROBABLE VALUE OF A
QUANTITY EVALUATED FROM A NUMBER OF
OBSERVATIONS IS THE ONE FOR WHICH THE
SUM OF THE SQUARES OF THE RESIDUAL
ERRORS IS A MINIMUM”.
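For direct observations of equal weight this principle picks out the arithmetic mean, as the brute-force sketch below illustrates (data invented for the example):

```python
import numpy as np

obs = np.array([25.42, 25.45, 25.39, 25.44, 25.40])

# Sum of squared residuals for a candidate value x.
def sum_sq(x):
    return np.sum((obs - x) ** 2)

# Scan candidates: the minimum falls at the arithmetic mean,
# which is therefore the most probable value.
xs = np.linspace(obs.min(), obs.max(), 10_001)
best = xs[np.argmin([sum_sq(x) for x in xs])]
print(best, obs.mean())  # the two agree (to grid resolution)
```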
14. DETERMINATION OF PROBABLE
ERROR:
Direct observations of equal weights:
P.e. of single observation of unit weight.
P.e. of single observation of weight w.
P.e. of the arithmetic mean.
Direct observations of unequal weights:
P.e. of single observation of unit weight.
P.e. of single observation of weight w.
P.e. of weighted arithmetic mean.
Indirect observations of independent quantities.
Indirect observations involving conditional
equations.
Computed quantities.
15. 1)DIRECT OBSERVATIONS OF
EQUAL WEIGHTS:
The p.e. of a single observation of unit weight may be
calculated by:
Es = ±0.6745 √(∑v² / (n − 1))
∑v² = v1² + v2² + v3² + … + vn²
where,
v = residual error
= difference between the observed value of a quantity and the probable value
of the quantity,
n = number of observations.
p.e. of a single observation of weight w:
Esw = (p.e. of single observation of unit weight) / √w
= Es / √w
p.e. of the arithmetic mean:
Em = ±0.6745 √(∑v² / (n(n − 1)))
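The only new piece relative to the earlier probable_error_single sketch is Esw = Es/√w; for example (values invented):

```python
import math

Es = 0.018   # p.e. of a single observation of unit weight (illustrative)
w = 4        # weight of the observation in question
Esw = Es / math.sqrt(w)
print(Esw)   # 0.009: a weight-4 observation is twice as precise
```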
16. 2) DIRECT OBSERVATIONS OF
UNEQUAL WEIGHTS:
p.e. of a single observation of unit weight:
Esu = ±0.6745 √(∑wv² / (n − 1))
p.e. of a single observation of weight w:
Esuw = ±0.6745 √(∑wv² / (w(n − 1)))
= Esu / √w
p.e. of the weighted arithmetic mean:
Ewm = ±0.6745 √(∑wv² / (∑w (n − 1)))
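A sketch of the unequal-weight formulas (function name and data invented; Ewm denotes the p.e. of the weighted arithmetic mean):

```python
import numpy as np

def probable_errors_unequal(observations, weights):
    """Esu = ±0.6745 * sqrt(sum(w v^2) / (n - 1))
    Ewm = ±0.6745 * sqrt(sum(w v^2) / (sum(w) (n - 1)))"""
    obs = np.asarray(observations, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean_w = np.sum(w * obs) / np.sum(w)   # weighted arithmetic mean
    v = obs - mean_w                       # residuals about that mean
    n = obs.size
    swv2 = np.sum(w * v**2)
    esu = 0.6745 * np.sqrt(swv2 / (n - 1))
    ewm = 0.6745 * np.sqrt(swv2 / (np.sum(w) * (n - 1)))
    return esu, ewm

print(probable_errors_unequal([30.25, 30.28, 30.26], [1, 2, 3]))
```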
17. DISTRIBUTION OF ERROR OF
FIELD MEASUREMENT:
Whenever observations are made in the field, it is
always necessary to check for the closing error, if
any.
The following rules should be applied for the
closing errors:
The correction to be applied to an observation is
inversely proportional to the weight of the
observation.
The correction to be applied to an observation is
directly proportional to the square of its probable error.
In the case of a line of levels, the correction to be applied
is proportional to the length.
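A sketch of the first rule, distributing a closing error with corrections inversely proportional to the weights (values invented):

```python
import numpy as np

def distribute_closing_error(closing_error, weights):
    """Corrections inversely proportional to the weights, scaled
    so that together they absorb the whole closing error."""
    w = np.asarray(weights, dtype=float)
    inv = 1.0 / w
    return -closing_error * inv / inv.sum()

# A level circuit closing +0.012 m over sections of weight 2, 1, 1:
# the heavier-weight section receives the smaller correction.
print(distribute_closing_error(0.012, [2, 1, 1]))
```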
18. OBSERVATION EQUATIONS
ACCOMPANIED BY CONDITIONED
EQUATIONS:
In case of conditioned quantities, one or more
conditioned equations are available in addition to
the observation equations.
When observation equations are accompanied by
conditioned equations, the most probable values
may be obtained by the following methods:
The normal equations
The method of differences
The method of correlates.
19. 1) THE NORMAL EQUATIONS:
A normal equation is one which is formed by
multiplying each equation by the coefficient of the
unknown whose normal equation is to be found,
and by adding the equations thus formed.
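In matrix form this rule amounts to AᵀA x = Aᵀb; a sketch with invented observation equations (the names A, b, x are mine):

```python
import numpy as np

# Observation equations:  x = 3.0,  y = 1.5,  x + y = 4.6.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, 1.5, 4.6])

# Multiply each equation by the coefficient of the unknown and
# add: that is exactly A^T A x = A^T b. Then solve.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)   # most probable values of x and y
```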
2) THE METHOD OF
DIFFERENCES:
If the normal equations involve large numbers, it
becomes very laborious to solve the
simultaneous equations. The method of
differences is used so as to simplify the normal
equations.
20. PROCEDURE:
Assume the corrections k1, k2, k3 ,… kn.
Express the discrepancy between the observed
values and the assumed values by subtracting the
observed value from the assumed value.
Form the normal equations in k1, k2, k3 ,… kn.
Solve the normal equations in k1, k2, k3 ,… kn.
Add the corrections algebraically to the observed
values to obtain the most probable values.
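The same invented example as above, reworked by differences; note that I take the discrepancy as observed minus assumed so that the corrections add on directly (sign conventions vary):

```python
import numpy as np

# Observations:  x = 3.0,  y = 1.5,  x + y = 4.6.
# Assume x = 3.0 + k1 and y = 1.5 + k2 and solve for the small
# corrections k1, k2 instead of the full unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d = np.array([0.0, 0.0, 4.6 - (3.0 + 1.5)])  # discrepancies

k = np.linalg.solve(A.T @ A, A.T @ d)  # normal equations in k1, k2
print(3.0 + k[0], 1.5 + k[1])          # same most probable values
```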
21. 3) METHOD OF CORRELATES:
The method of correlates is also called the method of
condition equations or the method of Lagrange
multipliers.
In this method, all the condition equations are collected
in one place. To these conditions, one more equation from
the theory of least squares is added.
Each conditioned equation is multiplied by an unknown
multiplier called a correlate, or Lagrangian multiplier (λ).
The resultant condition equations are then combined
with the least square condition and after differentiation
expressed as a linear function of the correlates.
A set of correlate normal equations equal in number to
the number of conditions is obtained after back
substituting in the condition equations.
These equations are then solved to find the values of
correlates, which can then be expressed in terms of the
corrections.
22. PROCEDURE:
Assume suitable corrections C1, C2, C3 ,… Cn for each
of the observed quantities.
Write all the condition equations.
Write the equation from the theory of least squares.
Partially differentiate each condition equation and the
equation of least squares.
Multiply each differentiated equation of condition by
correlates -λ1, -λ2, -λ3 ,…-λn. Add the results to the
differentiated equation of the least squares.
Equate to zero the coefficients of 𝛿C1, 𝛿C2, 𝛿C3 ,… 𝛿
Cn to get the values of C1, C2, C3 ,… Cn in terms of λ1,
λ2, λ3 ,…λn etc.
23. Substitute the values of C1, C2, C3 ,… Cn into the
conditioned equations and solve the
simultaneous equations to obtain the values of λ1,
λ2, λ3 ,…λn.
Knowing the values of the correlates and weights,
calculate the values of C1, C2, C3 ,… Cn .
Determine the most probable values by
algebraically adding the corrections C1, C2, C3, …,
Cn to the observed values.
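A compact sketch of the whole procedure for a single condition equation: three invented triangle angles that must sum to 180°, adjusted with one correlate λ:

```python
import numpy as np

obs = np.array([60.002, 59.999, 60.005])  # observed angles (degrees)
w = np.array([1.0, 2.0, 1.0])             # their weights

# Condition on the corrections: C1 + C2 + C3 = -(closing error).
closing = obs.sum() - 180.0

# Least squares: minimise sum(w * C^2) subject to the condition.
# Differentiating and introducing the correlate lam gives
# C_i = lam / (2 w_i); substituting into the condition fixes lam.
lam = -2.0 * closing / np.sum(1.0 / w)
C = lam / (2.0 * w)
print(C, (obs + C).sum())  # corrected angles now sum to 180
```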