4. Contents
Statistical Inference
Estimation
A. Point Estimation
B. Estimator
C. Estimate
Criteria of a good Estimator
A. Unbiasedness
B. Consistency
C. Efficiency
D. Sufficiency
5. Statistical Inference
Usually the population is not completely known, but we can obtain information
about its parameters by using samples drawn from it. Statistical inference
deals with such problems. It is defined as:
“The art of drawing conclusions or inferences about the unknown
parameters of a population from the limited information contained in
the sample.”
Two important parts of statistical inference are:
1. Estimation of parameter
2. Testing of Hypotheses
6. Estimation
It is a procedure of making judgment about the true but unknown values
of the population parameters from the sample observations
It is further divided into two parts
Point Estimation
Interval Estimation
7. Point Estimation
If we express an estimate by a single value, it is called a point estimate.
For example, the value of X bar (the sample mean) computed from a
sample of size n is a point estimate of the population parameter µ.
Estimator
A rule or formula used to estimate the value of a parameter is called an estimator.
The estimator of the mean is:
X bar = (Σᵢ₌₁ⁿ Xᵢ) / n
8. Estimate
An estimate is a numerical value of the unknown parameter obtained by
applying a formula (estimator) to a particular sample.
If θ is a parameter, Ô denotes its estimate.
Example: Let a sample of size 5 be 2, 4, 5, 9, 10. Then an estimate of the
population mean µ is obtained by applying the estimator
X bar = (Σᵢ₌₁ⁿ Xᵢ) / n (estimator)
X bar = (2 + 4 + 5 + 9 + 10) / 5 = 30 / 5 = 6 (estimate)
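The arithmetic above can be checked with a short script; this is a minimal sketch in Python using the sample values from the example:

```python
# Point estimate of the population mean from the sample 2, 4, 5, 9, 10.
sample = [2, 4, 5, 9, 10]

# Apply the estimator X bar = (sum of X_i) / n to this particular sample.
n = len(sample)
x_bar = sum(sample) / n

print(x_bar)  # 6.0 -- the point estimate of mu
```

The estimator is the rule `sum(sample) / n`; the estimate is the number 6 it produces for this sample.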
9. Criteria for good point Estimators
A point estimator is considered good if it satisfies various criteria.
Four such criteria are:
A. Unbiasedness
B. Consistency
C. Efficiency
D. Sufficiency
10. Unbiasedness
The bias of an estimator Ô is defined as the difference between its
expected value and the true parameter θ:
Bias = E(Ô) − θ
An estimator is said to be unbiased if the statistic used as an estimator
has its expected value equal to the true value of the population parameter
being estimated:
E(Ô)=θ
The estimator is defined to be positively biased when
E(Ô)>θ
The estimator is defined to be negatively biased when
E(Ô)<θ
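A simulation can illustrate bias. The sketch below (with an assumed Normal(10, 2) population, chosen only for illustration) draws many samples and averages two estimators: the sample mean, which is unbiased for µ, and the sample variance computed with divisor n, which is negatively biased for σ²:

```python
import random

# Assumed population: Normal with mu = 10, sigma = 2 (illustration only).
random.seed(42)
mu, sigma, n, reps = 10.0, 2.0, 5, 20000

mean_estimates = []
var_n_estimates = []  # sample variance with divisor n: biased for sigma^2
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    x_bar = sum(sample) / n
    mean_estimates.append(x_bar)
    var_n_estimates.append(sum((x - x_bar) ** 2 for x in sample) / n)

# E(X bar) = mu, so the first average should be close to 10.
# E of the divisor-n variance is ((n-1)/n) * sigma^2 = 3.2, below sigma^2 = 4.
print(sum(mean_estimates) / reps)
print(sum(var_n_estimates) / reps)
```

The first average settles near µ = 10 (zero bias), while the second settles below σ² = 4, showing E(Ô) < θ, i.e. negative bias.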
11. Consistency
An estimator is said to be consistent if the statistic used as an estimator
becomes closer and closer to the population parameter being estimated as the
sample size n increases:
lim (n→∞) P[|Ô − θ| ≤ ε] = 1
A consistent estimator may or may not be unbiased.
The sample mean X bar = (1/n) Σᵢ₌₁ⁿ Xᵢ, which is an unbiased estimator of µ, is
also a consistent estimator of the mean µ.
12. To prove that an estimator is consistent, we may state a criterion that is
sometimes quite useful, as follows:
“Let Ô be an estimator of θ based on a sample of size n. Then Ô is a
consistent estimator of θ if Ô is unbiased (or its bias approaches 0) and
Var(Ô) approaches 0 as n approaches infinity.”
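The criterion can be seen empirically: for the sample mean, Var(X bar) = σ²/n, which shrinks to 0 as n grows. A simulation sketch (assumed Normal(10, 2) population, values chosen only for illustration):

```python
import random

# Assumed population: Normal with mu = 10, sigma = 2 (illustration only).
random.seed(0)
mu, sigma, reps = 10.0, 2.0, 2000

variances = {}
for n in (10, 100, 1000):
    x_bars = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
              for _ in range(reps)]
    m = sum(x_bars) / reps
    # Empirical Var(X bar); theory gives sigma^2 / n = 4 / n.
    variances[n] = sum((x - m) ** 2 for x in x_bars) / reps
    print(n, variances[n])
```

The printed variances fall roughly as 0.4, 0.04, 0.004, so the variance of X bar approaches 0 and the criterion confirms that X bar is consistent for µ.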
13. Efficiency
An unbiased estimator is said to be efficient if the variance of its
sampling distribution is smaller than that of the sampling distribution of
any other unbiased estimator of the same parameter. In other words, if
T1 and T2 are two unbiased estimators of the same parameter θ, then T1
is said to be more efficient than T2 if Var(T1) < Var(T2).
i. An estimator is most efficient if it is unbiased and has the minimum
variance among all unbiased estimators of the parameter.
ii. The relative efficiency of T1 compared to T2 is given by the ratio
Ef = Var(T2)/Var(T1), which is greater than 1 when T1 is the more
efficient estimator.
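A classic comparison is the sample mean versus the sample median as estimators of µ for normal data: both are unbiased, but the mean has the smaller variance. A simulation sketch (assumed standard normal population; theory gives Ef ≈ π/2 ≈ 1.57 for large n):

```python
import random

# Assumed population: standard Normal(0, 1) (illustration only).
random.seed(1)
n, reps = 101, 5000  # odd n so the median is the single middle value

means, medians = [], []
for _ in range(reps):
    sample = sorted(random.gauss(0.0, 1.0) for _ in range(n))
    means.append(sum(sample) / n)      # estimator T1: sample mean
    medians.append(sample[n // 2])     # estimator T2: sample median

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Relative efficiency Ef = Var(T2) / Var(T1).
ef = variance(medians) / variance(means)
print(ef)  # greater than 1: the mean is more efficient than the median
```

Since Ef > 1, Var(mean) < Var(median), so the sample mean is the more efficient of the two unbiased estimators here.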
15. Sufficiency
An estimator is said to be sufficient if the statistic used as an estimator
uses all the information contained in the sample.
Any statistic that is not computed from all the values in the sample is not
a sufficient estimator. The sample mean X bar is a sufficient estimator of
µ. This implies that X bar contains all the information in the sample
relevant to estimating the population parameter µ, and no other
estimator, such as the sample median, can add any further information
about µ.