Statistics 522: Sampling and Survey Techniques
                                        Topic 6

Topic Overview
This topic will cover

   • Sampling with unequal probabilities

   • Sampling one primary sampling unit

   • One-stage sampling with replacement


Unequal probabilities
   • Recall πi is the probability that unit i is selected as part of the sample.

   • Most designs we have studied so far have the πi equal.

   • Now we consider general designs where the πi can vary with i.

   • There are situations where this can give much better results.

Example 6.1
   • Survey of nursing home residents in Philadelphia to determine preferences on life-
     sustaining treatments

   • 294 nursing homes with a total of 37,652 beds (number of residents not known at the
     planning stage)

   • Use cluster sampling

   • Suppose we choose an SRS of the 294 nursing homes and then an SRS of 10 residents
     of each selected home.

   • A nursing home with 20 beds has the same probability of being sampled as a nursing
     home with 1000 beds.

   • 10 residents from the 20-bed home represent fewer people than 10 residents from the
     1000-bed home.




Self-weighting
   • This procedure gives a sample that is not self-weighted.

   • Alternatives that are self-weighting:

       – A one-stage cluster sample
       – Sample a fixed percentage of the residents of each selected nursing home.

The two-stage cluster design
   • The two-stage cluster design (SRS of homes, then equal proportion SRS of residents
     in each selected home)

       – Gives a mathematically valid estimator

SRS at first stage
Three shortcomings:

   • We would expect ti to be proportional to the number of beds Mi in nursing home i, so
     estimators will have large variance when the Mi vary widely.

   • Equal percentage sampling in each selected home may be difficult to administer.

   • Cost is not known in advance (we don't know whether large or small homes will be in
     the sample).

The study
   • They drew a sample of 57 nursing homes with probabilities proportional to the number
     of beds.

   • Then they took an SRS of 30 beds (and their occupants) from a list of all beds within
     each selected nursing home.

Properties
   • Each bed is equally likely to be in the sample (note beds vs occupants).

   • The cost is known before selecting the sample.

   • The same number of interviews is taken at each nursing home.

   • The estimators will have smaller variance.




Key ideas
  • When sampling with unequal probabilities, we deliberately vary the selection proba-
    bilities.

  • We compensate by using weights in the estimation.

   • The key is that we know the selection probabilities.

Notation
  • The probability that psu i is in the sample is πi .

  • The probability that psu i is selected on the first draw is ψi .

  • We will consider an artificial situation where n = 1, so πi = ψi .

Sampling one psu
  • Sample size is n = 1.

  • Suppose we are interested in estimating the population total.

  • ti is the total for psu i.

  • To illustrate the ideas, we will assume that we know the whole population.


The Example
  • N = 4 supermarkets

  • Size (in square meters) varies.

  • Select n = 1 with probabilities proportional to size.

  • Record total sales

  • Using the data from one store we want to estimate total sales for the four stores in the
    population.

The population
                                 Store    Size          ψi   ti
                                   A      100         1/16 11
                                   B      200         2/16 20
                                   C      300         3/16 24
                                   D     1000        10/16 245
                                 Total   1600            1 300

Weights
   • The weights wi are the inverses of the selection probabilities: wi = 1/ψi .

   • The weighted estimator of the population total is t̂ψ = Σ wi ti , summed over the sample.

   • There are four possible samples.

   • We calculate t̂ψ for each.


The samples
                              Sample     ψi       wi      ti     t̂ψ
                                A       1/16      16      11    176
                                B       2/16       8      20    160
                                C       3/16    16/3      24    128
                                D      10/16   16/10     245    392

Sampling distribution of the estimate t̂ψ
                                    Sample     ψi      t̂ψ
                                      1       1/16     176
                                      2       2/16     160
                                      3       3/16     128
                                      4      10/16     392

Mean of the sampling distribution of t̂ψ

                  E[t̂ψ] = (1/16)176 + (2/16)160 + (3/16)128 + (10/16)392 = 300 = t

   • So t̂ψ is unbiased.

   • This will always be true:

                                   E[t̂ψ] = Σi ψi wi ti = Σi ti = t

Variance of the sampling distribution of t̂ψ

  Var(t̂ψ) = (1/16)(176 − 300)² + (2/16)(160 − 300)² + (3/16)(128 − 300)² + (10/16)(392 − 300)² = 14248

Compare with the variance for an SRS of size 1, where the estimate is N ti = 4ti (values 44, 80, 96, 980):

  Var(t̂SRS) = (1/4)(44 − 300)² + (1/4)(80 − 300)² + (1/4)(96 − 300)² + (1/4)(980 − 300)² = 154488
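These calculations can be verified numerically using only the values from the tables above; a minimal sketch:

```python
# Check the supermarket example: with n = 1 drawn with probability psi_i,
# the estimate t_i / psi_i is unbiased for the true total t = 300.
psi = [1/16, 2/16, 3/16, 10/16]   # selection probabilities, proportional to size
t = [11, 20, 24, 245]             # store sales totals; they sum to 300

t_hat = [ti / p for ti, p in zip(t, psi)]                     # 176, 160, 128, 392
mean = sum(p * th for p, th in zip(psi, t_hat))               # 300: unbiased
var = sum(p * (th - mean) ** 2 for p, th in zip(psi, t_hat))  # 14248

# Under an SRS of size 1 the estimate is N * t_i = 4 * t_i (44, 80, 96, 980).
var_srs = sum((4 * ti - 300) ** 2 for ti in t) / 4            # 154488
```

The pps design has roughly a tenth of the SRS variance here because store D, which dominates the total, is sampled most of the time.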

Interpretation
  • Store D is the largest and we expect it to account for a large portion of the total sales.

  • Therefore, we give it a higher probability of being in the sample (10/16) than it would
    have with an SRS (1/4).

  • If it is selected, we multiply its sales by (16/10) to estimate total sales.


One-stage sampling with replacement
  • Suppose n > 1 and we sample with replacement.

   • This implies πi = 1 − (1 − ψi )^n .

  • Probability that item i is selected on the first draw is the same as the probability that
    item i is selected on any other draw.

  • Sampling with replacement gives us n independent estimates of the population total,
    one for each unit in sample.

  • We average these n estimates.

   • The estimated variance is the variance of the n estimates divided by n.

Example 6.2
   • N = 15 classes of elementary statistics

  • Mi students in class i (i = 1 to 15)

  • Values of Mi range from 20 to 100.

  • We want a sample of 5 classes.

  • Each student in the selected classes will fill out a questionnaire.

  • (It is possible for the same class to be selected more than once.)

Randomization
  • There are a total of 647 students in these classes.

  • Select 5 random numbers between 1 and 647.

  • Think about ordering the students by class.

  • Each random number corresponds to a student and the corresponding class will be in
    the sample.

This method
  • This method is called the cumulative-size method.
  • It is based on M1 , M1 + M2 , M1 + M2 + M3 , . . .
  • An alternative is to use the cumulative sums of the ψi and select random numbers
    between 0 and 1.
  • For this example, ψi = Mi /647
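A minimal sketch of the cumulative-size method. The slides give only the total of 647 students, so the individual class sizes below are hypothetical values that sum to 647:

```python
import bisect
import itertools
import random

# Hypothetical class sizes M_i (15 classes, 20 <= M_i <= 100, total 647).
M = [20, 100, 38, 30, 55, 40, 48, 25, 62, 45, 30, 35, 44, 35, 40]
cum = list(itertools.accumulate(M))       # M1, M1+M2, ..., 647

def draw_class(rng):
    """Draw one class pps: pick a random student number, map it to its class."""
    u = rng.randint(1, cum[-1])           # random number between 1 and 647
    return bisect.bisect_left(cum, u)     # index of the class containing student u

rng = random.Random(522)
sample = [draw_class(rng) for _ in range(5)]   # with replacement: repeats possible
```

Each class's chance on any one draw is Mi/647, exactly the ψi above.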

Alternative
   • Systematic sampling is often used as an alternative in this setting.
        – The basic idea is the same.
        – Not technically sampling with replacement
        – Works well when systematic sampling works well.
        – See page 186 for details.
   • Lahiri's method
        – Involves two stages of randomization
        – Rejection sampling: corresponds to the classroom problem in Problem Set 2.
        – Can be inefficient.
        – See page 187 for details.
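Lahiri's rejection method can be sketched as follows (the sizes M here are hypothetical):

```python
import random

def lahiri_draw(M, rng):
    """Draw one psu with probability proportional to M[i] by rejection."""
    m_max = max(M)
    while True:
        i = rng.randrange(len(M))     # stage 1: pick a psu uniformly at random
        u = rng.randint(1, m_max)     # stage 2: pick a number in 1..max(M)
        if u <= M[i]:                 # accept with probability M[i] / m_max;
            return i                  # otherwise reject both draws and start over
```

When the sizes vary a lot, most proposals are rejected, which is the inefficiency noted above.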


Estimation Theory
   • Let Qi be the number of times unit i occurs in the sample.

   • Then t̂ψ = (1/n) Σi Qi ti /ψi .

   • The estimated variance of t̂ψ is

                              (1/(n(n − 1))) Σi Qi (ti /ψi − t̂ψ )²

   • The estimate and its estimated variance are both unbiased.
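A sketch of these two formulas, assuming draws is the list of selected psu indices with repeats (summing over the draws is the same as weighting each distinct unit by its multiplicity Qi):

```python
def pps_wr_estimate(draws, t, psi):
    """Estimate the total and its variance from n pps draws with replacement.

    draws: selected psu indices, repeats allowed (a unit drawn Q_i times
    contributes Q_i terms, matching the Q_i in the formulas).
    """
    n = len(draws)
    t_hat = sum(t[i] / psi[i] for i in draws) / n
    var_hat = sum((t[i] / psi[i] - t_hat) ** 2 for i in draws) / (n * (n - 1))
    return t_hat, var_hat

# Example with the supermarket population: stores A and D drawn (n = 2).
est, v = pps_wr_estimate([0, 3], t=[11, 20, 24, 245],
                         psi=[1/16, 2/16, 3/16, 10/16])
```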

Choosing the selection probabilities
  • We want small variance for our estimator.
       – Often, ti is related to the size of the psu.
       – We can take ψi proportional to Mi or some other measure of the size of psu i.

PPS
  • This procedure is called sampling with probability proportional to size (pps).

  • The formulas for the estimate and variance can be simplified for this special case.
                                        ψi = Mi /K        (K = Σi Mi )

                                        ti /ψi = K ȳi

  • See page 190 for details

  • See Example 6.5 on pages 190-192

Two-stage sampling with replacement
  • Basic ideas are very similar to one-stage sampling.

  • ψi is the probability that psu i is selected on the first (or any) draw.

  • We take a sample of mi ssus from each selected psu.

Sampling ssu’s
  • Usually we use an SRS.

  • Alternatives include

       – systematic sampling
       – any other probability sampling method

  • Note if a psu is selected more than once, a separate independent second stage sample
    is required.

Estimates and SE’s
  • Weights are used to make the estimators unbiased.

  • Formulas are similar to those for one-stage.

  • See (6.8) and (6.9) on page 192




Outline of the procedure
  1. Determine the ψi .

  2. Select the n psus (with replacement).

  3. Select the ssus.

  4. Estimate t for each selected psu:

                                        t̂ψ = weight × t̂

  5. The average of these is the overall estimate t̂ψ .

  6. The SE is the standard error of these estimates (sd/√n).
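Steps 4–6 can be sketched as a small helper (hypothetical names; psu_estimates are the per-psu estimates t̂i from the second stage, psi the draw probabilities):

```python
import math
import statistics

def pps_wr_se(psu_estimates, psi):
    """Steps 4-6: weight each per-psu estimate by 1/psi_i, average the n
    weighted values, and take SE = sample sd / sqrt(n)."""
    weighted = [th / p for th, p in zip(psu_estimates, psi)]    # step 4
    estimate = sum(weighted) / len(weighted)                    # step 5
    se = statistics.stdev(weighted) / math.sqrt(len(weighted))  # step 6
    return estimate, se
```

With the supermarket totals for stores A and D this gives estimate 284 and SE 108, whose square matches the one-stage variance formula.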


Unequal probability sampling without replacement
   • ψi is the probability of selection on the first draw.

   • The probability of selection on later draws depends on which units were selected on
     earlier draws.

Estimation
   • πi is called the inclusion probability. (Σpop πi = n)

   • πi,j is the probability that both psu i and psu j are in the sample. (Σj≠i πi,j = (n − 1)πi )

   • Weights are again the inverses of the selection probabilities:

        – πi /n plays the role that ψi played in with-replacement sampling.

   • The recommended procedure is to use the Horvitz-Thompson (HT) estimator and its
     associated SE. (t̂HT = Σsample t̂i /πi )

   • See pages 196–197 for details.

   • This estimator can be generalized to other designs that do not use replacement.
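The HT point estimate is just the sum in parentheses above; a minimal sketch (its SE needs the joint inclusion probabilities πi,j and is left to the cited pages):

```python
def horvitz_thompson(t_hats, pi):
    """HT estimate of the population total: sum of t_hat_i / pi_i over the
    sampled psus, where pi_i is the inclusion probability of psu i."""
    return sum(th / p for th, p in zip(t_hats, pi))
```

In the elephant story later in these notes, sampling Dumbo alone with π = 99/100 gives exactly 100y/99.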


Randomization Theory
Framework is

   • Probability sampling without replacement for the psus for the first stage

   • Sampling at the second stage is independent of sampling at the first stage


Horvitz-Thompson
  • Randomization theory can be used to prove the Horvitz-Thompson Theorem.

      – Expected value of the estimator is t.
      – Formula for the variance of the estimator

The estimator
   • t̂HT = Σ t̂i /πi

       – where the sum is over the psu's selected in the first stage.

   • The idea behind the proofs is to condition on which psus are in the sample.

   • Study pages 205–210.


Model
   • One-way random effects ANOVA model:

                                            Yi,j = Ai + εi,j

     where

       – the Ai are random variables with mean µ and variance σA²
       – the εi,j are random variables with mean 0 and variance σ²
       – the Ai and the εi,j are uncorrelated

The pps estimator
   • πi = nMi /K is the inclusion probability.

                                   T̂P = Σi∈S (K/(nMi )) T̂i

   • We rewrite this as a weighted estimator:

                                   t̂i = (Mi /mi ) Σj Yi,j

                                   t̂P = Σi,j wi,j Yi,j

     where wi,j = K/(n mi ), so that t̂P agrees with T̂P above.

   • Take expected values to show that the estimator is unbiased.


Variance
  • The variance can be computed.

  • See page 211

  • The variance depends on which psu’s are selected through the Mi .

  • The variance is smallest when psu’s with the largest Mi are chosen.

Recall
   • The estimate of the population total is the weighted average of the t̂i for the selected
     psus.

   • The weights wi are the inverses of the probabilities of selection.


Elephants
  • A circus needed to ship its 50 elephants.

  • They needed to estimate the total weight of the animals.

  • It is not easy to weigh 50 elephants and they were in a hurry.

  • They had data from three years ago.

Sample
  • The owner wanted to base the estimate on a sample.

   • Dumbo's weight was equal to the herd average three years ago.

  • The owner wanted to weigh Dumbo and multiply by 50.

  • The statistician said:

NO
  • You have to use probability sampling and the Horvitz-Thompson estimator.

  • They compromised:

       – The probability of selecting Dumbo was set as 99/100.
       – The probability of selecting each of the other elephants was 1/4900.




Who was selected
  • Dumbo, of course.

   • The owner was happy and said that now we can estimate the weight of the 50 elephants
     as 50 times Dumbo's weight, 50y.

  • The statistician said

NO
   • The estimate of the total weight of the 50 elephants should be Dumbo's weight divided
     by his probability of selection.

   • This is y/(99/100), or 100y/99.

   • The theory behind this estimator is rigorous.

What if
  • The owner asked

        – What if the randomization had selected Jumbo, the largest elephant in the herd?

   • The statistician replied: 4900y, where y is Jumbo's weight.

Conclusion
  • The statistician lost his circus job and became a teacher of statistics.

  • bad model; highly variable estimator

  • Due to Basu (1971).





More Related Content

What's hot

Cluster and multistage sampling
Cluster and multistage samplingCluster and multistage sampling
Cluster and multistage sampling
suncil0071
 
Introduction To Statistics
Introduction To StatisticsIntroduction To Statistics
Introduction To Statistics
albertlaporte
 
Introduction to statistics 2013
Introduction to statistics 2013Introduction to statistics 2013
Introduction to statistics 2013
Mohammad Ihmeidan
 
Sampling methods and sample size
Sampling methods and sample size  Sampling methods and sample size
Sampling methods and sample size
mdanaee
 
Applications to Central Limit Theorem and Law of Large Numbers
Applications to Central Limit Theorem and Law of Large NumbersApplications to Central Limit Theorem and Law of Large Numbers
Applications to Central Limit Theorem and Law of Large Numbers
University of Salerno
 

What's hot (20)

Cluster and multistage sampling
Cluster and multistage samplingCluster and multistage sampling
Cluster and multistage sampling
 
Central limit theorem
Central limit theoremCentral limit theorem
Central limit theorem
 
Introduction To Statistics
Introduction To StatisticsIntroduction To Statistics
Introduction To Statistics
 
Sampling distribution
Sampling distributionSampling distribution
Sampling distribution
 
Normal and standard normal distribution
Normal and standard normal distributionNormal and standard normal distribution
Normal and standard normal distribution
 
Sampling Distributions
Sampling DistributionsSampling Distributions
Sampling Distributions
 
Systematic ranom sampling for slide share
Systematic ranom sampling for slide shareSystematic ranom sampling for slide share
Systematic ranom sampling for slide share
 
Introduction to statistics 2013
Introduction to statistics 2013Introduction to statistics 2013
Introduction to statistics 2013
 
Parametric tests
Parametric testsParametric tests
Parametric tests
 
Cluster sampling
Cluster samplingCluster sampling
Cluster sampling
 
Inferential statistics
Inferential statisticsInferential statistics
Inferential statistics
 
What Are Simple Random Sampling and Stratified Random Sampling Analytical Tec...
What Are Simple Random Sampling and Stratified Random Sampling Analytical Tec...What Are Simple Random Sampling and Stratified Random Sampling Analytical Tec...
What Are Simple Random Sampling and Stratified Random Sampling Analytical Tec...
 
Stratified random sampling
Stratified random samplingStratified random sampling
Stratified random sampling
 
Part 2 Cox Regression
Part 2 Cox RegressionPart 2 Cox Regression
Part 2 Cox Regression
 
Sampling methods and sample size
Sampling methods and sample size  Sampling methods and sample size
Sampling methods and sample size
 
Review of Statistics
Review of StatisticsReview of Statistics
Review of Statistics
 
Hypothesis testing: A single sample test
Hypothesis testing: A single sample testHypothesis testing: A single sample test
Hypothesis testing: A single sample test
 
Statistics "Descriptive & Inferential"
Statistics "Descriptive & Inferential"Statistics "Descriptive & Inferential"
Statistics "Descriptive & Inferential"
 
Sampling and Sample Types
Sampling  and Sample TypesSampling  and Sample Types
Sampling and Sample Types
 
Applications to Central Limit Theorem and Law of Large Numbers
Applications to Central Limit Theorem and Law of Large NumbersApplications to Central Limit Theorem and Law of Large Numbers
Applications to Central Limit Theorem and Law of Large Numbers
 

Viewers also liked (9)

sampling ppt
sampling pptsampling ppt
sampling ppt
 
Sampling slides
Sampling slidesSampling slides
Sampling slides
 
Sampling
SamplingSampling
Sampling
 
Statistical sampling
Statistical samplingStatistical sampling
Statistical sampling
 
Bsm presentation cluster sampling
Bsm presentation cluster samplingBsm presentation cluster sampling
Bsm presentation cluster sampling
 
Presentation On Questionnaire
Presentation On QuestionnairePresentation On Questionnaire
Presentation On Questionnaire
 
Questionnaire
QuestionnaireQuestionnaire
Questionnaire
 
Questionnaire Design
Questionnaire DesignQuestionnaire Design
Questionnaire Design
 
Sampling
SamplingSampling
Sampling
 

Similar to Cluster Sampling

Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhdChapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
beshahashenafe20
 
Introduction to sampling
Introduction to samplingIntroduction to sampling
Introduction to sampling
Situo Liu
 

Similar to Cluster Sampling (20)

Statistics lecture 7 (ch6)
Statistics lecture 7 (ch6)Statistics lecture 7 (ch6)
Statistics lecture 7 (ch6)
 
Statistics-3 : Statistical Inference - Core
Statistics-3 : Statistical Inference - CoreStatistics-3 : Statistical Inference - Core
Statistics-3 : Statistical Inference - Core
 
Statistical thinking
Statistical thinkingStatistical thinking
Statistical thinking
 
Statistik dan Probabilitas Yuni Yamasari 2.pptx
Statistik dan Probabilitas Yuni Yamasari 2.pptxStatistik dan Probabilitas Yuni Yamasari 2.pptx
Statistik dan Probabilitas Yuni Yamasari 2.pptx
 
Chi square
Chi squareChi square
Chi square
 
Chap 6
Chap 6Chap 6
Chap 6
 
Data analysis
Data analysisData analysis
Data analysis
 
Lecture 4 - probability distributions (2).pptx
Lecture 4 - probability distributions (2).pptxLecture 4 - probability distributions (2).pptx
Lecture 4 - probability distributions (2).pptx
 
Stat.pptx
Stat.pptxStat.pptx
Stat.pptx
 
Resampling methods
Resampling methodsResampling methods
Resampling methods
 
7. binomial distribution
7. binomial distribution7. binomial distribution
7. binomial distribution
 
Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhdChapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
Chapter Seven - .pptbhhhdfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhd
 
Probability distribution
Probability distributionProbability distribution
Probability distribution
 
Introduction to sampling
Introduction to samplingIntroduction to sampling
Introduction to sampling
 
Elementary statistical inference1
Elementary statistical inference1Elementary statistical inference1
Elementary statistical inference1
 
Learn from Example and Learn Probabilistic Model
Learn from Example and Learn Probabilistic ModelLearn from Example and Learn Probabilistic Model
Learn from Example and Learn Probabilistic Model
 
Statistics-2 : Elements of Inference
Statistics-2 : Elements of InferenceStatistics-2 : Elements of Inference
Statistics-2 : Elements of Inference
 
LR 9 Estimation.pdf
LR 9 Estimation.pdfLR 9 Estimation.pdf
LR 9 Estimation.pdf
 
JM Statr session 13, Jan 11
JM Statr session 13, Jan 11JM Statr session 13, Jan 11
JM Statr session 13, Jan 11
 
t distribution, paired and unpaired t-test
t distribution, paired and unpaired t-testt distribution, paired and unpaired t-test
t distribution, paired and unpaired t-test
 

Recently uploaded

The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
heathfieldcps1
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
PECB
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
QucHHunhnh
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 

Recently uploaded (20)

INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
Asian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptxAsian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptx
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural ResourcesEnergy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 

Cluster Sampling

  • 1. Statistics 522: Sampling and Survey Techniques Topic 6 Topic Overview This topic will cover • Sampling with unequal probabilities • Sampling one primary sampling unit • One-stage sampling with replacement Unequal probabilities • Recall πi is the probability that unit i is selected as part of the sample. • Most designs we have studied so far have the πi equal. • Now we consider general designs where the πi can vary with i. • There are situations where this can give much better results. Example 6.1 • Survey of nursing home residents in Philadelphia to determine preferences on life- sustaining treatments • 294 nursing homes with a total of 37,652 beds (number of residents not known at the planning stage) • Use cluster sampling • Suppose we choose an SRS of the 294 nursing homes and then an SRS of 10 residents of each selected home. • A nursing home with 20 beds has the same probability of being sampled as a nursing home with 1000 beds. • 10 residents from the 20 bed home represent fewer people than 10 residents from 1000 bed home. 1
  • 2. Self-weighting • This procedure gives a sample that is not self-weighted. • Alternatives that are self-weighted. – A one-stage cluster sample – Sample a fixed percentage of the residents of each selected nursing home. The two-stage cluster design • The two-stage cluster design (SRS of homes, then equal proportion SRS of residents in each selected home) – Gives a mathematically valid estimator SRS at first stage Three shortcomings: • We would expect ti to be proportional to the number of beds in nursing home i, so estimators will have large variance (Mi ). • Equal percentage sampling in each selected home may be difficult to administer. • Cost is not known in advance (dont know if you will get large or small homes in sample). The study • They drew a sample of 57 nursing homes with probabilities proportional to the number of beds. • Then they took an SRS of 30 beds (and their occupants) from a list of all beds within each selected nursing home. Properties • Each bed is equally likely to be in the sample (note beds vs occupants). • The cost is known before selecting the sample. • The same number of interviews is taken at each nursing home. • The estimators will have smaller variance 2
  • 3. Key ideas • When sampling with unequal probabilities, we deliberately vary the selection proba- bilities. • We compensate by using weights in the estimation. • The key is that we know the selection probabilities Notation • The probability that psu i is in the sample is πi . • The probability that psu i is selected on the first draw is ψi . • We will consider an artificial situation where n = 1, so πi = ψi . Sampling one psu • Sample size is n = 1. • Suppose we are interested in estimating the population total. • ti is the total for psu i. • To illustrate the ideas, we will assume that we know the whole population. The Example • N = 4 supermarkets • Size (in square meters) varies. • Select n = 1 with probabilities proportional to size. • Record total sales • Using the data from one store we want to estimate total sales for the four stores in the population. The population Store Size ψi ti A 100 1/16 11 B 200 2/16 20 C 300 3/16 24 D 1000 10/16 245 Total 1600 1 300 3
  • 4. Weights • The weights wi are the inverses of the selection probabilities ψi . • The weighted estimator of the population total is tψ = ˆ wi ti . • There are four possible samples. • We calculate tψ for each. ˆ The samples Sample ψi wi ti ˆ tψ A 1/16 16 11 176 B 2/16 8 20 160 C 3/16 16/3 24 128 D 10/16 16/10 245 392 ˆ Sampling distribution of the estimate tψ Sample ψi ˆ tψ 1 1/16 176 2 2/16 160 3 3/16 128 4 10/16 392 ˆ Mean of the sampling distribution of tψ ˆ 1 2 3 10 E tψ = 176 + 160 + 128 + 392 = 300 = t 16 16 16 16 • So tψ is unbiased. ˆ • This will always be true. ˆ E tψ = ψi wi ti = ti ˆ Variance of the sampling distribution tψ 1 2 3 10 ˆ Var(tψ ) = (176 − 300)2 + (160 − 300)2 + (128 − 300)2 + (392 − 300)2 = 14248 16 16 16 16 Compare with the variance for an SRS: 1 1 1 1 Var(tSRS ) = (176 − 300)2 + (160 − 300)2 + (128 − 300)2 + (392 − 300)2 = 154488 ˆ 4 4 4 4 4
  • 5. Interpretation • Store D is the largest and we expect it to account for a large portion of the total sales. • Therefore, we give it a higher probability of being in the sample (10/16) than it would have with an SRS (1/4). • If it is selected, we multiply its sales by (16/10) to estimate total sales. One-stage sampling with replacement • Suppose n > 1 and we sample with replacement. • This implies πi = 1 − (1 − ψi )n . • Probability that item i is selected on the first draw is the same as the probability that item i is selected on any other draw. • Sampling with replacement gives us n independent estimates of the population total, one for each unit in sample. • We average these n estimates. • Estimated variance is variance of the estimates divided by n Example 6.2 • N = 15 classes of elementary stat • Mi students in class i (i = 1 to 15) • Values of Mi range from 20 to 100. • We want a sample of 5 classes. • Each student in the selected classes will fill out a questionnaire. • (It is possible for the same class to be selected more than once.) Randomization • There are a total of 647 students in these classes. • Select 5 random numbers between 1 and 647. • Think about ordering the students by class. • Each random number corresponds to a student and the corresponding class will be in the sample. 5
This method

   • This method is called the cumulative-size method.

   • It is based on the cumulative sums M1 , M1 + M2 , M1 + M2 + M3 , . . .

   • An alternative is to use the cumulative sums of the ψi and select random numbers between 0 and 1.

   • For this example, ψi = Mi /647.

Alternative

   • Systematic sampling is often used as an alternative in this setting.

      – The basic idea is the same.
      – It is not technically sampling with replacement.
      – It works well when systematic sampling works well.
      – See page 186 for details.

   • Lahiri's method

      – Involves two stages of randomization
      – Rejection sampling: corresponds to the classroom problem in Problem Set 2.
      – Can be inefficient.
      – See page 187 for details.

Estimation theory

   • Let Qi be the number of times unit i occurs in the sample.

   • Then t̂ψ = (1/n) Σ Qi ti /ψi .

   • The estimated variance of t̂ψ is

      (1/(n(n − 1))) Σ Qi (ti /ψi − t̂ψ)²

   • The estimate and its estimated variance are both unbiased.

Choosing the selection probabilities

   • We want small variance for our estimator.

      – Often, ti is related to the size of the psu.
      – We can take ψi proportional to Mi or some other measure of the size of psu i.
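The estimation formulas above translate directly into code. This sketch applies them to hypothetical draws from the earlier four-store example; the drawn stores ["D", "A", "D"] are invented for illustration (repeats are allowed under with-replacement sampling).

```python
# With-replacement estimator written with the counts Q_i, as in the notes:
#   t-hat_psi = (1/n) * sum_i Q_i * t_i / psi_i
#   est. var  = (1/(n(n-1))) * sum_i Q_i * (t_i/psi_i - t-hat_psi)^2
from collections import Counter

def estimate(draws, t, psi):
    """draws: list of selected unit labels, repeats allowed."""
    n = len(draws)
    Q = Counter(draws)
    est = sum(q * t[i] / psi[i] for i, q in Q.items()) / n
    var = sum(q * (t[i] / psi[i] - est) ** 2 for i, q in Q.items()) / (n * (n - 1))
    return est, var

psi = {"A": 1/16, "B": 2/16, "C": 3/16, "D": 10/16}
t   = {"A": 11, "B": 20, "C": 24, "D": 245}

# hypothetical n = 3 draws
est, var = estimate(["D", "A", "D"], t, psi)   # est = 320.0, var = 5184.0
```

Here D contributes t/ψ = 392 twice and A contributes 176 once, so the estimate is (2·392 + 176)/3 = 320.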
PPS

   • This procedure is called sampling with probability proportional to size (pps).

   • The formulas for the estimate and variance simplify in this special case:

      ψi = Mi /K    and    ti /ψi = K ȳi

     (where K = Σ Mi is the total number of ssus in the population)

   • See page 190 for details.

   • See Example 6.5 on pages 190-192.

Two-stage sampling with replacement

   • The basic ideas are very similar to one-stage sampling.

   • ψi is the probability that psu i is selected on the first (or any) draw.

   • We take a sample of mi ssus from each selected psu.

Sampling ssus

   • Usually we use an SRS.

   • Alternatives include

      – systematic sampling
      – any other probability sampling method

   • Note that if a psu is selected more than once, a separate independent second-stage sample is required for each selection.

Estimates and SEs

   • Weights are used to make the estimators unbiased.

   • The formulas are similar to those for one-stage sampling.

   • See (6.8) and (6.9) on page 192.
Outline of the procedure

   1. Determine the ψi .
   2. Select the n psus (with replacement).
   3. Select the ssus.
   4. For each selected psu, estimate t by weight × t̂i .
   5. The average of these n estimates is t̂ψ .
   6. The SE is the standard error of these estimates (sd/√n).

Unequal probability sampling without replacement

   • ψi is the probability of selection on the first draw.

   • The probability of selection on later draws depends on which units were selected on earlier draws.

Estimation

   • πi is called the inclusion probability. (Summing over the population, Σ πi = n.)

   • πi,j is the probability that both psu i and psu j are in the sample. (Σ over j ≠ i of πi,j = (n − 1)πi .)

   • Weights are the inverses of the selection probabilities: we use πi /n in place of the with-replacement ψi .

   • The recommended procedure is to use the Horvitz-Thompson (HT) estimator and the associated SE: t̂HT = Σ t̂i /πi , summing over the sample.

   • See pages 196-197 for details.

   • This estimator can be generalized to other designs that do not use replacement.

Randomization theory

The framework is

   • probability sampling without replacement for the psus at the first stage

   • sampling at the second stage that is independent of sampling at the first stage
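A minimal sketch of the HT estimator follows; the psu labels, estimated totals, and inclusion probabilities are hypothetical, chosen only to show the form of the calculation.

```python
# Horvitz-Thompson estimator: t-hat_HT = sum over sampled psus of t-hat_i / pi_i.
def t_hat_HT(t_hat, pi):
    """t_hat, pi: dicts keyed by the labels of the selected psus."""
    return sum(t_hat[i] / pi[i] for i in t_hat)

# hypothetical sample of n = 2 psus
pi    = {"A": 0.5, "C": 0.25}     # inclusion probabilities
t_hat = {"A": 11, "C": 24}        # estimated psu totals
est = t_hat_HT(t_hat, pi)         # 11/0.5 + 24/0.25 = 118.0
```

Each psu total is weighted by the inverse of its inclusion probability, so psus that were unlikely to enter the sample stand in for a larger share of the population.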
Horvitz-Thompson

   • Randomization theory can be used to prove the Horvitz-Thompson Theorem:

      – The expected value of the estimator is t.
      – There is a formula for the variance of the estimator.

The estimator

   • t̂HT = Σ t̂i /πi , where the sum is over the psus selected in the first stage.

   • The idea behind the proofs is to condition on which psus are in the sample.

   • Study pages 205-210.

Model

   • One-way random effects anova model

      Yi,j = Ai + εi,j

      where

      – the Ai are random variables with mean µ and variance σA²
      – the εi,j are random variables with mean 0 and variance σ²
      – the Ai and the εi,j are uncorrelated

The pps estimator

   • πi = nMi /K is the inclusion probability.

   • T̂P = Σ (K/(nMi )) T̂i

   • We rewrite this as a weighted estimator. With

      t̂i = (Mi /mi ) Σ over j of Yi,j

      we get

      t̂P = Σ wi,j Yi,j    where wi,j = (K/(nMi ))(Mi /mi ) = K/(n mi )

   • Take expected values to show that the estimator is unbiased.
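The weighted form of the pps estimator can be sketched as follows. K, n, and the per-psu data are hypothetical; the sketch assumes each listed psu was selected once (a psu selected twice would contribute two independent second-stage samples).

```python
# pps weighted estimator sketch: with pi_i = n*M_i/K, each sampled
# response Y_ij carries weight w_ij = (K/(n*M_i)) * (M_i/m_i) = K/(n*m_i).
K, n = 647, 2                        # K = total ssus; n = psus drawn (hypothetical)

# per selected psu: (M_i, sampled responses Y_ij), all hypothetical
psus = [(40, [1, 0, 1, 1]), (87, [0, 1, 1])]

def t_hat_P(psus, K, n):
    total = 0.0
    for M_i, ys in psus:
        w = K / (n * len(ys))        # w_ij = K/(n*m_i); note M_i cancels
        total += w * sum(ys)
    return total

est = t_hat_P(psus, K, n)
```

The cancellation of Mi in the weight is the point of pps: if every psu takes the same second-stage sample size m, every sampled ssu gets the same weight K/(nm) and the design is self-weighting.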
Variance

   • The variance can be computed; see page 211.

   • The variance depends on which psus are selected, through the Mi .

   • The variance is smallest when the psus with the largest Mi are chosen.

Recall

   • The estimate of the population total is the weighted average of the t̂i for the selected psus.

   • The weights wi are the inverses of the probabilities of selection.

Elephants

   • A circus needed to ship its 50 elephants.

   • They needed to estimate the total weight of the animals.

   • It is not easy to weigh 50 elephants, and they were in a hurry.

   • They had weight data from three years ago.

Sample

   • The owner wanted to base the estimate on a sample.

   • Dumbo had a weight equal to the herd average three years ago.

   • The owner wanted to weigh Dumbo and multiply by 50.

   • The statistician said: NO.

   • You have to use probability sampling and the Horvitz-Thompson estimator.

   • They compromised:

      – The probability of selecting Dumbo was set at 99/100.
      – The probability of selecting each of the other 49 elephants was 1/4900.
Who was selected

   • Dumbo, of course.

   • The owner was happy and said: now we can estimate the weight of the 50 elephants as 50 times Dumbo's weight, 50y.

   • The statistician said: NO.

   • The Horvitz-Thompson estimate of the total weight of the 50 elephants is Dumbo's weight divided by his probability of selection.

   • This is y/(99/100) = 100y/99.

   • The theory behind this estimator is rigorous.

What if

   • The owner asked: what if the randomization had selected Jumbo, the largest elephant in the herd?

   • The statistician replied: 4900y, where y is Jumbo's weight.

Conclusion

   • The statistician lost his circus job and became a teacher of statistics.

   • Bad model; highly variable estimator.

   • Due to Basu (1971).
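The elephant arithmetic can be checked with a hypothetical herd. The selection probabilities are the ones from the story; the weights are invented for illustration.

```python
# Basu's elephant example: the HT estimate is weight / selection probability.
from fractions import Fraction as F

p_dumbo, p_other = F(99, 100), F(1, 4900)
assert p_dumbo + 49 * p_other == 1          # probabilities sum to 1

# hypothetical weights in kg: Dumbo plus 49 other elephants
w = {"Dumbo": 4000, **{f"E{i}": 4000 + 10 * i for i in range(1, 50)}}
p = {e: (p_dumbo if e == "Dumbo" else p_other) for e in w}

ht = {e: w[e] / p[e] for e in w}            # HT estimate if elephant e is drawn
true_total = sum(w.values())

# unbiased: E[HT] = sum_e p_e * (w_e / p_e) = true total ...
assert sum(p[e] * ht[e] for e in w) == true_total
# ... but wildly variable: Dumbo gives roughly his own weight (100y/99),
# while any other elephant gives 4900 times its weight.
```

The estimator is exactly unbiased, yet almost every realization is absurd: that is Basu's point about trusting randomization theory while ignoring the model.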