- 1. Statistics Lab. Rodolfo Metulini, IMT Institute for Advanced Studies, Lucca, Italy. Lesson 2 - Application to the Central Limit Theorem - 14.01.2014
- 2. Introduction. Modern statistics was built and developed around the normal distribution. Academics often say that, if the empirical distribution is normal (or approximately normal), everything works well. Whether this holds depends mainly on the sample size. That said, it is important to understand under which circumstances we can state that a distribution is normal. Two founding statistical theorems are helpful here: the Central Limit Theorem and the Law of Large Numbers.
- 3. The Law of Large Numbers (LLN). Suppose we have a random variable X with expected value E(X) = µ. We extract n observations from X (say x1, x2, ..., xn). If we define the sample mean X̄n = (1/n) Σᵢ xᵢ = (x1 + x2 + ... + xn)/n, the LLN states that, for n → ∞, X̄n → µ.
- 4. The Central Limit Theorem (CLT). Suppose we have a random variable X with expected value E(X) = µ and variance V(X) = σ². We extract n observations from X (say x1, x2, ..., xn), and define X̄n = (1/n) Σᵢ xᵢ = (x1 + x2 + ... + xn)/n. X̄n has expected value µ and variance σ²/n. When n → ∞ (in practice, n > 30), X̄n ∼ N(µ, σ²/n), whatever the distribution of X may be. N.B.: if X itself is normally distributed, then X̄n ∼ N(µ, σ²/n) even for n < 30.
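The CLT statement above can be checked numerically. The sketch below (values, replication count, and seed are illustrative assumptions) simulates many sample means of size n = 30 from N(100, 10²) and compares their mean and variance with µ and σ²/n:

```r
# Illustrative check of the CLT statement (assumed values: mu = 100,
# sigma = 10, n = 30; seed chosen only for reproducibility)
set.seed(1)
mu <- 100; sigma <- 10; n <- 30
xbar <- replicate(10000, mean(rnorm(n, mean = mu, sd = sigma)))
mean(xbar)   # close to mu = 100
var(xbar)    # close to sigma^2 / n = 100 / 30, about 3.33
```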
- 5. CLT: Empirics. To better understand the CLT, it is helpful to examine the theorem empirically, step by step, introducing the necessary R commands along the way. In the first part, we will show how to draw and visualize a sample of random numbers from a distribution. We will then examine the mean and standard deviation of the sample, and finally the distribution of the sample means.
- 6. Drawing random numbers - 1. We already introduced the use of the letters d, p and q in relation to the various distributions (e.g. normal, uniform, exponential). A reminder of their use follows: d is for density: it is used to find values of the probability density function. p is for probability: it is used to find the probability that the random variable lies to the left of a given number. q is for quantile: it is used to find the quantiles of a given distribution. There is a fourth letter, namely r, used to draw random numbers from a distribution. For example, runif and rexp would be used to draw random numbers from the uniform and exponential distributions, respectively.
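As a quick illustration of the four prefixes applied to the normal distribution (the specific input values here are just examples):

```r
# d, p, q and r applied to the normal distribution (illustrative values)
dnorm(0)        # density of N(0, 1) at 0, about 0.399
pnorm(1.96)     # P(X <= 1.96) for X ~ N(0, 1), about 0.975
qnorm(0.975)    # the 0.975 quantile of N(0, 1), about 1.96
set.seed(1)     # seed only for reproducibility
rnorm(3, mean = 100, sd = 10)   # three random draws from N(100, 10^2)
```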
- 7. Drawing random numbers - 2. Let's use the rnorm command to draw 500 numbers at random from a normal distribution having mean 100 and standard deviation (sd) 10: > x = rnorm(500, mean = 100, sd = 10) The result, typing x in the R console, is a list of 500 numbers extracted at random from a normal distribution with mean 100 and sd 10. When you examine the numbers stored in the variable x, there is a sense that you are pulling random numbers clumped about a mean of 100. However, a histogram of this selection provides a different picture of the data: > hist(x, prob = TRUE)
- 8. Drawing random numbers - Comments. Several comments are in order regarding the histogram in the figure. 1. The histogram is approximately normal in shape. 2. The balance point of the histogram appears to be located near 100, suggesting that the random numbers were drawn from a distribution having mean 100. 3. Almost all of the values lie within 3 increments of 10 (i.e. three standard deviations) of the mean, suggesting that the random numbers were drawn from a normal distribution having standard deviation 10.
- 9. Drawing random numbers - a new drawing. Let's try the experiment again, drawing a new set of 500 random numbers from the normal distribution having mean 100 and standard deviation 10: > x = rnorm(500, mean = 100, sd = 10) > hist(x, prob = TRUE, ylim = c(0, 0.04)) Take a look at the histogram: it is different from the first one; however, it shares some common traits: (1) it appears normal in shape; (2) it appears to be balanced around 100; (3) all values appear to occur within 3 increments of 10 of the mean. This is strong evidence that the random numbers have been drawn from a normal distribution having mean 100 and sd 10. We can support this claim by superimposing a normal density curve: > curve(dnorm(x, mean = 100, sd = 10), 70, 130, add = TRUE, lwd = 2, col = "red")
- 10. The curve command. The curve command is new. Some comments on its use follow: 1. In its simplest form, the syntax curve(f(x), from =, to =) draws the function defined by f(x) on the interval (from, to). Our function is dnorm(x, mean = 100, sd = 10), which curve sketches on the interval (from, to). 2. The notation from = and to = may be omitted if the arguments are passed to the curve command in the proper order: function first, value of from second, value of to third. That is what we have done. 3. If the argument add is set to TRUE, the curve is added to the existing figure. If the argument is omitted (or FALSE), a new plot is drawn, erasing the previous graph.
- 11. The distribution of X̄n (the sample mean). In our previous example we drew 500 random numbers from a normal distribution with mean 100 and standard deviation 10. This gives ONE sample of n = 500. Now the question is: what is the mean of our sample? > mean(x) [1] 100.14132 If we take another sample of 500 random numbers from the SAME distribution, we get a new sample with a different mean: > x = rnorm(500, mean = 100, sd = 10) > mean(x) [1] 100.07884 What happens if we draw a sample several times?
- 12. Producing a vector of sample means. We will repeatedly sample from the normal distribution. Each of the 500 samples will select 5 random numbers (instead of 500) from the normal distribution having mean 100 and sd 10. We will then find the mean of each of those samples. We begin by declaring the mean and the standard deviation, then the sample size: > mu = 100; sigma = 10 > n = 5 We need some place to store the means of the samples, so we initialize a vector xbar to contain 500 zeros: > xbar = rep(0, 500)
- 13. Producing a vector of sample means - the for loop. It is easy to draw a sample of size n = 5 from the normal distribution having mean mu = 100 and standard deviation sigma = 10: we simply issue the command rnorm(n, mean = mu, sd = sigma). To find the mean of this result, we wrap it as mean(rnorm(n, mean = mu, sd = sigma)). The final step is to store this result in the vector xbar. We must then repeat the same process an additional 499 times, which requires the use of a for loop: > for (i in 1:500) { xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)) }
- 14. The for loop. The i in for (i in 1:500) is called the index of the for loop. The index i is first set equal to 1, then the body of the loop is executed. On the next iteration, i is set equal to 2 and the body is executed again. The loop continues in this manner, incrementing by 1, finally setting the index i to 500; after that last iteration, the for loop terminates. In the body of the loop we have xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)): this draws a sample of size 5 from the normal distribution, calculates the mean of the sample, and stores the result in xbar[i]. When the for loop completes its 500 iterations, the vector xbar contains the means of 500 samples of size 5 drawn from the normal distribution having mu = 100 and sigma = 10: > hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1))
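Putting the steps of the last two slides together, the whole simulation can be run as one self-contained sketch (the seed is an assumption, added only for reproducibility):

```r
# Simulate 500 sample means of size n = 5 from N(100, 10^2)
set.seed(42)                     # assumed seed, for reproducibility only
mu <- 100; sigma <- 10
n <- 5
xbar <- rep(0, 500)
for (i in 1:500) {
  xbar[i] <- mean(rnorm(n, mean = mu, sd = sigma))
}
mean(xbar)   # close to mu = 100
sd(xbar)     # close to sigma / sqrt(n) = 10 / sqrt(5), about 4.47
hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1))
```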
- 15. Distribution of X̄n - observations. 1. The previous histograms described the shape of 500 randomly selected numbers; here, the histogram describes the distribution of 500 different sample means, each of which is found by selecting n = 5 random numbers from the normal distribution. 2. The distribution of xbar appears normal in shape, even though the sample size is relatively small (n = 5). 3. The balance point appears to occur near 100. This can be checked with the command > mean(xbar), which gives the mean of the sample means; it is almost equal to the mean of the parent distribution of the random numbers. 4. The distribution of the sample means appears to be narrower than the distribution of the random numbers.
- 16. Increasing the sample size. Let's repeat the last experiment, but this time drawing samples of size n = 10 from the same distribution (mu = 100, sigma = 10): > mu = 100; sigma = 10 > n = 10 > xbar = rep(0, 500) > for (i in 1:500) { xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)) } > hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1)) The histogram produced is even narrower than with n = 5.
- 17. Key Ideas. 1. When we select samples from a normal distribution, the distribution of the sample means is also normal in shape. 2. The mean of the distribution of sample means appears to be the same as the mean of the random numbers (the parent population); compare the balance points. 3. By increasing the size of our samples, the histogram becomes narrower. In fact, we would expect a more accurate estimate of the mean of the parent population if we take the mean from a larger sample. 4. Imagine drawing sample means from samples of size n = ∞: the histogram would be exactly concentrated (with probability 1) at X̄ = µ.
- 18. Summary. We finish by restating the CLT: 1. If you draw samples from a normal distribution, then the distribution of the sample means is also normal. 2. The mean of the distribution of the sample means is identical to the mean of the parent population. 3. The larger the sample size drawn, the narrower the spread of the distribution of the sample means.
- 19. Homework. Experiment 1: Draw the xbar histogram for n = 1000. What is the shape of the histogram? Experiment 2: Repeat the full experiment, drawing random numbers and sample means from (1) a uniform and (2) a Poisson distribution. Is the histogram of xbar normal in shape for n = 5 and for n = 30? Experiment 3: Repeat the full experiment using real data instead of random numbers. (HINT: select samples of size n = 5 from the real data, not using rnorm.) Recommended: Try to evaluate the agreement of the sample mean histogram with the normal distribution by means of a qq-plot and the Shapiro-Wilk test.
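For the recommended check, a minimal sketch could look as follows (the seed and sample sizes are illustrative assumptions); points close to the reference line in the qq-plot and a large Shapiro-Wilk p-value are both consistent with normality:

```r
# Assess normality of 500 sample means of size n = 5 (illustrative values)
set.seed(7)                                            # assumed seed
xbar <- replicate(500, mean(rnorm(5, mean = 100, sd = 10)))
qqnorm(xbar); qqline(xbar)     # points near the line suggest normality
shapiro.test(xbar)             # large p-value: no evidence against normality
```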
- 20. Application to the Law of Large Numbers. Experiment: toss a coin 100 times. This is like repeating 100 times a random draw from a Bernoulli distribution with parameter p = 0.5. We expect to get heads (value = 1) 50 times and tails (value = 0) 50 times, if the coin is fair. In practice, however, this does not happen: repeating the experiment, we obtain a distribution centered at 50, but spread out. Let's define X̄n as the mean of the number of heads across n experiments. For n → ∞, X̄n → 50.
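The coin-toss experiment above can be sketched directly in R (the number of repeated experiments and the seed are assumptions, chosen for illustration):

```r
# Repeat the 100-toss experiment many times and average the head counts
set.seed(123)                                   # assumed seed
n_exp <- 1000                                   # assumed number of experiments
heads <- rbinom(n_exp, size = 100, prob = 0.5)  # heads in each experiment
mean(heads)   # approaches 50 as n_exp grows, as the LLN predicts
hist(heads)   # distribution centered at 50, but spread out
```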
