
- 1. α and β and power, oh my!
  Type I and Type II errors; power and effect size; shown with animation.
- 2. Significance Testing and the CLT
  The reject-or-not decision is based on the CLT. The CLT is based on "many samples": over many samples, our sample statistics will eventually form a normal(ish) distribution (assuming our conditions are met). We can describe its shape, center, and spread, and we can use what we know about normal curves to find the probability of getting any one particular sample statistic.
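The "many samples" idea is easy to see in a quick simulation. This sketch uses only Python's standard library; the exponential population, sample size, and number of samples are made-up illustration values, not anything from the slides:

```python
import random
import statistics

# Draw many samples from a skewed population (Exponential, mean 1) and
# record each sample's mean; by the CLT the means pile up in a
# normal-ish curve centered at the population mean.
random.seed(1)
n = 50            # size of each sample
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(5000)]

center = statistics.fmean(means)
spread = statistics.stdev(means)
print(center)   # close to the population mean, 1.0
print(spread)   # close to sigma / sqrt(n) = 1 / sqrt(50)
```

Even though the population itself is skewed, the distribution of sample means comes out roughly normal, which is what lets the rest of the slides reason with normal curves.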
- 3. Significance Testing and the CLT
  For significance testing, we start, before we've even taken a sample, with a hypothesis about where the center is. We set a threshold where anything past it leaves us officially "surprised"; that tail area is α, our significance level. The threshold itself is our critical value: t* in the case of sample means, z* in the case of sample proportions.
  [diagram: normal curve centered at μ0 with the area α shaded beyond t*]
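For sample proportions, the critical value z* comes straight off the standard normal curve. A minimal sketch with an assumed one-sided α = 0.05 (the standard library has no t distribution, so t* would need a table or an external package):

```python
from statistics import NormalDist

alpha = 0.05                              # assumed significance level
z_star = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value z*
print(round(z_star, 3))                   # 1.645
```

Anything standardized past 1.645 falls in the shaded α region, so we would call it "surprising."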
- 4. Significance Testing and the CLT
  Take a sample. This one is in keeping with our hypothesized distribution. That doesn't mean our μ0 is right, but it certainly isn't evidence against it. FAIL TO REJECT H0.
- 5. Significance Testing and the CLT
  Take a sample. This statistic isn't in keeping with our hypothesis; in fact, it's a relatively rare occurrence. Reject H0! Embrace Ha!
- 6. What if we made a mistake?
  Rare vs. never: with a normal distribution, it's still possible to land in the tails. You can do everything right and come to the mathematically correct answer, and it can still be a practically incorrect answer: H0 was right, and you shouldn't have rejected. You have made a Type I error.
  How often will we make a Type I error? P(Type I) = α.
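The claim P(Type I) = α can be checked by simulation: run many tests in a world where H0 is actually true and count the false rejections. All the numbers below (μ0 = 0, σ = 1, n = 40, one-sided α = 0.05) are made-up illustration values:

```python
import random
from statistics import NormalDist, fmean

random.seed(2)
alpha, n, trials = 0.05, 40, 4000
z_star = NormalDist().inv_cdf(1 - alpha)   # one-sided critical value

# H0 is TRUE here: the population really is N(mu0=0, sigma=1). Count how
# often a sample still lands past the threshold: those are Type I errors.
rejections = 0
for _ in range(trials):
    xbar = fmean(random.gauss(0, 1) for _ in range(n))
    z = xbar / (1 / n**0.5)                # standardized sample mean
    if z > z_star:
        rejections += 1

print(rejections / trials)   # close to alpha = 0.05
```

Doing everything right still produces a rejection about 5% of the time, exactly as the slide says.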
- 7. What if we didn't make a mistake?
  We rejected H0, and the true center is somewhere else. Notice the relationship between our statistic and the "true" distribution.
  [diagram: two curves, one centered at μ0 and one at the true μ, with the sample statistic x̄ beyond t*]
- 8. What if we didn't make a mistake?
  What is the probability of getting a sample statistic beyond our critical value threshold? P(reject H0 | H0 is wrong) = the power of the test. The higher the power, the better the test is at rejecting the null hypothesis when it is wrong.
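For a one-sided z-test, that power can be computed directly: find the rejection threshold under the hypothesized curve, then ask how much of the "true" curve lies past it. A sketch; μ0, the true μ, σ, n, and α are all assumed values, not numbers from the slides:

```python
from statistics import NormalDist

mu0, mu, sigma, n, alpha = 0.0, 0.5, 1.0, 25, 0.05   # assumed values
se = sigma / n**0.5

# Threshold on the sample-mean scale, set under the H0 curve.
x_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se

# Power: the probability the sample mean lands past the threshold when
# the sampling distribution is really centered at mu.
power = 1 - NormalDist(mu, se).cdf(x_crit)
print(round(power, 3))   # about 0.80
```

The threshold is fixed by H0 and α, but the power depends on where the true curve actually sits.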
- 9. Can we still make a mistake when our null hypothesis is wrong?
  What if we'd gotten a different sample? Look at the "true" distribution: is this result possible? Rare? What would we do with H0? (Remember, we don't know the "true" distribution.) FAIL TO REJECT H0: we have made a Type II error.
- 10. Can we still make a mistake when our null hypothesis is wrong?
  What other kind of mistake can we make? P(Type II) = β. Notice that β is the complement of the power: power of the test = 1 – β.
- 11. Types of errors and their probabilities
  To recap:
  - Type I error: the null hypothesis is correct, but we get a sample statistic that makes us reject H0. Probability: α.
  - Type II error: the null hypothesis is wrong (the distribution is centered somewhere else), but we get a sample statistic that makes us fail to reject H0. Probability: β.
  - Power: the probability of rejecting H0 when it is, in fact, wrong. Probability: 1 – β.
- 12. Types of errors and their probabilities
  How are they related? Power and β both rely on the "true" μ, so calculating β is beyond our scope here. Both also depend on the effect size.
- 13. Types of errors and their probabilities
  How does effect size relate to power and β? The larger the effect size, the larger the power of the significance test, and the smaller the probability of making a Type II error (β). Remember, we do not control the effect size; it is what it is. Just keep in mind its relation to both power and β.
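The effect-size relationship can be seen numerically by holding everything else fixed and varying the distance between μ0 and the true μ. Illustrative values again (μ0 = 0, σ = 1, n = 25, one-sided α = 0.05):

```python
from statistics import NormalDist

sigma, n, alpha = 1.0, 25, 0.05                  # assumed test settings
se = sigma / n**0.5
x_crit = NormalDist().inv_cdf(1 - alpha) * se    # threshold (mu0 = 0)

# Power for three effect sizes mu - mu0: the bigger the effect, the more
# of the "true" curve lies past the threshold.
powers = [1 - NormalDist(effect, se).cdf(x_crit)
          for effect in (0.2, 0.4, 0.6)]
print([round(p, 3) for p in powers])   # strictly increasing
```

Larger effect size, larger power, smaller β, with nothing about the test itself changing.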
- 14. Reducing the probabilities of errors
  Be able to put errors in context.
  Type I errors:
  - putting an innocent man in jail
  - telling a healthy man he's sick
  - thinking an improvement works when it doesn't
  - accusing a non-drug user of using drugs
  Type II errors:
  - letting a guilty man go
  - telling someone with an STD that they don't have one
  - thinking a system improvement doesn't work when it does
- 20. Reducing the probabilities of errors
  If you decide that the cost of a Type I error is too great, how can you reduce the probability of making one? To reduce the probability of a Type I error (a false positive), merely reduce your significance level. That is, move your critical value threshold so that you have a lower α.
- 21. Reducing the probabilities of errors
  Reduce the probability of making a Type I error: if α decreases, then β must increase. We still won't calculate what β is, but we know it has increased. And if β increased, then power must decrease, since power is 1 – β.
- 22. Reducing the probabilities of errors
  Reduce the probability of making a Type II error: in order to decrease β, we must increase α. Notice also that if β has been reduced, then the power (1 – β) has increased.
- 23. Reducing the probabilities of errors
  So if you decrease the probability of a Type I error, you increase the probability of a Type II error, and vice versa. Is there any way to reduce the probability of both? What do we control?
- 24. Reducing the probabilities of errors
  We control the spread of our normal curves. By the CLT, if our sample size increases, the centers don't move, but the variability shrinks. So by increasing n, our sample size, we've reduced both α and β. And if we've reduced β, we've increased the power (1 – β).
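The sample-size effect is easy to confirm: hold α fixed and watch β fall as n grows, because the standard error shrinks. Illustrative values once more (μ0 = 0, true μ = 0.5, σ = 1, one-sided α = 0.05):

```python
from statistics import NormalDist

mu0, mu, sigma, alpha = 0.0, 0.5, 1.0, 0.05   # assumed values
z_star = NormalDist().inv_cdf(1 - alpha)

# For each n the curves get narrower, so less of the "true" curve is
# left on the fail-to-reject side of the threshold.
betas = []
for n in (10, 25, 100):
    se = sigma / n**0.5
    betas.append(NormalDist(mu, se).cdf(mu0 + z_star * se))
print([round(b, 3) for b in betas])   # strictly decreasing
```

With n = 100, β is nearly zero here, so the power (1 – β) is close to 1: the one lever we actually control.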
- 25. Review
  Vocabulary: Type I error, Type II error, power, effect size, α, β, critical value, significance level.
- 26. Review
  Concepts:
  - How can you reduce the probability of a Type I error? What are the consequences?
  - How can you reduce the probability of a Type II error? What are the consequences?
  - How are effect size and power related? How are effect size and Type II errors related?
  - How can you reduce the probability of both kinds of errors?
