
Type I, Type II, Power, Effect Size Live Presentation

Published in: Education


  1. α and β and power, oh my!
     Type I and Type II errors
     Power and effect size
     Shown with animation
  2. Significance Testing and the CLT
     The reject-or-not decision is based on the CLT.
     CLT: based on "many samples," our sampling statistics will eventually make a normal(ish) distribution (assuming our conditions were met).
     We can describe the shape, the center, and the spread.
     And we can use what we know about normal curves to find the probability of getting any one particular sample statistic.
  3. Significance Testing and the CLT
     For significance testing, we start (before we've even taken a sample) with a hypothesis about where the center is.
     Set a threshold where anything past it makes us officially "surprised."
     That area is α, our significance level.
     That threshold is our critical value.
     In the case of sample means, it'll be t*.
     In the case of sample proportions, it'll be z*.
     [Diagram: hypothesized distribution centered at μ0, with area α shaded beyond t*]
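The critical values mentioned on this slide can be looked up numerically. A minimal sketch with scipy, where the significance level α = 0.05, the one-sided direction, and the sample size n = 30 are illustrative assumptions, not values from the slides:

```python
# Critical values for a one-sided test at significance level alpha.
# alpha = 0.05 and n = 30 are assumed here for illustration only.
from scipy import stats

alpha = 0.05

# z*: critical value for a test about a sample proportion
z_star = stats.norm.ppf(1 - alpha)

# t*: critical value for a test about a sample mean with n = 30 (df = 29)
t_star = stats.t.ppf(1 - alpha, df=29)

print(round(z_star, 3))  # 1.645
print(round(t_star, 3))  # 1.699
```

Note that t* is a little larger than z*: the t distribution has heavier tails, so the threshold for "surprised" sits farther out.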
  4. Significance Testing and the CLT
     Take a sample.
     It is in keeping with our hypothesized distribution. It doesn't mean our μ0 is right, but it certainly isn't evidence against it.
     FAIL TO REJECT H0.
     [Diagram: sample statistic inside the t* threshold of the distribution centered at μ0]
  5. Significance Testing and the CLT
     Take a sample.
     This statistic isn't in keeping with our hypothesis. In fact, it's a relatively rare occurrence.
     Reject H0! Embrace Ha!
     [Diagram: sample statistic beyond t*, in the α region of the distribution centered at μ0]
  6. What if we made a mistake?
     Rare vs. never: with a normal distribution, it's still possible to land in the tails.
     You can do everything right and come to the mathematically correct answer, and it is still a practically incorrect answer: H0 was right, and you shouldn't have rejected.
     You have made a Type I error.
     How often will we make a Type I error?
     P(Type I) = α.
     [Diagram: α region beyond t* in the distribution centered at μ0]
  7. What if we didn't make a mistake?
     Rejected H0.
     The true center is somewhere else.
     Notice the relationship between our statistic and the "true" distribution.
     [Diagram: sample statistic beyond t*, with the hypothesized distribution centered at μ0 and the "true" distribution centered at μ]
  8. What if we didn't make a mistake?
     What is the probability of getting a sample statistic beyond our critical value threshold?
     P(Reject H0 | H0 is wrong) = power of a test
     The higher the power, the better the test is at rejecting the null hypothesis when it is wrong.
     [Diagram: area of the "true" distribution (centered at μ) beyond t*, with α shaded under the distribution centered at μ0]
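P(Reject H0 | H0 is wrong) can be computed directly once a "true" center is assumed. A sketch for a one-sided z-test; all the specific numbers (μ0, the true μ, σ, n, α) are made up for illustration and are not from the slides:

```python
# Power of a one-sided z-test: P(reject H0 | the true center is mu_true).
# All numbers below are illustrative assumptions.
import math
from scipy import stats

mu0, mu_true = 100.0, 104.0   # hypothesized vs. "true" center
sigma, n, alpha = 15.0, 36, 0.05

se = sigma / math.sqrt(n)                      # spread of the sampling distribution
crit = mu0 + stats.norm.ppf(1 - alpha) * se    # rejection threshold (z* on the raw scale)

beta = stats.norm.cdf(crit, loc=mu_true, scale=se)   # P(fail to reject | H0 wrong)
power = 1 - beta                                     # P(reject | H0 wrong)

print(round(power, 3))
```

The same calculation hands us β for free: the area of the "true" distribution on the fail-to-reject side of the threshold, with power and β summing to 1.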
  9. Can we still make a mistake when our null hypothesis is wrong?
     What if we'd gotten a different sample?
     Look at the "true" distribution: is this result possible? Rare?
     What would we do with H0? (Remember, we don't know the "true" distribution.)
     FAIL TO REJECT H0.
     We have made a Type II error.
     [Diagram: sample statistic short of t*, with distributions centered at μ0 and μ]
  10. Can we still make a mistake when our null hypothesis is wrong?
      What other kind of mistake can we make?
      P(Type II) = β
      Notice that β is the complement of the power.
      Power of the test = 1 – β.
      [Diagram: β and power regions of the "true" distribution (centered at μ), split at t*]
  11. Types of errors and their probabilities
      To recap:
      Type I error: the null hypothesis is correct, but we get a sample statistic that makes us reject H0. Probability: α.
      Type II error: the null hypothesis is wrong (and the distribution is somewhere else), but we get a sample statistic that makes us fail to reject H0. Probability: β.
      Power: the probability of rejecting H0 when it is, in fact, wrong. Probability: 1 – β.
      [Diagram: α, β, and power regions, distributions centered at μ0 and μ, split at t*]
  12. Types of errors and their probabilities
      How are they related?
      Power and β both rely on the "true" μ.
      Calculating β is beyond the scope of this course.
      Both depend on… the effect size.
      [Diagram: α, β, and power regions, distributions centered at μ0 and μ, split at t*]
  13. Types of errors and their probabilities
      How does effect size relate to power and β?
      The larger the effect size…
      …the larger the power of the significance test
      …and the smaller the probability of making a Type II error (β).
      Remember, we do not control the effect size; it is what it is. Just keep in mind its relation to both power and β.
      [Diagram: α, β, and power regions, distributions centered at μ0 and μ, split at t*]
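The relationship between effect size and power can be illustrated numerically, again with a one-sided z-test and assumed numbers: holding α, σ, and n fixed, a larger effect size (the distance between μ0 and the true μ) yields more power and a smaller β.

```python
# Power as a function of effect size (distance between mu0 and the true mu),
# for a one-sided z-test. All numbers are illustrative assumptions.
import math
from scipy import stats

mu0, sigma, n, alpha = 100.0, 15.0, 36, 0.05
se = sigma / math.sqrt(n)
crit = mu0 + stats.norm.ppf(1 - alpha) * se    # fixed rejection threshold

def power(effect):
    # P(reject H0) when the true mean is mu0 + effect
    return 1 - stats.norm.cdf(crit, loc=mu0 + effect, scale=se)

for effect in (2.0, 4.0, 8.0):
    print(effect, round(power(effect), 3))     # power grows with effect size
```

As the slide says, we don't get to choose the effect size; the calculation just shows why a big true effect is easier to detect.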
  14. Reducing the probabilities of errors
      Be able to put errors in context.
      Type I errors:
      - putting an innocent man in jail
      - telling a healthy man he's sick
      - thinking an improvement works when it doesn't
      - accusing a non-drug user of using drugs
      Type II errors:
      - letting a guilty man go
      - telling someone with an STD that they don't have one
      - thinking a system improvement doesn't work when it does
      [Diagram: α, β, and power regions, distributions centered at μ0 and μ, split at t*]
  20. Reducing the probabilities of errors
      If you decide that the cost of a Type I error is too great, how can you reduce the probability of making one?
      If you want to reduce the probability of a Type I error (false positive), merely reduce your significance level.
      That is, move your critical value threshold so that you have a lower α.
      [Diagram: threshold t* shifted outward to shrink the α region]
  21. Reducing the probabilities of errors
      Reduce the probability of making a Type I error.
      If α decreases, then β must increase.
      We still won't calculate what β is, but we know it has increased.
      If β increased, then power must decrease, since power is 1 – β.
      [Diagram: smaller α region, larger β region, split at the shifted t*]
  22. Reducing the probabilities of errors
      Reduce the probability of making a Type II error.
      In order to decrease β, we must increase α.
      Notice, also, that if β has been reduced, then the power (1 – β) has increased.
      [Diagram: larger α region, smaller β region, split at the shifted t*]
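The tradeoff described on these slides can be made concrete: with the "true" center, σ, and n all held fixed, moving the threshold to lower α necessarily raises β, and vice versa. All numbers below are assumed for illustration:

```python
# The alpha-beta tradeoff for a one-sided z-test, everything but alpha fixed.
# All numbers are illustrative assumptions.
import math
from scipy import stats

mu0, mu_true, sigma, n = 100.0, 104.0, 15.0, 36
se = sigma / math.sqrt(n)

def beta(alpha):
    crit = mu0 + stats.norm.ppf(1 - alpha) * se         # threshold moves with alpha
    return stats.norm.cdf(crit, loc=mu_true, scale=se)  # P(fail to reject | H0 wrong)

for a in (0.01, 0.05, 0.10):
    print(a, round(beta(a), 3))   # beta shrinks as alpha grows
```

Shrinking α pushes the critical value outward, which hands more of the "true" distribution to the fail-to-reject side, so β grows.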
  23. Reducing the probabilities of errors
      So if you decrease the probability of a Type I error, you increase the probability of a Type II error. And vice versa.
      Is there any way to reduce the probability of both?
      So what do we control?
      [Diagram: α and β regions, distributions centered at μ0 and μ, split at t*]
  24. Reducing the probabilities of errors
      We control the spread of our normal curves.
      CLT: if our sample size increases, the centers don't move, and we reduce variability.
      So by increasing n, our sample size, we've reduced both α and β.
      And if we've reduced β, we've increased the power (1 – β).
      [Diagram: narrower distributions centered at μ0 and μ, with smaller α and β regions at t*]
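This slide's picture can be sketched numerically: fix the raw rejection threshold somewhere between μ0 and the "true" μ, and increasing n narrows both sampling distributions, so α and β fall together. The centers, σ, and the threshold below are assumed for illustration:

```python
# Increasing n shrinks the spread of both sampling distributions, so with
# the raw threshold held fixed, alpha and beta both decrease.
# All numbers are illustrative assumptions.
import math
from scipy import stats

mu0, mu_true, sigma = 100.0, 104.0, 15.0
crit = 102.0   # fixed rejection threshold between mu0 and mu_true

def errors(n):
    se = sigma / math.sqrt(n)
    alpha = 1 - stats.norm.cdf(crit, loc=mu0, scale=se)   # P(Type I)
    beta = stats.norm.cdf(crit, loc=mu_true, scale=se)    # P(Type II)
    return alpha, beta

for n in (16, 36, 100):
    a, b = errors(n)
    print(n, round(a, 3), round(b, 3))   # both shrink as n grows
```

Since β falls as n grows, the power (1 – β) rises at the same time, which is exactly the slide's conclusion.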
  25. Review
      Vocabulary:
      Type I error
      Type II error
      Power
      Effect size
      α
      β
      Critical value
      Significance level
  26. Review
      Concepts:
      How can you reduce the probability of a Type I error? What are the consequences?
      How can you reduce the probability of a Type II error? What are the consequences?
      How are effect size and power related? How are effect size and Type II errors related?
      How can you reduce the probability of both kinds of errors?
