Errors in Hypothesis Testing
ED203-STATISTICS WITH
COMPUTER APPLICATION
UNIVERSITY OF NORTHEASTERN PHILIPPINES SCHOOL OF GRADUATE STUDIES
IRIGA CITY
SY 2024-2025
Cecille A. Cuebillas
Reporter: MARIA P. DELA VEGA, PhD
Full Professor IV
HYPOTHESIS:
A hypothesis is an assumption that is made based on some evidence. It is the initial point of any investigation, translating the research questions into predictions. It includes components such as the variables, the population, and the relation between the variables.
TYPES OF HYPOTHESIS
Null hypothesis
The null hypothesis is the claim that there's no effect in the
population. In other words, the null hypothesis (i.e., that there is
no effect) is assumed to be true until the sample provides enough
evidence to reject it.
Alternative hypothesis
The alternative hypothesis is the complement to the null
hypothesis. Null and alternative hypotheses are exhaustive,
meaning that together they cover every possible outcome. They
are also mutually exclusive, meaning that only one can be true at
a time.
ERRORS IN HYPOTHESIS TESTING
While doing hypothesis testing, there is always a possibility of
making the wrong decision about your hypothesis; such
instances are referred to as 'errors'.
There are two types of errors that you might make in the hypothesis testing process: the Type I error and the Type II error.
TYPE I ERROR (False Positive)
A Type I error means rejecting the null hypothesis when it is actually true.
False positive conclusion: concluding that results are statistically significant when, in reality, they came about purely by chance or because of unrelated factors.
The probability of making a Type I error is denoted as α (alpha), also known as the significance level.
For example, setting α = 0.05 means you're willing to accept a 5% chance of making a Type I error.
This risk can be minimized through careful planning of your study design.
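The meaning of α can be checked with a simulation. This is a minimal sketch using only the Python standard library; the two-sided z-test with known σ = 1 and all the constants below are illustrative assumptions, not part of the slides:

```python
import random
import statistics

# Simulate many studies in which the null hypothesis is TRUE
# (the population mean really is 0) and count how often a
# two-sided z-test at alpha = 0.05 rejects it anyway.
random.seed(42)

ALPHA = 0.05
Z_CRIT = 1.96        # two-sided critical value for alpha = 0.05
N = 30               # sample size per simulated study
TRIALS = 10_000

false_positives = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]   # H0 is true
    z = statistics.mean(sample) * N ** 0.5            # known sigma = 1
    if abs(z) > Z_CRIT:
        false_positives += 1                          # Type I error

print(f"Type I error rate: {false_positives / TRIALS:.3f}")  # close to 0.05
```

Because the null hypothesis is true in every simulated study, each rejection is by definition a Type I error, so the long-run rejection rate settles near the chosen α.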
TYPE II ERROR (False Negative)
A Type II error occurs when the null hypothesis (H₀) is not rejected even though it is false. This means you miss detecting a real effect or difference.
False negative conclusion: failing to conclude there was an effect when there actually was one.
The probability of making a Type II error is denoted as β (beta).
A smaller β means you are less likely to make a Type II error, but driving β down at a fixed sample size can increase the chance of a Type I error.
To reduce this risk, you can increase the sample size or raise the significance level, both of which increase statistical power.
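The effect of sample size on β can be illustrated the same way. This is a stdlib-only sketch; the effect size of 0.5, the known σ = 1, and the z-test are illustrative assumptions:

```python
import random

# Estimate the Type II error rate (beta) of a two-sided z-test when a
# real effect exists, and show that a larger sample size shrinks beta
# (i.e., raises power = 1 - beta).
random.seed(0)

Z_CRIT = 1.96          # alpha = 0.05, two-sided
TRUE_MEAN = 0.5        # H0 says mean = 0, but the real mean is 0.5
TRIALS = 5_000

def beta_estimate(n):
    """Fraction of simulated studies that FAIL to reject the false H0."""
    misses = 0
    for _ in range(TRIALS):
        sample_mean = sum(random.gauss(TRUE_MEAN, 1) for _ in range(n)) / n
        z = sample_mean * n ** 0.5       # known sigma = 1
        if abs(z) <= Z_CRIT:
            misses += 1                  # Type II error
    return misses / TRIALS

for n in (10, 30, 100):
    b = beta_estimate(n)
    print(f"n = {n:3d}: beta = {b:.3f}, power = {1 - b:.3f}")
```

Here H₀ is false in every simulated study, so each non-rejection is a Type II error; as n grows, β falls and power rises.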
True State of Patient's Health | Doctor Accepts Null (No Disease) | Doctor Rejects Null (Disease Present)
Patient is Healthy (Null Hypothesis is True) | ✅ Correct conclusion: patient is correctly diagnosed as healthy | ❌ Type I Error (false positive): patient is wrongly diagnosed with a disease and may receive unnecessary treatment
Patient is Sick (Null Hypothesis is False) | ❌ Type II Error (false negative): patient is wrongly diagnosed as healthy and does not get needed treatment | ✅ Correct conclusion: patient is correctly diagnosed as sick and receives proper treatment
Type I vs Type II error
The Type I and Type II error rates influence each other. The
significance level (the Type I error rate) affects statistical
power, which is inversely related to the Type II error rate.
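This inverse relationship can be sketched with a small simulation. It is a stdlib-only illustration; the fixed sample size, the effect size of 0.5, and the z-test with known σ = 1 are assumptions made for the example:

```python
import random
from statistics import NormalDist

# At a fixed sample size, tightening alpha (fewer Type I errors)
# raises beta (more Type II errors). Data are simulated under a real
# effect, so every non-rejection is a Type II error.
random.seed(1)

N = 30            # sample size per simulated study
TRUE_MEAN = 0.5   # H0 says mean = 0, but the real mean is 0.5
TRIALS = 5_000

betas = []
for alpha in (0.10, 0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    misses = 0
    for _ in range(TRIALS):
        mean = sum(random.gauss(TRUE_MEAN, 1) for _ in range(N)) / N
        if abs(mean * N ** 0.5) <= z_crit:
            misses += 1                            # Type II error
    betas.append(misses / TRIALS)
    print(f"alpha = {alpha:.2f} -> beta estimate = {betas[-1]:.3f}")

# Smaller alpha -> larger beta at the same sample size.
```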
Example:
You decide to get tested for COVID-19 based on mild
symptoms. There are two errors that could potentially occur:
Type I error (false positive): The test result says you have coronavirus, but you don't. (The investigator rejects a null hypothesis that is actually true in the population.)
Type II error (false negative): The test result says you don't have coronavirus, but you do. (The investigator fails to reject a null hypothesis that is false in the population.)
IS A TYPE I OR TYPE II ERROR WORSE?
A Type I error means wrongly rejecting a null hypothesis that is actually true. This may lead to new policies, practices, or treatments that are inadequate or a waste of resources.
Consequences of a Type I error
Errors can lead to incorrect decisions, such as
approving a treatment that doesn’t work or making
faulty conclusions based on unreliable data.
• In contrast, a Type II error means failing to reject a null hypothesis that is actually false. It may only result in missed opportunities to innovate, but these can also have important practical consequences.
Consequences of a Type II error
• Errors can lead to missed opportunities, such
as failing to approve a life-saving drug or
missing out on important findings.
Takeaway:
Always consider the context of your hypothesis test and the potential costs of each error type. In some cases it may be more critical to avoid a Type I error (e.g., approving unsafe drugs), while in others a Type II error may be more costly (e.g., failing to detect a critical issue).
ACTIVITY
You are evaluating a new educational program to improve students' test scores. After analyzing the data, you conclude that the program doesn't improve scores, but in reality it significantly helps students perform better.
Question 1: Is this a Type I or Type II error? Why?
Type I Error (False Positive)
Type II Error (False Negative)
Question 2: What could be the consequences of this error in a school system or for students?