The document discusses key concepts related to hypothesis testing, including:
1. The research hypothesis proposes a relationship between variables to be tested, while the null hypothesis assumes no relationship.
2. Hypotheses must be clear, testable, and specific. They guide the research investigation and aim to explain observed phenomena.
3. The level of significance, typically set at 5%, is the probability of rejecting the null hypothesis when it is true.
4. There are two types of errors in hypothesis testing - Type I errors reject the null when it is true, while Type II errors accept it when it is false. Balancing these errors depends on consequences.
5. Two-tailed tests place the rejection region in both tails of the distribution, while one-tailed tests place it in only one tail.
2. Research hypothesis
• A research hypothesis is a specific, clear, and testable proposition or
predictive statement about the possible outcome of a scientific
research study based on a particular property of a population, such as
presumed differences between groups on a particular variable or
relationships between variables.
• In research, a hypothesis is a formal proposition put forward for testing.
• It serves as a tentative explanation for the occurrence of specific phenomena.
• A hypothesis is an assumption to be proved or disproved.
3. Purpose of a Hypothesis
• A hypothesis guides the researcher's investigation.
• It sets the direction and goal of the research.
• The researcher aims to resolve the hypothesis through empirical
evidence.
4. Characteristics of a Hypothesis
1. Clear and Precise:
• The hypothesis must be clearly stated and unambiguous.
• It should provide a precise description of the phenomena under
investigation.
2. Testability:
• A hypothesis should be capable of being tested.
• It should allow for empirical observation and evaluation.
3. Relationship between Variables:
• If the hypothesis is relational, it should state the relationship
between variables.
• For example, how one variable affects another.
4. Limited in Scope and Specific:
• A hypothesis should have a specific focus and be limited in
scope.
• Narrower hypotheses are generally more testable and yield
clearer results.
5. Simplicity:
• The hypothesis should be stated in simple terms.
• It should be easily understandable by all involved in the
research process.
6. Consistency with Established Facts:
• A hypothesis should align with existing knowledge and
established facts.
• It should be in line with the most likely explanation based on
available evidence.
7. Feasible for Testing:
• A hypothesis should be amenable to testing within a reasonable
time frame.
• It should be practically feasible to collect data and evaluate its
validity.
8. Explanation of the Problem Condition:
• The hypothesis should explain the facts that require explanation.
• It should provide a theoretical framework for understanding the observed phenomena.
BASIC CONCEPTS CONCERNING TESTING OF HYPOTHESES
1. Null Hypothesis and Alternative Hypothesis
2. The Level of Significance
3. Decision Rule or Test of Hypothesis
4. Type I and Type II Errors in Hypothesis Testing
5. Two-tailed and One-tailed tests
1. Null Hypothesis and Alternative Hypothesis
• In statistical analysis, we encounter the concepts of null
hypothesis (H0) and alternative hypothesis (Ha).
• The null hypothesis assumes no significant difference or
relationship between variables.
• The alternative hypothesis proposes an alternative explanation
or relationship.
Example of Null and Alternative Hypotheses
• Suppose we want to test whether Method A is superior to Method B.
• Null Hypothesis (H0): Both methods are equally good.
• Alternative Hypothesis (Ha): Method A is superior to Method B.
Symbolic Representation
• H0: Population mean (µ) = Hypothesized mean (µH0)
• Ha: Alternative hypotheses can have different forms:
• Ha: Population mean (µ) ≠ Hypothesized mean (µH0) [Two-tailed test]
• Ha: Population mean (µ) > Hypothesized mean (µH0) [One-tailed test,
upper direction]
• Ha: Population mean (µ) < Hypothesized mean (µH0) [One-tailed test,
lower direction]
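These three forms of the alternative hypothesis can be illustrated with a one-sample t-test. A minimal sketch using SciPy, in which the sample data and hypothesized mean are made up for illustration:

```python
# Sketch of the three forms of Ha for a one-sample t-test (SciPy >= 1.6).
from scipy import stats

sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.3, 10.0]  # hypothetical data
mu_h0 = 10.0  # hypothesized population mean (µH0)

# Ha: µ != µH0  (two-tailed test)
t_two, p_two = stats.ttest_1samp(sample, mu_h0, alternative='two-sided')
# Ha: µ > µH0   (one-tailed test, upper direction)
t_up, p_up = stats.ttest_1samp(sample, mu_h0, alternative='greater')
# Ha: µ < µH0   (one-tailed test, lower direction)
t_lo, p_lo = stats.ttest_1samp(sample, mu_h0, alternative='less')

print(p_two, p_up, p_lo)
```

The `alternative` argument selects which form of Ha is tested; the two one-tailed p-values sum to 1, and the two-tailed p-value is twice the smaller of them.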
Choosing the Null and Alternative Hypotheses
• The null hypothesis represents the hypothesis we are trying to reject.
• The alternative hypothesis represents what we wish to prove.
• The null hypothesis is usually specific, while the alternative hypothesis encompasses all other possibilities.
2. The Level of Significance
• In Statistics, “significance” means “not by chance” or “probably
true”.
• The level of significance is a crucial concept in hypothesis
testing.
• It is typically set at a certain percentage (e.g., 5%).
• The level of significance is defined as the fixed probability of wrongly rejecting the null hypothesis when it is in fact true.
• Choosing the significance level requires careful consideration and reasoning.
The level of significance is denoted by the Greek letter α (alpha).
Rejection of H0 based on Significance Level
• When the significance level is set at 5%, H0 is rejected if the observed
evidence has a probability of occurring less than 0.05 under H0.
• In setting α, the researcher accepts a certain risk of rejecting H0 when it is actually true.
Example: "significant at 5%" means the p-value is less than 0.05 (p < 0.05); similarly, "significant at 1%" means the p-value is less than 0.01.
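The comparison of a p-value against the significance level can be sketched as follows, where the p-value is a made-up number rather than the output of any real test:

```python
# Decision at a chosen significance level; p_value is hypothetical.
alpha = 0.05      # significance level, fixed in advance
p_value = 0.03    # assumed p-value from some test

if p_value < alpha:
    decision = "reject H0"         # the result is significant at the 5% level
else:
    decision = "fail to reject H0"
print(decision)
```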
3. Decision Rule or Test of Hypothesis
• The decision rule is a criterion used to determine whether to accept the null hypothesis (H0) or reject it in favor of the alternative hypothesis (Ha).
• It guides the decision-making process based on the collected data and
the hypothesis being tested.
Let's consider an example scenario:
• H0: A certain lot is good (few defective items).
• Ha: The lot is not good (many defective items).
To apply the decision rule, we need to decide:
• The number of items to be tested from the lot.
• The criterion for accepting or rejecting the hypothesis.
Let's say we test 10 items from the lot.
Decision Rule:
• If there are none or only 1 defective item among the 10 tested, we
accept H0.
• If there is more than 1 defective item, we reject H0 and accept Ha.
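The error rates of this "accept if at most 1 defective in 10" rule can be computed from the binomial distribution. A sketch using SciPy, in which the 5% and 30% defect rates are assumed purely for illustration:

```python
# Operating characteristics of the decision rule "accept H0 if at most
# 1 defective item is found among 10 tested".
from scipy.stats import binom

n = 10              # items tested from the lot
accept_at_most = 1  # accept H0 when 0 or 1 defectives are found

def prob_accept(defect_rate):
    """P(accept H0) = P(X <= 1) for X ~ Binomial(10, defect_rate)."""
    return binom.cdf(accept_at_most, n, defect_rate)

# If the lot is genuinely good (say 5% defective), how often do we
# wrongly reject it?  This is the rule's Type I error probability.
type_i = 1 - prob_accept(0.05)
# If the lot is bad (say 30% defective), how often do we wrongly
# accept it?  This is the rule's Type II error probability.
type_ii = prob_accept(0.30)
print(type_i, type_ii)
```

Raising the acceptance threshold lowers the Type I error but raises the Type II error, which anticipates the trade-off discussed below.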
Importance of the Decision Rule
The decision rule ensures consistency and objectivity in hypothesis
testing.
It provides a predefined guideline to interpret the collected data and
make informed decisions.
Adjusting the Decision Rule
The decision rule can be adjusted based on the specific context and
research requirements.
Factors such as sample size, desired level of confidence, and the nature
of the hypothesis influence the decision rule.
4. Type I and Type II Errors in Hypothesis Testing
• In hypothesis testing, there are two types of errors that can
occur: Type I error and Type II error.
• Type I error: Rejecting the null hypothesis (H0) when it is
actually true.
• Type II error: Accepting the null hypothesis (H0) when it is
actually false.
Type I Error (α Error)
• Type I error is denoted by α (alpha) and is also known as the level of
significance.
• It occurs when we reject the null hypothesis (H0) even though it is
true.
• The probability of Type I error is usually determined in advance, such
as 5% (α = 0.05).
• Significance level = P(Type I error) = α
Type II Error (β Error)
• Type II error is denoted by β (beta).
• It occurs when we accept the null hypothesis (H0) even though
it is false.
• The probability of a Type II error is inversely related to α, the probability of a Type I error.
Relationship Between Type I and Type II Errors
• Type I and Type II errors are inversely related.
• When we try to reduce Type I error by decreasing α, the
probability of Type II error (β) increases.
• There is a trade-off between the two types of errors.
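This trade-off can be made concrete with a one-sided z-test sketch; the hypothesized mean, alternative mean, σ, and n below are all assumed for illustration:

```python
# Sketch of the alpha-beta trade-off for a one-sided z-test of
# H0: mu = 0 against the specific alternative mu = 1, with sigma = 5
# and a sample of n = 25 (all values assumed for illustration).
from scipy.stats import norm

n, sigma, mu_alt = 25, 5.0, 1.0
se = sigma / n ** 0.5  # standard error of the sample mean = 1.0

def beta(alpha):
    """P(Type II error) = P(accept H0 | the alternative mu_alt is true)."""
    crit = norm.ppf(1 - alpha) * se              # rejection threshold
    return norm.cdf(crit, loc=mu_alt, scale=se)  # mass below the threshold

b05, b01 = beta(0.05), beta(0.01)
print(b05, b01)  # beta grows as alpha shrinks
```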
Balancing Type I and Type II Errors
• Balancing Type I and Type II errors depends on the specific
context and the consequences of each error.
• Decision-makers consider the costs or penalties associated with
both types of errors to determine an appropriate level of
significance (α).
Let's consider an example scenario:
• Type I error: Rejecting a batch of chemicals that should have
been accepted.
• Type II error: Risking the poisoning of users by accepting a
potentially harmful chemical compound.
• The appropriate balance between Type I and Type II errors depends on the specific situation and the potential consequences of each error.
• Decision-makers should tolerate a higher Type I error rate (a larger α) when the consequences of a Type II error are more severe.
5. Two-Tailed and One-Tailed Tests in Hypothesis Testing
Two-Tailed Test
• A two-tailed test is used when the null hypothesis (H0) specifies a particular value and the alternative hypothesis (Ha) states that the parameter is not equal to that value.
• It rejects H0 if the sample mean is significantly higher or lower
than the hypothesized value of the population mean.
• Symbolically, a two-tailed test can be represented as H0: μ = μ0
and Ha: μ ≠ μ0.
Two-Tailed Test Rejection Regions
• In a two-tailed test, there are two rejection regions, one on each tail
of the distribution curve.
• The rejection regions are determined based on the significance level
(α), such as 5% (α = 0.05).
• The acceptance region is the region where the sample mean falls if
we accept H0.
One-Tailed Test
• A one-tailed test is used when we want to test if the population
mean is either lower than or higher than a specified value.
• Ha: μ < μ0 (left-tailed test) or Ha: μ > μ0 (right-tailed test).
• In a left-tailed test, there is one rejection region on the left tail of
the distribution curve.
• The rejection region is determined based on the significance
level (α), such as 5% (α = 0.05).
• The acceptance region is the region where the sample mean
falls if we accept H0.
• In a right-tailed test, there is one rejection region on the right tail of the distribution curve.
• The rejection region is determined based on the significance
level (α), such as 5% (α = 0.05).
• The acceptance region is the region where the sample mean
falls if we accept H0.
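The critical values that bound these rejection regions (for a large-sample z-test at α = 0.05) can be computed with SciPy:

```python
# Critical z-values defining the rejection regions at alpha = 0.05.
from scipy.stats import norm

alpha = 0.05
two_tailed = norm.ppf(1 - alpha / 2)  # reject H0 if |z| exceeds this value
right_tailed = norm.ppf(1 - alpha)    # reject H0 if z exceeds this value
left_tailed = norm.ppf(alpha)         # reject H0 if z falls below this value
print(two_tailed, right_tailed, left_tailed)
```

The two-tailed cutoff (about ±1.96) is larger in magnitude than the one-tailed cutoff (about 1.645) because the 5% rejection probability is split between the two tails.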
PROCEDURE FOR HYPOTHESIS TESTING
1. Making a formal statement
1. Clearly state the null hypothesis (H0) and the alternative hypothesis (Ha)
2. Consider the nature of the research problem
3. Choose between a one-tailed or two-tailed test based on the alternative hypothesis
• Example:
• H0: μ = 10 tons
• Ha: μ > 10 tons
2. Selecting a significance level
• Choose a predetermined level of significance (e.g., 5% or 1%)
• Factors influencing the level of significance include:
• Magnitude of the difference between sample means
• Sample size
• Variability of measurements within samples
• Directionality of the hypothesis
3. Deciding the distribution to use
• Determine the appropriate sampling distribution
• Choice between normal distribution and t-distribution
• Select the correct distribution based on the characteristics of
the data
4. Selecting a random sample and computing an appropriate
value
• Randomly select a sample(s)
• Compute a test statistic using the sample data and the chosen
distribution
5. Calculation of the probability
• Calculate the probability that the sample result would deviate as
much as observed if the null hypothesis were true
6. Comparing the probability
• Compare the calculated probability with the specified significance level (α)
• If the calculated probability is equal to or smaller than α (or α/2 for a two-tailed test), reject the null hypothesis and accept the alternative hypothesis
• If the calculated probability is greater than α, accept the null hypothesis
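The six steps can be walked through end to end on hypothetical data. This sketch tests H0: μ = 10 tons against Ha: μ > 10 tons, with the yield figures invented for illustration:

```python
# Sketch of the full procedure: H0: mu = 10 tons vs Ha: mu > 10 tons
# (right-tailed test), using the t-distribution since the population
# standard deviation is unknown (SciPy >= 1.6).
from scipy import stats

yields = [10.4, 10.9, 10.1, 11.2, 10.7, 10.3, 11.0, 10.6]  # assumed sample
alpha = 0.05                                  # step 2: significance level

# Steps 3-5: choose the t-distribution, compute the test statistic,
# and obtain the one-tailed probability of a deviation this large.
t_stat, p_value = stats.ttest_1samp(yields, 10.0, alternative='greater')

if p_value <= alpha:                          # step 6: compare with alpha
    print("reject H0: the mean yield exceeds 10 tons")
else:
    print("fail to reject H0")
```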