This document introduces a presentation on hypothesis testing for a single sample. The presentation, given by a group of 11 students, includes an abstract, an introduction, definitions, explanations of the central limit theorem, the t-test, and the z-test, the assumptions behind the different tests, worked examples of applying them, and a question-and-answer section.
This document discusses statistical inference, which involves drawing conclusions about an unknown population based on a sample. There are two main types of statistical inference: parameter estimation and hypothesis testing. Parameter estimation involves obtaining numerical values of population parameters from a sample, like estimating the percentage of people aware of a product. Hypothesis testing involves making judgments about assumptions regarding population parameters based on sample data. The document also discusses point estimation, interval estimation, standard error, and provides examples of calculating confidence intervals.
Hypothesis testing involves stating a null hypothesis (H0) and an alternative hypothesis (H1). A test statistic is calculated from sample data and used to decide whether to reject or fail to reject H0. There are two types of errors: a Type I error rejects a true H0, and a Type II error fails to reject a false H0. The significance level (α) limits the Type I error rate, while power (1 − β) measures the test's ability to reject H0 when it is false. Tests are one-tailed if H1 specifies a direction and two-tailed otherwise. The rejection region defines the values of the test statistic for which H0 will be rejected.
BMI (kg/m2): 22.1, 23.4, 24.8, 26.2, 27.6, 28.9, 30.3, 31.6, 32.9, 34.2, 35.5, 36.8, 38.1, 39.4
The sample mean is 29.1 kg/m2 and the sample standard deviation is 4.2 kg/m2. Test the hypothesis that the population mean BMI is 30 kg/m2 at the 5% level of significance.
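Taking the stated summary statistics at face value (n = 14, mean 29.1, sd 4.2), the one-sample t test for this problem can be sketched as follows; the critical value 2.160 is the standard two-tailed t(0.025, 13) cutoff:

```python
import math

# One-sample t test for H0: mu = 30 vs H1: mu != 30 at alpha = 0.05,
# using the stated summary statistics (n = 14, mean 29.1, sd 4.2).
n, xbar, s, mu0 = 14, 29.1, 4.2, 30.0

se = s / math.sqrt(n)        # standard error of the mean
t = (xbar - mu0) / se        # t statistic with n - 1 = 13 df
t_crit = 2.160               # two-tailed critical value, t(0.025, 13)

print(round(t, 3))           # -0.802
print(abs(t) > t_crit)       # False -> fail to reject H0
```

Since |t| ≈ 0.80 is well below 2.160, the sample is consistent with a population mean BMI of 30 kg/m2 at the 5% level.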
Hypothesis testing refers to formal statistical procedures used to reject or fail to reject claims about populations based on data. It involves:
1) Stating a null hypothesis that makes a claim about a population parameter.
2) Collecting sample data and computing a test statistic.
3) Determining whether to reject the null hypothesis based on the probability of obtaining the sample statistic if the null is true.
Rejecting the null supports the alternative hypothesis. A Type I error rejects a true null; a Type II error fails to reject a false null. Hypothesis tests aim to minimize these errors while maximizing power to detect meaningful alternatives.
The document discusses testing of hypotheses. It defines a hypothesis as a tentative prediction about the relationship between variables. Good hypotheses are precise, testable, and consistent with known facts. Hypothesis testing involves formulating a null hypothesis (H0) and an alternative hypothesis (H1). A significance level such as 5% is chosen, and H0 is rejected if the test statistic falls within the critical region. A Type I error rejects a true H0, while a Type II error accepts a false H0. Power is the probability of correctly rejecting a false H0. The testing process determines test statistics and critical regions, and interprets the results to draw conclusions.
This document provides an overview of basic hypothesis testing concepts. It defines key terms like the null hypothesis, type I and type II errors, significance levels, and p-values. It explains how hypothesis tests are used to determine if there is a statistically significant difference between two groups, with the goal of rejecting or failing to reject the null hypothesis. Examples are given around comparing the effectiveness of two drugs and testing if reindeer can fly. Both parametric and non-parametric statistical tests are introduced.
Hypothesis testing involves proposing and testing hypotheses, or predictions, about relationships between variables. There are four main types of hypotheses: null, alternative, directional, and non-directional. The null hypothesis proposes no relationship between variables, while the alternative hypothesis contradicts the null. Directional hypotheses predict the nature of a relationship, while non-directional hypotheses do not. Common statistical tests used for hypothesis testing include the z-test, t-test, chi-square test, and F-test. Hypothesis testing is a crucial part of the scientific method for assessing theories through empirical observation.
This document discusses key concepts in statistical estimation including:
- Estimation involves using sample data to infer properties of the population by calculating point estimates and interval estimates.
- A point estimate is a single value that estimates an unknown population parameter, while an interval estimate provides a range of plausible values for the parameter.
- A confidence interval is constructed so that a stated proportion of intervals computed this way (e.g., 95%) would contain the true population parameter over repeated sampling; 95% confidence intervals are the most common.
- Formulas for confidence intervals depend on whether the population standard deviation is known or unknown, and the sample size.
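The large-sample (known-σ or n ≥ 30) case above can be sketched numerically; the sample values below are illustrative stand-ins, not figures from the document:

```python
from statistics import NormalDist
import math

# 95% confidence interval for a mean, large-sample (z) case.
# xbar, s, n are made-up illustrative numbers.
xbar, s, n = 52.0, 8.0, 100      # sample mean, sample sd, sample size
z = NormalDist().inv_cdf(0.975)  # two-sided 95% critical value, ~1.96

margin = z * s / math.sqrt(n)    # margin of error
print(round(xbar - margin, 2), round(xbar + margin, 2))  # 50.43 53.57
```

With a small sample and unknown σ, the z critical value would be replaced by the wider t critical value with n − 1 degrees of freedom, as the summary notes.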
Research method ch08 statistical methods 2 anova (naranbatn)
1) The document discusses various statistical methods including one-way ANOVA, repeated measures ANOVA, and ANCOVA.
2) One-way ANOVA is used to compare the means of three or more independent groups when you have one independent variable with three or more categories and one continuous dependent variable.
3) Repeated measures ANOVA is used when the same subjects are measured under different conditions to assess for main effects and interactions while accounting for the dependency of measurements within subjects.
This document discusses confidence intervals for population means and proportions. It explains how to construct confidence intervals using the normal distribution for large sample sizes (n ≥ 30) and the t-distribution for small sample sizes. Formulas are provided for calculating margin of error and determining necessary sample size. Guidelines are given for determining whether to use the normal or t-distribution based on sample size and characteristics. Confidence intervals can be constructed for variance and standard deviation using the chi-square distribution.
This document provides an overview of hypothesis testing. It begins by defining hypothesis testing and listing the typical steps: 1) formulating the null and alternative hypotheses, 2) computing the test statistic, 3) determining the p-value and interpretation, and 4) specifying the significance level. It then discusses different types of hypothesis tests for claims about a mean when the population standard deviation is known or unknown, as well as tests for claims about a population proportion. Examples are provided for each type of test to demonstrate how to apply the steps. The document aims to explain the concept and process of hypothesis testing for making data-driven decisions about statistical claims.
This document discusses confidence intervals, which provide a range of values that is likely to include an unknown population parameter based on a sample statistic. It defines key concepts like confidence level, confidence limits, and factors that determine how to set the confidence interval like sample size, population variability, and precision of values. It explains how larger sample sizes and more precise measurements result in narrower confidence intervals. Applications to clinical trials are discussed, showing how sample size impacts the ability to make definitive recommendations based on trial results.
This document discusses probability and Bayes' theorem. It provides examples of basic probability concepts like the probability of a coin toss. It then defines conditional probability as the probability of an event given another event. Bayes' theorem is introduced as a way to revise a probability based on new information. An example problem demonstrates how to calculate the probability of rain given a weather forecast using Bayes' theorem.
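The rain-forecast example can be sketched with Bayes' theorem; the probabilities below are hypothetical stand-ins, since the document's actual figures are not given here:

```python
# Hypothetical numbers (not from the document): it rains on 10% of days,
# the forecast predicts rain on 80% of rainy days and 20% of dry days.
p_rain = 0.10
p_fc_given_rain = 0.80
p_fc_given_dry = 0.20

# Bayes' theorem: P(rain | forecast) = P(forecast | rain) P(rain) / P(forecast),
# where P(forecast) is expanded by the law of total probability.
p_fc = p_fc_given_rain * p_rain + p_fc_given_dry * (1 - p_rain)
p_rain_given_fc = p_fc_given_rain * p_rain / p_fc
print(round(p_rain_given_fc, 3))  # 0.308
```

Even a fairly accurate forecast yields only about a 31% chance of rain here, because rain itself is rare: the prior dominates the update.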
Hypothesis testing, T test, chi square test, z test (Irfan Ullah)
- The document discusses hypothesis testing and the p-value approach, which involves specifying the null and alternative hypotheses, calculating a test statistic, determining the p-value, and comparing it to the significance level α to decide whether to reject or fail to reject the null hypothesis.
- It also discusses type I and type II errors, degrees of freedom as the number of independent pieces of information, and chi-square and t-tests as statistical tests.
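The p-value approach can be sketched for a two-tailed z test; the observed test statistic below is an illustrative value, not one from the document:

```python
from statistics import NormalDist

# p-value approach for a two-sided z test (illustrative numbers):
# observed test statistic z = 1.8, significance level alpha = 0.05.
z = 1.8
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value

print(round(p_value, 4))   # 0.0719
print(p_value < 0.05)      # False -> fail to reject H0
```

Because 0.0719 > 0.05, the null hypothesis is not rejected, even though the same z would reject in a one-tailed test (p ≈ 0.036).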
Statistical inference involves drawing conclusions about a population based on a sample. It has two main areas: estimation and hypothesis testing. Estimation uses sample data to obtain point or interval estimates of unknown population parameters. Hypothesis testing determines whether to accept or reject statements about population parameters. Confidence intervals give a range of values that are likely to contain the true population parameter, with a specified level of confidence such as 90% or 95%.
This document provides an overview of various statistical analysis techniques used in inferential statistics, including t-tests, ANOVA, ANCOVA, chi-square, regression analysis, and interpreting null hypotheses. It defines key terms like alpha levels, effect sizes, and interpreting graphs. The overall purpose is to explain common statistical methods for analyzing data and determining the probability that results occurred by chance or were statistically significant.
This document discusses hypothesis testing, which involves drawing inferences about a population based on a sample from that population. It outlines the key elements of a hypothesis test, including the null and alternative hypotheses, test statistics, critical regions, significance levels, critical values, and p-values. Type I and Type II errors are explained, where a Type I error involves rejecting the null hypothesis when it is true, and a Type II error involves failing to reject the null when it is false. The power of a hypothesis test is defined as the probability of correctly rejecting the null hypothesis when it is false. Controlling type I and II errors involves considering the significance level, sample size, and population parameters in the null and alternative hypotheses.
This document summarizes a presentation on hypothesis testing. It defines key concepts like the null and alternative hypotheses, type I and type II errors, and one-tailed and two-tailed tests. It then outlines the procedure for hypothesis testing, including setting hypotheses, selecting a significance level, calculating test statistics, and determining whether to reject the null hypothesis. Specific hypothesis tests are described, like paired t-tests for comparing two related samples and tests of proportions. Limitations of hypothesis testing are noted, such as that results are probabilistic and small samples impact reliability.
Brm (one tailed and two tailed hypothesis) (Upama Dwivedi)
This document discusses one-tailed and two-tailed hypothesis tests. It defines a hypothesis as an assumption made about the probable results of research. The null hypothesis assumes a parameter takes a certain value, while the alternative hypothesis expresses how the parameter may deviate. A one-tailed test examines if a parameter falls on one side of the distribution, while a two-tailed test looks at both sides. Two-tailed tests are more conservative since they require more extreme test statistics to reject the null hypothesis. Examples are provided to illustrate the difference between one-tailed and two-tailed tests.
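The "more conservative" point can be made concrete by comparing critical z values at the same significance level:

```python
from statistics import NormalDist

# Critical z values at alpha = 0.05: a one-tailed test puts all of alpha
# in one tail, while a two-tailed test splits it between both tails, so
# the two-tailed cutoff is more extreme (harder to reject H0).
alpha = 0.05
z_one = NormalDist().inv_cdf(1 - alpha)       # one-tailed cutoff
z_two = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed cutoff

print(round(z_one, 3), round(z_two, 3))  # 1.645 1.96
```

A test statistic of, say, z = 1.7 would therefore reject H0 in a one-tailed test but not in a two-tailed test at the 5% level.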
This document provides information about the Kruskal-Wallis H test, a non-parametric method for testing whether samples originate from the same distribution. It describes how the Kruskal-Wallis test is a generalization of the Mann-Whitney U test that allows comparison of more than two independent groups. The test works by ranking all data from lowest to highest and then summing the ranks for each group to calculate the test statistic H, which is compared to a chi-squared distribution to determine whether to reject or fail to reject the null hypothesis that all population medians are equal.
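The rank-and-sum procedure described above can be sketched in a minimal form; this version assumes no tied values, and the three groups are made-up illustrative data:

```python
def kruskal_wallis_h(groups):
    # Pool all observations and rank them (1 = smallest).
    # Simplified sketch: assumes every value is distinct (no ties).
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}
    n_total = len(pooled)
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N + 1)
    h = 12 / (n_total * (n_total + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n_total + 1)
    return h

h = kruskal_wallis_h([[1.1, 2.3, 3.0], [4.2, 5.5, 6.1], [7.4, 8.0, 9.9]])
print(round(h, 1))  # 7.2
```

With three groups, H is compared to a chi-squared distribution with 2 degrees of freedom (critical value 5.991 at the 5% level), so these fully separated groups would lead to rejecting the null hypothesis of equal population medians.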
This document provides an introduction to hypothesis testing including:
1. The 5 steps in a hypothesis test: set up null and alternative hypotheses, define test procedure, collect data, decide whether to reject null hypothesis, interpret results.
2. Large sample tests for the mean involve testing if the population mean is equal to or not equal to a specified value using a test statistic that follows a normal distribution.
3. Type I and Type II errors occur when the decision made based on the hypothesis test does not match the actual truth: a Type I error rejects the null hypothesis when it is true, and a Type II error fails to reject the null when it is false. The probability of each error can be reduced by choosing an appropriate significance level and a large enough sample size.
This document provides an overview of non-parametric statistics. It defines non-parametric tests as those that make fewer assumptions than parametric tests, such as not assuming a normal distribution. The document compares and contrasts parametric and non-parametric tests. It then explains several common non-parametric tests - the Mann-Whitney U test, Wilcoxon signed-rank test, sign test, and Kruskal-Wallis test - and provides examples of how to perform and interpret each test.
The document discusses one-way analysis of variance (ANOVA), which compares the means of three or more populations. It provides an example where sales data from three marketing strategies are analyzed using ANOVA. The null hypothesis is that the population means are equal, and it is rejected since the F-statistic is greater than the critical value, indicating at least one mean is significantly different. Post-hoc comparisons using the Bonferroni method find that Strategy 2 (emphasizing quality) has significantly higher sales than Strategy 1 (emphasizing convenience).
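The F statistic behind a one-way ANOVA can be sketched from first principles; the three groups below are made-up illustrative data, not the sales figures from the document:

```python
def one_way_anova_f(groups):
    # One-way ANOVA F statistic: between-group variance over
    # within-group variance.
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    k, n = len(groups), len(all_x)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)   # between-groups mean square, df = k - 1
    msw = ssw / (n - k)   # within-groups mean square, df = n - k
    return msb / msw

f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(round(f, 2))  # 3.0
```

F is then compared to the F(k − 1, n − k) critical value; here F(2, 6) at the 5% level is about 5.14, so with these small illustrative groups the equal-means null would not be rejected.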
Confidence Intervals: Basic concepts and overview (Rizwan S A)
This document provides an overview of confidence intervals. It defines confidence intervals and describes their use in statistical inference to estimate population parameters. It explains that a confidence interval provides a range of plausible values for an unknown population parameter based on a sample statistic. The document outlines the key steps in calculating a confidence interval, including determining the point estimate, standard error, and critical value corresponding to the desired confidence level. It discusses how the width of the confidence interval indicates the precision of the estimate and is affected by factors like the sample size and population variability.
The document provides an overview of inferential statistics and the central limit theorem. It discusses how the central limit theorem states that the sampling distribution of sample means will be approximately normally distributed for large sample sizes, even if the population is not normally distributed. It provides examples and explanations of key concepts like sampling distribution, mean, standard deviation, and normal distribution. The document also covers how to calculate confidence intervals and the importance of the central limit theorem in allowing inferences to be made about populations from sample data.
Application of Central Limit Theorem to Study the Student Skills in Verbal, A... (theijes)
This paper analyses the application of the central limit theorem to studying the Verbal, Aptitude and Reasoning skills of students. The planning of teaching is based on mathematical knowledge about the theorem. The different meanings of the theorem were analysed using the history of its development and previous research studies related to it. The results serve to improve the correct application of the theorem's different elements of meaning when solving the selected problem, and to prepare new proposals for teaching statistics to students. The central limit theorem forms the basis of inferential statistics, and it would be difficult to overestimate its importance. In a statistical study, the sample mean is used to estimate the population mean. However, the number of different samples (of a given size) that could be taken is extremely large, and these different samples would have different means, some lower than the population mean and some higher. The central limit theorem states that, for samples of size n from a normal population, the distribution of sample means is normal, with mean equal to the mean of the population and standard deviation equal to the standard deviation of the population divided by the square root of the sample size. (For suitably large sample sizes, the central limit theorem also applies to populations whose distributions are not normal.)
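The theorem's claim can be checked with a small simulation (an assumed illustration, not taken from the paper): sample means drawn from a decidedly non-normal uniform population cluster around the population mean with spread close to σ/√n:

```python
import random
import statistics

# CLT sketch: means of n = 30 draws from Uniform(0, 1) should be centred
# near the population mean 0.5, with standard deviation close to
# sigma / sqrt(n), where sigma = sqrt(1/12) for Uniform(0, 1).
random.seed(42)
n, reps = 30, 2000
means = [statistics.fmean(random.uniform(0, 1) for _ in range(n))
         for _ in range(reps)]

pop_sigma = (1 / 12) ** 0.5
print(abs(statistics.fmean(means) - 0.5) < 0.01)              # centred near 0.5
print(abs(statistics.stdev(means) - pop_sigma / n ** 0.5) < 0.01)  # ~ sigma/sqrt(n)
```

Both checks pass comfortably: the empirical spread of the 2000 sample means lands very close to the theoretical value σ/√30 ≈ 0.053.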
1. The document discusses hypothesis testing, including defining the null and alternative hypotheses, types of errors, test statistics, and testing differences between population means and differences between two samples.
2. Examples are provided to demonstrate hypothesis testing for one and two sample means. This includes stating the hypotheses, significance level, test statistic, critical region, and conclusion.
3. Assignments are given applying hypothesis testing to compare lung destruction between smokers and non-smokers, serum complement activity between disease and normal subjects, and podiatric problems between elderly diabetic and non-diabetic patients.
This document provides an overview of Module 5 on sampling distributions. It discusses key concepts like parameters vs statistics, sampling variability, and sampling distributions. It explains that the sampling distribution of a sample mean is a normal distribution with a mean equal to the population mean and standard deviation equal to the population standard deviation divided by the square root of the sample size. The central limit theorem states that as the sample size increases, the distribution of sample means will approach a normal distribution regardless of the shape of the population distribution. The module also covers binomial distributions for sample counts and proportions.
The document discusses key concepts related to sampling including the differences between populations and samples, probability and non-probability sampling, and the central limit theorem. It notes that samples are used to make inferences about populations and discusses factors like time, cost, and consumption that necessitate sampling rather than examining entire populations. The central limit theorem states that the distribution of sample means will approach a normal distribution as the sample size increases.
The document discusses key concepts related to sampling including populations, samples, probability and non-probability sampling, variables, sampling error, sampling distributions, and the central limit theorem. It provides examples to illustrate these concepts and implications of the central limit theorem including how it allows inferring properties of the population from a sample.
Statistical Techniques in Business & Economics (McGRAV-HILL) 12 Edt. Chapter ...tarta
This chapter discusses sampling methods and the central limit theorem. It has five learning goals:
1) Explain why sampling is used instead of studying the entire population.
2) Describe methods for selecting a sample, including random sampling techniques.
3) Define and construct the sampling distribution of the sample mean.
4) Explain the central limit theorem and how it applies to sampling distributions.
5) Use the central limit theorem to find probabilities related to sample means.
This document discusses key concepts related to sampling, statistics, and sample size for impact evaluations. It covers how samples relate to populations, sampling variation, the law of large numbers, the central limit theorem, hypothesis testing, and statistical power. The main points are:
1) Samples are subsets of populations that can be used to make inferences about the overall population. Larger sample sizes reduce sampling variation and provide more accurate estimates.
2) According to the central limit theorem, the distribution of sample means will approximate a normal distribution, even if the underlying population is not normal, as long as the sample size is large enough.
3) Hypothesis testing involves comparing sample results to determine if an intervention had a
1. The document discusses key concepts in statistics including population, sampling, random sampling, standard error, and standard error of the mean.
2. A population is the total set of observations, while a sample is a subset selected from the population. Random sampling selects subjects entirely by chance so each member has an equal chance of being selected.
3. The standard error is the standard deviation of a statistic's sampling distribution and indicates how much a statistic may vary between samples. It decreases with larger sample sizes. The standard error of the mean specifically measures how much the sample mean may differ from the population mean.
This document provides an overview of quantitative methods for probability distributions. It discusses key concepts like binomial distribution, normal distribution, standard normal distribution, central limit theorem, point estimates, interval estimates, and confidence intervals. Examples are provided to illustrate how to calculate probabilities, means, and confidence intervals for estimating population parameters based on sample data. Key probability distributions and statistical techniques are defined to analyze and make inferences about data.
The document defines a sampling distribution of sample means as a distribution of means from random samples of a population. The mean of sample means equals the population mean, and the standard deviation of sample means is smaller than the population standard deviation, equaling it divided by the square root of the sample size. As sample size increases, the distribution of sample means approaches a normal distribution according to the Central Limit Theorem.
Biostatistics - the application of statistical methods in the life sciences including medicine, pharmacy, and agriculture.
An understanding is needed in practice issues requiring sound decisions.
Statistics is a decision science.
Biostatistics therefore deals with data.
Biostatistics is the science of obtaining, analyzing and interpreting data in order to understand and improve human health.
Applications of Biostatistics
Design and analysis of clinical trials
Quality control of pharmaceuticals
Pharmacy practice research
Public health, including epidemiology
Genomics and population genetics
Ecology
Biological sequence analysis
Bioinformatics etc.
- Sampling distribution describes the distribution of sample statistics like means or proportions drawn from a population. It allows making statistical inferences about the population.
- The central limit theorem states that sampling distributions of sample means will be approximately normally distributed regardless of the population distribution, if the sample size is large.
- Standard error measures the amount of variability in values of a sample statistic across different samples. It is used to construct confidence intervals for population parameters.
This document discusses inferential statistics and hypothesis testing. It provides examples of researchers formulating hypotheses and collecting data to test them. Researchers take random samples from populations to test if there are meaningful differences between groups. Hypothesis testing involves comparing experimental and control groups after exposing them to different levels of an independent variable. The goal is to determine if the independent variable caused a detectable change in the dependent variable. Inferential statistics are used to test if sample means differ significantly, which would suggest the hypothesis is supported or not supported. Proper sampling and estimating sampling distributions, standard errors, and variability are important concepts for accurately testing hypotheses about populations based on sample data.
Chp11 - Research Methods for Business By Authors Uma Sekaran and Roger BougieHassan Usman
This document discusses sampling and sampling distributions. It defines key concepts like population, sample, probability distributions, sampling distributions, and the central limit theorem. It explains that as sample size increases, the sampling distribution approximates a normal distribution according to the central limit theorem. It also discusses different types of sampling methods like simple random sampling, systematic random sampling, and stratified random sampling.
This document discusses key concepts in sampling and sample size determination. It defines population, parameter, sample, and statistic. A target population refers to the entire group a researcher wishes to generalize to, while an accessible population is the specific study population. Parameter describes a numeric characteristic of the entire population, while statistic describes a numeric characteristic of a sample. The document also outlines factors that influence sample size determination, such as population homogeneity and desired precision. It provides examples of sample size calculations using Slovin's formula and Calmorin's formula.
The document discusses estimation and confidence intervals. It explains the central limit theorem, which states that sample means will approximate a normal distribution as long as sample sizes are sufficiently large. This allows constructing confidence intervals for a population mean using z-scores. The document provides formulas for calculating confidence intervals using point estimates from sample data and outlines how to interpret the resulting confidence intervals. It notes that when the population standard deviation is unknown, a t-distribution can be used if sample sizes are large enough.
This document discusses statistical estimation and confidence intervals. It begins with an overview of the central limit theorem, which states that as sample size increases, the sampling distribution of the sample means will approximate a normal distribution. It then covers how to construct confidence intervals to estimate population parameters like the mean and proportion when the population standard deviation is both known and unknown. The document explains how the t-distribution is used when the population standard deviation is unknown and the sample size is small. It provides examples of how to calculate confidence intervals and determine sample sizes needed based on the central limit theorem.
Clinical Trials Versus Health Outcomes Research: SAS/STAT Versus SAS Enterpri...cambridgeWD
Clinical trials and health outcomes research differ in important ways that impact statistical modeling approaches. Clinical trials typically use homogeneous samples and focus on a single endpoint, while health outcomes data is heterogeneous with multiple endpoints. Predictive modeling techniques used in health outcomes research, like those in SAS Enterprise Miner, are better suited than traditional methods as they can handle complex real-world data without strong assumptions and more accurately predict rare events. Validation of models on separate test data is also important for generalizing results.
Clinical Trials Versus Health Outcomes Research: SAS/STAT Versus SAS Enterpri...cambridgeWD
This document discusses the differences between clinical trials and health outcomes research. Clinical trials use homogeneous samples, surrogate endpoints, and focus on a single outcome. They are also typically underpowered for rare events. Health outcomes research uses heterogeneous data from the general population to examine multiple real endpoints simultaneously. It has larger samples and data that allow analysis of rare occurrences. Predictive modeling is better suited than traditional statistical methods for analyzing heterogeneous health outcomes data due to relaxed assumptions like normality.
Similar to Hypothesis testing: A single sample test (20)
Resumes, Cover Letters, and Applying OnlineBruce Bennett
This webinar showcases resume styles and the elements that go into building your resume. Every job application requires unique skills, and this session will show you how to improve your resume to match the jobs to which you are applying. Additionally, we will discuss cover letters and learn about ideas to include. Every job application requires unique skills so learn ways to give you the best chance of success when applying for a new position. Learn how to take advantage of all the features when uploading a job application to a company’s applicant tracking system.
Job Finding Apps Everything You Need to Know in 2024SnapJob
SnapJob is revolutionizing the way people connect with work opportunities and find talented professionals for their projects. Find your dream job with ease using the best job finding apps. Discover top-rated apps that connect you with employers, provide personalized job recommendations, and streamline the application process. Explore features, ratings, and reviews to find the app that suits your needs and helps you land your next opportunity.
Jill Pizzola's Tenure as Senior Talent Acquisition Partner at THOMSON REUTERS...dsnow9802
Jill Pizzola's tenure as Senior Talent Acquisition Partner at THOMSON REUTERS in Marlton, New Jersey, from 2018 to 2023, was marked by innovation and excellence.
5 Common Mistakes to Avoid During the Job Application Process.pdfAlliance Jobs
The journey toward landing your dream job can be both exhilarating and nerve-wracking. As you navigate through the intricate web of job applications, interviews, and follow-ups, it’s crucial to steer clear of common pitfalls that could hinder your chances. Let’s delve into some of the most frequent mistakes applicants make during the job application process and explore how you can sidestep them. Plus, we’ll highlight how Alliance Job Search can enhance your local job hunt.
Leadership Ambassador club Adventist modulekakomaeric00
Aims to equip people who aspire to become leaders with good qualities,and with Christian values and morals as per Biblical teachings.The you who aspire to be leaders should first read and understand what the ambassador module for leadership says about leadership and marry that to what the bible says.Christians sh
A Guide to a Winning Interview June 2024Bruce Bennett
This webinar is an in-depth review of the interview process. Preparation is a key element to acing an interview. Learn the best approaches from the initial phone screen to the face-to-face meeting with the hiring manager. You will hear great answers to several standard questions, including the dreaded “Tell Me About Yourself”.
3. ABSTRACT
This hypothesis testing is about population means and proportions. A sample
mean or proportion, obtained from a single sample, is compared with the
hypothesized parameter, and a decision is made as to whether or not to reject
the hypothesis. However, it is more important to obtain a good understanding
of the fundamental ideas than to be overly concerned with practical
applications.
4. Introduction
Statistics, first of all, is not a method by which one can prove almost
anything one wants to prove. Hypothesis testing is one part of this vast
field, where a "hypothesis" means a statement to be tested empirically.
Hypothesis testing is among the most important techniques in statistical
inference. Hypothesis tests are widely used in business and industry for
making decisions. In attempting to reach decisions, it is useful to make
assumptions or guesses about the populations involved.
5. Cont…
Such assumptions, which may or may not be true, are called "statistical
hypotheses". A hypothesis is made about the value of some parameter, but the
only facts available to estimate the true parameter are those provided by a
sample. If the sample statistic differs from the hypothesized population
parameter, a decision must be made as to whether or not this difference is
significant. If it is, the hypothesis is rejected; if not, it is accepted.
Hence the term "tests of hypotheses".
6. Definition
The single sample test is one part of hypothesis testing. It is used to
determine whether a sample comes from a population with a specific mean. This
population mean is not always known, but is sometimes hypothesized.
A single sample test uses measurements from one set of items only. This
distinguishes it from two-sample tests, where a hypothesis test is conducted
using two random samples and the appropriate test must be chosen based on
whether the samples are dependent or independent.
7. Cont..
For example, suppose you want to show that a new teaching method for pupils
struggling to learn English grammar can improve their grammar skills to the
national average. The sample would be the pupils who received the new
teaching method, and the population mean would be the national average score.
Alternatively, suppose the doctors who work in Accident and Emergency (A&E)
departments are said to work 100 hours per week despite the dangers (e.g.
tiredness) of working such long hours. We could sample 1,000 doctors in
emergency departments and see whether their hours differ from 100 hours.
8. Central Limit Theorem
The central limit theorem states that the sampling distribution of the mean
of any independent random variable will be normal or nearly normal if the
sample size is large enough. How large is "large enough"? The answer depends
on two factors:
1. Requirement for accuracy. The more closely the sampling distribution
needs to resemble a normal distribution, the more sample points will be
required.
9. Cont…
2. The shape of the underlying population. The more closely the original
population resembles a normal distribution, the fewer sample points will be
required. In practice, some statisticians say that a sample size of 30 is
large enough when the population distribution is roughly bell-shaped. Others
recommend a sample size of at least 40. But if the original population is
distinctly not normal (badly skewed, with multiple peaks, and/or outliers),
researchers prefer the sample size to be even larger.
10. Cont…
Theory of statistical regularity: under general conditions, the average of
data observed over time tends to be distributed as a normal distribution.
Its usefulness lies in its complete generality: no matter how a variable
changes, the sum of its values will show a normal distribution if enough
measurements are taken. It forms the basis of the law of large numbers and
was formulated by the Russian mathematician Alexander Mikhailovich Lyapunov
(1857-1918), drawing upon the work of the French mathematician Pierre Simon
Laplace (1749-1827).
11. Cont…
According to the central limit theorem, the mean of a sample of data will be
closer to the mean of the overall population in question as the sample size
increases, regardless of the actual distribution of the data, whether normal
or non-normal.
Example:
If an investor is looking to analyze the overall return for a stock index
made up of 1,000 stocks, he can take random samples of stocks from the index
to get an estimate for the return of the total index.
12. The samples must be random, and at least 30 stocks must be evaluated in
each sample for the central limit theorem to hold. Random samples ensure
that a broad range of stocks across industries and sectors is represented in
the sample. Stocks previously selected must also be replaced for selection
in other samples to avoid bias. The average returns from these samples
approximate the return for the whole index and are approximately normally
distributed.
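The behavior described above can be demonstrated with a small simulation. This is an illustrative sketch (not from the slides): we sample repeatedly from a deliberately skewed population, an exponential distribution with mean 1 and standard deviation 1, and check that the sample means cluster around the population mean with a spread close to σ/√n.

```python
import random
import statistics

random.seed(42)

# Population: exponential with mean 1.0 (distinctly non-normal, right-skewed)
POP_MEAN = 1.0
SAMPLE_SIZE = 30      # "large enough" per the rule of thumb on the slides
NUM_SAMPLES = 2000

# Draw many samples and record each sample's mean
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(SAMPLE_SIZE))
    for _ in range(NUM_SAMPLES)
]

# Central limit theorem: mean of the sample means is close to the
# population mean, and their spread is close to sigma/sqrt(n) = 1/sqrt(30) ~ 0.183
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

A histogram of `sample_means` would look approximately bell-shaped even though the underlying population is strongly skewed.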
13. USAGE EXAMPLES:
1) The central limit theorem was especially useful in increasing our
understanding of the statistical modeling position we held in our project.
2) If you want to try and predict the future for a product, you can use the
central limit theorem to get a good baseline.
3) Using the central limit theorem will allow you to break down your
company's finances and find out just how well you are doing.
4) It gives you the ability to measure how much the means of various samples
will vary without having to take any other sample means to compare with.
14. Formula of the Central Limit
Theorem:
The central limit theorem states that if we have the mean and standard
deviation of a particular population and we take a large sample from that
population, then the mean of the sampling distribution of the sample mean is
the same as the mean of the population. Its standard deviation is equal to
the standard deviation of the population divided by the square root of the
sample size. The central limit theorem is applicable for a sufficiently
large sample size (n ≥ 30). The formula for the central limit theorem can be
stated as follows:
15. Cont…
µx̄ = µ and σx̄ = σ/√N
Where, µ = population mean
σ = population standard deviation
µx̄ = mean of the sample means
σx̄ = standard deviation of the sample means
N = sample size.
Solved Examples:
Question no. 1: The record of weights of a male population follows a normal
distribution. Its mean and standard deviation are 70 kg and 15 kg
respectively. If a researcher considers the records of 50 males, then what
would be the mean and standard deviation of the chosen sample?
16. Cont…
Solution:
Mean of the population µ = 70 kg
Sample size N = 50
Mean of the sample means is given by:
µx̄ = µ = 70 kg
Standard deviation (SD) of the sample means is given by:
σx̄ = σ/√N = 15/√50 = 2.121 ≈ 2.1 kg (Ans.)
17. Question 2:
At a coastal area the number of crabs caught per day is recorded. The
average is 10 and the S.D. (σ) is 3. If the record of 60 days is chosen
randomly, estimate the mean and standard deviation of the chosen sample.
Solution:
Mean of population µ = 10
Standard deviation of population σ = 3
Sample size n = 60
Mean of the sample means is given by µx̄ = µ, so µx̄ = 10
Standard deviation of the sample means is given by σx̄ = σ/√n = 3/√60
= 0.387
Decision: The mean of the chosen sample is 10 and the SD (σx̄) is 0.387
(approximately).
18. HYPOTHESIS TESTING FOR SMALL SAMPLE
AND POPULATION STANDARD DEVIATION UNKNOWN
When using a test statistic for one population mean, there are two cases
where we must use the t-distribution instead of the z-distribution. The
first case is when the sample size is small (below 30 or so), and the second
case is when the population standard deviation is not known and we have to
estimate it using the sample standard deviation. In both cases we have less
reliable information on which to base our conclusions, so we pay a penalty
by using the t-distribution, which has more variability in the tails than
the z-distribution.
19. REQUIREMENTS: SMALL SAMPLE TEST OF
HYPOTHESIS ABOUT A POPULATION MEAN
1. A random sample is selected from the target population.
2. The population has a relative frequency distribution that is
approximately normal.
Small sample test of hypothesis about µ:
Two tailed:   H∘: µ = µ∘   Ha: µ ≠ µ∘
Left tailed:  H∘: µ = µ∘   Ha: µ < µ∘
Right tailed: H∘: µ = µ∘   Ha: µ > µ∘
20. POPULATION STANDARD DEVIATION (σ) KNOWN
OR UNKNOWN
As with confidence intervals, there are two types of single sample
hypothesis tests:
1. When the population standard deviation (σ) is known or given.
2. When the population standard deviation (σ) is not known and we therefore
have to use an estimate.
When σ is known, we use the standard normal or z-distribution to establish
the non-rejection region and critical values.
When σ is not known, we use the t-distribution instead; every sample size
has its own t-distribution with (n-1) degrees of freedom.
21. T-TEST
The statistician and chemist W.S. Gosset developed it in 1908.
The t-test looks at the t-statistic, the t-distribution, and the degrees of
freedom to determine the probability of a difference between populations.
"The t-test is a hypothesis test that uses the t-statistic and the
t-distribution to arrive at a decision. When small samples are used and when
the population standard deviation is unknown, the hypothesis test about one
mean and the test involving two means is a t-test." [SCHMIDT: 1979; 485]
22. ASSUMPTIONS FOR T-TEST
1. Data are interval or ratio level.
2. A simple random sample has been taken: the data are collected from a
representative, randomly selected portion of the total population.
3. The data, when plotted, result in a normal, bell-shaped distribution
curve.
4. Homogeneity of variance: homogeneous or equal variance exists when the
standard deviations of the samples are approximately equal.
23. Z-Test
A Z-test is a statistical test used to determine whether two population
means are different when the variances are known and the sample size is
large. The test statistic is assumed to have a normal distribution, and
nuisance parameters such as the standard deviation should be known for an
accurate Z-test to be performed.
A Z-test is a hypothesis test that uses a Z-score as the obtained statistic
and the normal distribution. When the population standard deviations are
known, hypothesis tests about one or two means are Z-tests.
[SCHMIDT: 1979; 488]
24. Assumptions of the Z-test:
All parametric statistics have a set of assumptions that must be met in
order to properly use the statistics to test hypotheses. The assumptions of
the Z-test are listed below:
Random sampling from a defined population.
Interval or ratio scale of measurement.
Population is normally distributed.
25. Question No. 1: It is claimed that the mean waste recycled by adults in
the United States is more than 10 pounds per person per month. You want to
test this claim. You find that the mean waste recycled per person per month
for a random sample of 18 adults in the United States is 12.4 pounds and the
standard deviation is 2.7 pounds. At α = 0.01, can you support the claim?
26. Solution:
Let the hypotheses be:
Null hypothesis (H0): µ ≤ µ0
The mean waste recycled for U.S. adults is not more than 10 pounds per
person per month.
Alternative hypothesis (Ha): µ > µ0 (the claim).
Here, given that,
Sample mean x̄ = 12.4
Hypothesized mean µ0 = 10
Sample size n = 18
Sample standard deviation s = 2.7
Here, the sample size is less than 30 and σ is unknown, so it is a t-test.
We know the formula for the t-test:
t = (x̄ − µ0)/(s/√n)
27. Cont…
We can put the values into the formula:
t = (12.4 − 10)/(2.7/√18)
  = 2.4/0.636
  = 3.77
So our calculated value is 3.77.
28. Here,
Degrees of freedom df = n − 1 = 18 − 1 = 17. Since Ha specifies a direction
(µ > 10), it is a one-tailed test.
Here, the level of significance α = 0.01.
So the table value with df = 17 and a 0.01 level of significance
(one-tailed) is 2.567.
We know, when calculated value (cv) ≥ table value (tv), H0 is to be
rejected.
Here we have found:
calculated value (cv) = 3.77 and
table value (tv) = 2.567
Since the calculated value is greater than the table value, we reject the
null hypothesis (H0).
Decision: The claim that the mean recycled waste for U.S. adults is more
than 10 pounds per month is supported.
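As a check on the arithmetic, the t statistic can be recomputed directly from the summary figures (an illustrative sketch, not part of the slides; the helper name `one_sample_t` is ours). Note that 2.4/(2.7/√18) works out to about 3.77:

```python
import math

def one_sample_t(xbar, mu0, s, n):
    """One-sample t statistic when the population sigma is unknown."""
    return (xbar - mu0) / (s / math.sqrt(n))

# Recycled-waste example: x-bar = 12.4, mu0 = 10, s = 2.7, n = 18
t = one_sample_t(12.4, 10, 2.7, 18)
print(round(t, 2))  # 3.77
```

Either way, the statistic comfortably exceeds the one-tailed critical value for df = 17 at α = 0.01, so the decision to reject H0 is unchanged.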
29. Question No. 2:
Boys of a certain age are known to have a mean weight of µ = 85 pounds. A
complaint is made that the boys living in a municipal children's home are
underfed. As a bit of evidence, n = 25 boys are weighed and found to have a
mean weight of x̄ = 80.94 pounds. It is known that the population standard
deviation is 11.6 pounds; the level of significance is 0.05. Based on the
available data, what should be concluded concerning the complaint?
30. Solution:
Let the hypotheses be:
Null hypothesis (H0): µ = µ0
The boys living in a municipal children's home are not underfed.
• Alternative hypothesis (Ha): µ ≠ µ0
• The boys living in a municipal children's home are underfed.
• Here, given that,
Sample mean x̄ = 80.94
Population mean µ0 = 85
Sample size n = 25
Population standard deviation σ = 11.6
Here, the population standard deviation is known, so it is a Z-test.
31. Cont…
We know the formula of the Z-test:
Z = (x̄ − µ0)/(σ/√n)
  = (80.94 − 85)/(11.6/√25)
  = −4.06/2.32
  = −1.75
So our calculated value (cv) = −1.75.
The level of significance α = 0.05.
It is a two-tailed test.
32. Cont…
For a two-tailed test, the area in each tail is 0.05/2 = 0.025, so the area
between the mean and the critical value is 0.475. The corresponding Z value
for an area of 0.4750 is 1.96.
We know, if |calculated value (cv)| ≥ tv, then H0 is to be rejected.
Here,
|calculated value (cv)| = 1.75
table value (tv) = 1.96
Since our calculated value is less than the table value, we fail to reject
the null hypothesis.
Decision: The boys living in a municipal children's home are not underfed.
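The Z statistic for this example can likewise be recomputed from the summary figures (an illustrative sketch, not part of the slides; the helper name `one_sample_z` is ours):

```python
import math

def one_sample_z(xbar, mu0, sigma, n):
    """One-sample z statistic when the population sigma is known."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Children's-home example: x-bar = 80.94, mu0 = 85, sigma = 11.6, n = 25
z = one_sample_z(80.94, 85, 11.6, 25)
print(round(z, 2))  # -1.75
```

Since |−1.75| < 1.96, the two-tailed critical value at α = 0.05, H0 is not rejected.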
33. CHI-SQUARE TEST:
The chi-square test is a statistical test commonly used to compare observed
data with the data we would expect to obtain according to a specific
hypothesis.
A chi-square statistic is a measurement of how expectations compare to
results. The data used in calculating a chi-square statistic must be random,
raw, mutually exclusive, drawn from independent variables, and drawn from a
large enough sample.
A chi-square test is designed to analyze categorical data. That means the
data have been counted and divided into categories.
It is a statistical method assessing the goodness of fit between a set of
observed values and those expected theoretically.
34. Cont…
According to Spiegel, it is "a measure of discrepancy existing between the
observed and expected frequencies."
According to Blalock, "The chi-square test is a very general test that can
be used whenever the researcher wishes to evaluate whether or not
frequencies which have been empirically obtained differ significantly from
those which would be expected under a certain set of theoretical
assumptions."
35. ASSUMPTIONS OF THE CHI-SQUARE
TEST:
1. LEVEL OF MEASUREMENT: Chi-square tests are sometimes used with ordinal
scales and sometimes even interval scales.
2. EXACT TEST: This test is one of a class of "exact tests", because the
significance of the deviation from a null hypothesis can be calculated
exactly.
3. INDEPENDENCE ASSUMPTION: The chi-square test cannot be used on
correlated data.
4. SAMPLING DISTRIBUTION: Distributions are differentiated according to the
degrees of freedom.
5. MODEL: Independent random samples.
36. Cont…
6. THE DATA ARE NOMINAL OR ORDINAL LEVEL.
7. NO EXPECTED CELL FREQUENCY IS LESS THAN 5.
8. CHI-SQUARE GOODNESS OF FIT TEST.
9. CHI-SQUARE TEST OF INDEPENDENCE.
37. PROPERTIES OF THE CHI-SQUARE
DISTRIBUTION:
The chi-square distribution is a continuous probability distribution with
values ranging from 0 to infinity in the positive direction. Chi-square can
never assume negative values.
The total area under a chi-square curve is equal to 1.
Each chi-square curve (except when the degrees of freedom = 1) begins at 0
on the horizontal axis, increases to a peak, and then approaches the
horizontal axis asymptotically from above.
Each chi-square curve is skewed to the right. As the number of degrees of
freedom increases, the curve becomes more and more like a normal curve.
38. Cont…
It is one of the most widely used distributions in statistical
applications.
This distribution may be derived from the normal distribution.
Chi-square is non-negative.
Chi-square is non-symmetric.
The chi-square test is a non-parametric test, which is less restrictive
than parametric tests such as the Z-test.
39. Question:
A certain drug is claimed to be effective in curing colds. In an experiment
on 500 persons with colds, half of them were given the drug and half were
given sugar pills. The patients' reactions to the treatment are recorded in
the following table:

             Helped    Harmed    No effect   Total
Drug         150 (a)   30 (b)    70 (c)      250
Sugar pills  130 (d)   40 (e)    80 (f)      250
Total        280       70        150         500
40. Cont…
On the basis of these data, can it be concluded that there is a significant
difference in the effect of the drug and the sugar pills?
Let the hypotheses be:
H0 (Null hypothesis): There is no significant difference between the drug
and the sugar pills.
Ha (Alternative hypothesis): There is a significant difference between the
drug and the sugar pills.
From the table above we can determine the expected frequencies.
41. Cont…
The formula for the expected frequency (fe) of a cell is
fe = (row total × column total) / grand total
fe(a) = (250 × 280) / 500 = 140
fe(b) = (250 × 70) / 500 = 35
fe(c) = (250 × 150) / 500 = 75
fe(d) = (250 × 280) / 500 = 140
fe(e) = (250 × 70) / 500 = 35
fe(f) = (250 × 150) / 500 = 75
The formula of the chi-square test is χ² = ∑ (fo − fe)² / fe
42. Now we can prepare the chi-square
calculation table:
Cell   fo     fe     fo − fe   (fo − fe)²   (fo − fe)²/fe
a      150    140    10        100          0.714
b      30     35     -5        25           0.714
c      70     75     -5        25           0.333
d      130    140    -10       100          0.714
e      40     35     5         25           0.714
f      80     75     5         25           0.333
                                       ∑ = 3.522
43. Cont…
Our calculated value (cv) of chi-square χ² is 3.522.
Significance level α = 0.05
Degrees of freedom, df = (R − 1)(C − 1)
= (2 − 1)(3 − 1)
= 1 × 2
= 2
At the 0.05 significance level with df = 2, the table value (tv) is
5.99.
We know that if cv ≥ tv, H0 is to be rejected.
Here,
cv = 3.522
tv = 5.99
44. Cont..
Since our calculated value is less than the table value,
we accept H0 and reject Ha.
Decision: There is no significant difference between the
drug and the sugar pills.
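The worked example above can be reproduced in a few lines of Python; this is a sketch using scipy and numpy (an assumed tooling choice, not part of the slides). Note the slides' total of 3.522 comes from rounding each cell to three decimals; the unrounded statistic is 3.524.

```python
# Reproduce the drug / sugar-pill chi-square test of independence.
import numpy as np
from scipy.stats import chi2, chi2_contingency

observed = np.array([[150, 30, 70],    # drug:        helped, harmed, no effect
                     [130, 40, 80]])   # sugar pills: helped, harmed, no effect

# Expected frequency per cell: (row total x column total) / grand total
row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / observed.sum()
print(expected)  # [[140. 35. 75.] [140. 35. 75.]]

# Chi-square statistic: sum over cells of (fo - fe)^2 / fe
stat = ((observed - expected) ** 2 / expected).sum()
print(round(stat, 3))  # 3.524 (3.522 in the slides, which round each cell)

# scipy's built-in contingency test gives the same statistic.
stat2, p, dof, exp = chi2_contingency(observed)
print(round(stat2, 3), dof)  # 3.524 2

# Table (critical) value at alpha = 0.05 with df = (2-1)*(3-1) = 2
critical = chi2.ppf(0.95, df=2)
print(round(critical, 2))  # 5.99

print("reject H0" if stat >= critical else "fail to reject H0")
```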
45. Significance / importance of Chi-
Square test:
One of the most useful and popular tools in social science
research is chi-square. The test has many
applications, the most common of which in the social
sciences are 'contingency' problems in which two
nominal-scale variables have been cross-classified.
[H.M. Blalock, Social Statistics]
46. Weakness of Chi- square test :
The chi-square test is very popularly used in practice. However,
there are a few limitations that we need to be aware of when we
consider using it:
It tends to be less accurate with very small expected frequencies.
It tends to be less accurate for small degrees of freedom and for
small N too.
Chi-square does not measure the strength of the relationship; it
merely measures whether there is a relationship that is not likely
to be due to chance.
It is less powerful than parametric tests, though also less restrictive.
If the observations are not independent of each other, they cannot
be used in a chi-square test in a proper way.
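To illustrate the "strength of relationship" limitation noted above: Cramér's V is a common follow-up statistic (not mentioned in the slides; shown here as an assumed complement) that does measure strength of association on a 0-to-1 scale, computed from the same chi-square value.

```python
# Cramér's V for the drug / sugar-pill table: a strength-of-association
# measure that chi-square itself does not provide (0 = none, 1 = perfect).
import numpy as np

observed = np.array([[150, 30, 70],
                     [130, 40, 80]])

n = observed.sum()
expected = observed.sum(axis=1, keepdims=True) * observed.sum(axis=0) / n
chi2_stat = ((observed - expected) ** 2 / expected).sum()

# V = sqrt( chi2 / (n * (min(rows, cols) - 1)) )
k = min(observed.shape) - 1
v = np.sqrt(chi2_stat / (n * k))
print(round(v, 3))  # 0.084 -> a weak association, even had it been significant
```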
47. A Test of Goodness of Fit of Chi-
Square Test:
The chi-square test enables us to see how well an assumed
theoretical distribution fits the observed data. When
some theoretical distribution is fitted to the given data,
we are always interested in knowing how well this
distribution agrees with the observations. The chi-square
test can give the answer.
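A minimal goodness-of-fit sketch, using a hypothetical fair die rolled 120 times (the counts are invented for illustration): `scipy.stats.chisquare` compares observed counts against the theoretical uniform distribution, exactly as described above.

```python
# Goodness of fit: do these (hypothetical) die-roll counts fit a fair die?
from scipy.stats import chisquare

observed = [22, 17, 19, 25, 16, 21]   # hypothetical counts, 120 rolls total
expected = [20] * 6                   # fair die: 120 / 6 per face

stat, p = chisquare(observed, f_exp=expected)
print(round(stat, 2))  # 2.8

# A large p-value means the theoretical (uniform) distribution
# fits the observed data acceptably; we do not reject it.
print(p > 0.05)  # True
```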
48. Difference between Z-test and T-test:
1. The Z-test is a statistical hypothesis test that assumes a
normal distribution; the T-test follows a Student's t-distribution.
2. The Z-test is appropriate when the sample size is moderate to
large (n ≥ 30); the T-test is appropriate when the sample size is
small (n < 30).
3. Z-tests are preferred when the population standard deviation is
known; T-tests are preferred when it is unknown.
4. The Z-test is less adaptable, while the T-test has many
variants that will suit any need.
5. Z-tests find fewer users than T-tests; the T-test is more
commonly used in practice.
6. No fluctuations of sample variances occur in the Z-test; in the
T-test, fluctuations in the sample variance are taken into account.
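The small-sample point in the comparison above can be made concrete: the t critical value exceeds the z critical value for small samples and converges to it as the sample size (and hence degrees of freedom) grows. A sketch using scipy:

```python
# Two-tailed critical values at alpha = 0.05: z vs t for growing n.
from scipy.stats import norm, t

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)  # z critical value, ~1.96
print(round(z_crit, 3))

for n in (5, 30, 1000):
    t_crit = t.ppf(1 - alpha / 2, df=n - 1)
    print(n, round(t_crit, 3))
# t critical: 2.776 (n=5), 2.045 (n=30), 1.962 (n=1000) -> approaches z
```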
49. Difference between T-distribution and
chi-square distribution:
1. The t-test is a parametric test; the chi-square test is a
non-parametric test.
2. The variables involved in the t-distribution are measured at
the interval level; in the chi-square test, nominal- or
ordinal-level data are used.
3. The t-test may be one-tailed or two-tailed; the chi-square
test is always one-tailed.
4. The critical region of the t-distribution may appear on either
side of the mean; the critical region of the chi-square test
always appears on the right side of the mean.
5. The value of t may be positive or negative; the value of
chi-square is always positive.
6. The sample size is relatively small in the t-test; in the
chi-square test the sample size is high.
50. Conclusion:
Hypothesis testing begins with the drawing of a
sample and the calculation of its characteristics. The
statistical testing of hypotheses is the most important
technique in statistical inference; it is based on probability
and is used to draw conclusions about population parameters.
It is widely used in business and industry for making
decisions. So the major purpose of hypothesis testing
is to choose between two competing hypotheses about
the value of a population parameter.
51. Reference:
Social Statistics, Hubert M. Blalock, Jr.
Business Statistics, S.P. Gupta and M.P. Gupta.
Statistics for Social Science, Anthony Walsh.
Understanding and Using Statistics, Schmidt.
Statistics for Management, Richard I. Levin and
David S. Rubin.