This document outlines how to perform a two-sample z-test to analyze the difference between means of two independent samples. It discusses determining if samples are independent or dependent, stating the null and alternative hypotheses, calculating the test statistic, and making conclusions based on the results. An example compares the mean credit card debt of males and females using a two-sample z-test and finds no significant difference.
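The two-sample z-test described above can be sketched as follows. This is a minimal illustration, not the document's own worked example; the debt figures below are hypothetical placeholders.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    """z statistic and two-tailed p-value for the difference of two means
    (large independent samples; sample SDs stand in for population SDs)."""
    se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)   # standard error of the difference
    z = (mean1 - mean2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))     # two-tailed p-value
    return z, p

# Hypothetical male vs. female credit-card-debt figures, for illustration only
z, p = two_sample_z(mean1=2290, mean2=2370, sd1=750, sd2=800, n1=100, n2=100)
```

With these made-up numbers the p-value exceeds 0.05, so the null hypothesis of equal means would not be rejected, mirroring the document's conclusion of no significant difference.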
The document describes a completely randomized design (CRD) experiment. A CRD is the simplest experimental design where treatments are assigned to experimental units completely at random. Each unit has an equal chance of receiving any treatment. A CRD is best for small numbers of treatments on homogeneous units. Randomization, advantages like flexibility and simple analysis, and disadvantages like potential loss of precision are discussed. The key differences between fixed and random effects in how inferences can be drawn are also outlined.
The document discusses the normal curve and standard scores. It defines the normal curve as a continuous probability distribution that is bell-shaped and symmetric. It was developed by Gauss and Pearson. The normal curve can be divided into areas defined by standard deviations from the mean. Standard scores are raw scores converted to other scales, including z-scores, t-scores, and stanines. Z-scores indicate the distance from the mean in standard deviations. T-scores are on a scale of 50 plus or minus 10. Stanines use a nine-point scale with a mean of 5 and standard deviation of 2.
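The three standard-score scales mentioned above follow directly from the z-score; a small sketch of the conversions, using an assumed raw score of 85 on a test with mean 70 and standard deviation 10:

```python
def z_score(x, mean, sd):
    """Distance of a raw score from the mean, in standard deviations."""
    return (x - mean) / sd

def t_score(z):
    """T-score scale: mean 50, standard deviation 10."""
    return 50 + 10 * z

def stanine(z):
    """Stanine: nine-point scale with mean 5 and SD 2, clipped to 1..9."""
    return max(1, min(9, round(5 + 2 * z)))

z = z_score(85, mean=70, sd=10)   # 1.5 standard deviations above the mean
```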
The normal distribution is a continuous probability distribution defined by its probability density function. A random variable has a normal distribution if its density function is defined by a mean (μ) and standard deviation (σ). The normal distribution is symmetrical and bell-shaped. It is commonly used to approximate other distributions when the sample size is large.
This tutorial provides information about the normal curve and normal distributions. It discusses key characteristics of the normal curve including that most values fall in the middle and fewer values fall at the extremes. It also discusses how to calculate percentages of values that fall within a certain number of standard deviations from the mean. Additional topics covered include using z-scores to standardize values, types of normal distributions that vary in spread, and how data is not always normally distributed if skewed to one side.
The document discusses prospects for slepton searches in future experiments based on previous works. It summarizes the status of the muon anomalous magnetic moment measurement, which shows a 3-4 sigma discrepancy from the Standard Model prediction. This discrepancy could be explained by contributions from new physics, such as supersymmetry. Supersymmetry predicts superpartner particles like sleptons. The author's dissertation will examine the muon g-2 anomaly within the framework of the minimal supersymmetric standard model and study slepton mass bounds and prospects for discovering sleptons at future colliders in a model-independent way to help explain the muon g-2 discrepancy.
The document discusses normal and standard normal distributions. It provides examples of using a normal distribution to calculate probabilities related to bone mineral density test results. It shows how to find the probability of a z-score falling below or above certain values. It also explains how to determine the sample size needed to estimate an unknown population proportion within a given level of confidence.
The document discusses the standard normal distribution. It defines the standard normal distribution as having a mean of 0, a standard deviation of 1, and a bell-shaped curve. It provides examples of how to find probabilities and z-scores using the standard normal distribution table or calculator. For example, it shows how to find the probability of an event being below or above a given z-score, or between two z-scores. It also shows how to find the z-score corresponding to a given cumulative probability.
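The lookups described above (probability below or above a z-score, between two z-scores, and the inverse lookup) can be reproduced without a printed table using Python's standard library:

```python
from statistics import NormalDist

std = NormalDist(mu=0, sigma=1)                # the standard normal distribution

p_below = std.cdf(1.96)                        # P(Z < 1.96)
p_above = 1 - std.cdf(1.96)                    # P(Z > 1.96)
p_between = std.cdf(1.96) - std.cdf(-1.96)     # P(-1.96 < Z < 1.96)
z_for_975 = std.inv_cdf(0.975)                 # z-score for cumulative probability 0.975
```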
The document discusses testing for independence between two variables using a contingency table and chi-square test. It explains how to set up a contingency table with observed and expected frequencies, and how to calculate the chi-square test statistic to determine if the variables are independent or dependent. An example is provided that tests if blood pressure is independent of jogging status using a contingency table and chi-square test.
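The chi-square calculation described above can be sketched in a few lines: expected frequencies come from row total × column total / grand total, and the statistic sums (observed − expected)² / expected. The 2×2 counts below are hypothetical, not the document's jogging example.

```python
def chi_square_independence(table):
    """Chi-square statistic and degrees of freedom for a contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical counts: rows = jogger/non-jogger, columns = high/normal blood pressure
chi2, df = chi_square_independence([[34, 16], [26, 24]])
```

Comparing `chi2` against the critical chi-square value for `df` degrees of freedom at the chosen significance level decides whether the variables are independent.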
1. The document proves that the sum of two independent random variables with Poisson distributions is a sufficient statistic for the parameter lambda.
2. The conditional distribution of X1 and X2, given the value of their sum T, does not depend on lambda.
3. Therefore, the sum T is a sufficient statistic for lambda.
The document introduces the Gaussian or normal distribution, its key properties, and how it can be used for inference. The Gaussian distribution is symmetrical and bell-shaped. It is completely defined by its mean and standard deviation. By transforming data into z-scores, the standard normal distribution can be applied to understand the probabilities of outcomes in any normal distribution. The Gaussian distribution and z-scores allow researchers to assess likelihoods and make inferences about variable values based on their known distribution.
The document discusses Chi-Square tests, which are used when assumptions of normality are violated. It provides requirements for Chi-Square tests, including that variables must be independent and samples sufficiently large. The key steps are outlined: determine appropriate test, establish significance level, formulate hypotheses, calculate test statistic using frequencies, determine degrees of freedom, and compare to critical value. An example compares party membership to opinions on gun control to demonstrate a Chi-Square test of independence.
The document discusses organizing and summarizing data using frequency distributions. It defines key terms like frequency distribution, class width, boundaries, and midpoints. Examples are provided to demonstrate how to construct frequency distributions, calculate values, and interpret results. Comparing distributions can reveal differences in datasets. Gaps may indicate separate populations in the data.
The document discusses the normal curve and its key properties. A normal curve is a bell-shaped distribution that is symmetrical around the mean value, with half of the data falling above and half below the mean. The standard deviation measures how spread out the data is from the mean. In a normal distribution, 68% of the data lies within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3 standard deviations, following the 68-95-99.7 rule.
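The 68-95-99.7 rule stated above can be verified numerically from the standard normal CDF:

```python
from statistics import NormalDist

nd = NormalDist()   # standard normal: mean 0, SD 1

def within(k):
    """P(|Z| < k): probability of falling within k standard deviations of the mean."""
    return nd.cdf(k) - nd.cdf(-k)

one_sd, two_sd, three_sd = within(1), within(2), within(3)
```

The exact values are about 68.27%, 95.45%, and 99.73%, which round to the familiar 68-95-99.7 rule.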
The document discusses various transformations that can be applied to variables in SPSS to satisfy assumptions of normality, homogeneity of variance, and linearity. It describes logarithmic, square root, inverse, and square transformations and how to compute them in SPSS. Adjustments may need to be made to variable values depending on minimum/maximum values and distribution skew. The document provides examples of computing each transformation for a variable measuring time spent online.
This document defines and discusses quartiles, deciles, and percentiles. Quartiles divide a data set into four equal parts, with the first quartile (Q1) representing the lowest 25% of values. Deciles divide data into ten equal parts. Percentiles indicate the value below which a certain percentage of observations fall. Examples are provided for calculating Q1, Q3, D1 using formulas for grouped and ungrouped data sets. Quartiles, deciles, and percentiles are commonly used to summarize and report on statistical data.
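For ungrouped data, the quantile positions described above can be computed with the standard library; note that `statistics.quantiles` uses the exclusive (n+1) interpolation method by default, which matches the common textbook formula but may differ from other conventions. The data set below is illustrative.

```python
from statistics import quantiles

data = [2, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15]   # illustrative ungrouped data

q1, q2, q3 = quantiles(data, n=4)           # quartiles Q1, Q2 (median), Q3
deciles = quantiles(data, n=10)             # cut points D1 through D9
percentile_90 = quantiles(data, n=100)[89]  # P90: 90% of observations fall below
```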
Discrete Random Variable (Probability Distribution), by LeslyAlingay
This presentation helps statistics teachers discuss discrete random variables, since it includes examples and solutions.
Content:
-definition of random variable
-creating a frequency distribution table
- creating a histogram
-solving for the mean, variance and standard deviation.
References:
http://www.elcamino.edu/faculty/klaureano/documents/math%20150/chapternotes/chapter6.sullivan.pdf
https://www.mathsisfun.com/data/random-variables-mean-variance.html
https://www.youtube.com/watch?v=OvTEhNL96v0
https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch12/5214891-eng.htm
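The mean, variance, and standard deviation of a discrete random variable, as listed in the content above, can be sketched directly from its probability distribution; the distribution below is a made-up example.

```python
from math import sqrt

# A hypothetical discrete probability distribution: value -> probability
dist = {0: 0.1, 1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1}

mu = sum(x * p for x, p in dist.items())               # mean: E[X]
var = sum(p * (x - mu) ** 2 for x, p in dist.items())  # variance: E[(X - mu)^2]
sd = sqrt(var)                                         # standard deviation
```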
This document discusses key properties of several probability distributions including the binomial, Poisson, and normal distributions. It explains that the binomial distribution is defined by the number of trials (n) and probability of success (p), while the Poisson distribution is defined solely by its mean. The normal distribution is then described as being defined by its mean and standard deviation. It proceeds to outline several distinguishing features of the normal distribution, including being unimodal, symmetrical, and asymptotic.
The F-distribution is used to compare the variances of two populations. The F statistic is defined as the ratio of the sample variances of two normally distributed populations. The F-distribution depends on the degrees of freedom v1 and v2, which are based on the sample sizes. The null hypothesis is that the two population variances are equal. If the calculated F-value exceeds the critical value from tables, the null hypothesis is rejected.
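The variance-ratio test above can be sketched as follows, with the larger variance conventionally placed in the numerator so F ≥ 1; the two samples are illustrative.

```python
from statistics import variance

def f_statistic(sample1, sample2):
    """F ratio of two sample variances, larger variance in the numerator.
    Returns (F, numerator df, denominator df)."""
    v1, v2 = variance(sample1), variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

# Illustrative samples: F is then compared to the tabulated critical value
f, df1, df2 = f_statistic([20, 22, 25, 28, 30], [18, 19, 20, 21, 22])
```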
This document discusses sequential digital circuits and various counter circuits. It begins with an introduction to sequential circuits and how they differ from combinational circuits in their ability to store state. Common storage elements like latches and flip-flops are described along with their characteristics. Various types of latches and flip-flops such as D, JK, and T flip-flops are defined. The document then covers counter circuits like synchronous and asynchronous counters. Specific counter circuits like ring counters and Johnson counters are explained. Implementation of 4-bit synchronous and asynchronous counters using flip-flops is demonstrated. Finally, a decade counter integrated circuit is briefly described.
Cumulative frequency is found by adding each successive frequency in a frequency distribution table to a running total, so the total gradually builds up. A cumulative frequency histogram uses the cumulative frequency column from the table to create a graph that looks like upward steps, and an ogive is a line graph drawn on the histogram that connects the corners of each successive column, starting from the bottom-left corner, to show the accumulating total.
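The running total described above is a prefix sum over the frequency column; the class frequencies below are illustrative.

```python
from itertools import accumulate

frequencies = [3, 5, 8, 6, 2]               # class frequencies from a table
cumulative = list(accumulate(frequencies))  # running totals for the ogive
```

The last cumulative value always equals the total number of observations, which is a quick check when building the table by hand.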
Kendall's tau is a nonparametric statistic that measures the ordinal association between two variables. It calculates the number of concordant and discordant pairs to determine the tau coefficient between -1 and 1, where higher positive values indicate a stronger monotonic relationship. Kendall's tau is often used as a hypothesis test of statistical dependence between variables and has advantages over Spearman's rho such as better statistical properties and direct interpretation. A partial correlation measures the relationship between two variables while controlling for one or more other variables. A scatter plot graphs the relationship between two quantitative variables with one on the x-axis and one on the y-axis to identify outliers, correlation, and the type of relationship.
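The concordant/discordant pair counting described above can be sketched directly; this computes tau-a (no tie correction), which coincides with the usual tau when there are no tied ranks.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # pair ordered the same way in both variables
        elif s < 0:
            discordant += 1   # pair ordered oppositely
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Illustrative ranks: one swapped pair out of ten
tau = kendall_tau([1, 2, 3, 4, 5], [1, 2, 3, 5, 4])
```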
The document discusses properties of normal distributions and the standard normal distribution. It provides examples of finding probabilities and values associated with normal distributions. The key points are:
- Normal distributions are continuous and bell-shaped. The mean, median and mode are equal.
- The standard normal distribution has a mean of 0 and standard deviation of 1.
- Probabilities under the normal curve can be found using z-scores and the standard normal table.
- Values like z-scores can be determined by finding the corresponding cumulative area in the standard normal table.
Final generalized linear modeling by idrees waris iugc (Id'rees Waris)
This document discusses generalized linear models (GLM). It begins by introducing the topic and outlines the main points to be covered, including the history of GLM, assumptions for using GLM, and how to run GLM in SPSS. The document then covers the components of GLM, including the random, systematic, and link components. It discusses various distributions and link functions that can be used in GLM. The document concludes by providing an example of how to analyze shipping damage incident data using Poisson GLM in SPSS.
This document provides information about the normal distribution and related statistical concepts. It begins with learning objectives and definitions of key terms like the normal distribution formula and how the mean and standard deviation affect the shape of the distribution. It then discusses properties of the normal distribution like symmetry and how it extends infinitely in both directions. The next sections cover areas under the normal curve and how to calculate probabilities using the standard normal distribution table. Later sections explain how to convert variables to standard scores using z-scores and the concepts of skewness and sampling distributions. Examples and exercises are provided throughout to illustrate calculating probabilities and percentiles for the normal distribution.
An independent t-test is used to compare the means of two independent groups on a continuous dependent variable. It tests if there is a statistically significant difference between the population means of the two groups. The test assumes the groups are independent, the dependent variable is normally distributed for each group, and the groups have equal variances. To perform the test, the researcher states the hypotheses, sets an alpha level, calculates the t-statistic and degrees of freedom, and determines whether to reject or fail to reject the null hypothesis by comparing the t-statistic to the critical value.
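The pooled-variance t statistic implied by the equal-variances assumption above can be sketched as follows; the two score samples are hypothetical.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(sample1, sample2):
    """t statistic and degrees of freedom for two independent samples,
    assuming equal population variances (pooled estimate)."""
    n1, n2 = len(sample1), len(sample2)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * variance(sample1) + (n2 - 1) * variance(sample2)) / (n1 + n2 - 2)
    t = (mean(sample1) - mean(sample2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores for two independent groups
t, df = pooled_t([83, 78, 90, 85, 84], [78, 74, 80, 76, 72])
```

The computed t is then compared to the critical t value for `df` degrees of freedom at the chosen alpha level.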
This document outlines how to perform hypothesis tests to compare the means of two independent samples. It discusses using a two-sample z-test when samples are large and normally distributed, and a two-sample t-test when samples are small. The key steps are to state the null and alternative hypotheses, calculate the test statistic, find the critical value, make a decision to reject or fail to reject the null hypothesis, and interpret the results. Examples are provided to demonstrate these tests.
Hypothesis testing part iii for difference of means, by Nadeem Uddin
This document discusses hypothesis testing for the difference between means. It provides three examples that demonstrate how to perform hypothesis tests to compare the means of two samples. The examples show how to: 1) State the null and alternative hypotheses, 2) Determine the test statistic and critical region based on the hypotheses, level of significance and test assumptions, 3) Perform calculations to obtain the test statistic value, and 4) Make a conclusion about whether to reject or fail to reject the null hypothesis based on comparing the test statistic to the critical region.
This document describes the steps for conducting an independent samples t-test. The t-test is used to compare the means of two independent groups on a continuous dependent variable. It tests whether the means of the two groups are statistically significantly different from each other. The steps include: 1) stating the null and alternative hypotheses, 2) setting the significance level, 3) calculating the t-value, 4) finding the critical t-value, and 5) making a conclusion about whether to reject the null hypothesis based on the t-values. An example compares math test scores of male and female college students to determine if gender significantly impacts scores.
This document provides an overview of statistical tests commonly used in neuroimaging such as t-tests, ANOVAs, and regression. It discusses the purposes of these tests and how they are applied. T-tests are used to compare means, for example to determine if the difference between two conditions is statistically significant. ANOVAs examine variances and can be used when comparing more than two groups. Regression allows describing and predicting the relationship between variables and is useful in the general linear model approach used in SPM. Key assumptions and calculations for each method are outlined.
This chapter discusses methods for hypothesis testing and constructing confidence intervals for two populations or groups. It provides examples comparing testosterone levels before and after having children, weight loss from a diet, and approval ratings between age groups. The chapter explores the processes and formulas for hypothesis tests and confidence intervals involving two proportions, including a worked example comparing reported rates of cheating between husbands and wives.
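The two-proportion hypothesis test mentioned above uses a pooled proportion for the standard error; a sketch with hypothetical counts (not the chapter's actual cheating data):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-tailed p-value for H0: p1 = p2,
    using the pooled sample proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts of "yes" responses in two groups of 250
z, p_value = two_proportion_z(x1=53, n1=250, x2=36, n2=250)
```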
The document discusses statistical concepts including Gaussian distributions, standard deviation, confidence intervals, t-tests, and calibration curves. It provides examples of how to calculate the mean, standard deviation, confidence intervals using t-tables, and how to perform t-tests to compare two data sets. It also describes constructing a calibration curve using the method of least squares to determine the best-fit line and using that line to find the concentration of an unknown sample.
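The confidence-interval calculation from a t-table described above can be sketched as follows; the five replicate measurements are hypothetical, and the t value 2.776 is the standard two-tailed 95% critical value for 4 degrees of freedom.

```python
from math import sqrt
from statistics import mean, stdev

def confidence_interval(data, t_critical):
    """Mean +/- t * s / sqrt(n); t_critical is read from a t-table
    for n - 1 degrees of freedom at the desired confidence level."""
    m, s, n = mean(data), stdev(data), len(data)
    margin = t_critical * s / sqrt(n)
    return m - margin, m + margin

# Five hypothetical replicate measurements; 2.776 = t(0.95, 4 df)
low, high = confidence_interval([10.2, 10.4, 9.9, 10.1, 10.4], t_critical=2.776)
```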
This document discusses methods for comparing two population or treatment means, including notation, hypothesis tests, and confidence intervals. Key points covered include:
1) Notation for comparing two means includes the sample size, mean, variance, and standard deviation for each population or treatment.
2) Hypothesis tests for comparing two means can use a z-test if the population standard deviations are known, or a two-sample t-test if the standard deviations are unknown.
3) Confidence intervals can be constructed for the difference between two population means using a t-distribution, assuming independent random samples of sufficient size or approximately normal populations.
This document outlines the steps for conducting a hypothesis test comparing the means of two independent samples. The test assumes simple random sampling, independent samples drawn from populations much larger than the sample sizes, and approximately normally distributed sampling distributions. The steps are: 1) state the null and alternative hypotheses, 2) choose a two-sample t-test method, 3) calculate the t-statistic and p-value, 4) compare the p-value to the significance level and reject or fail to reject the null hypothesis.
This document provides an overview of basic statistical concepts and terms. It discusses variables, observational vs experimental research, dependent and independent variables, measurement scales, systematic and random errors, accuracy vs precision, populations, distributions like binomial and normal, central tendency, dispersion, and other key statistical concepts. Examples are provided to illustrate statistical terminology.
1. This document contains 11 multi-part math problems involving systems of equations and inequalities. The problems cover topics such as solving systems graphically, algebraically, and determining if ordered pairs are solutions. They also involve word problems about ages, expenses, and splitting amounts into parts.
2. Key steps addressed include setting up tables of values, identifying line types, finding the solution set intersection, using substitution or elimination methods, stating yes or no for ordered pairs, and drawing graphs of solution sets for systems of inequalities.
3. The problems progress from simpler systems to more complex ones involving multiple equations or inequalities, requiring skills like algebraic manipulation, graphical analysis, and translating word problems into mathematical systems.
Analysis of variance (ANOVA) is a statistical technique used to test if the means of two or more populations are equal. It involves computing test statistics F from the ratio of mean sum of squares due to treatments and mean sum of squares due to errors. The computed F value is then compared to a critical value from the F-distribution to determine if the null hypothesis that the population means are equal can be rejected. Key assumptions for ANOVA include independent random samples from normally distributed populations with equal variances.
This chapter discusses methods for forming confidence intervals and conducting hypothesis tests to compare two population parameters, such as means, proportions, or variances. It covers topics like confidence intervals for the difference between two independent population means when the variances are known or unknown, confidence intervals for dependent sample means from before-after studies, and confidence intervals for comparing two independent population proportions. Examples are provided to demonstrate how to calculate confidence intervals for differences in means using pooled variances and how to form confidence intervals to compare proportions from two populations.
Algebra: Solving Open Sentences Involving Absolute Value
This document provides instruction on solving absolute value equations and inequalities. It begins by explaining what absolute value equations like |x|=5 mean and the two cases to consider when solving. Examples are provided of solving equations like |x+4|=5 and |x-7|=8. The document also explains how to write absolute value equations and solve inequalities, providing examples like solving |3y-3|>9 for the values of y. Key steps are outlined for both absolute value equations and inequalities.
This document summarizes key concepts regarding the chi-square distribution and its applications to statistical tests. It discusses:
1) The mathematical properties of the chi-square distribution and how it can be derived from the normal distribution.
2) Examples of chi-square goodness-of-fit tests to determine if sample data fits an expected distribution like the normal.
3) How chi-square tests of independence can assess if two criteria of classification applied to data are independent.
4) Additional chi-square tests of homogeneity and Fisher's exact test. Formulas and steps for calculating test statistics are provided.
1. Chapter 8
Hypothesis Testing with Two
Samples
Math 117 --- Eddie Laanaoui
2. Chapter Outline
• 8.1 Testing the Difference Between Means (Large
Independent Samples)
• 8.2 Testing the Difference Between Means (Small
Independent Samples)
• 8.3 Testing the Difference Between Means
(Dependent Samples)
• 8.4 Testing the Difference Between Proportions
3. Section 8.1
Testing the Difference Between
Means (Large Independent Samples)
(Two-sample z-test)
4. Section 8.1 Objectives
• Determine whether two samples are independent or
dependent
• Perform a two-sample z-test for the difference
between two means μ1 and μ2 using large independent
samples
5. Two Sample Hypothesis Test
• Compares two parameters from two populations.
• Sampling methods:
Independent Samples
• The sample selected from one population is not
related to the sample selected from the second
population.
Dependent Samples (paired or matched samples)
• Each member of one sample corresponds to a
member of the other sample.
6. Independent and Dependent Samples
[Diagram: for independent samples, Sample 1 and Sample 2 are drawn separately from unrelated groups; for dependent samples, each member of Sample 1 is paired with a member of Sample 2]
7. Example: Independent and Dependent
Samples
Classify the pair of samples as independent or
dependent.
•Sample 1: Resting heart rates of 35 individuals before
drinking coffee.
•Sample 2: Resting heart rates of the same individuals
after drinking two cups of coffee.
Solution:
Dependent Samples (The samples can be paired with
respect to each individual)
8. Example: Independent and Dependent
Samples
Classify the pair of samples as independent or
dependent.
•Sample 1: Test scores for 35 statistics students.
•Sample 2: Test scores for 42 biology students who do
not study statistics.
Solution:
Independent Samples (Not possible to form a pairing
between the members of the samples; the sample sizes
are different, and the data represent scores for different
individuals.)
9. Two Sample Hypothesis Test with
Independent Samples
1. Null hypothesis H0
A statistical hypothesis that usually states there is
no difference between the parameters of two
populations.
Always contains the symbol ≤, =, or ≥.
2. Alternative hypothesis Ha
A statistical hypothesis that is true when H0 is
false.
Always contains the symbol >, ≠, or <.
10. Two Sample Hypothesis Test with
Independent Samples
H0: μ1 = μ2 H0: μ1 ≤ μ2 H0: μ1 ≥ μ2
Ha: μ1 ≠ μ2 Ha: μ1 > μ2 Ha: μ1 < μ2
Regardless of which hypotheses you use, you
always assume there is no difference between the
population means, or μ1 = μ2.
11. Two Sample z-Test for the Difference
Between Means
Three conditions are necessary to perform a z-test for
the difference between two population means μ1 and μ2.
1.The samples must be randomly selected.
2.The samples must be independent.
3.Each sample size must be at least 30, or, if not, each
population must have a normal distribution with a
known standard deviation.
12. Two Sample z-Test for the Difference
Between Means
If these requirements are met, the sampling distribution for x̄1 − x̄2 (the difference of the sample means) is a normal distribution with
Mean: μ(x̄1 − x̄2) = μ1 − μ2
Standard error: σ(x̄1 − x̄2) = √(σ1²/n1 + σ2²/n2)
[Figure: the sampling distribution of x̄1 − x̄2, centered at μ1 − μ2 with standard deviation σ(x̄1 − x̄2)]
13. Two Sample z-Test for the Difference
Between Means
• The test statistic is x̄1 − x̄2.
• The standardized test statistic is
z = [(x̄1 − x̄2) − (μ1 − μ2)] / σ(x̄1 − x̄2),  where σ(x̄1 − x̄2) = √(σ1²/n1 + σ2²/n2)
• When the samples are large, you can use s1 and s2 in place of σ1 and σ2. If the samples are not large, you can still use a two-sample z-test, provided the populations are normally distributed and the population standard deviations are known.
14. Using a Two-Sample z-Test for the
Difference Between Means (Large
Independent Samples)
1. State the claim mathematically. Identify the null and alternative hypotheses. (In symbols: state H0 and Ha.)
2. Specify the level of significance. (In symbols: identify α.)
15. Using a Two-Sample z-Test for the
Difference Between Means (Large
Independent Samples)
3. Find the standardized test statistic: z = [(x̄1 − x̄2) − (μ1 − μ2)] / σ(x̄1 − x̄2)
4. Make a decision to reject or fail to reject the null hypothesis: if the P-value is small (less than α), reject H0; otherwise, fail to reject H0.
5. Interpret the decision in the context of the original claim.
16. Example: Two-Sample z-Test for the
Difference Between Means
A consumer education organization claims that there is a
difference in the mean credit card debt of males and
females in the United States. The results of a random
survey of 200 individuals from each group are shown
below. The two samples are independent. Do the results
support the organization’s claim? Use α = 0.05.
Females (1) Males (2)
x̄1 = $2290 x̄2 = $2370
s1 = $750 s2 = $800
n1 = 200 n2 = 200
17. Solution: Two-Sample z-Test for the
Difference Between Means
• H0: μ1 = μ2
• Ha: μ1 ≠ μ2 (claim)
• α = 0.05; n1 = 200, n2 = 200
• Test statistic: z = [(2290 − 2370) − 0] / √(750²/200 + 800²/200) ≈ −1.03
• P-value = 0.302
• Decision: Fail to reject H0. At the 5% level of significance, there is not enough evidence to support the organization’s claim that there is a difference in the mean credit card debt of males and females.
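The credit-card-debt calculation can be reproduced in a few lines of Python (a minimal sketch using only the standard library; the standard normal CDF is obtained from math.erfc):

```python
import math

# Two-sample z-test for the credit-card-debt example (large independent samples).
# The sample standard deviations s1, s2 stand in for the unknown σ1, σ2.
x1_bar, s1, n1 = 2290, 750, 200   # females
x2_bar, s2, n2 = 2370, 800, 200   # males

# Standard error of x̄1 − x̄2
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# Standardized test statistic (hypothesized difference μ1 − μ2 = 0)
z = (x1_bar - x2_bar) / se

# Two-tailed P-value from the standard normal CDF
p_value = 2 * (0.5 * math.erfc(abs(z) / math.sqrt(2)))

print(round(z, 2), round(p_value, 3))  # → -1.03 0.302
```

Since the P-value (0.302) exceeds α = 0.05, the code reaches the same decision as the slide: fail to reject H0.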
18. Example: Using Technology to Perform a
Two-Sample z-Test
The American Automobile Association claims that the
average daily cost for meals and lodging for vacationing in
Texas is less than the same average costs for vacationing in
Virginia. The table shows the results of a random survey of
vacationers in each state. The two samples are independent.
At α = 0.01, is there enough evidence to support the claim?
Texas (1) Virginia (2)
x̄1 = $248 x̄2 = $252
s1 = $15 s2 = $22
n1 = 50 n2 = 35
19. Solution: Using Technology to Perform a
Two-Sample z-Test
• H0: μ1 ≥ μ2
• Ha: μ1 < μ2 (claim)
[TI-83/84 screenshots: 2-SampZTest setup, with Calculate and Draw output]
20. Section 8.1 Summary
• Determined whether two samples are independent or
dependent
• Performed a two-sample z-test for the difference
between two means μ1 and μ2 using large independent
samples
21. Section 8.2
Testing the Difference Between
Means (Small Independent Samples)
(Two-sample t-test)
22. Section 8.2 Objectives
• Perform a t-test for the difference between two means
μ1 and μ2 using small independent samples
23. Two Sample t-Test for the Difference
Between Means
• If samples of size less than 30 are taken from normally-
distributed populations, a t-test may be used to test the
difference between the population means μ1 and μ2.
• Three conditions are necessary to use a t-test for small
independent samples.
1. The samples must be randomly selected.
2. The samples must be independent.
3. Each population must have a normal distribution.
24. Two Sample t-Test for the Difference
Between Means
• The standardized test statistic is
t = [(x̄1 − x̄2) − (μ1 − μ2)] / σ(x̄1 − x̄2)
• The standard error and the degrees of freedom of the sampling distribution depend on whether the population variances σ1² and σ2² are equal.
25. Two Sample t-Test for the Difference
Between Means
• Variances are not equal (choose "not pooled" on your calculator)
If the population variances are not equal, then the standard error is
σ(x̄1 − x̄2) = √(s1²/n1 + s2²/n2)
and d.f. = smaller of n1 − 1 or n2 − 1.
26. Two Sample t-Test for the Difference
Between Means
• Variances are equal (choose "pooled" on your calculator)
Information from the two samples is combined to calculate a pooled estimate of the standard deviation σ̂:
σ̂ = √[ ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2) ]
The standard error for the sampling distribution of x̄1 − x̄2 is
σ(x̄1 − x̄2) = σ̂ · √(1/n1 + 1/n2)
and d.f. = n1 + n2 − 2.
27. Normal or t-Distribution?
Are both sample sizes at least 30?
• Yes → two-sample z-test (s1 and s2 may replace σ1 and σ2).
• No → Are both population standard deviations known (and both populations normal)?
  • Yes → two-sample z-test.
  • No → Are the population variances equal?
    • Yes → two-sample t-test with pooled s.d., d.f. = n1 + n2 − 2.
    • No → two-sample t-test, not pooled, d.f. = smaller of n1 − 1 or n2 − 1.
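The decision flow above can be sketched as a small helper function (an illustration only; the name choose_test and the boolean flags are mine, not from the slides):

```python
def choose_test(n1, n2, sigmas_known, variances_equal):
    """Return the test and degrees of freedom suggested by the flowchart.

    sigmas_known: both population standard deviations are known (and the
    populations are normal). variances_equal: the pooled/not-pooled choice.
    """
    if n1 >= 30 and n2 >= 30:
        return ("z-test", None)          # large samples: s1, s2 may replace σ1, σ2
    if sigmas_known:
        return ("z-test", None)          # small samples, σ known, normal populations
    if variances_equal:
        return ("t-test pooled", n1 + n2 - 2)
    return ("t-test not pooled", min(n1 - 1, n2 - 1))

print(choose_test(8, 10, False, False))  # → ('t-test not pooled', 7)
```

With the braking-distance sample sizes (8 and 10, variances not equal) it lands on the not-pooled t-test with d.f. = 7, matching the worked example that follows.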
28. Example: Two-Sample t-Test for the
Difference Between Means
The braking distances of 8 Volkswagen GTIs and 10 Ford
Focuses were tested when traveling at 60 miles per hour on
dry pavement. The results are shown below. Can you
conclude that there is a difference in the mean braking
distances of the two types of cars? Use α = 0.01. Assume the
populations are normally distributed and the population
variances are not equal. (Adapted from Consumer Reports)
GTI (1) Focus (2)
x̄1 = 134 ft x̄2 = 143 ft
s1 = 6.9 ft s2 = 2.6 ft
n1 = 8 n2 = 10
29. Solution: Two-Sample t-Test for the
Difference Between Means
• H0: μ1 = μ2
• Ha: μ1 ≠ μ2 (claim)
• α = 0.01; d.f. = 8 − 1 = 7
• Test statistic: t = [(134 − 143) − 0] / √(6.9²/8 + 2.6²/10) ≈ −3.496
• P-value = 0.0073
• Decision: Reject H0. At the 1% level of significance, there is enough evidence to conclude that the mean braking distances of the cars are different.
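A quick check of this slide's arithmetic in Python (a sketch; it computes only the statistic and the conservative d.f., since the P-value needs a t-distribution table or software):

```python
import math

# Not-pooled two-sample t-test statistic for the braking-distance example.
x1_bar, s1, n1 = 134, 6.9, 8     # GTI
x2_bar, s2, n2 = 143, 2.6, 10    # Focus

se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # not-pooled standard error
t = (x1_bar - x2_bar) / se                # hypothesized μ1 − μ2 = 0
df = min(n1 - 1, n2 - 1)                  # smaller of n1 − 1 or n2 − 1

print(round(t, 3), df)  # → -3.496 7
```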
30. Example: Two-Sample t-Test for the
Difference Between Means
A manufacturer claims that the calling range (in feet) of its
2.4-GHz cordless telephone is greater than that of its leading
competitor. You perform a study using 14 randomly selected
phones from the manufacturer and 16 randomly selected
similar phones from its competitor. The results are shown
below. At α = 0.05, can you support the manufacturer’s
claim? Assume the populations are normally distributed and
the population variances are equal.
Manufacturer (1) Competition (2)
x̄1 = 1275 ft x̄2 = 1250 ft
s1 = 45 ft s2 = 30 ft
n1 = 14 n2 = 16
31. Solution: Two-Sample t-Test for the
Difference Between Means
• H0: μ1 ≤ μ2
• Ha: μ1 > μ2 (claim)
• α = 0.05; d.f. = 14 + 16 − 2 = 28
• Test statistic: t = 1.811
• P-value = 0.0404
• Decision: Reject H0. At the 5% level of significance, there is enough evidence to support the manufacturer’s claim that its phone has a greater calling range than its competitor’s.
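The pooled calculation can be verified the same way (a sketch using only the standard library):

```python
import math

# Pooled two-sample t-test statistic for the cordless-phone example.
x1_bar, s1, n1 = 1275, 45, 14    # manufacturer
x2_bar, s2, n2 = 1250, 30, 16    # competitor

# Pooled estimate of the common standard deviation
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
se = sp * math.sqrt(1 / n1 + 1 / n2)
t = (x1_bar - x2_bar) / se
df = n1 + n2 - 2

print(round(t, 3), df)  # → 1.811 28
```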
32. Section 8.2 Summary
• Performed a t-test for the difference between two
means μ1 and μ2 using small independent samples
33. Section 8.3
Testing the Difference Between
Means (Dependent Samples)
(t-test on the paired differences, e.g. stored in list L3 on a TI-83/84)
34. Section 8.3 Objectives
• Perform a t-test to test the mean of the difference for
a population of paired data
35. t-Test for the Difference Between Means
• To perform a two-sample hypothesis test with
dependent samples, the difference between each data
pair is first found:
d = x1 − x2   (the difference between entries for a data pair)
• The test statistic is the mean d̄ of these differences:
d̄ = (Σd) / n   (the mean of the differences between paired data entries in the dependent samples)
36. t-Test for the Difference Between Means
Three conditions are required to conduct the test.
1. The samples must be randomly selected.
2. The samples must be dependent (paired).
3. Both populations must be normally distributed.
If these conditions are met, then the sampling
distribution for d is approximated by a t-distribution
with n – 1 degrees of freedom, where n is the number
of data pairs.
[Figure: t-distribution of d̄ centered at μd, with critical values −t0 and t0]
37. Symbols used for the t-Test for μd
Symbol — Description
n — the number of pairs of data
d — the difference between entries for a data pair, d = x1 − x2
μd — the hypothesized mean of the differences of paired data in the population
38. Symbols used for the t-Test for μd
Symbol — Description
d̄ — the mean of the differences between the paired data entries in the dependent samples: d̄ = (Σd) / n
sd — the standard deviation of the differences between the paired data entries in the dependent samples: sd = √[ Σ(d − d̄)² / (n − 1) ]
39. t-Test for the Difference Between Means
• The test statistic is d̄ = (Σd) / n
• The standardized test statistic is t = (d̄ − μd) / (sd / √n)
• The degrees of freedom are d.f. = n − 1
40. t-Test for the Difference Between Means
(Dependent Samples)
1. State the claim mathematically. Identify the null and alternative hypotheses. (In symbols: state H0 and Ha.)
2. Specify the level of significance. (In symbols: identify α.)
3. Use a t-test on the differences (NOT a two-sample t-test).
41. t-Test for the Difference Between Means
(Dependent Samples)
4. Calculate d̄ and sd (hint: on a TI-83/84, L3 = L1 − L2):
d̄ = (Σd) / n,   sd = √[ Σ(d − d̄)² / (n − 1) ]
5. Find the standardized test statistic: t = (d̄ − μd) / (sd / √n)
42. t-Test for the Difference Between Means
(Dependent Samples)
6. Make a decision to reject or fail to reject the null hypothesis: if the P-value is small (less than α), reject H0; otherwise, fail to reject H0.
7. Interpret the decision in the context of the original claim.
43. Example: t-Test for the Difference
Between Means
A golf club manufacturer claims that golfers can lower their
scores by using the manufacturer’s newly designed golf
clubs. Eight golfers are randomly selected, and each is asked
to give his or her most recent score. After using the new
clubs for one month, the golfers are again asked to give their
most recent score. The scores for each golfer are shown in
the table. Assuming the golf scores are normally distributed,
is there enough evidence to support the manufacturer’s claim
at α = 0.10?
Golfer:      1   2   3   4   5   6   7   8
Score (old): 89  84  96  82  74  92  85  91
Score (new): 83  83  92  84  76  91  80  91
45. Solution: t-Test for the Difference Between Means (Dependent Samples)
d = (old score) − (new score)
• H0: μd ≤ 0
• Ha: μd > 0 (claim)
• α = 0.10
• Test statistic: t = (d̄ − μd) / (sd / √n) = 1.498
• P-value = 0.089
• Decision: Reject H0. At the 10% level of significance, the results of this test indicate that after the golfers used the new clubs, their scores were significantly lower.
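The paired differences and the test statistic for the golf example can be checked in Python (a minimal sketch of the slide's arithmetic):

```python
import math

# Paired t-test statistic for the golf-club example.
old = [89, 84, 96, 82, 74, 92, 85, 91]
new = [83, 83, 92, 84, 76, 91, 80, 91]

d = [o - n for o, n in zip(old, new)]     # paired differences, d = old − new
n = len(d)
d_bar = sum(d) / n                        # mean of the differences
sd = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))
t = (d_bar - 0) / (sd / math.sqrt(n))     # hypothesized μd = 0

print(round(d_bar, 3), round(sd, 3), round(t, 3))  # → 1.625 3.068 1.498
```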
46. Section 8.3 Summary
• Performed a t-test to test the mean of the difference
for a population of paired data
47. Section 8.4
Testing the Difference Between
Proportions
(Two-proportion z-test)
48. Section 8.4 Objectives
• Perform a z-test for the difference between two
population proportions p1 and p2
49. Two-Sample z-Test for Proportions
• Used to test the difference between two population
proportions, p1 and p2.
• Three conditions are required to conduct the test.
1. The samples must be randomly selected.
2. The samples must be independent.
3. The samples must be large enough to use a
normal sampling distribution. That is,
n1p1 ≥ 5, n1q1 ≥ 5, n2p2 ≥ 5, and n2q2 ≥ 5.
50. Two-Sample z-Test for the Difference
Between Proportions
• If these conditions are met, then the sampling distribution for p̂1 − p̂2 is a normal distribution.
• Mean: μ(p̂1 − p̂2) = p1 − p2
• A weighted estimate of p1 and p2 can be found using p̄ = (x1 + x2) / (n1 + n2), where x1 = n1·p̂1 and x2 = n2·p̂2.
• Standard error: σ(p̂1 − p̂2) = √[ p̄q̄ (1/n1 + 1/n2) ]
51. Two-Sample z-Test for the Difference
Between Proportions
• The test statistic is p̂1 − p̂2.
• The standardized test statistic is
z = [(p̂1 − p̂2) − (p1 − p2)] / √[ p̄q̄ (1/n1 + 1/n2) ]
where p̄ = (x1 + x2) / (n1 + n2) and q̄ = 1 − p̄.
52. Two-Sample z-Test for the Difference
Between Proportions
1. State the claim. Identify the null and alternative hypotheses. (In symbols: state H0 and Ha.)
2. Specify the level of significance. (In symbols: identify α.)
3. Find the weighted estimate of p1 and p2: p̄ = (x1 + x2) / (n1 + n2)
53. Two-Sample z-Test for the Difference
Between Proportions
4. Find the standardized test statistic: z = [(p̂1 − p̂2) − (p1 − p2)] / √[ p̄q̄ (1/n1 + 1/n2) ]
5. Make a decision to reject or fail to reject the null hypothesis: if the P-value is small (less than α), reject H0; otherwise, fail to reject H0.
6. Interpret the decision in the context of the original claim.
54. Example: Two-Sample z-Test for the
Difference Between Proportions
In a study of 200 randomly selected adult female and
250 randomly selected adult male Internet users, 30% of
the females and 38% of the males said that they plan to
shop online at least once during the next month. At
α = 0.10 test the claim that there is a difference between
the proportion of female and the proportion of male
Internet users who plan to shop online.
Solution:
1 = Females 2 = Males
56. Solution: Two-Sample z-Test for the Difference Between Proportions
x1 = n1·p̂1 = 200(0.30) = 60    x2 = n2·p̂2 = 250(0.38) = 95
p̄ = (x1 + x2) / (n1 + n2) = (60 + 95) / (200 + 250) ≈ 0.3444
q̄ = 1 − p̄ = 1 − 0.3444 = 0.6556
Note: n1·p̄ = 200(0.3444) ≥ 5, n1·q̄ = 200(0.6556) ≥ 5, n2·p̄ = 250(0.3444) ≥ 5, and n2·q̄ = 250(0.6556) ≥ 5, so the normal approximation is justified.
58. Solution: Two-Sample z-Test for the Difference Between Proportions
• H0: p1 = p2
• Ha: p1 ≠ p2 (claim)
• α = 0.10; n1 = 200, n2 = 250
• Test statistic: z = −1.77
• P-value = 0.076
• Decision: Reject H0. At the 10% level of significance, there is enough evidence to conclude that there is a difference between the proportion of female and the proportion of male Internet users who plan to shop online.
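The whole two-proportion test for the online-shopping example fits in a few lines of Python (a sketch; math.erfc supplies the standard normal CDF):

```python
import math

# Two-proportion z-test for the online-shopping example.
n1, p1_hat = 200, 0.30   # females
n2, p2_hat = 250, 0.38   # males

x1, x2 = n1 * p1_hat, n2 * p2_hat     # numbers of successes: 60 and 95
p_bar = (x1 + x2) / (n1 + n2)         # weighted (pooled) estimate
q_bar = 1 - p_bar

se = math.sqrt(p_bar * q_bar * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se            # hypothesized p1 − p2 = 0
p_value = 2 * (0.5 * math.erfc(abs(z) / math.sqrt(2)))   # two-tailed

print(round(z, 2), round(p_value, 3))  # → -1.77 0.076
```

Since 0.076 < α = 0.10, the code agrees with the slide's decision to reject H0.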
59. Example: Two-Sample z-Test for the
Difference Between Proportions
A medical research team conducted a study to test the effect
of a cholesterol reducing medication. At the end of the
study, the researchers found that of the 4700 randomly
selected subjects who took the medication, 301 died of
heart disease. Of the 4300 randomly selected subjects who
took a placebo, 357 died of heart disease. At α = 0.01 can
you conclude that the death rate due to heart disease is
lower for those who took the medication than for those who
took the placebo? (Adapted from New England Journal of
Medicine)
Solution:
1 = Medication 2 = Placebo
61. Solution: Two-Sample z-Test for the Difference Between Proportions
p̂1 = x1/n1 = 301/4700 ≈ 0.064    p̂2 = x2/n2 = 357/4300 ≈ 0.083
p̄ = (x1 + x2) / (n1 + n2) = (301 + 357) / (4700 + 4300) ≈ 0.0731
q̄ = 1 − p̄ = 1 − 0.0731 = 0.9269
Note: n1·p̄ = 4700(0.0731) ≥ 5, n1·q̄ = 4700(0.9269) ≥ 5, n2·p̄ = 4300(0.0731) ≥ 5, and n2·q̄ = 4300(0.9269) ≥ 5, so the normal approximation is justified.
63. Solution: Two-Sample z-Test for the Difference Between Proportions
• H0: p1 ≥ p2
• Ha: p1 < p2 (claim)
• α = 0.01; n1 = 4700, n2 = 4300
• Test statistic: z = −3.46
• P-value = 0.000275
• Decision: Reject H0. At the 1% level of significance, there is enough evidence to conclude that the death rate due to heart disease is lower for those who took the medication than for those who took the placebo.
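The left-tailed version for the medication example follows the same pattern (a sketch using only the standard library):

```python
import math

# Left-tailed two-proportion z-test for the medication example.
n1, x1 = 4700, 301    # medication group: deaths from heart disease
n2, x2 = 4300, 357    # placebo group: deaths from heart disease

p1_hat, p2_hat = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)         # weighted (pooled) estimate
q_bar = 1 - p_bar

se = math.sqrt(p_bar * q_bar * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se
p_value = 0.5 * math.erfc(-z / math.sqrt(2))   # left-tailed: P(Z < z)

print(round(z, 2))   # → -3.46
print(p_value)       # ≈ 0.00027, well below α = 0.01
```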
64. Section 8.4 Summary
• Performed a z-test for the difference between two
population proportions p1 and p2