This document provides information on different types of graphs and displays that can be used to represent quantitative and qualitative data, including stem-and-leaf plots, dot plots, pie charts, Pareto charts, scatter plots, and time series charts. Examples are given for how to construct each type of graph or display using sample data sets. Key aspects like labeling axes, plotting data points, and interpreting trends are discussed.
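One of the displays mentioned, the stem-and-leaf plot, can be sketched in a few lines of Python. The data values below are hypothetical, chosen only to illustrate the stem/leaf split (tens digit as stem, ones digit as leaf):

```python
# A minimal sketch of building a stem-and-leaf display: split each value into
# a stem (tens digit) and a leaf (ones digit), grouping leaves by stem.
# The data values are hypothetical.
data = [12, 15, 21, 24, 24, 31, 38, 39, 42]

stems = {}
for v in sorted(data):
    stems.setdefault(v // 10, []).append(v % 10)

for stem in sorted(stems):
    print(f"{stem} | {' '.join(str(leaf) for leaf in stems[stem])}")
# 1 | 2 5
# 2 | 1 4 4
# 3 | 1 8 9
# 4 | 2
```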
The document discusses the concept of bimodal IT and the need for organizations to adopt both traditional ("Mode 1") and more agile ("Mode 2") approaches to software delivery. It provides context around the increasing digital demands on organizations and the limitations of conventional IT methods. The document then defines bimodal IT and contrasts the characteristics of Mode 1 versus Mode 2. It also discusses some of the challenges to adopting more agile approaches and outlines a proposed roadmap for organizations to evolve towards a bimodal model of IT.
The Triton Travel Club contact survey collects a student's name, address, birthday, and parent/guardian contact information. It asks whether the student will enroll in an upcoming tour and which other countries or places the club should offer trips to in the future, with options including Argentina, Australia, Britain, California, China, Costa Rica, Egypt, France, Germany, Greece, India, Italy, Mexico, New Zealand, Peru, South Africa, Spain, Thailand, Vietnam and others.
Logarithmic functions are inverses of exponential functions. To graph a logarithmic function:
1. Identify the inverse exponential form.
2. Create a table of values for the exponential form.
3. Invert the ordered pairs.
4. Plot the points and sketch the graph of the logarithm.
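The steps above can be sketched numerically: tabulate the exponential form, then swap each ordered pair. The base 2 here is an assumption chosen for illustration:

```python
# Tabulate y = 2**x, then invert each (x, y) pair to get points on y = log2(x).
exp_pairs = [(x, 2**x) for x in range(-2, 4)]   # points on y = 2^x
log_pairs = [(y, x) for (x, y) in exp_pairs]    # inverted: points on y = log2(x)

print(exp_pairs)  # [(-2, 0.25), (-1, 0.5), (0, 1), (1, 2), (2, 4), (3, 8)]
print(log_pairs)  # [(0.25, -2), (0.5, -1), (1, 0), (2, 1), (4, 2), (8, 3)]
```

Plotting the inverted pairs and sketching a smooth curve through them gives the graph of the logarithm.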
Logarithmic functions can be transformed through stretching, compression, reflection, horizontal translation, and vertical translation compared to the parent logarithmic function. Examples are shown to demonstrate how transformations alter the graph.
This document introduces logarithmic functions as inverses of exponential functions. It defines the logarithm as the inverse of an exponential function y = b^x, such that y = b^x is equivalent to log_b y = x. The document provides examples of writing exponential equations in logarithmic form and vice versa. It also demonstrates how to evaluate logarithms by using the definition to rewrite them in exponential form and setting the exponents equal. Finally, it defines the common logarithm as a logarithm with base 10, which can be written as log x.
The document discusses exponential functions of the form f(x) = a*b^x, explaining that their graphs are curved with a horizontal asymptote at y = 0. It distinguishes between exponential growth, where y increases as x increases, and exponential decay, where y decreases as x increases. Examples are provided to demonstrate how to determine if a function represents growth or decay and to find the y-intercept.
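The growth/decay distinction can be sketched directly from the base: for f(x) = a*b^x with a > 0, b > 1 gives growth and 0 < b < 1 gives decay, and the y-intercept is f(0) = a. A minimal sketch under those assumptions:

```python
# Classify f(x) = a * b**x as growth or decay (assumes a > 0, b > 0, b != 1).
def classify(a, b):
    return "growth" if b > 1 else "decay"

print(classify(3, 2))     # growth; y-intercept is 3
print(classify(5, 0.5))   # decay; y-intercept is 5
```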
This document discusses properties and transformations of exponential functions, including stretch, compression, reflection, and horizontal and vertical translation. It also discusses the number e as the base for natural exponential functions and using the function A=Pe^rt to model continuously compounded interest. Examples are provided to demonstrate graphing transformations of exponential functions and using the continuously compounded interest formula.
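The continuously compounded interest formula A = Pe^(rt) can be sketched as follows; the principal, rate, and time below are hypothetical numbers chosen for illustration:

```python
import math

# Balance after t years at annual rate r, compounded continuously: A = P * e**(r*t).
def continuous_compound(P, r, t):
    return P * math.exp(r * t)

# Hypothetical example: $1000 at 5% for 10 years.
print(round(continuous_compound(1000, 0.05, 10), 2))  # 1648.72
```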
This document discusses the chi-square goodness of fit test, which is used to check if observed data counts match the expected distribution of counts into categories. It examines whether a population follows a specified theoretical distribution.
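The goodness-of-fit statistic itself is a short computation, chi² = Σ(O − E)²/E. The observed/expected counts below and the df = 3, α = 0.05 critical value 7.815 are illustrative assumptions, not taken from the document:

```python
# Chi-square goodness-of-fit statistic over category counts.
def chi_square_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [50, 30, 12, 8]
expected = [40, 35, 15, 10]   # expected counts under the hypothesized distribution
stat = chi_square_stat(observed, expected)
print(round(stat, 3))  # 4.214
print(stat > 7.815)    # False -> fail to reject H0 at alpha = 0.05 (df = 3)
```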
This document discusses testing for homogeneity among populations using chi-square tests. It defines homogeneity as populations having the same structure or composition. A test of homogeneity determines if different populations have the same proportions for various categories. It requires using a contingency table and chi-square distribution. An example tests if the same proportion of males and females prefer different pet types using survey data from college students.
The document provides an overview of the chi-square distribution and how it can be used for hypothesis testing. It discusses that the chi-square distribution is used to find critical values for determining the area under the curve for a given degrees of freedom. It also gives an example of how chi-square can be used to test if two variables such as keyboard type and time to learn typing are independent.
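The expected cell counts that feed such a test come from E = (row total × column total) / grand total. A minimal sketch on a hypothetical 2×2 contingency table (the counts are not from the document):

```python
# Expected counts for a chi-square test of independence on a contingency table.
table = [[30, 20],   # e.g. keyboard A: fast / slow learners (hypothetical)
         [10, 40]]   # e.g. keyboard B: fast / slow learners (hypothetical)

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

expected = [[r * c / grand for c in col_totals] for r in row_totals]
print(expected)  # [[20.0, 30.0], [20.0, 30.0]]
```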
This document discusses synthetic division and the remainder theorem. Synthetic division is a process that simplifies long division when dividing a polynomial by a linear factor of the form x - a. It involves setting up the coefficients of the polynomial and multiplying/adding through the process. The remainder theorem states that if a polynomial P(x) is divided by x - a, then the remainder is equal to P(a). It provides a quick way to find the remainder of a polynomial division problem by evaluating the polynomial at the value of a. Examples are given to demonstrate evaluating polynomials using the remainder theorem.
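The multiply/add process of synthetic division can be sketched in a few lines; the polynomial below is a hypothetical example:

```python
# Synthetic division of P(x) by (x - a): bring down the leading coefficient,
# then repeatedly multiply by a and add the next coefficient. The final value
# is the remainder, which equals P(a) by the remainder theorem.
def synthetic_division(coeffs, a):
    """coeffs in descending degree order; returns (quotient_coeffs, remainder)."""
    result = [coeffs[0]]
    for c in coeffs[1:]:
        result.append(result[-1] * a + c)
    return result[:-1], result[-1]

# Divide x^3 - 6x^2 + 11x - 6 by (x - 2):
q, r = synthetic_division([1, -6, 11, -6], 2)
print(q, r)  # [1, -4, 3] 0  -> quotient x^2 - 4x + 3, remainder 0, so P(2) = 0
```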
Long division can be used to divide polynomials in much the same way as numbers. The key steps are to set up the division problem, divide the leading term of the dividend by the leading term of the divisor, multiply the divisor by that quotient term and subtract, then bring down the next term of the dividend and repeat. Polynomial long division allows polynomials to be factored by finding divisor polynomials that leave a remainder of zero. The factor theorem can also be used to check whether a linear polynomial x - a is a factor: if substituting a makes the polynomial evaluate to zero, then x - a is a factor.
This document discusses solving polynomial equations by factoring. It provides examples of factoring polynomials, including factoring the difference and sum of cubes. Factoring by substitution is also introduced as a method for factoring polynomials of degree 4 or higher. The document demonstrates solving polynomial equations by factoring the expressions and setting each factor equal to 0. Both real and imaginary solutions may be obtained depending on whether the factors are real or complex numbers. Graphing is presented as an alternative method to find real solutions of a polynomial equation.
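The difference-of-cubes pattern used in such factoring, a³ − b³ = (a − b)(a² + ab + b²), can be spot-checked numerically over a range of sample values:

```python
# Numerical check of the difference-of-cubes identity over small integers.
for a in range(-3, 4):
    for b in range(-3, 4):
        assert a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)
print("identity holds")
```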
The document discusses inferences for correlation and regression. It provides an example of testing the correlation between percentage of population with a college degree (x) and percentage growth in income (y) for 6 Ohio communities. There is a positive correlation between x and y, but this does not necessarily mean higher education causes higher earnings. The document also discusses measuring the spread of data points around the least squares line, including the standard error of estimate, using an example of how much copper sulfate dissolves in water at different temperatures.
This document provides guided notes on inferences for correlation and regression. It discusses how the sample correlation coefficient and least squares line estimate population parameters and require assumptions about the data. It also outlines how to test the population correlation coefficient using a significance test and interpret the results. An example is provided testing the correlation between education levels and income growth. Students are asked to practice computing the standard error of estimate from a data set and answering summary questions.
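The standard error of estimate mentioned above, S_e = sqrt(Σ(y − ŷ)² / (n − 2)), can be sketched from a least-squares fit. The (x, y) data below are hypothetical, not the document's data set:

```python
import math

# Fit a least-squares line, then compute the standard error of estimate.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # hypothetical data

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
     / sum((xi - x_bar) ** 2 for xi in x))      # slope
a = y_bar - b * x_bar                            # intercept

residual_ss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
s_e = math.sqrt(residual_ss / (n - 2))
print(round(b, 2), round(s_e, 3))  # 1.99 0.189
```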
1. The document discusses writing polynomials in factored form and finding the zeros of polynomial functions. It defines linear factors, roots, zeros, and x-intercepts as equivalent terms.
2. Examples are provided of writing polynomials in factored form using the factor theorem to find the zeros, and then graphing the polynomial function based on its zeros.
3. The factor theorem states that a linear expression x - a is a factor of a polynomial if and only if a is a zero of the related polynomial function. This allows writing a polynomial given its zeros.
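The factor-theorem direction "zeros to polynomial" can be sketched by multiplying out the linear factors (x − z); the zeros below are a hypothetical example:

```python
# Build polynomial coefficients (descending degree) from a list of zeros
# by repeatedly multiplying the current polynomial by (x - z).
def poly_from_zeros(zeros):
    coeffs = [1]
    for z in zeros:
        shifted = coeffs + [0]          # current polynomial times x
        for i, c in enumerate(coeffs):
            shifted[i + 1] -= z * c     # minus z times current polynomial
        coeffs = shifted
    return coeffs

print(poly_from_zeros([1, 2, 3]))  # [1, -6, 11, -6] -> x^3 - 6x^2 + 11x - 6
```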
This document discusses how to describe the shape of a cubic function by writing it in standard form, describing the end behavior of the graph, determining the possible number of turning points using a table of values, and determining the increasing and decreasing intervals. It explains that the sign of the leading coefficient determines the end behavior, and that the number of turning points is at most one less than the degree. The document also discusses using differences of consecutive y-values in a table to determine the least degree of polynomial function that could generate the data: constant first differences indicate linear, constant second differences quadratic, and constant third differences cubic.
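The finite-difference test can be sketched directly: take successive differences of evenly spaced y-values until they become constant, and count the passes. The data below (perfect cubes) are an illustrative example:

```python
# Least degree of a polynomial fitting evenly spaced y-values, via
# repeated finite differences until the sequence becomes constant.
def least_degree(ys):
    degree = 0
    while len(set(ys)) > 1:
        ys = [b - a for a, b in zip(ys, ys[1:])]
        degree += 1
    return degree

print(least_degree([1, 8, 27, 64, 125, 216]))  # 3 -> cubic (y = x^3)
```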
This document defines key concepts related to polynomials and polynomial functions. It defines monomials as terms involving variables and exponents, and polynomials as sums of monomials. The degree of a polynomial is the highest exponent among its terms. Polynomial functions are polynomials written in terms of a single variable. Standard form arranges polynomial terms by descending degree. Polynomials are classified by degree and number of terms. Higher degree polynomials can have more turning points and their end behavior depends on the leading term. Examples show determining standard form, classifying polynomials, identifying end behavior and increasing/decreasing parts of graphs.
This document discusses scatter diagrams and linear correlation. It provides examples of scatter diagrams that do and do not show linear correlation. It defines the correlation coefficient r as a measure of linear correlation between two variables on a scatter plot, with values between -1 and 1. It presents formulas for calculating r and provides an example of computing r using wind velocity and sand drift rate data. It cautions that correlation does not necessarily imply causation and that lurking variables can influence the correlation between two variables.
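The computational formula for r can be sketched as below. The (x, y) values are hypothetical, not the document's wind-velocity/sand-drift data:

```python
import math

# Correlation coefficient via the computational formula:
# r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))
def correlation(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))            # 1.0 (perfect positive)
print(round(correlation([1, 2, 3, 4], [8, 5, 5, 2]), 3))  # -0.949 (strong negative)
```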
1) Linear regression finds the "best-fitting" linear relationship between two variables by minimizing the sum of squared vertical distances between the data points and the line.
2) The coefficient of determination, r^2, measures how well the linear relationship described by the regression line fits the actual data, with higher r^2 values indicating less unexplained variability.
3) r^2 has an interpretation as the percentage of the total variation in the response variable that is explained by the explanatory variable.
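This interpretation can be sketched as r² = 1 − SS_res / SS_tot, where SS_res uses the fitted values and SS_tot uses the mean of y. The data and fitted values below are hypothetical:

```python
# Coefficient of determination from observed y and fitted y_hat values.
y     = [2.0, 4.1, 5.9, 8.2]
y_hat = [2.05, 4.05, 6.05, 8.05]   # hypothetical predictions from a regression line

y_bar = sum(y) / len(y)
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
ss_tot = sum((yi - y_bar) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))  # 0.9976 -> about 99.8% of the variation in y is explained
```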
Many statistical tests use paired data samples to compare two population means. Paired data occurs naturally in "before and after" situations where the same item is measured before and after a treatment. When testing paired data, the proper procedure is to run a one-sample test on the differences between each pair of measurements. This allows researchers to determine if a treatment had a statistically significant effect based on the average difference between paired measurements.
The document describes testing a claim about the proportion of seeds from a new hybrid wheat variety that germinate. A botanist claimed that the proportion of hybrid seeds that germinate is 80%, the same as for the parent plants. In the experiment, 400 hybrid seeds were tested and 312 germinated. The summary provides instructions to use a 5% level of significance and critical regions to test the claim that the germination proportion for the hybrid is 80%.
Testing a proportion using critical regions follows a similar process to testing a mean. The main difference is that a proportion represents a probability rather than a measurement. When using critical regions to test a proportion, if the sample test statistic falls within the critical region, the null hypothesis is rejected, and if it falls outside the critical region, the null hypothesis is not rejected. The test statistic is the z-score calculated from the sample proportion and null hypothesized proportion.
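The z statistic described above can be sketched with the germination numbers from the text (n = 400, 312 germinated, hypothesized p0 = 0.80); the two-tailed critical value 1.96 for α = 0.05 is standard:

```python
import math

# z statistic for a one-proportion test: z = (p_hat - p0) / sqrt(p0*(1-p0)/n).
n, successes, p0 = 400, 312, 0.80
p_hat = successes / n                            # 0.78
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(round(z, 2))       # -1.0

critical = 1.96          # two-tailed critical value at alpha = 0.05
print(abs(z) > critical) # False -> z is outside the critical region; fail to reject H0
```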
1. Paired data involves dependent samples that are measured in pairs, such as measurements taken from the same subject before and after a treatment. When comparing paired data, the proper test is a one-sample t-test on the differences between pairs.
2. The null hypothesis for a paired t-test states that the mean of the differences between pairs is equal to zero, indicating no change between measurements. The alternative hypothesis depends on the specific problem but can be left-tailed, right-tailed, or two-tailed.
3. The key steps in a paired t-test are calculating the differences between pairs, finding the mean and standard deviation of the differences, determining the t-statistic, computing the p-value, and comparing it to the significance level to reach a conclusion.
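The steps above can be sketched on hypothetical before/after data; the test statistic is t = d̄ / (s_d / √n) with df = n − 1:

```python
import math

# Paired t-test computation on hypothetical before/after measurements.
before = [72, 68, 75, 80, 70]
after  = [70, 65, 72, 77, 70]

d = [b - a for b, a in zip(before, after)]        # pairwise differences
n = len(d)
d_bar = sum(d) / n                                # mean difference
s_d = math.sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))
t = d_bar / (s_d / math.sqrt(n))
print(round(d_bar, 2), round(t, 3))  # 2.2 3.773
# With df = 4, the two-tailed alpha = 0.05 critical value is 2.776,
# so this hypothetical result would reject H0: mean difference = 0.
```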
1) Testing a proportion uses a binomial distribution with hypotheses about p, the probability of success on each trial. The test statistic is calculated and compared to a normal distribution to get a p-value.
2) An example tests whether a new eye surgery technique is better than the old technique based on a trial with 225 surgeries and 88 successes, using a 1% significance level.
3) Key steps are to check conditions, calculate the test statistic, find the p-value using the normal distribution, and either reject or fail to reject the null hypothesis based on the significance level.
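A p-value version of the proportion test can be sketched with the trial counts from the text (n = 225, 88 successes). The old technique's success rate p0 = 0.30 below is an assumed value for illustration only; the summary does not state it:

```python
import math

# Right-tailed one-proportion z test with a normal-approximation p-value.
n, successes, p0 = 225, 88, 0.30   # p0 = 0.30 is an assumption, not from the text
p_hat = successes / n
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Right-tailed p-value from the standard normal: P(Z > z) = 0.5 * erfc(z / sqrt(2)).
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(round(z, 2), round(p_value, 4))  # 2.98 0.0014
print(p_value < 0.01)                  # True -> reject H0 at the 1% level (under assumed p0)
```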