This document defines key terms and concepts related to standard deviation and variance. It provides formulas for calculating range, deviation, variance, and standard deviation for both ungrouped and grouped data. Examples are given to demonstrate calculating these metrics from raw data sets and grouped data tables. Interpreting skewness is also discussed.
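As a concrete illustration of the ungrouped-data formulas this summary refers to, here is a minimal Python sketch (the data set is invented for the example):

```python
import math

# Hypothetical ungrouped data set
data = [4, 8, 6, 5, 3, 7, 9]
n = len(data)

# Range: largest value minus smallest value
data_range = max(data) - min(data)

# Mean, deviations from the mean, and population variance / standard deviation
mean = sum(data) / n
deviations = [x - mean for x in data]
variance = sum(d ** 2 for d in deviations) / n   # population variance
std_dev = math.sqrt(variance)                    # population standard deviation

print(data_range, mean, variance, std_dev)
```

With these invented values the range is 6, the mean 6.0, the variance 4.0, and the standard deviation 2.0.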
This document defines key concepts in probability, including:
- Probability is a numerical measure of the likelihood of an event occurring. It is measured on a scale from 0 to 1.
- A random experiment is any process with uncertain outcomes that can be repeated. It has a sample space of all possible outcomes.
- Sample outcomes are the potential results of an experiment. The sample space is the set of all sample outcomes.
- An event is any subset of sample outcomes, such as a specific outcome or group of outcomes.
- Probability rules include that the probability of an event must be between 0 and 1, that the probability of either of two mutually exclusive events occurring is the sum of their individual probabilities, and that conditional probability is the probability of one event occurring given that another event has already occurred.
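The addition rule for mutually exclusive events and the definition of conditional probability can be illustrated with a single fair die (a hypothetical example, not taken from the document):

```python
from fractions import Fraction

# Single fair die: sample space of six equally likely outcomes
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Classical probability: favorable outcomes / total outcomes."""
    return Fraction(len(event & sample_space), len(sample_space))

A = {1, 2}       # roll a 1 or a 2
B = {5, 6}       # roll a 5 or a 6
C = {2, 4, 6}    # roll an even number

# Addition rule for mutually exclusive events: P(A or B) = P(A) + P(B)
assert A & B == set()          # A and B cannot occur together
p_a_or_b = prob(A | B)

# Conditional probability: P(A | C) = P(A and C) / P(C)
p_a_given_c = prob(A & C) / prob(C)

print(p_a_or_b, p_a_given_c)
```

Here P(A or B) = 2/6 + 2/6 = 2/3, and P(A | C) = (1/6) / (1/2) = 1/3.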
This document summarizes key concepts from Chapter 5 of the textbook "Principles of Managerial Finance" by Lawrence J. Gitman. The chapter focuses on risk and return fundamentals including measuring risk of single and multiple assets, the benefits of diversification, and the Capital Asset Pricing Model (CAPM). It provides an overview of the chapter topics, study guide examples, answers to review questions, and solutions to problems to help instructors teach the concepts.
1. The document defines discrete random variables as random variables that can take on a finite or countable number of values. It provides an example of a discrete random variable being the number of heads from 4 coin tosses.
2. It introduces the probability mass function (pmf) as a function that gives the probability of a discrete random variable taking on a particular value. The pmf must be greater than or equal to 0 and sum to 1.
3. The cumulative distribution function (CDF) of a discrete random variable gives the probability that the variable takes a value less than or equal to a particular value, computed by summing the pmf over all such values. The CDF ranges from 0 to 1 and is non-decreasing.
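A short Python sketch of points 1-3, using the document's own example of the number of heads in 4 coin tosses:

```python
from math import comb

# Number of heads in 4 fair coin tosses: a discrete random variable X
n = 4
pmf = {k: comb(n, k) / 2 ** n for k in range(n + 1)}

# Every pmf value is >= 0 and the values sum to 1
assert all(p >= 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# CDF: F(x) = P(X <= x), the running sum of pmf values up to x
def cdf(x):
    return sum(p for k, p in pmf.items() if k <= x)

print(pmf[2], cdf(1), cdf(4))
```

For example, P(X = 2) = 6/16 = 0.375 and F(1) = P(X <= 1) = 5/16 = 0.3125, while F(4) = 1.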
The document contains calculations to determine skewness using grouped data. It includes frequency distributions of grouped data with ranges of values for X, frequencies (f), deviations (d), d-squared (d2), and d-cubed (d3). Formulas are provided to calculate the second (m2) and third (m3) moments about the mean. The computations are presented in a table with columns for X, M, f, fM, d, d2, d3, fd2, and fd3.
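The moment computations described above can be sketched as follows, using an invented frequency table (the document's actual table values are not reproduced here):

```python
# Hypothetical grouped frequency table: class midpoints M and frequencies f
midpoints = [10, 20, 30, 40, 50]
freqs =     [ 2,  5,  8,  4,  1]
n = sum(freqs)

# Mean from grouped data: sum(f * M) / n
mean = sum(f * m for f, m in zip(freqs, midpoints)) / n

# Deviations of midpoints from the mean, then the second and third
# moments about the mean: m2 = sum(f * d^2) / n, m3 = sum(f * d^3) / n
d = [m - mean for m in midpoints]
m2 = sum(f * di ** 2 for f, di in zip(freqs, d)) / n
m3 = sum(f * di ** 3 for f, di in zip(freqs, d)) / n

# Moment coefficient of skewness: m3 / m2^(3/2)
skewness = m3 / m2 ** 1.5
print(mean, m2, m3, skewness)
```

With these invented frequencies the mean is 28.5, m2 = 102.75, m3 = 15.75, and the skewness is slightly positive.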
This document discusses random variables and probability distributions. It begins by introducing random variables and how they can be either discrete or continuous. Discrete random variables can take on countable values, while continuous random variables can take on any value within an interval. Several examples of each are given, such as number of sales (discrete) and length (continuous). The document then discusses how to describe and find the probability distribution of a discrete random variable using a graph, table, or formula. It provides an example of a probability mass function and of the expected value and variance of discrete random variables. Finally, it gives an example of calculating probabilities of winning or losing a bet in roulette.
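The roulette example can be sketched roughly as follows; the single-number bet and 35-to-1 payout are standard American-roulette figures, assumed here rather than taken from the document:

```python
from fractions import Fraction

# Hypothetical bet: $1 on a single number in American roulette
# (38 pockets). A win pays 35 to 1; otherwise the $1 is lost.
p_win = Fraction(1, 38)
p_lose = Fraction(37, 38)
payoffs = {35: p_win, -1: p_lose}   # net winnings and their probabilities

# Expected value: sum of (value * probability)
ev = sum(x * p for x, p in payoffs.items())

# Variance: E[(X - EV)^2]
var = sum((x - ev) ** 2 * p for x, p in payoffs.items())

print(ev, float(ev), float(var))
```

The expected value works out to -1/19 (about -5.3 cents per dollar bet), the house edge on this bet.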
The document discusses different measures of central tendency including the mean, median, and mode. It provides definitions and formulas for calculating each measure using various examples. The mean is the average value, the median is the middle value when data is arranged in order, and the mode is the value that occurs most frequently in a data set. Formulas are given for calculating the measures using both grouped and ungrouped data.
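A minimal Python illustration of the three measures for ungrouped data (the data set is invented):

```python
import statistics

# Hypothetical ungrouped data set
data = [3, 7, 5, 7, 9, 4, 7]

mean = statistics.mean(data)       # average value
median = statistics.median(data)   # middle value of the ordered data
mode = statistics.mode(data)       # most frequently occurring value

print(mean, median, mode)
```

Here the mean is 6, the median (middle of 3, 4, 5, 7, 7, 7, 9) is 7, and the mode is 7.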
This document provides an overview of capital structure determination and the traditional and Modigliani-Miller approaches. It discusses key concepts like the net operating income approach, optimal capital structure, total value principle, market imperfections, and the effects of taxes. The document uses examples to illustrate how capital structure affects required rates of return on equity and the overall cost of capital. It also demonstrates how arbitrage ensures capital structure does not impact total firm value under the Modigliani-Miller approach.
The document discusses the hypergeometric distribution, which describes the probability of successes in draws without replacement from a finite population. It provides the formula for the hypergeometric distribution and compares it to the binomial distribution. Examples are given to demonstrate how to calculate probabilities of various outcomes using the hypergeometric distribution formula.
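The hypergeometric formula and its comparison with the binomial can be sketched as follows; the lot-inspection numbers are invented:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(k successes in n draws without replacement from a population
    of N items that contains K successes)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Invented example: 5 defectives in a lot of 20; 4 items are sampled
# without replacement; probability of exactly 1 defective in the sample
p_hyper = hypergeom_pmf(k=1, N=20, K=5, n=4)

# For comparison, the binomial pmf (sampling WITH replacement, p = K/N)
p_binom = comb(4, 1) * (5 / 20) ** 1 * (15 / 20) ** 3

print(round(p_hyper, 4), round(p_binom, 4))
```

The two probabilities differ because drawing without replacement changes the success proportion as items are removed; for large populations they converge.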
Chapter 4, Part 3: Means and Variances of Random Variables (nszakir)
Statistics, study of probability, the mean of a random variable, the variance of a random variable, rules for means and variances, and the law of large numbers.
Chapter 9 Fundamentals of Hypothesis Testing: One-Sample Tests
Chapter Topic:
Hypothesis Testing Methodology
Z Test for the Mean (σ Known)
p-Value Approach to Hypothesis Testing
Connection to Confidence Interval Estimation
One-Tail Tests
t Test for the Mean (σ Unknown)
Z Test for the Proportion
Potential Hypothesis-Testing Pitfalls and Ethical Issues
This document discusses various measures of dispersion used to quantify how spread out or clustered data values are around a central tendency. It defines key terms like range, variance, standard deviation, and coefficient of variation. Examples are provided to demonstrate how to calculate these measures for both individual and grouped data. The normal distribution curve is also discussed to show how dispersion relates to the percentage of values that fall within a given number of standard deviations from the mean.
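A short sketch of the coefficient of variation, which expresses the standard deviation as a percentage of the mean so that spread can be compared across different scales (data invented):

```python
import statistics

# Two hypothetical data sets measured on different scales
heights_cm = [160, 165, 170, 175, 180]
weights_kg = [55, 60, 70, 80, 95]

def coeff_variation(data):
    """Coefficient of variation: sample standard deviation as a
    percentage of the mean."""
    return statistics.stdev(data) / statistics.mean(data) * 100

cv_heights = coeff_variation(heights_cm)
cv_weights = coeff_variation(weights_kg)
print(round(cv_heights, 2), round(cv_weights, 2))
```

Even though the two variables use different units, the CV shows the weights are relatively more dispersed than the heights.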
This document provides an overview of the key topics in Chapter 6 on the normal distribution, including:
1) It introduces continuous probability distributions and defines the normal distribution as the most important continuous probability distribution.
2) It explains how the normal distribution can be standardized to have a mean of 0 and standard deviation of 1, known as the standardized normal distribution.
3) It outlines the types of problems that will be solved using the normal distribution, including finding probabilities and percentiles for both the normal and standardized normal distribution.
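Standardizing and looking up normal probabilities, as described in points 2 and 3, can be sketched without tables by using the error function (the mean and standard deviation below are invented):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal distribution, via the error function."""
    z = (x - mu) / sigma   # standardize: Z = (X - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical example: X ~ N(mu=100, sigma=15); find P(X <= 130)
p = normal_cdf(130, mu=100, sigma=15)
print(round(p, 4))
```

Here x = 130 standardizes to z = 2, and P(Z <= 2) is about 0.9772, matching the standard normal table.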
This chapter discusses the valuation of long-term securities such as bonds, preferred stock, and common stock. It defines important valuation concepts and describes how to value different types of long-term securities, including bonds that pay periodic interest (coupon bonds) and those that do not (zero-coupon bonds). The chapter also covers adjusting bond valuations for semi-annual compounding of interest and provides examples of valuing perpetual bonds, coupon bonds, zero-coupon bonds, and preferred stock. Common stock valuation is also introduced.
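The zero-coupon and coupon-bond valuations described above reduce to present-value arithmetic; a hedged sketch with invented figures:

```python
# Hypothetical bond valuation sketch: all figures are invented
face_value = 1000.0

# Zero-coupon bond: a single payment of face value at maturity,
# discounted at the required annual rate
def zero_coupon_value(face, rate, years):
    return face / (1 + rate) ** years

# Coupon bond: present value of the annual coupon stream plus the
# discounted face value
def coupon_bond_value(face, coupon_rate, required_rate, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + required_rate) ** t
                     for t in range(1, years + 1))
    return pv_coupons + face / (1 + required_rate) ** years

print(round(zero_coupon_value(face_value, 0.10, 10), 2))
print(round(coupon_bond_value(face_value, 0.08, 0.10, 10), 2))
```

With an 8% coupon and a 10% required return, the bond is worth less than face value (a discount bond), since the coupon rate is below the market rate.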
This chapter discusses important discrete probability distributions used in statistics. It begins with an introduction to discrete random variables and probability distributions. It then covers the key concepts of mean, variance, standard deviation, and covariance for discrete distributions. The chapter focuses on explaining the binomial, hypergeometric, and Poisson distributions and how to calculate probabilities using them. It concludes with examples of how to apply these distributions to areas like finance.
Textbook on Mathematical Economics for CU, BU Calcutta, Solved Exercises, ... (Sourav Das)
This document provides guidance for undergraduate students on mathematical economics. It begins with a preface that outlines common challenges students face and recommendations for resources. It then presents a table of contents that outlines the topics to be covered, including review of logic, matrices, calculus, optimization, differential equations, and more. The document provides an in-depth reference for students to develop skills in applying quantitative and mathematical methods to economics.
The Normal Distribution and Other Continuous Distributions (Yesica Adicondro)
The document describes concepts related to the normal distribution and other continuous probability distributions. It introduces the normal distribution and its properties including that it is bell-shaped and symmetric with the mean, median and mode being equal. It describes how the mean and standard deviation determine the location and spread of the distribution. It also covers translating problems to the standardized normal distribution and how to find probabilities using the normal distribution table and by calculating the area under the normal curve.
This chapter discusses sampling and sampling distributions. The key points are:
1) A sample is a subset of a population that is used to make inferences about the population. Sampling is important because it is less time consuming and costly than a census.
2) Descriptive statistics describe samples, while inferential statistics make conclusions about populations based on sample data. Sampling distributions show the distribution of all possible values of a statistic from samples of the same size.
3) The sampling distribution of the sample mean is approximately normal for large sample sizes due to the central limit theorem. Its mean is the population mean and its standard deviation decreases as sample size increases. Acceptance intervals can be used to determine the range within which a sample mean is likely to fall.
This document provides an overview of stock-flow consistent (SFC) modeling. It discusses the justification and origins of the SFC approach, key features of post-Keynesian SFC models, and some problems and solutions related to SFC modeling. Specifically, it covers:
1) The background and motivation for SFC modeling from an accounting perspective.
2) Main features of post-Keynesian SFC models, including the use of balance sheet matrices, transaction flow matrices, and portfolio decisions.
3) Some challenges with SFC modeling, such as dealing with redundant equations, closures, calibration, and the possibility of multiple equilibria. It also discusses potential solutions to these challenges.
This chapter aims to teach students how to compute and interpret various numerical descriptive measures of data, including measures of central tendency (mean, median, mode), variation (range, variance, standard deviation), and shape (skewness). It covers how to find quartiles and construct box-and-whisker plots. The chapter also discusses population summary measures, rules for describing variation around the mean, and interpreting correlation coefficients.
The document discusses the role of financial management. It explains that financial management concerns acquiring, financing, and managing assets to achieve an overall goal. It also discusses the goal of the firm being shareholder wealth maximization and the potential agency problems that can arise from the separation of ownership and management in corporations.
This chapter discusses the relationship between risk and return for both individual assets and portfolios of assets. It defines risk as the chance of financial loss and explains that higher risk assets generally provide higher expected returns. The chapter covers measuring the expected return, standard deviation, and coefficient of variation of individual assets. It then explains how forming a portfolio of assets can reduce overall risk through diversification. The chapter discusses how the correlation between asset returns impacts the risk reduction from diversification. It also addresses how adding more assets to a portfolio continues to reduce non-market or unique risk.
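The risk-reduction effect of diversification mentioned above can be sketched for a two-asset portfolio (all figures invented); the portfolio standard deviation comes out below the weighted average of the individual risks whenever the correlation is below +1:

```python
import math

# Hypothetical two-asset portfolio (all figures invented)
w_a, w_b = 0.6, 0.4        # portfolio weights
er_a, er_b = 0.10, 0.16    # expected returns
sd_a, sd_b = 0.08, 0.20    # standard deviations of returns
corr_ab = 0.3              # correlation between the asset returns

# Expected portfolio return: weighted average of the asset returns
er_p = w_a * er_a + w_b * er_b

# Portfolio variance includes a covariance term
var_p = (w_a**2 * sd_a**2 + w_b**2 * sd_b**2
         + 2 * w_a * w_b * corr_ab * sd_a * sd_b)
sd_p = math.sqrt(var_p)

weighted_avg_sd = w_a * sd_a + w_b * sd_b
print(round(er_p, 4), round(sd_p, 4), round(weighted_avg_sd, 4))
```

Here the expected return is simply the weighted average (12.4%), but the portfolio risk is lower than the 12.8% weighted average of the individual risks: the diversification benefit.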
This document provides an introduction to the t statistic, which is used to test hypotheses about an unknown population mean. It discusses how the t-statistic is similar to the z-score but uses the sample standard deviation rather than the population standard deviation since this value is unknown. It outlines how to calculate the t-statistic and compares it to calculating the z-score. The document also discusses degrees of freedom, the t-distribution, and how to conduct hypothesis tests using the t-statistic.
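The t-statistic computation described above, in a minimal sketch (sample data and hypothesized mean are invented):

```python
import math
import statistics

# Hypothetical sample; null hypothesis: population mean mu0 = 10
sample = [12, 9, 11, 14, 10, 13, 11, 12]
mu0 = 10.0

n = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)   # sample standard deviation (divisor n - 1)

# t = (sample mean - mu0) / (s / sqrt(n)), with df = n - 1 degrees of freedom
t = (mean - mu0) / (s / math.sqrt(n))
df = n - 1
print(round(t, 3), df)
```

Note the only difference from the z-score formula is that the unknown population standard deviation is replaced by the sample standard deviation s, which is why the statistic follows a t-distribution with n - 1 degrees of freedom.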
Reporting a One-Way Repeated Measures ANOVA (Ken Plummer)
The document provides guidance on reporting the results of a one-way repeated measures ANOVA in APA style. It includes templates for reporting the main ANOVA results and any post-hoc pairwise comparisons between conditions. Key sections are highlighted to fill in values from an example SPSS output to generate a complete APA-style results section reporting a significant effect of time of season on pizza consumption.
1. The document discusses sampling methods and the central limit theorem. It describes various probability sampling methods like simple random sampling, systematic random sampling, and stratified random sampling.
2. It defines the sampling distribution of the sample mean and explains that according to the central limit theorem, the sampling distribution will follow a normal distribution as long as the sample size is large.
3. The mean of the sampling distribution is equal to the population mean, and its variance is equal to the population variance divided by the sample size. This allows probabilities to be determined about a sample mean falling within a certain range.
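Points 2 and 3 can be checked empirically with a small simulation (the population and sample size are invented):

```python
import random
import statistics

random.seed(42)   # fixed seed so the run is reproducible

# Hypothetical population: the integers 1..100, equally likely
population = list(range(1, 101))
pop_mean = statistics.mean(population)
pop_var = statistics.pvariance(population)

# Draw many samples of size n (with replacement) and record each sample mean
n, trials = 30, 5000
sample_means = [statistics.mean(random.choices(population, k=n))
                for _ in range(trials)]

# The sampling distribution centers on the population mean, with
# variance close to pop_var / n
print(round(statistics.mean(sample_means), 2))
print(round(statistics.pvariance(sample_means), 2), round(pop_var / n, 2))
```

The simulated mean of the sample means lands near the population mean of 50.5, and their variance is close to the theoretical pop_var / n.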
Applied Business Statistics, Ken Black, Ch. 6 (AbdelmonsifFadl)
This chapter summary covers key concepts about continuous probability distributions discussed in Chapter 6 of the textbook "Business Statistics, 6th ed." by Ken Black. The chapter objectives are to understand the uniform distribution, appreciate the importance of the normal distribution, and know how to solve normal distribution problems. It discusses the uniform, normal, and exponential distributions. It explains how to calculate probabilities using the normal distribution and z-scores. It also discusses when the normal distribution can be used to approximate the binomial distribution.
This document discusses analysis of variance (ANOVA) techniques. It defines the F-distribution and its characteristics. It then covers testing for equal variances between two populations and comparing means of two or more populations using one-way and two-way ANOVA. Examples are provided to illustrate hypothesis testing using the F-statistic to compare variances and population means. Finally, it discusses developing confidence intervals for differences in treatment means and using ANOVA in Excel.
1) The document discusses concepts related to probability distributions including uniform, normal, and binomial distributions.
2) It provides examples of calculating probabilities and values using the uniform, normal, and binomial distributions as well as the normal approximation to the binomial.
3) Key concepts covered include means, standard deviations, z-values, areas under the normal curve, and the continuity correction factor for approximating binomial with normal.
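The continuity correction mentioned in point 3 can be sketched as follows (the binomial parameters are invented):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical example: X ~ Binomial(n=100, p=0.5); find P(X <= 45)
n, p = 100, 0.5
mu = n * p                           # mean of the binomial
sigma = math.sqrt(n * p * (1 - p))   # standard deviation of the binomial

# Exact binomial probability
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(46))

# Normal approximation with continuity correction: evaluate at 45.5, not 45
approx = normal_cdf(45.5, mu, sigma)

print(round(exact, 4), round(approx, 4))
```

Adding 0.5 to the discrete cutoff accounts for the fact that the binomial bar at 45 spans the interval 44.5 to 45.5 under the continuous curve; the approximation agrees with the exact value to about three decimal places here.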
This chapter introduces key probability concepts including experiments, outcomes, events, classical, empirical and subjective probabilities, and rules for calculating probabilities. It defines probability as a measure between 0 and 1 of the likelihood of an event occurring. The three approaches to assigning probabilities are classical, empirical, and subjective. Classical probability uses equally likely outcomes and counting favorable outcomes. Empirical probability is based on observed frequencies over many trials. Subjective probability is used when there is little past data. Rules of addition and multiplication for probabilities are presented. Conditional probability and joint probability are also defined.
This document defines key terms and concepts related to probability distributions, including discrete and continuous random variables, and the mean, variance, and standard deviation of probability distributions. It also describes the characteristics and computations for the binomial, hypergeometric, and Poisson probability distributions. Examples are provided to illustrate how to calculate probabilities using these three specific probability distributions.
This document outlines key concepts about discrete probability distributions. It defines probability distributions and random variables, distinguishing between discrete and continuous distributions. It describes how to calculate the mean, variance, and standard deviation of discrete distributions. The document also provides details on the binomial and Poisson probability distributions, including their characteristics and how to compute probabilities using them. Examples are provided to illustrate calculating probabilities and distribution properties.
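As a small illustration of the Poisson computations such a chapter typically covers, a hedged sketch (the call-rate figure is invented):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with mean lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Hypothetical example: on average 3 calls arrive per hour; probability
# of exactly 2 calls in a given hour
lam = 3.0
p2 = poisson_pmf(2, lam)

# A defining property of the Poisson: mean and variance both equal lam
mean, variance = lam, lam

print(round(p2, 4))
```

Here P(X = 2) = 3^2 e^(-3) / 2! is about 0.224.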
This chapter discusses various methods for summarizing and exploring data, including dot plots, stem-and-leaf displays, percentiles, box plots, and scatter plots. Dot plots and stem-and-leaf displays organize data in a way that shows the distribution while maintaining each data point. Percentiles such as the median and quartiles divide data into equal portions. Box plots graphically show the center, spread, and outliers of data. Scatter plots reveal relationships between two variables, while contingency tables summarize categorical data relationships.
This chapter discusses two-sample hypothesis tests for comparing means and proportions between two independent populations or between paired/dependent samples. It provides examples of hypothesis tests to compare the means of two independent samples using the z-test if populations are normal and sample sizes are large, or the t-test if populations are normal but sample sizes are small. Tests are also shown to compare proportions between two independent populations using the z-test, and to compare means between paired samples using the t-test.
This document defines key concepts in hypothesis testing including the null and alternative hypotheses, the five-step hypothesis testing procedure, and types of errors. It provides examples of hypothesis tests for a population mean when the standard deviation is known and unknown, and for a population proportion. The document explains how to set up and conduct hypothesis tests, interpret results, and compute Type I and Type II errors.
The document discusses methods for organizing and presenting both qualitative and quantitative data, including frequency tables, bar charts, pie charts, and different types of frequency distributions. It provides examples of how to construct a frequency table by determining the number of classes, class intervals, and class limits based on a set of data. It also describes how to create histograms, frequency polygons, and cumulative frequency distributions to graphically display a frequency distribution and highlights key terms such as class frequency, class interval, and relative frequency.
This document provides an introduction to statistics, covering key concepts such as descriptive versus inferential statistics, qualitative versus quantitative variables, discrete versus continuous variables, and the four levels of measurement (nominal, ordinal, interval, and ratio). Descriptive statistics are used to organize and summarize data, while inferential statistics allow generalizing from a sample to a population. Variables can be qualitative (non-numeric attributes) or quantitative (numeric values), and quantitative variables can be discrete (taking on countable values) or continuous (taking on any value within a range). The levels of measurement refer to the type of data and whether differences and relationships can be determined.
This chapter discusses point estimates and confidence intervals. A point estimate is a statistic used to estimate a population parameter, while a confidence interval provides a range of values that is likely to include the true population parameter. The width of a confidence interval depends on the sample size, population variability, and desired confidence level. Confidence intervals for a mean can be constructed using the t or z distributions depending on whether the population standard deviation is known. Confidence intervals can also be constructed for a population proportion. Sample sizes needed for estimating means and proportions are also addressed.
The document provides an overview of key economic concepts including:
1) Economics is the study of how scarce resources are allocated to satisfy unlimited wants.
2) Microeconomics examines decision-making of individuals and firms while macroeconomics looks at the whole economy.
3) Ceteris paribus means all other things remain equal while one variable is changed.
4) Scarcity, choice, opportunity cost, and the factors of production (land, labor, capital and entrepreneurship) are basic economic concepts.
5) Graphs, marginal analysis, production possibilities curves, technology, and the three basic economic problems (what, how, and for whom to produce) are also introduced.
BIOSTATISTICS MEAN MEDIAN MODE SEMESTER 8 AND M PHARMACY BIOSTATISTICS.pptxPayaamvohra1
1. The document provides information about biostatistics including measures of central tendency, dispersion, correlation, and regression. It defines terms like mean, median, mode, range, and standard deviation.
2. Examples of calculating mean, median, and mode from individual data sets, grouped frequency distributions, and continuous series are shown step-by-step.
3. Parametric tests like t-test, ANOVA, and tests of significance are also introduced. Overall, the document covers fundamental concepts in biostatistics through examples.
This document discusses various measures of central tendency including the arithmetic mean, geometric mean, harmonic mean, and median. It provides formulas and examples for calculating each measure. The arithmetic mean is the most commonly used average and is calculated by summing all values and dividing by the total number of items. The geometric mean considers the product of values while the harmonic mean is best for data involving rates or proportions. The median is the middle value when values are arranged in order.
This document discusses measures of dispersion in economics, which quantify how data values are spread around the average. It defines four main measures: range, which is the difference between highest and lowest values; mean deviation, which is the average absolute deviation from the mean or median; variance, which is the average of squared deviations from the mean; and standard deviation, which is the square root of the variance. Formulas are provided for calculating each measure from both ungrouped and grouped frequency distribution data. Examples are included to demonstrate calculating the measures.
This document discusses various measures of central tendency including arithmetic mean, geometric mean, and harmonic mean. It provides formulas to calculate each measure and examples worked out step-by-step. For arithmetic mean, the sum of all values is divided by the total number of values. Geometric mean is calculated by taking the nth root of the product of all values. Harmonic mean is calculated as the reciprocal of the arithmetic mean of the reciprocals. In all examples shown, the relationship between the measures holds such that the arithmetic mean is greater than the geometric mean, which is greater than the harmonic mean.
1. The document provides the scheme of work and lesson notes for Economics for Grade 11 students at Princeton College in Nigeria for the first term of the 2019/2020 school year.
2. It outlines 10 weeks of topics to be covered including basic economic tools, measures of dispersion, economic systems, and key economic indicators.
3. The lessons provide definitions, formulas, examples, and practice problems for students to learn concepts like mean, median, mode, range, variance, and standard deviation.
This document discusses various measures of dispersion in statistics including range, mean deviation, variance, and standard deviation. It provides definitions and formulas for calculating each measure along with examples using both ungrouped and grouped frequency distribution data. Box-and-whisker plots are also introduced as a graphical method to display the five number summary of a data set including minimum, quartiles, and maximum values.
The document provides information about various measures of central tendency including arithmetic mean, median, mode, geometric mean, and harmonic mean. It defines each measure and provides examples of calculating them using data from frequency distributions. The arithmetic mean is the most common average and is calculated by summing all values and dividing by the total number of values. The median is the middle value when values are arranged in order. The mode is the most frequent value. The geometric mean is calculated by taking the nth root of the product of n values. The harmonic mean gives the greatest weight to the smallest values and is used to average rates.
This document provides information about calculating the mean absolute deviation and coefficient of dispersion for grouped and ungrouped data:
- The mean absolute deviation is the average of the absolute differences between observations and the central value (mean, median or mode).
- Formulas are given for calculating the mean absolute deviation about the median for ungrouped and grouped data.
- An example shows how to calculate the mean deviation from the median and coefficient of dispersion for a set of ungrouped data and for grouped data from a frequency distribution.
- Practice problems are included for the reader to calculate these measures of dispersion.
CAVENDISH COLLEGE LESSON NOTE FOR FIRST TERM ECONOMICS SSS2 UPDATED..docxDORISAHMADU
The document provides information about measures of central tendency and dispersion from economics lessons at Cavendish College. It defines terms like mean, median, mode, range, variance, and standard deviation. For measures of central tendency, it gives the formulas to calculate each measure and provides an example using exam marks. For measures of dispersion, it similarly defines terms like range, mean deviation, variance, and standard deviation and gives the relevant formulas. It also includes an example using student weights to demonstrate calculating these measures.
This document discusses interpreting test scores through statistical measures like mean, median, mode, and other concepts. It provides formulas and examples to calculate measures of central tendency like mean from classified and unclassified data using long and short methods. It also shows how to calculate the median from a frequency distribution and defines mode as 3 times the median minus 2 times the mean. Examples are given for calculating all three measures of central tendency.
The document provides information on measures of central tendency. It discusses five main measures - arithmetic mean, geometric mean, harmonic mean, mode, and median. For arithmetic mean, it provides formulas and examples for calculating the mean from ungrouped and grouped data using both the direct and assumed mean methods. It also discusses the merits and demerits of each measure.
Measures of central tendency are used to describe the center or typical value of a dataset. The three most common measures are:
1. The mean (average) is calculated by adding all values and dividing by the number of values. It is impacted by outliers.
2. The median is the middle value when data is arranged from lowest to highest. Half the values are above it and half below.
3. The mode is the value that occurs most frequently. Datasets can have multiple modes or no clear mode.
Other measures include weighted mean, quartiles, deciles and percentiles which divide the data into progressively more segments. The choice of measure depends on the characteristics of the data and purpose of
This document discusses various measures of dispersion in statistics. It defines dispersion as the extent to which items in a data set vary from the central value. Some key measures of dispersion discussed include range, interquartile range, quartile deviation, mean deviation, and standard deviation. Formulas and examples are provided for calculating range, quartile deviation, and mean deviation from data sets. The objectives, properties, merits and demerits of each measure are outlined.
The document discusses various measures of central tendency and dispersion used in statistics. It defines mean, median, mode, quartiles, percentiles and deciles as measures of central tendency. It also discusses arithmetic mean, weighted mean, geometric mean, harmonic mean and their relationships. Measures of dispersion discussed include range, mean deviation, standard deviation, variance, interquartile range and coefficient of variation. Formulas to calculate these measures from grouped and ungrouped data are also provided.
This document provides examples and explanations of key statistical concepts including measures of central tendency (mode, median, mean), measures of dispersion (range, quartiles, interquartile range, variance, standard deviation), and examples of calculating these measures from data presented in various formats such as frequency tables, histograms, and ogives. Formulas are given for calculating the median, mean, variance and standard deviation for both discrete and grouped data. Worked examples are provided for finding these measures from different datasets.
This document provides information about various measures of central tendency including arithmetic mean, median, mode, and quartiles. It defines each measure and provides formulas and examples for calculating them for different types of data series, including individual, discrete, frequency distribution, and cumulative frequency series. Formulas are given for calculating the arithmetic mean, median, quartiles, and mode of a data set, along with examples worked out step-by-step. Advantages and disadvantages of each measure are also discussed.
This document discusses measures of central tendency, which are values used to describe the center or typical value of a data set. There are three main measures: mean, median, and mode. The mean is the average value, calculated by summing all values and dividing by the number of values. The median is the middle value when values are arranged from lowest to highest. The mode is the most frequently occurring value. The document provides formulas and examples for calculating each measure, and discusses their relative advantages and disadvantages.
This document discusses measures of central tendency and dispersion such as mean, median, mode, range, standard deviation, and quartile deviation. It provides examples of calculating the mean, median, and mode for continuous data series. One example calculates the mean as 36.31, median as 34.5, and mode using the formula Z = 3Me - 2X as 30.88. The document also includes multiple choice questions related to relationships between averages and identifying averages based on values given. It recommends statistics textbooks for additional reference on topics discussed.
This document discusses measures of dispersion such as mean deviation. It provides formulas to calculate mean deviation from the mean, median, and mode. It also gives examples of calculating mean deviation from given data sets and interpreting the results. Mean deviation is the average of the absolute deviations from the mean, median, or mode. It indicates how far the values are spread out from the central tendency. The document shows how to compute mean deviation and its coefficient from sample data for different measures of central tendency.
Similar to Describing Data: Numerical Measures (20)
The document discusses building customer loyalty through quality service. It defines key concepts like customer satisfaction, customer loyalty, and quality. It explains that customer satisfaction alone is not enough for loyalty and that companies must implement relationship marketing. The chapter objectives are to understand these concepts and how to resolve complaints, implement quality practices, and manage capacity and demand.
This document discusses probability distributions and related concepts. It begins by defining key terms like probability distribution, random variable, discrete and continuous distributions. It then focuses on several specific discrete probability distributions - binomial, hypergeometric, and Poisson. For each, it provides the characteristics and formulas for calculating probabilities. Several examples are worked through to demonstrate calculating probabilities, means, variances and more for problems that fit each distribution.
Frequency Tables, Frequency Distributions, and Graphic PresentationConflagratioNal Jahid
This document provides an overview of key concepts for describing data through frequency tables, distributions, and graphs. It defines important terms like frequency table, distribution, class, interval and discusses how to organize both qualitative and quantitative data. Guidelines for data collection are provided. Examples are given to demonstrate how to construct frequency tables and distributions and convert them to relative frequencies. Finally, different types of graphs for presenting frequency distributions are described, including histograms, polygons and cumulative distributions.
The document discusses strategic planning for businesses in the hospitality and tourism industries. It covers defining a company's mission and objectives, analyzing internal strengths and weaknesses as well as external opportunities and threats, developing business strategies, and implementing and controlling the strategic plan. Key aspects of strategic planning discussed include assessing profit potential of business units, defining competitive scopes, establishing strategic business units, and considering growth strategies like diversification, integration and strategic alliances.
The document discusses key characteristics of marketing services, including the intangible and perishable nature of services. It outlines strategies for managing service quality and differentiation, such as exceeding customer expectations, emphasizing physical surroundings, and training employees. The service-profit chain links customer satisfaction to employee satisfaction and business profits. Managing capacity, consistency, and customer relationships are also important.
The document discusses population migration from rural to urban areas in Bangladesh. It identifies several factors that contribute to migration, including natural factors like monsoon flooding and riverbank erosion, as well as economic factors such as poverty, unemployment, and seasonal food insecurity in rural areas. It also examines the social structure and social stratification in Bangladesh, noting traditional class distinctions had little importance and identifying key social classes based on employment status.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Properties of Arithmetic Mean
1. Every set of interval-level and ratio-level data has a mean.
2. All the values are included in computing the mean.
3. The mean is unique.
4. The sum of the deviations of each value from the mean is zero.
5. The mean is affected by unusually large or small data values.
6. It cannot be computed for an open-ended frequency distribution.
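Properties 4 and 5 are easy to verify numerically; here is a quick Python check using made-up values (not from the slides):

```python
# Demonstrating two properties of the arithmetic mean on a small
# illustrative data set (values invented for this example).
data = [3, 8, 10, 14, 20]
mean = sum(data) / len(data)            # 55 / 5 = 11.0

# Property 4: the deviations from the mean sum to zero.
deviations = [x - mean for x in data]
total_deviation = sum(deviations)       # 0.0, up to floating-point rounding

# Property 5: the mean is pulled by an unusually large value.
data_with_outlier = [3, 8, 10, 14, 200]
mean_with_outlier = sum(data_with_outlier) / len(data_with_outlier)  # 235 / 5 = 47.0
```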
Calculating Sample Mean: From Grouped Data

Class    Frequency
0-8          2
8-16         6
16-24        3
24-32        5
32-40        2
40-48        2
Total       20
Calculating Sample Mean: From Grouped Data

Class    Mid-point (x)    f    d = (x-A)÷i    fd
0-8           4           2        -2         -4
8-16         12           6        -1         -6
16-24     20 = A          3         0          0
24-32        28           5        +1         +5
32-40        36           2        +2         +4
40-48        44           2        +3         +6
Total                    20                ∑fd = 5

x̄ = ∑fx ÷ n, or, using the assumed mean A = 20 and class width i = 8:

x̄ = A + (∑fd ÷ n) × i = 20 + (5 ÷ 20) × 8 = 20 + 2 = 22
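The assumed-mean calculation above can be verified in a few lines of Python; the direct formula ∑fx ÷ n gives the same result:

```python
# Grouped-data mean via the assumed-mean (coding) method from the slide.
midpoints = [4, 12, 20, 28, 36, 44]   # class mid-points x
freqs     = [2, 6, 3, 5, 2, 2]        # class frequencies f
A, i, n = 20, 8, sum(freqs)           # assumed mean, class width, total frequency

d = [(x - A) // i for x in midpoints]              # coded deviations: -2,-1,0,1,2,3
sum_fd = sum(fj * dj for fj, dj in zip(freqs, d))  # 5
mean_coded = A + (sum_fd / n) * i                  # 20 + (5/20)*8 = 22.0

# Cross-check with the direct formula  x̄ = Σfx / n
mean_direct = sum(fj * x for fj, x in zip(freqs, midpoints)) / n  # 440 / 20 = 22.0
```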
Weighted Mean
The weighted mean takes into account the weight of every item. The weighted mean of a set of numbers X1, X2, ..., Xn, with corresponding weights w1, w2, ..., wn, is computed from the following formula:

x̄w = (w1X1 + w2X2 + ... + wnXn) ÷ (w1 + w2 + ... + wn)
EXAMPLE – Weighted Mean
The Carter Construction Company pays its hourly employees $16.50, $19.00, or $25.00 per hour. There are 26 hourly employees: 14 are paid at the $16.50 rate, 10 at the $19.00 rate, and 2 at the $25.00 rate. What is the mean hourly rate paid to the 26 employees?
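A quick Python check of this example; the answer follows directly from the figures given:

```python
# Weighted mean hourly rate for the Carter Construction example.
rates   = [16.50, 19.00, 25.00]   # pay rates
workers = [14, 10, 2]             # number of employees at each rate (weights)

weighted_mean = sum(w * x for w, x in zip(workers, rates)) / sum(workers)
# (14*16.50 + 10*19.00 + 2*25.00) / 26 = 471.00 / 26 ≈ $18.12
```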
The Median
MEDIAN: The midpoint of the values after they have been ordered from the smallest to the largest, or the largest to the smallest.

PROPERTIES OF THE MEDIAN
1. There is a unique median for each data set.
2. It is not affected by extremely large or small values and is therefore a valuable measure of central tendency when such values occur.
3. It can be computed for ratio-level, interval-level, and ordinal-level data.
4. It can be computed for an open-ended frequency distribution if the median does not lie in an open-ended class.

EXAMPLES:
The ages for a sample of five college students are: 21, 25, 19, 20, 22.
Arranging the data in ascending order gives: 19, 20, 21, 22, 25. Thus the median is 21.

The heights of four basketball players, in inches, are: 76, 73, 80, 75.
Arranging the data in ascending order gives: 73, 75, 76, 80. Thus the median is 75.5, the mean of the two middle values.
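Both medians can be confirmed with Python's statistics module:

```python
import statistics

# The two examples from the slide.
ages = [21, 25, 19, 20, 22]
heights = [76, 73, 80, 75]

median_age = statistics.median(ages)        # 21: middle value of 19,20,21,22,25
median_height = statistics.median(heights)  # 75.5: mean of the two middle values
```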
Calculating Median From Grouped Data

Class    f    CF
0-8      2     2
8-16     6     8
16-24    3    11
24-32    5    16
32-40    2    18
40-48    2    20
Total   20

Identifying the median class: the (n+1)/2 th item is the median.
(20+1)/2 = 10.5, so the 10.5th item lies in the 3rd class (16-24).

m = Lm + {((n+1)/2 − (CF+1)) ÷ fm} × i

where
Lm = lower limit of the median class
CF = cumulative frequency of the class preceding the median class
fm = frequency of the median class
i = class width

m = 16 + {((20+1)/2 − (8+1)) ÷ 3} × 8 = 16 + {(10.5 − 9) ÷ 3} × 8 = 16 + 4 = 20
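The interpolation above, using the slide's (n+1)/2 convention, can be sketched in Python:

```python
# Median of the grouped distribution (classes 0-8, 8-16, ..., 40-48) by
# interpolation, following the slide's (n+1)/2 convention.
n  = 20    # total frequency
Lm = 16    # lower limit of the median class (16-24)
fm = 3     # frequency of the median class
CF = 8     # cumulative frequency of the class preceding the median class
i  = 8     # class width

median = Lm + (((n + 1) / 2 - (CF + 1)) / fm) * i
# 16 + ((10.5 - 9) / 3) * 8 = 20.0
```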
The Mode
MODE: The value of the observation that appears most frequently.
Properties of the Mode
1. The mode is the value that occurs most often.
2. It is not affected by extreme values.
3. There may be no mode, or there may be several modes (bimodal or multimodal data).
4. The mode can be used for either numerical or categorical data.
5. The mode can be found from a frequency distribution with an open-ended class.
Calculating Mode From Grouped Data

Class    f
0-8      2
8-16     6
16-24    3
24-32    5
32-40    2
40-48    2
Total   20

M0 = LM0 + {d1 ÷ (d1 + d2)} × i

where
M0 = mode from the sample
LM0 = lower limit of the modal class (8-16, the class with the highest frequency)
d1 = difference between the frequency of the modal class and that of the pre-modal class = 6 − 2 = 4
d2 = difference between the frequency of the modal class and that of the post-modal class = 6 − 3 = 3

M0 = 8 + {4 ÷ (4 + 3)} × 8 ≈ 12.57
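A Python sketch of the interpolation, using d1 and d2 as defined above:

```python
# Mode by interpolation in the grouped data (modal class 8-16, f = 6).
L_mo, i = 8, 8   # lower limit of the modal class, class width
d1 = 6 - 2       # modal frequency minus pre-modal frequency
d2 = 6 - 3       # modal frequency minus post-modal frequency

mode = L_mo + (d1 / (d1 + d2)) * i   # 8 + (4/7)*8 ≈ 12.57
```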
The Relative Positions of the Mean, Median and the Mode
In a positively or negatively skewed distribution, the median is the best measure of location.
In a symmetric distribution, all three measures give the same result.
The Geometric Mean
Useful in finding the average change of percentages, ratios, indexes, or growth rates over time. It has a wide application in business and economics because we are often interested in finding the percentage changes in sales, salaries, or economic figures, such as the GDP, which compound or build on each other. The geometric mean will always be less than or equal to the arithmetic mean. The formula for the geometric mean of n values is:

GM = ⁿ√(X1 × X2 × ... × Xn)

EXAMPLE:
Suppose you receive a 5 percent increase in salary this year and a 15 percent increase next year. The average annual percent increase is 9.886, not 10.0. Why is this so? We begin by calculating the geometric mean:

GM = √((1.05)(1.15)) = 1.09886
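A quick check of the salary example in Python:

```python
# Geometric mean of two successive salary growth factors.
gm = (1.05 * 1.15) ** 0.5        # ≈ 1.09886
avg_increase_pct = (gm - 1) * 100  # ≈ 9.886% average annual increase, not 10.0%
```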
EXAMPLE – Geometric Mean
The return on investment earned by Atkins Construction Company for four successive years was: 30%, 20%, −40%, and 200%. What is the geometric mean rate of return on investment?

GM = ⁴√((1.3)(1.2)(0.6)(3.0)) = ⁴√2.808 ≈ 1.294

That is, a geometric mean rate of return of about 29.4% per year.
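The same computation in Python, with each percent return converted to a growth factor:

```python
import math

# Growth factors for returns of 30%, 20%, -40%, and 200%.
factors = [1.30, 1.20, 0.60, 3.00]

product = math.prod(factors)              # 2.808
gm = product ** (1 / len(factors))        # ≈ 1.294, i.e. about 29.4% per year
```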
EXAMPLE – Geometric Mean
Another use of the geometric mean is to determine the average percent change over a period of time:

GM = ⁿ√(Value at end of period ÷ Value at start of period) − 1

For example, if you earned $30,000 in 1997 and $50,000 in 2007 (n = 10 years), your annual rate of increase over the period is:

GM = ¹⁰√(50,000 ÷ 30,000) − 1 = 1.0524 − 1 = 0.0524 = 5.24%
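A quick check of this growth-rate formula in Python:

```python
# Average annual growth rate from start and end values over n periods.
start, end, n = 30_000, 50_000, 10

gm_rate = (end / start) ** (1 / n) - 1   # ≈ 0.0524, i.e. 5.24% per year
```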
Dispersion
Why Study Dispersion?
A measure of location only describes the center of the data, not the spread of the data. Two sets of data with the same central value may differ in distribution pattern: [0, 10, 10, 20] and [9, 10, 10, 11] both center on 10.
Dispersion enables us to judge the reliability of the data by providing additional information.
It helps us compare the spread of two or more distributions.
We will be careful in using widely dispersed data.
Measures of Dispersion
RANGE
MEAN DEVIATION
VARIANCE AND STANDARD DEVIATION
EXAMPLE – Range
The number of cappuccinos sold at the Starbucks location in the Orange County Airport between 4 and 7 p.m. for a sample of 5 days last year were 20, 40, 50, 60, and 80. Determine the range for the number of cappuccinos sold.

Range = Largest value − Smallest value = 80 − 20 = 60
EXAMPLE – Mean Deviation
The number of cappuccinos sold at the Starbucks location in the Orange County Airport between 4 and 7 p.m. for a sample of 5 days last year were 20, 40, 50, 60, and 80. Determine the mean deviation for the number of cappuccinos sold.

Step 1: Compute the mean.
x̄ = ∑x ÷ n = (20 + 40 + 50 + 60 + 80) ÷ 5 = 50
Step 2: Subtract the mean (50) from each observation and take the absolute value of each difference.
Step 3: Sum the absolute differences found in Step 2, then divide by the number of observations.
MD = ∑|x − x̄| ÷ n = (30 + 10 + 0 + 10 + 30) ÷ 5 = 16
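The three steps in Python:

```python
# Mean deviation of the cappuccino sales sample.
sales = [20, 40, 50, 60, 80]

mean = sum(sales) / len(sales)                       # Step 1: 250 / 5 = 50.0
abs_devs = [abs(x - mean) for x in sales]            # Step 2: 30, 10, 0, 10, 30
md = sum(abs_devs) / len(sales)                      # Step 3: 80 / 5 = 16.0
```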
Example – Population Mean Deviation from Grouped Data
Calculate the mean deviation for the following frequency distribution:

No. of colds in 12 months (X)    No. of persons (f)
0                                 15
1                                 46
2                                 91
3                                162
4                                110
5                                 95
6                                 82
7                                 26
8                                 13
9                                  2
X     f     fX    |X − 3.78|    f|X − 3.78|
0     15      0      3.78          56.70
1     46     46      2.78         127.88
2     91    182      1.78         161.98
3    162    486      0.78         126.36
4    110    440      0.22          24.20
5     95    475      1.22         115.90
6     82    492      2.22         182.04
7     26    182      3.22          83.72
8     13    104      4.22          54.86
9      2     18      5.22          10.44
     N = 642   ∑fX = 2425                  ∑f|X − µ| = 944.08

µ = ∑fX ÷ N = 2425 ÷ 642 ≈ 3.78
MD = ∑f|X − µ| ÷ N = 944.08 ÷ 642 ≈ 1.47
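Recomputing this table in Python (using the exact mean rather than the rounded 3.78) confirms MD ≈ 1.47:

```python
# Population mean deviation for the colds frequency distribution.
x = list(range(10))                                  # 0..9 colds
f = [15, 46, 91, 162, 110, 95, 82, 26, 13, 2]        # number of persons

N = sum(f)                                                 # 642
mu = sum(fi * xi for fi, xi in zip(f, x)) / N              # 2425 / 642 ≈ 3.78
md = sum(fi * abs(xi - mu) for fi, xi in zip(f, x)) / N    # ≈ 1.47
```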
Variance and Standard Deviation
VARIANCE: The arithmetic mean of the squared deviations from the mean.
STANDARD DEVIATION: The square root of the variance.
The variance and standard deviation are nonnegative, and are zero only if all observations are the same.
For populations whose values are near the mean, the variance and standard deviation will be small.
For populations whose values are dispersed from the mean, the population variance and standard deviation will be large.
The variance overcomes the weakness of the range by using all the values in the population.
EXAMPLE – Population Variance and Population Standard Deviation
The number of traffic citations issued each month during the last 12 months in Beaufort County, South Carolina, is reported below. What is the population variance?

Step 1: Find the mean.
µ = ∑X ÷ N = (19 + 17 + ... + 10) ÷ 12 = 348 ÷ 12 = 29
Step 2: Find the difference between each observation and the mean, and square that difference.
Step 3: Sum all the squared differences found in Step 2.
Step 4: Divide the sum of the squared differences by the number of items in the population.
σ² = ∑(X − µ)² ÷ N = 1,488 ÷ 12 = 124
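Since the citation data above is abbreviated, here is the same four-step procedure in Python applied, for illustration only, to the cappuccino figures used earlier in the chapter, treated as a small population:

```python
# Four-step population variance, illustrated on the cappuccino sample
# from earlier in the chapter (treated here as a population).
values = [20, 40, 50, 60, 80]

mu = sum(values) / len(values)              # Step 1: mean = 50.0
sq_diffs = [(x - mu) ** 2 for x in values]  # Step 2: 900, 100, 0, 100, 900
total = sum(sq_diffs)                       # Step 3: 2000.0
variance = total / len(values)              # Step 4: 2000 / 5 = 400.0
std_dev = variance ** 0.5                   # 20.0
```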
Example – Population Variance and Standard Deviation From Grouped Data

Class        f     x       fx      (x−µ)²     f(x−µ)²
700-799      4    750     3,000   250,000   1,000,000
800-899      7    850     5,950   160,000   1,120,000
900-999      8    950     7,600    90,000     720,000
1000-1099   10   1050    10,500    40,000     400,000
1100-1199   12   1150    13,800    10,000     120,000
1200-1299   17   1250    21,250         0           0
1300-1399   13   1350    17,550    10,000     130,000
1400-1499   10   1450    14,500    40,000     400,000
1500-1599    9   1550    13,950    90,000     810,000
1600-1699    7   1650    11,550   160,000   1,120,000
1700-1799    2   1750     3,500   250,000     500,000
1800-1899    1   1850     1,850   360,000     360,000
Total      100         125,000             6,680,000
(Levin: Example 3-66)

µ = ∑fx ÷ N = 125,000 ÷ 100 = 1,250
σ² = ∑f(x − µ)² ÷ N = 6,680,000 ÷ 100 = 66,800
σ = √66,800 ≈ 258.5
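The grouped computation can be replicated in Python from the class midpoints and frequencies:

```python
# Population variance and SD from the grouped distribution above.
midpoints = [750 + 100 * k for k in range(12)]           # 750, 850, ..., 1850
f = [4, 7, 8, 10, 12, 17, 13, 10, 9, 7, 2, 1]

N = sum(f)                                               # 100
mu = sum(fi * x for fi, x in zip(f, midpoints)) / N      # 125000 / 100 = 1250.0
var = sum(fi * (x - mu) ** 2 for fi, x in zip(f, midpoints)) / N   # 66800.0
sd = var ** 0.5                                          # ≈ 258.5
```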
Sample Variance and Standard Deviation

s² = ∑(X − x̄)² ÷ (n − 1)

where:
s² is the sample variance
X is the value of each observation in the sample
x̄ is the mean of the sample
n is the number of observations in the sample

EXAMPLE
The hourly wages for a sample of part-time employees at Home Depot are: $12, $20, $16, $18, and $19. What is the sample variance?
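The sample variance for the wage example works out to 10; a Python check:

```python
# Sample variance and standard deviation of the Home Depot wages.
wages = [12, 20, 16, 18, 19]
n = len(wages)

mean = sum(wages) / n                                 # 85 / 5 = 17.0
s2 = sum((x - mean) ** 2 for x in wages) / (n - 1)    # (25+9+1+1+4) / 4 = 10.0
s = s2 ** 0.5                                         # ≈ 3.16
```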
3-32
EXAMPLE:
Determine the arithmetic mean vehicle
selling price given in the frequency
table below.
The Sample Mean and Standard
Deviation of Grouped Data
EXAMPLE
Compute the standard deviation of the vehicle
selling prices in the frequency table below.
3-33
Uses of Standard Deviation (SD)
SD tells us where the values of a frequency
distribution lie in relation to the mean.
According to Chebyshev's theorem, at least
1 − 1/k² of the values fall within ±kσ of the
mean, no matter what the shape of the
distribution: at least 75% within ±2σ and at
least 89% within ±3σ.
If the distribution is symmetric and bell-shaped,
we can make more precise statements.
3-34
Chebyshev’s Theorem and Empirical Rule
The arithmetic mean biweekly
amount contributed by the Dupree
Paint employees to the company’s
profit-sharing plan is $51.54, and
the standard deviation is $7.51. At
least what percent of the
contributions lie within plus and minus
3.5 standard deviations of the mean?
3-35
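Chebyshev's bound makes the Dupree Paint question a one-liner; a minimal sketch (the function name is my own):

```python
def chebyshev_bound(k: float) -> float:
    """Minimum fraction of values within k standard deviations of the mean."""
    return 1 - 1 / k ** 2

# Dupree Paint: k = 3.5, so at least 1 - 1/3.5^2 of contributions
# lie within $51.54 +/- 3.5($7.51).
print(round(chebyshev_bound(3.5), 3))  # 0.918 -> at least about 92%
```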
Standard Score
Standard Score gives us the number of
standard deviations an observation lies
below or above the mean.
The standard score of any data point x is
represented by z:
z = (x − μ) / σ
3-36
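For instance, using the Dupree Paint figures from the earlier slide (the $60 contribution is a hypothetical value chosen for illustration, not from the text):

```python
def z_score(x: float, mu: float, sigma: float) -> float:
    """Number of standard deviations x lies above (+) or below (-) the mean."""
    return (x - mu) / sigma

# mu = $51.54, sigma = $7.51 (Dupree Paint profit-sharing plan);
# x = $60 is an assumed contribution used only as an example.
print(round(z_score(60, 51.54, 7.51), 2))  # 1.13
```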
Example – Sample Variance and Standard
Deviation From Grouped Data
The administrator of a Georgia hospital
surveyed the number of days 200
randomly chosen patients stayed in the
hospital following an operation. The days
are given in the table.
(a) Calculate the standard deviation and
mean
(b) According to Chebyshev’s theorem,
how many stays should be between 0
and 17 days? How many are actually in
that interval?
c) Because the distribution is roughly
bell-shaped, how many stays can we
expect between 0 and 17 days?
Class    f
1–3      18
4–6      90
7–9      44
10–12    21
13–15    9
16–18    9
19–21    4
22–24    5
Total    200
Levin: Exercise Problem-3-66
3-37
Solution– Sample Variance and Standard
Deviation From Grouped Data
Class    f     x     fx      f(x−x̄)²
1–3      18    2     36      587.90
4–6      90    5     450     663.41
7–9      44    8     352     3.57
10–12    21    11    231     226.62
13–15    9     14    126     355.51
16–18    9     17    153     775.90
19–21    4     20    80      603.68
22–24    5     23    115     1,168.16
Total    200         1,543   4,384.76
3-38
Solution– Sample Variance and Standard
Deviation From Grouped Data
x̄ = Σfx / n = 1543 / 200 = 7.715
s = √( Σf(x − x̄)² / (n − 1) ) = √( 4384.76 / 199 ) ≈ 4.69

(b-i) The 0–17 range is approximately x̄ ± 2s = 7.715 ± 2(4.69).
By Chebyshev's theorem, at least 75% of the 200 stays, i.e. at least 150, are expected in this interval.
(b-ii) Between 182 and 191 stays are actually observed there (182 through the 13–15 class, plus some of the 9 stays in the 16–18 class).
(c) Because the distribution is roughly bell-shaped, about 95% of stays should lie within 2 SD of the mean, i.e. about 0.95 × 200 = 190 stays.
3-39
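The grouped sample mean and SD above can be checked with a short Python sketch (midpoints and frequencies copied from the solution table):

```python
# Hospital-stay example: grouped sample mean and standard deviation.
mids  = [2, 5, 8, 11, 14, 17, 20, 23]   # class midpoints
freqs = [18, 90, 44, 21, 9, 9, 4, 5]    # frequencies (n = 200)

n = sum(freqs)
mean = sum(f * x for f, x in zip(freqs, mids)) / n
ssq = sum(f * (x - mean) ** 2 for f, x in zip(freqs, mids))
s = (ssq / (n - 1)) ** 0.5              # sample SD divides by n - 1

print(mean, round(ssq, 1), round(s, 2))  # 7.715 4384.8 4.69
```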
Coefficient of Variation (CV)
A relative measure of dispersion, comparable
across distributions, that expresses the SD as a
percentage of the mean.
CV is used to compare two or more sets of data
measured in different units.
CV is sensitive to outliers.
CV = (σ / μ) × 100
3-40
Problem of CV
Students' ages in the regular daytime MBA
program and the evening program of Central
University are described by these two samples:

Regular MBA: 23, 29, 27, 22, 24, 21, 25, 26, 27, 24
Evening MBA: 27, 34, 30, 29, 28, 30, 34, 35, 28, 29

If homogeneity of the class is a positive factor in
learning, use a measure of relative variability to
suggest which of the two groups will be easier to
teach.
Levin: Exercise Problem-3-76
3-41
Solution to the Problem of CV
Levin: Exercise Problem-3-76
Regular MBA                          Evening MBA
X     X − x̄    (X − x̄)²             X     X − x̄    (X − x̄)²
23    −1.8     3.24                  27    −3.4     11.56
29     4.2     17.64                 34     3.6     12.96
27     2.2     4.84                  30    −0.4     0.16
22    −2.8     7.84                  29    −1.4     1.96
24    −0.8     0.64                  28    −2.4     5.76
21    −3.8     14.44                 30    −0.4     0.16
25     0.2     0.04                  34     3.6     12.96
26     1.2     1.44                  35     4.6     21.16
27     2.2     4.84                  28    −2.4     5.76
24    −0.8     0.64                  29    −1.4     1.96
ΣX = 248   Σ(X − x̄)² = 55.6         ΣX = 304   Σ(X − x̄)² = 74.4
3-42
Solution to the Problem of CV
Levin: Exercise Problem-3-76
x̄(Reg MBA) = Σx / n = 248 / 10 = 24.8
x̄(EMBA) = Σx / n = 304 / 10 = 30.4

s(Reg MBA) = √( Σ(x − x̄)² / (n − 1) ) = √( 55.6 / 9 ) = 2.485
s(EMBA) = √( Σ(x − x̄)² / (n − 1) ) = √( 74.4 / 9 ) = 2.876

CV(Reg MBA) = (s / x̄) × 100 = (2.485 / 24.8) × 100 = 10.02%
CV(EMBA) = (s / x̄) × 100 = (2.876 / 30.4) × 100 = 9.46%

Since 9.46% < 10.02%, the evening MBA class is more homogeneous, so it should be easier to teach.
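The whole CV comparison can be verified with Python's `statistics` module (the helper function `cv` is my own):

```python
# Coefficient-of-variation comparison for the two MBA samples.
from statistics import mean, stdev   # stdev uses the n - 1 (sample) formula

regular = [23, 29, 27, 22, 24, 21, 25, 26, 27, 24]
evening = [27, 34, 30, 29, 28, 30, 34, 35, 28, 29]

def cv(data):
    """Coefficient of variation: SD as a percentage of the mean."""
    return stdev(data) / mean(data) * 100

print(round(cv(regular), 2), round(cv(evening), 2))  # 10.02 9.46
```

The smaller CV for the evening group confirms the conclusion above.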