Managers require accurate forecasts to make good decisions. There are three main categories of forecasting approaches: qualitative and judgmental techniques, which rely on expert experience; statistical time-series models, which analyze patterns in historical data; and explanatory/causal methods, which model the factors that influence the variable being forecast. Common forecasting techniques include moving averages, exponential smoothing, and trend-line analysis, with error metrics such as mean absolute deviation used to evaluate accuracy.
Hypothesis Test: One-Sample t-Test, Z-Test, Proportion Z-Test (Ravindra Nath Shukla)
This document discusses hypothesis testing concepts including the null and alternative hypotheses, type I and II errors, and the hypothesis testing process. It provides examples of hypothesis testing for a mean where the population standard deviation is known (z-test) and unknown (t-test). The document outlines the 6 steps in hypothesis testing and provides examples using both the critical value approach and p-value approach. It discusses the relationship between hypothesis testing and confidence intervals.
This document outlines key concepts related to constructing confidence intervals for estimating population means and proportions. It discusses how to calculate confidence intervals when the population standard deviation is known or unknown. Specifically, it provides the formulas and assumptions for constructing confidence intervals for a population mean using the normal and t-distributions. It also outlines how to calculate confidence intervals for a population proportion using the normal approximation. Examples are provided to demonstrate how to construct 95% confidence intervals for a mean and proportion based on sample data.
The document provides an overview of analysis of variance (ANOVA), including what it is, how it works, key terminology, and the steps to conduct one-way and two-way ANOVA tests. ANOVA is a statistical technique used to test if there are significant differences between the means of two or more groups. It compares the variation within groups to the variation between groups to determine if observed differences are due to chance. The document outlines the null and alternative hypotheses, calculations for sums of squares, degrees of freedom, F-statistics, and how to interpret the results against critical values from the F-distribution table.
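The one-way ANOVA calculation described above can be sketched in plain Python; the group data below are invented purely for illustration:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F-statistic for a list of groups of observations."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k = len(groups)      # number of groups
    n = len(all_values)  # total observations
    # Between-group sum of squares: variation of group means around the grand mean.
    ssb = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of observations around their group mean.
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)  # mean square between, df = k - 1
    msw = ssw / (n - k)  # mean square within,  df = n - k
    return msb / msw

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]  # hypothetical data
f_stat = one_way_anova_f(groups)
# Compare f_stat against the critical value of F(k-1, n-k) to decide on H0.
```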
The document discusses the Chi-Square test. It begins by explaining what the Chi-Square test is and how it was developed. It then provides the formula for computing Chi-Square and explains how the test can be used to determine if there is a significant difference between observed and expected results. Some key applications of the Chi-Square test discussed include testing hypotheses about variance, testing independence of attributes, and testing goodness of fit. Examples are provided to illustrate how to perform Chi-Square tests for different situations.
This presentation is a part of Business analytics course.
Probability Distribution is a statistical function which links or lists all the possible outcomes a random variable can take, in any random process, with its corresponding probability of occurrence.
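As a minimal illustration of that definition, a discrete probability distribution can be written as a mapping from each possible outcome to its probability; the fair-die example below is hypothetical:

```python
from fractions import Fraction

# Probability distribution of a fair six-sided die:
# every possible outcome paired with its probability of occurrence.
die = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

# The probabilities over all possible outcomes must sum to 1.
total = sum(die.values())

# Expected value E[X] = sum over outcomes of (outcome * probability).
expected = sum(x * p for x, p in die.items())
```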
Hypothesis testing involves stating a null hypothesis (H0) and an alternative hypothesis (H1). A test statistic is calculated from sample data and used to determine whether to reject or fail to reject H0. There are two types of errors: Type I rejects a true H0, Type II fails to reject a false H0. The significance level (α) limits Type I error, while power (1- β) measures the test's ability to reject H0 when it is false. Tests can be one-tailed if H1 specifies a direction, or two-tailed. The rejection region defines values where H0 will be rejected.
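The test-statistic step can be sketched for a one-sample t-test, where t = (x̄ − μ₀)/(s/√n); the sample values and hypothesized mean below are invented:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t-statistic: (sample mean - hypothesized mean) / (s / sqrt(n))."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

sample = [5.2, 5.1, 5.3, 5.0, 5.4]   # hypothetical measurements
t = one_sample_t(sample, mu0=5.0)
# Compare |t| with the critical t value for n - 1 = 4 degrees of freedom
# (or convert to a p-value) to decide whether to reject H0.
```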
Hypothesis Testing: Tests of Proportions and Variances in Six Sigma (vdheerajk)
The document provides information about various statistical hypothesis tests that can be used to analyze data and test if process improvements have resulted in significant changes. It discusses one proportion tests, two proportions tests, one-variance tests, two-variances tests, and how to determine which test to use based on the type of data and questions being asked. Examples are also provided of applying these tests using Minitab software to analyze sample data and test hypotheses about changes between before and after process improvement situations. The document aims to help determine the appropriate statistical tests for validating improvements in processes.
This document provides an overview of sampling theory and statistical analysis. It discusses different sampling methods, important sampling terms, and statistical tests. The key points are:
1) There are two ways to collect statistical data - a complete enumeration (census) or a sample survey. A sample is a portion of a population that is examined to estimate population characteristics.
2) Common sampling methods include simple random sampling, systematic sampling, stratified sampling, cluster sampling, quota sampling, and purposive sampling.
3) Important terms include parameters, statistics, sampling distributions, and statistical inferences about populations based on sample data.
4) Statistical tests covered include hypothesis testing, types of errors, test statistics, and critical values.
This document discusses statistical concepts such as parameters, statistics, descriptive statistics, estimation, and hypothesis testing. It provides examples of:
- Point estimates and interval estimates used to estimate population parameters from sample statistics. Point estimates provide a single value while interval estimates provide a range of values.
- Confidence intervals which specify a range of values that is expected to contain the population parameter a certain percentage of times, known as the confidence level. Common confidence levels are 90%, 95%, and 99%.
- Formulas for constructing confidence intervals for the population mean, proportion, and variance based on the sample statistic, sample size, confidence level, and whether the population standard deviation is known.
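When the population standard deviation is known, the interval for the mean described above is x̄ ± z·σ/√n; a sketch with made-up numbers:

```python
from math import sqrt
from statistics import NormalDist

def mean_ci_known_sigma(xbar, sigma, n, confidence=0.95):
    """Confidence interval for a population mean when sigma is known."""
    # Two-sided critical z value, e.g. about 1.96 for a 95% interval.
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    margin = z * sigma / sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: mean 50.0, known sigma 8.0, n = 64.
low, high = mean_ci_known_sigma(xbar=50.0, sigma=8.0, n=64, confidence=0.95)
```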
1) To understand the underlying structure of Time Series represented by sequence of observations by breaking it down to its components.
2) To fit a mathematical model and proceed to forecast the future.
The document discusses different types of two-sample hypothesis tests, including tests comparing two population means of independent samples, two population proportions, and paired or dependent samples. It provides examples and step-by-step explanations of how to conduct two-sample t-tests, z-tests, and tests of proportions. Key points covered include determining the appropriate test statistic based on sample size and characteristics, stating the null and alternative hypotheses, test criteria, and decisions rules.
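One of the tests mentioned, the pooled two-sample t-test for independent samples, can be sketched as follows (equal variances are assumed and the data are invented):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled t-statistic for two independent samples, assuming equal variances."""
    na, nb = len(a), len(b)
    # Pooled variance weights each sample variance by its degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

a = [1, 2, 3, 4, 5]   # hypothetical group A
b = [2, 3, 4, 5, 6]   # hypothetical group B
t2 = two_sample_t(a, b)
# Compare with the t critical value at na + nb - 2 degrees of freedom.
```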
This document provides an overview of the binomial probability distribution, including key terminology like random experiments, outcomes, sample space, and discrete vs. continuous random variables. It defines a binomial experiment as having n repeated trials with two possible outcomes (success/failure), where the probability of success p is constant for each trial. The number of successes is a binomial random variable with a binomial probability distribution. Several examples are given to illustrate calculating probabilities of outcomes for binomial experiments involving dice rolls, patient recoveries, telephone call successes, ratios of children's sexes, and metal piston rejects. The mean, variance, and standard deviation of the binomial distribution are also defined in terms of n, p, and q.
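The binomial quantities named above can be computed directly; the values of n and p below are arbitrary:

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for a binomial random variable: C(n, k) * p^k * q^(n - k)."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

n, p = 10, 0.5
binom_mean = n * p            # mean = np
binom_var = n * p * (1 - p)   # variance = npq
p5 = binom_pmf(n, p, 5)       # probability of exactly 5 successes in 10 trials
```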
This ppt is a part of Business Analytics course.
Normal distribution:
The normal distribution, also called the Gaussian distribution, is the most important continuous probability distribution. It is a symmetric, bell-shaped curve that describes the distribution of continuous random variables and shows how data are distributed in a population. A large number of random variables are represented, either exactly or approximately, by the normal distribution, which can model a wide range of data such as test scores, height measurements, and weights of people in a population.
The central limit theorem states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases, even if the population is not normally distributed. It provides the mean and standard deviation of the sampling distribution of the sample mean. The document gives the definition of the central limit theorem and provides an example of how to use it to calculate probabilities related to the sample mean of a large normally distributed population.
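The theorem can be illustrated with a small simulation: sample means drawn from a decidedly non-normal (uniform) population cluster around the population mean of 0.5, with spread shrinking as the sample size grows. The sample size, repetition count, and seed below are arbitrary:

```python
import random
from statistics import mean

random.seed(42)   # fixed seed so the simulation is reproducible

n = 30            # size of each sample
reps = 2000       # number of samples drawn

# Draw repeated samples from Uniform(0, 1) and record each sample mean.
sample_means = [mean(random.random() for _ in range(n)) for _ in range(reps)]

# By the CLT, the sample means are approximately normal with mean 0.5
# and standard deviation sqrt(1/12) / sqrt(n).
center = mean(sample_means)
```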
This document provides instructions for conducting a one-sample t-test in SPSS. It explains how to select the test variable, specify the comparison mean value, and obtain the output, which includes descriptive statistics for subgroups and the results of the t-test showing the t-statistic, significance value, degrees of freedom, mean difference, and confidence interval. The t-test is used to test the hypothesis of equal population means.
1. The document discusses the chi-square test, which is used to determine if there is a relationship between two categorical variables.
2. A contingency table is constructed with observed frequencies to calculate expected frequencies under the null hypothesis of no relationship.
3. The chi-square test statistic is calculated by summing the squared differences between observed and expected frequencies divided by the expected frequencies.
4. The calculated chi-square value is then compared to a critical value from the chi-square distribution to determine whether to reject or fail to reject the null hypothesis.
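The four steps above can be sketched for a 2x2 contingency table; the observed counts are made up:

```python
def chi_square_independence(table):
    """Chi-square statistic for a contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence:
            # (row total * column total) / grand total.
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

observed = [[10, 20],
            [30, 40]]
chi2 = chi_square_independence(observed)
# Compare with the chi-square critical value for (rows-1)*(cols-1) = 1 df.
```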
Hypothesis testing refers to formal statistical procedures used to accept or reject claims about populations based on data. It involves:
1) Stating a null hypothesis that makes a claim about a population parameter.
2) Collecting sample data and computing a test statistic.
3) Determining whether to reject the null hypothesis based on the probability of obtaining the sample statistic if the null is true.
Rejecting the null supports the alternative hypothesis. Type I and Type II errors occur when the null is incorrectly rejected or not rejected. Hypothesis tests aim to minimize errors while maximizing power to detect meaningful alternative hypotheses.
This document defines time series and its components. A time series is a set of observations recorded over successive time intervals. It has four main components: trend, seasonality, cycles, and irregular variations. Trend refers to the overall increasing or decreasing tendency over time. Seasonality refers to predictable changes that occur around the same time each year. Cycles have periods longer than a year. Irregular variations are random fluctuations. The document also discusses methods for analyzing time series components including additive, multiplicative, and mixed models.
Hypothesis testing involves proposing and testing hypotheses, or predictions, about relationships between variables. There are four main types of hypotheses: null, alternative, directional, and non-directional. The null hypothesis proposes no relationship between variables, while the alternative hypothesis contradicts the null. Directional hypotheses predict the nature of a relationship, while non-directional hypotheses do not. Common statistical tests used for hypothesis testing include the z-test, t-test, chi-square test, and F-test. Hypothesis testing is a crucial part of the scientific method for assessing theories through empirical observation.
This document provides an overview of hypotheses testing in research. It defines a hypothesis as an explanation or proposition that can be tested scientifically. The main points covered are:
1. The general procedure for hypothesis testing involves making formal statements of the null and alternative hypotheses, selecting a significance level, choosing a statistical distribution, collecting a random sample, calculating probabilities, and comparing probabilities to determine whether to reject or fail to reject the null hypothesis.
2. There are two types of hypotheses tests - one-tailed and two-tailed. A one-tailed test has one rejection region while a two-tailed test has two rejection regions, one in each tail.
3. Errors in hypothesis testing can occur when the null hypothesis is wrongly rejected (a Type I error) or wrongly retained (a Type II error).
This document provides information on chi-square tests and other statistical tests for qualitative data analysis. It discusses the chi-square test for goodness of fit and independence. It also covers Fisher's exact test and McNemar's test. Examples are provided to illustrate chi-square calculations and how to determine statistical significance based on degrees of freedom and critical values. Assumptions and criteria for applying different tests are outlined.
This document discusses parametric and nonparametric statistical tests. Parametric tests like the t-test and ANOVA assume a normal distribution of data and compare population means. Nonparametric tests do not assume a normal distribution and can be used when sample sizes are small or distributions are unknown. Specific parametric tests covered include the t-test for comparing two groups, one-way ANOVA for comparing three or more groups on one factor, and two-way ANOVA for examining two factors. Examples of how and when to use these various tests are provided.
This session differentiates between univariate, bivariate, and multivariate analysis. It covers practical assessment of tables of critical values and understanding of degrees of freedom.
This document provides information about the Kruskal-Wallis H test, a non-parametric method for testing whether samples originate from the same distribution. It describes how the Kruskal-Wallis test is a generalization of the Mann-Whitney U test that allows comparison of more than two independent groups. The test works by ranking all data from lowest to highest and then summing the ranks for each group to calculate the test statistic H, which is compared to a chi-squared distribution to determine whether to reject or fail to reject the null hypothesis that all population medians are equal.
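The ranking procedure described can be sketched as follows (tied values receive their average rank; the group data are invented):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for independent groups."""
    pooled = sorted(x for g in groups for x in g)
    n_total = len(pooled)
    # Assign each distinct value its average rank (this handles ties).
    rank = {}
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    rank_term = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n_total * (n_total + 1)) * rank_term - 3 * (n_total + 1)

h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Compare h with a chi-square critical value at k - 1 degrees of freedom.
```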
- Forecasting helps reduce risk and uncertainty in decision making by predicting future outcomes.
- There are three main types of forecasting methods: qualitative, extrapolative/time series, and causal/explanatory.
- Time series forecasting uses historical data patterns to predict future values, accounting for trends, seasonality, cycles, and randomness. Common time series forecasting techniques include moving averages, weighted moving averages, and exponential smoothing.
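Simple exponential smoothing, one of the techniques listed, updates each forecast as a weighted blend of the latest observation and the previous forecast; the demand series and smoothing constant below are made up:

```python
def exponential_smoothing(series, alpha):
    """One-step smoothed values: F(t+1) = alpha * y(t) + (1 - alpha) * F(t)."""
    forecast = series[0]          # initialize with the first observation
    forecasts = [forecast]
    for y in series[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

demand = [100, 110, 105, 115, 120]   # hypothetical monthly demand
smoothed = exponential_smoothing(demand, alpha=0.3)
# The last smoothed value serves as the forecast for the next period.
```

A larger alpha reacts faster to recent changes; a smaller alpha produces a smoother, more stable forecast.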
Business Decision Making Project Part 2
Jared Linscombe
QNT/275
Dr. Davisson
September 12, 2016
Descriptive Statistics
Descriptive statistics are statistics that describe or summarize features of collected data. Descriptive statistics simply present quantitative information in a manner that can be easily managed. The large amount of data is reduced into a simple summary and therefore the whole process of describing the data is less laborious.
For example, finding the mean helps to summarize a lot of individual information in a way that is quickly understood. The samples are likely to produce different independent variables that affect the sales of Elite Technologies Limited. For this reason, we opt to use bivariate analysis in describing the statistics. Bivariate analysis of the descriptive statistics derived from the data will help in drawing relationships between different variables.
For a more accurate representa ...
Hypothesis testing involves stating a null hypothesis (H0) and an alternative hypothesis (H1). A test statistic is calculated from sample data and used to determine whether to reject or fail to reject H0. There are two types of errors: Type I rejects a true H0, Type II fails to reject a false H0. The significance level (α) limits Type I error, while power (1- β) measures the test's ability to reject H0 when it is false. Tests can be one-tailed if H1 specifies a direction, or two-tailed. The rejection region defines values where H0 will be rejected.
hypothesis testing-tests of proportions and variances in six sigmavdheerajk
The document provides information about various statistical hypothesis tests that can be used to analyze data and test if process improvements have resulted in significant changes. It discusses one proportion tests, two proportions tests, one-variance tests, two-variances tests, and how to determine which test to use based on the type of data and questions being asked. Examples are also provided of applying these tests using Minitab software to analyze sample data and test hypotheses about changes between before and after process improvement situations. The document aims to help determine the appropriate statistical tests for validating improvements in processes.
This document provides an overview of sampling theory and statistical analysis. It discusses different sampling methods, important sampling terms, and statistical tests. The key points are:
1) There are two ways to collect statistical data - a complete enumeration (census) or a sample survey. A sample is a portion of a population that is examined to estimate population characteristics.
2) Common sampling methods include simple random sampling, systematic sampling, stratified sampling, cluster sampling, quota sampling, and purposive sampling.
3) Important terms include parameters, statistics, sampling distributions, and statistical inferences about populations based on sample data.
4) Statistical tests covered include hypothesis testing, types of errors, test statistics, critical values,
This document discusses statistical concepts such as parameters, statistics, descriptive statistics, estimation, and hypothesis testing. It provides examples of:
- Point estimates and interval estimates used to estimate population parameters from sample statistics. Point estimates provide a single value while interval estimates provide a range of values.
- Confidence intervals which specify a range of values that is expected to contain the population parameter a certain percentage of times, known as the confidence level. Common confidence levels are 90%, 95%, and 99%.
- Formulas for constructing confidence intervals for the population mean, proportion, and variance based on the sample statistic, sample size, confidence level, and whether the population standard deviation is known.
1) To understand the underlying structure of Time Series represented by sequence of observations by breaking it down to its components.
2) To fit a mathematical model and proceed to forecast the future.
The document discusses different types of two-sample hypothesis tests, including tests comparing two population means of independent samples, two population proportions, and paired or dependent samples. It provides examples and step-by-step explanations of how to conduct two-sample t-tests, z-tests, and tests of proportions. Key points covered include determining the appropriate test statistic based on sample size and characteristics, stating the null and alternative hypotheses, test criteria, and decisions rules.
This document provides an overview of the binomial probability distribution, including key terminology like random experiments, outcomes, sample space, and discrete vs. continuous random variables. It defines a binomial experiment as having n repeated trials with two possible outcomes (success/failure), where the probability of success p is constant for each trial. The number of successes is a binomial random variable with a binomial probability distribution. Several examples are given to illustrate calculating probabilities of outcomes for binomial experiments involving dice rolls, patient recoveries, telephone call successes, ratios of children's sexes, and metal piston rejects. The mean, variance, and standard deviation of the binomial distribution are also defined in terms of n, p, and q.
This ppt is a part of Business Analytics course.
Normal distribution : -
The Normal Distribution, also called the Gaussian Distribution, is the most significant continuous probability distribution.
A normal distribution is a
symmetric, bell-shaped curve
that describes the distribution of continuous random variables.
The normal curve describes how data are distributed in a population.
A large number of random variables are either nearly or exactly represented by the normal distribution
The normal distribution can be used to represent a wide range of data, such as test scores, height measurements, and weights of people in a population.
The central limit theorem states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases, even if the population is not normally distributed. It provides the mean and standard deviation of the sampling distribution of the sample mean. The document gives the definition of the central limit theorem and provides an example of how to use it to calculate probabilities related to the sample mean of a large normally distributed population.
This document provides instructions for conducting a one sample t-test in SPSS. It explains how to select the test variable, specify the comparison mean value, and obtain the output, which includes descriptive statistics for subgroups and the results of the t-test showing the t-statistic, significance value, degrees of freedom, mean difference, and confidence interval. The t-test is used to test the hypothesis of equal population means.
1. The document discusses the chi-square test, which is used to determine if there is a relationship between two categorical variables.
2. A contingency table is constructed with observed frequencies to calculate expected frequencies under the null hypothesis of no relationship.
3. The chi-square test statistic is calculated by summing the squared differences between observed and expected frequencies divided by the expected frequencies.
4. The calculated chi-square value is then compared to a critical value from the chi-square distribution to determine whether to reject or fail to reject the null hypothesis.
Hypothesis testing refers to formal statistical procedures used to accept or reject claims about populations based on data. It involves:
1) Stating a null hypothesis that makes a claim about a population parameter.
2) Collecting sample data and computing a test statistic.
3) Determining whether to reject the null hypothesis based on the probability of obtaining the sample statistic if the null is true.
Rejecting the null supports the alternative hypothesis. Type I and Type II errors occur when the null is incorrectly rejected or not rejected. Hypothesis tests aim to minimize errors while maximizing power to detect meaningful alternative hypotheses.
This document defines time series and its components. A time series is a set of observations recorded over successive time intervals. It has four main components: trend, seasonality, cycles, and irregular variations. Trend refers to the overall increasing or decreasing tendency over time. Seasonality refers to predictable changes that occur around the same time each year. Cycles have periods longer than a year. Irregular variations are random fluctuations. The document also discusses methods for analyzing time series components including additive, multiplicative, and mixed models.
Hypothesis testing involves proposing and testing hypotheses, or predictions, about relationships between variables. There are four main types of hypotheses: null, alternative, directional, and non-directional. The null hypothesis proposes no relationship between variables, while the alternative hypothesis contradicts the null. Directional hypotheses predict the nature of a relationship, while non-directional hypotheses do not. Common statistical tests used for hypothesis testing include the z-test, t-test, chi-square test, and F-test. Hypothesis testing is a crucial part of the scientific method for assessing theories through empirical observation.
This document provides an overview of hypotheses testing in research. It defines a hypothesis as an explanation or proposition that can be tested scientifically. The main points covered are:
1. The general procedure for hypothesis testing involves making formal statements of the null and alternative hypotheses, selecting a significance level, choosing a statistical distribution, collecting a random sample, calculating probabilities, and comparing probabilities to determine whether to reject or fail to reject the null hypothesis.
2. There are two types of hypotheses tests - one-tailed and two-tailed. A one-tailed test has one rejection region while a two-tailed test has two rejection regions, one in each tail.
3. Errors in hypothesis testing can occur when the null hypothesis
This document provides information on chi-square tests and other statistical tests for qualitative data analysis. It discusses the chi-square test for goodness of fit and independence. It also covers Fisher's exact test and McNemar's test. Examples are provided to illustrate chi-square calculations and how to determine statistical significance based on degrees of freedom and critical values. Assumptions and criteria for applying different tests are outlined.
This document discusses parametric and nonparametric statistical tests. Parametric tests like the t-test and ANOVA assume a normal distribution of data and compare population means. Nonparametric tests do not assume a normal distribution and can be used when sample sizes are small or distributions are unknown. Specific parametric tests covered include the t-test for comparing two groups, one-way ANOVA for comparing three or more groups on one factor, and two-way ANOVA for examining two factors. Examples of how and when to use these various tests are provided.
this session differentiates between univariate, bivariate, and multivariate analysis. it covers practical assessment of table of critical values and understanding of the degree of freedom
This document provides information about the Kruskal-Wallis H test, a non-parametric method for testing whether samples originate from the same distribution. It describes how the Kruskal-Wallis test is a generalization of the Mann-Whitney U test that allows comparison of more than two independent groups. The test works by ranking all data from lowest to highest and then summing the ranks for each group to calculate the test statistic H, which is compared to a chi-squared distribution to determine whether to reject or fail to reject the null hypothesis that all population medians are equal.
- Forecasting helps reduce risk and uncertainty in decision making by predicting future outcomes.
- There are three main types of forecasting methods: qualitative, extrapolative/time series, and causal/explanatory.
- Time series forecasting uses historical data patterns to predict future values, accounting for trends, seasonality, cycles, and randomness. Common time series forecasting techniques include moving averages, weighted moving averages, and exponential smoothing.
Business Decision Making Project Part 2
Jared Linscombe
QNT/275
Dr. Davisson
September 12, 2016
Descriptive Statistics
Descriptive statistics are statistics that describe or summarize features of collected data. Descriptive statistics simply present quantitative information in a manner that can be easily managed. The large amount of data is reduced into a simple summary and therefore the whole process of describing the data is less laborious.
For example, finding the mean summarizes a lot of individual information in a way that is quickly understood. The samples are likely to produce different independent variables that affect the sales of Elite Technologies Limited. For this reason, we opt to use bivariate analysis in describing the statistics. Bivariate analysis of the descriptive statistics derived from the data will help in drawing relationships between different variables.
For a more accurate representa ...
Demand forecasting involves estimating future demand for a product or service using both informal and quantitative methods. It is important for making pricing, production capacity, and market entry decisions. Methods include educated guesses, analyzing historical sales data, and using current test market data. Demand forecasting aims to minimize risks from an uncertain future by making reasonable assumptions about likely market conditions.
This document discusses quantitative approaches to forecasting, including time series analysis and forecasting techniques. It covers the components of a time series, including trends, cycles, seasonality, and irregular components. Specific quantitative forecasting approaches covered include smoothing methods like moving averages, weighted moving averages, and exponential smoothing. Examples are provided to demonstrate how to perform moving averages and exponential smoothing on time series data for sales of headache medicine. The document aims to teach readers how to analyze time series data and select appropriate forecasting techniques.
The document discusses forecasting techniques. It outlines the learning objectives which include listing elements of a good forecast, describing qualitative and quantitative forecasting approaches, and explaining measures of forecast accuracy. The document also describes various forecasting techniques such as qualitative judgmental forecasts, quantitative time-series forecasts including naive forecasts, moving averages, weighted moving averages, exponential smoothing, and linear trend analysis. It provides examples and discusses advantages and disadvantages of each technique.
The document discusses various quantitative forecasting techniques including smoothing methods, trend projection, and regression analysis. It provides examples of using moving averages, weighted moving averages, and exponential smoothing to forecast sales data for Robert's Drugs. Specifically, it calculates the mean squared error for different smoothing techniques including a two period moving average, three period moving average, and exponential smoothing with alphas of 0.1 and 0.2 to determine the best method for the Robert's Drugs data.
This document provides an overview of time series analysis and forecasting techniques. It discusses key concepts such as stationary and non-stationary time series, additive and multiplicative models, smoothing methods like moving averages and exponential smoothing, autoregressive (AR), moving average (MA) and autoregressive integrated moving average (ARIMA) models. The document uses examples to illustrate how to identify patterns in time series data and select appropriate models for description, explanation and forecasting of time series.
This document discusses demand forecasting methods. It explains that forecasting involves estimating future demand for products and services. There are different types of forecasts including long-range, medium-range, and short-term forecasts used for strategic, tactical, and operational planning respectively. Qualitative methods rely on judgment while quantitative methods use mathematical models and historical data. Common quantitative methods are linear regression, moving average, and exponential smoothing. Accuracy and characteristics like impulse response and noise dampening ability are used to evaluate forecasting models.
IRJET - Overview of Forecasting Techniques (IRJET Journal)
This document provides an overview of different forecasting techniques, including qualitative and quantitative methods. It discusses several qualitative techniques like the Delphi method, consumer market surveys, and jury of executive opinion. It also examines various quantitative techniques such as the moving average method, weighted moving average method, exponential smoothing, and least squares. The document serves to introduce students to common forecasting approaches and provide examples of each type of technique.
Time series analysis involves collecting data points over consistent time intervals and analyzing how variables change over time. It allows analysts to identify trends, seasonal patterns, and make predictions about future data. There are several key objectives of time series analysis including describing patterns in historical data, explaining relationships between variables, forecasting future values, and identifying outliers. Proper time series analysis requires large datasets with consistent data collection intervals to identify meaningful trends while accounting for noise and seasonal fluctuations.
The document discusses various forecasting techniques used in business analytics. It begins by explaining the importance of forecasting and defining time-series data components like trend, seasonality, cyclicality and irregular components. It then covers techniques like moving average, single exponential smoothing, Holt's method, Croston's method and regression models. It also discusses identifying appropriate autoregressive (AR) and moving average (MA) models using autocorrelation functions and model selection techniques like ARIMA.
1. Demand forecasting forms the basis of supply chain planning as it allows managers to plan production, transportation, and other activities in anticipation of or in response to customer demand.
2. Forecasts can use qualitative methods like expert judgment or quantitative methods like time-series analysis of historical data to predict demand trends, levels, and seasonal variations.
3. The appropriate forecasting method depends on the forecast horizon, with short-term forecasts relying more on time-series analysis, medium-term using both time-series and causal models, and long-term relying more on judgment.
Here are 3 practice problems using quantitative forecasting methods:
1. Using simple exponential smoothing, forecast next period's sales given the following data with a smoothing constant of 0.3:
Period: Sales
1: 100
2: 110
3: 120
4: ?
Forecast: seeding the first forecast with the first actual value and applying F(t+1) = F(t) + 0.3(A(t) - F(t)):
F1 = 100
F2 = 100 + 0.3(100 - 100) = 100
F3 = 100 + 0.3(110 - 100) = 103
F4 = 103 + 0.3(120 - 103) = 108.1
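As a quick check on the arithmetic above, here is a minimal sketch of simple exponential smoothing in Python; the function name and the choice to seed the first forecast with the first actual value are illustrative, not part of the original problem statement.

```python
def ses_forecast(actuals, alpha):
    """One-step-ahead simple exponential smoothing.

    Seeds the first forecast with the first actual value, then applies
    F(t+1) = F(t) + alpha * (A(t) - F(t)) through the series, returning
    the forecast for the period after the last observation.
    """
    forecast = actuals[0]  # F1 = A1
    for actual in actuals:
        forecast = forecast + alpha * (actual - forecast)
    return forecast

# Periods 1-3 sales of 100, 110, 120 with alpha = 0.3:
next_period = ses_forecast([100, 110, 120], 0.3)  # approximately 108.1
```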
2. Using linear regression, forecast next year's profits based on advertising expenditures given:
Year: Prof
This document discusses forecasting of diesel fuel prices by a team of students. It provides background on types of diesel fuel and their uses. The document then discusses the purpose and importance of forecasting for businesses. It outlines different qualitative and quantitative forecasting methods that could be used to forecast diesel prices, including executive opinions, Delphi method, time series analysis, exponential smoothing, and linear trend lines. The key factors to consider for price forecasting are also summarized.
Analysis of Forecasting Sales By Using Quantitative And Qualitative Methods (IJERA Editor)
This paper focuses on analysis of forecasting sales using quantitative and qualitative methods. The forecast should help create a model for measuring success and setting goals from financial and operational viewpoints. The resulting model should tell whether we have met our goals with respect to measures, targets, and initiatives.
Demand forecasting involves determining customer demand for products and services in terms of what is needed, where, when, and in what quantities. It is a customer-focused activity that supports other planning functions and is the foundation of a company's logistics process. There are qualitative and quantitative methods for demand forecasting. Qualitative methods include surveys of buyer intentions, expert opinions, the Delphi method, market experimentation, and collective opinions. Quantitative methods include time series models, trend analysis, moving averages, exponential smoothing, and regression models.
This document discusses two management techniques: time series analysis and work sampling. It provides details on:
1. Time series analysis techniques including trend analysis, forecasting, moving averages, weighted moving averages, and exponential smoothing which are used to predict patterns in data over time.
2. Work sampling which measures the proportion of time workers spend on different activities and can be used to determine standard times for tasks. The document outlines the steps for conducting a work sampling study.
3. An example of a work sampling study conducted over 1500 minutes to develop standard times for a cargo loading operation.
This document provides an overview of time series analysis and cross-sectional analysis. It defines both approaches and discusses their goals, types, components, techniques, and advantages/disadvantages. For time series analysis, it describes trends, seasonality, cycles, and irregular variations as the main components. Common techniques mentioned include Box-Jenkins ARIMA models and Holt-Winters exponential smoothing. Advantages include the ability to study trends over time, while disadvantages relate to issues like missing data, measurement error, and changing patterns. The document then covers cross-sectional analysis and provides a comparison of the two approaches.
This document discusses various tools and techniques for demand forecasting that can help entrepreneurs with production planning. It describes several statistical methods like the Delphi technique, nominal group technique, opinion polls, moving average, trend analysis, and time series analysis that can be used to estimate demand. It also discusses concepts like seasonality, trends, cycles, and Box-Jenkins models that can aid in demand forecasting. The document provides links to download additional resources on statistics, reasoning, English language improvement, mathematics, and general knowledge.
This document discusses capital budgeting and methods for evaluating investment projects. It covers:
- Net present value (NPV) which discounts future cash flows to determine if a project's present value exceeds its cost. Projects with positive NPV should be accepted.
- Internal rate of return (IRR) which is the discount rate that makes a project's NPV equal to zero. Projects with IRR exceeding the cost of capital should be accepted.
- Examples are provided to demonstrate calculating NPV and IRR using the discounted cash flow approach for projects with both even and uneven cash flows over time.
The document compares NPV and IRR as evaluation methods and their appropriate use for investment decisions.
The document discusses market efficiency and the efficient market hypothesis. It defines market efficiency as prices reflecting all relevant financial information, so there are equal opportunities for buyers and sellers. The efficient market hypothesis states that stock prices instantly change to reflect new public information, making it impossible for investors to consistently earn above-average returns. The hypothesis is criticized for not explaining market bubbles that have occurred. The document also explains the weak, semi-strong, and strong forms of market efficiency and provides examples to illustrate market efficiency.
This document provides an overview of business communication. It begins with defining communication and distinguishing between intra-personal and inter-personal communication. It then defines business communication and discusses the process, characteristics, and types including organizational structure, direction, and mode of expression. It also covers specific communication channels like downward, upward, horizontal, and diagonal communication. The document then addresses the essentials of good English for business communication and discusses business correspondence, including the importance, types, and anatomy of effective business letters and resumes. It concludes with emphasizing the importance of effective communication for business success.
The document summarizes the story of King Lear. It discusses how King Lear divides his kingdom among his three daughters, asking them to declare their love for him. His daughter Cordelia refuses to exaggerate her genuine affection, angering her father and leading him to disinherit her. This sets off a tragic chain of events. The document then analyzes important lessons about communication that can be drawn from the story, such as the importance of honesty, understanding one's audience, and preventing misunderstandings.
This document discusses flexible budgets. It defines a flexible budget as a budget that changes based on different output levels to recognize varying cost behavior patterns. Flexible budgets are prepared for a range of activity levels rather than a single level. They provide a dynamic basis for comparison and a tailored budget for each output volume. Some key advantages are determining costs, sales, and profits at different operating capacities and identifying profit areas.
Bureaucratic management is a formal system of organization based on hierarchical levels and defined roles to maintain efficiency. It was developed by Max Weber, who saw it as the most rational and efficient form of organization. Key characteristics include a clear line of authority, strict rules and regulations, division of labor, and impersonal relationships based on position rather than personality. While efficient for large, stable organizations like governments, it is criticized for being rigid and limiting growth due to excessive rules.
Discharge of a contract means termination of contractual obligations between parties. A contract can be discharged in several ways including performance, agreement between parties, impossibility of performance, failure to provide facilities for performance, death, refusal of performance, unauthorized alterations, lapse of time, operation of law, and breach of contract. Some key ways are discharge by performance when both parties fulfill their obligations, and discharge by agreement/consent when parties mutually agree to novate, accept accord and satisfaction, remit obligations, or rescind the contract.
This document discusses the organic farming industry in India. It notes that while agriculture still contributes significantly to India's GDP, organic farming is growing. Demand for organic food is increasing, especially in major cities, due to greater health awareness. The M.P. Vindhya Jaivik Herbal Development Foundation was established to promote organic farming, reduce middlemen, develop export zones, and improve farmers' livelihoods and public health. There is significant market potential for organic foods in India given rising demand, government support, and opportunities for export and differentiation. However, challenges include a lack of farmer awareness, high costs, and competition.
A market consists of sellers and buyers where transactions can potentially occur. Marketing management involves choosing target markets and attracting, retaining, and growing customers through superior value. The marketing concept holds that organizational goals are met by understanding customer needs and satisfying them better than competitors. It has four pillars: targeting specific markets, understanding customer needs, integrating marketing functions, and achieving profitability. Customer retention marketing aims to convert occasional buyers into loyal, long-term customers through communication, service, listening to customers, loyalty programs, and connecting customers.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data (Kiwi Creative)
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Natural Language Processing (NLP), RAG and its applications .pptx (fkyes25)
1. In the realm of Natural Language Processing (NLP), knowledge-intensive tasks such as question answering, fact verification, and open-domain dialogue generation require the integration of vast and up-to-date information. Traditional neural models, though powerful, struggle with encoding all necessary knowledge within their parameters, leading to limitations in generalization and scalability. The paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" introduces RAG (Retrieval-Augmented Generation), a novel framework that synergizes retrieval mechanisms with generative models, enhancing performance by dynamically incorporating external knowledge during inference.
The Ipsos - AI - Monitor 2024 Report.pdf (Social Samosa)
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Analysis insight about a Flyball dog competition team's performance (roli9797)
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
State of Artificial intelligence Report 2023 (kuntobimo2016)
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
2. Forecasting
Managers require good forecasts of future events to
make good decisions.
For example, forecasts of interest rates, energy prices,
and other economic indicators are needed for financial
planning;
Sales forecasts are needed to plan production and
workforce capacity; and
forecasts of trends in demographics, consumer behavior,
and technological innovation are needed for long-term
strategic planning.
@Ravindra Nath Shukla (PhD Scholar) ABV-IIITM
3. Forecasting
Many of us have faced the challenge of selecting the
best option when buying a new product or trying out a
new technique.
This can be a challenging task as most options tend to
sound similar to one another, making it difficult to
determine the best choice.
4. Forecasting
Business analysts may choose from a wide range of
forecasting techniques to support decision making.
Three major categories of forecasting approaches are:
- Qualitative and judgmental techniques
- Statistical time-series models
- Explanatory/causal methods
5. 1. Qualitative and Judgmental Forecasting
Qualitative and judgmental techniques rely on experience
and intuition;
they are necessary –
when historical data are not available or
when the decision maker needs to forecast far into the
future.
For example, a forecast of when the next generation of a microprocessor will
be available and what capabilities it might have will depend greatly on the
opinions and expertise of individuals who understand the technology.
6. Judgmental techniques range from simple methods, such as a manager's opinion or a group-based jury of executive opinion, to more structured approaches such as historical analogy and the Delphi method.
7. Historical Analogy
In historical analogy a forecast is obtained through
a comparative analysis with a previous situation.
For example, if a new product is being introduced, the response of
consumers to marketing campaigns to similar, previous products
can be used as a basis to predict how the new marketing campaign
might fare.
9. Analogies often provide good forecasts,
but you need to be careful to recognize new or different
circumstances.
Another analogy relates international conflict to the price of oil:
should war break out, the price would be expected to rise, as it has done in the past.
10. The Delphi Method
The Delphi method uses a panel of experts, whose identities are typically kept confidential from one another, to respond to a sequence of questionnaires.
11. Characteristics of the Delphi
Participants are experts in their field.
The technique uses a series of rounds or iterations where
information is given back to the participants for review.
Participants work anonymously. They do not know who the
other participants might be.
Future focused
The Delphi technique is a “consensus” research method.
In most cases, the goal is to approach a consensus among the
expert panel as to future “best” solutions.
13. 2. Statistical time-series models
A time series is a stream of historical data, such as weekly sales.
Time series generally have one or more of the following components:
Trends (T),
Seasonal effects (S),
Cyclical effects (C), or
Random behavior (R)
Time-series models assume that whatever forces have influenced the dependent variable in the recent past will continue into the near future. Thus, the forecasts are developed by extrapolating these data into the future.
14. Statistical time-series models find greater applicability for short-range forecasting problems.
We characterize the values of a time series over T periods as A(t), t = 1, 2, …, T.
Time series are tabulated or graphed to show the nature of the time dependence.
The forecast value (Ye) is commonly expressed as an additive or multiplicative function of its components:
Additive model: Ye = T + S + C + R
Multiplicative model: Ye = T × S × C × R
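The two models can be illustrated with a minimal sketch; all component values below are made up for illustration, and the variable names are my own.

```python
# Illustrative (made-up) component values for a single period.
T = 100.0   # trend level, in units of the data
S = 1.10    # seasonal index (multiplicative form, centered on 1)
C = 1.05    # cyclical index
R = 0.98    # random component index

# Multiplicative model: components act as proportional adjustments to the trend.
Ye_multiplicative = T * S * C * R

# Additive model: components are expressed in the same units as the data.
S_add, C_add, R_add = 10.0, 5.0, -2.0  # made-up additive components
Ye_additive = T + S_add + C_add + R_add
```

Note the difference in units: multiplicative components are indexes around 1, while additive components are in the units of the data itself.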
16. T : Trend is a gradual long-term directional movement in the
data (growth or decline).
S: Seasonal effects are similar variations occurring during corresponding periods, e.g., December retail sales.
Seasonal indexes can be quarterly, monthly, weekly, daily, or even hourly.
17. C: Cyclical factors are the long-term swings about the trend line.
They are often associated with business cycles and may extend
out to several years in length.
18. R: Random components are sporadic (unpredictable) effects due to chance and unusual occurrences.
Time series that do not have trend, seasonal, or cyclical effects
but are relatively constant and exhibit only random behavior are
called stationary time series.
19. Different types of Time-series models
20. A. Naive Approach
It is the simplest way to forecast.
It is a technique that assumes the dependent variable (such as demand or sales) in the next period will equal its value in the most recent period.
E.g., Big basket's February sales forecast is simply its January sales.
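The naive approach is simple enough to sketch in one line; the function name and figures are illustrative only.

```python
def naive_forecast(history):
    """Naive approach: the next period's forecast is the most recent actual."""
    return history[-1]

# If January sales were 100 units and February sales were 120 units,
# the naive forecast for March is simply February's actual value:
march_forecast = naive_forecast([100, 120])  # 120
```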
21. B. Moving Averages model
The moving average method assumes that future observations
will be similar to the recent past.
It is most useful as a short-range forecasting method.
Although this method is very simple, it has proven to be quite useful in stable environments, such as inventory management and demand forecasting.
Simple Moving Average:
F(t+1) = (Y(t) + Y(t-1) + … + Y(t-N+1)) / N
i.e., the sum of the dependent variable over the most recent N periods, divided by N,
where F(t+1) = forecast for period t+1, N = number of periods in the average, and Y(k) = dependent variable in period k.
22. Moving Averages model
For example, let's say we have monthly sales data for the past 7 months, as follows.
If we want to forecast sales for a given month, say March, we can use a 2-month SMA, which averages the sales of the past two months:
Month   Actual Sales   2-month SMA forecast
Jan     100            -
Feb     120            -
Mar     110            (100 + 120)/2 = 110
Apr     130            (120 + 110)/2 = 115
May     140            (110 + 130)/2 = 120
Jun     120            (130 + 140)/2 = 135
Jul     110            (140 + 120)/2 = 130
Aug     ?              (120 + 110)/2 = 115
[Chart: actual sales vs. 2-month SMA forecast, Jan-Jul]
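The 2-month SMA column above can be reproduced with a short sketch; the function name and list layout are my own choices.

```python
def sma_forecast(history, n=2):
    """Forecast the next period as the simple mean of the last n observations."""
    window = history[-n:]
    return sum(window) / len(window)

# Actual sales Jan-Jul from the table above:
sales = [100, 120, 110, 130, 140, 120, 110]
august = sma_forecast(sales, n=2)  # (120 + 110) / 2 = 115.0

# Rolling one-step forecasts for Mar-Aug, matching the table column:
rolling = [sma_forecast(sales[:t], n=2) for t in range(2, len(sales) + 1)]
```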
24. Weighted Moving Average:
In a weighted moving average, past observations are given differential weights (usually the weights decrease as the data become older).
The weighted moving average is given by:
WMA = Σ W(k) · Y(k) / Σ W(k)
where W(k) is the weight given to the value of Y at time k (Y(k)) and the denominator is the sum of the weights.
25. Here's an example to illustrate how WMA works:
Let's say you are tracking the sales of a particular product over a period of 5 months. The sales figures and weights for each month are as follows:

Month   Actual Sales (Units)   Weight   WMA
Jan     100                    0.5      100
Feb     120                    0.3      95
Mar     150                    0.1      102.67
Apr     180                    0.05     110.47
May     200                    0.05     120

WMA for the May month = (100 × 0.5) + (120 × 0.3) + (150 × 0.1) + (180 × 0.05) + (200 × 0.05) = 50 + 36 + 15 + 9 + 10 = 120
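The May computation above can be sketched as follows; the weights are taken from the table and the function name is illustrative.

```python
def wma(values, weights):
    """Weighted moving average: sum of w_k * y_k divided by the sum of weights."""
    assert len(values) == len(weights)
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

sales = [100, 120, 150, 180, 200]        # Jan-May actual sales
weights = [0.5, 0.3, 0.1, 0.05, 0.05]    # weights from the table (sum to 1)
may_wma = wma(sales, weights)            # 50 + 36 + 15 + 9 + 10 = 120
```

Since these weights already sum to 1, the division by the weight total leaves the weighted sum unchanged; with unnormalized weights it rescales the result correctly.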
26. Error Metrics and Forecast Accuracy
The quality of a forecast depends on how accurate it is in predicting
future values of a time series.
In simple moving average and smoothing models, different choices
of the number of periods (or smoothing constant) will produce different forecasts.
To analyze the effectiveness of different forecasting models, we can
define error metrics, which compare quantitatively the forecast with the
actual observations.
Three metrics that are commonly used are :
• Mean Absolute Deviation,
• Mean Square Error, and
• Mean Absolute Percentage Error.
27. Mean Absolute Deviation :
The mean absolute deviation (MAD) is the absolute
difference between the actual value and the forecast,
averaged over a range of forecast values:
MAD = Σ |At − Ft| / n
where At = actual value of the time series at time t, Ft = forecast value for time t,
and n = number of forecast values
(not the number of data points, since we do not have a forecast value
for the first k data points).
MAD provides a robust measure of error and is less affected
by extreme observations.
28. Mean Square Error :
Mean square error (MSE) is probably the most commonly used error
metric.
It penalizes larger errors, because squaring larger numbers has a
greater impact than squaring smaller numbers. The formula for MSE
is
MSE = Σ (At − Ft)² / n
Again, n represents the number of forecast values used in computing
the average.
The square root of MSE, called the root mean square error
(RMSE), is also used:
RMSE = √MSE
RMSE is expressed in the same units as the data, allowing for more
practical comparisons.
29. Mean Absolute Percentage Error :
The mean absolute percentage error (MAPE) is the average
of the absolute errors divided by the actual observation values:
MAPE = (1/n) Σ (|At − Ft| / At) × 100%
MAPE eliminates the measurement scale by dividing the
absolute error by the time-series data value.
This allows a better relative comparison.
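The three error metrics can be sketched together; here they are applied to the 2-month SMA example from earlier (actuals vs. forecasts for Mar–Jul; the function names are mine, not from the slides):

```python
import math

# Actuals and 2-month SMA forecasts for Mar..Jul from the earlier table.
actual = [110, 130, 140, 120, 110]
forecast = [110, 115, 120, 135, 130]

def mad(a, f):
    """Mean absolute deviation: average of |actual - forecast|."""
    return sum(abs(x - y) for x, y in zip(a, f)) / len(a)

def mse(a, f):
    """Mean square error: average of squared errors."""
    return sum((x - y) ** 2 for x, y in zip(a, f)) / len(a)

def rmse(a, f):
    """Root mean square error, in the same units as the data."""
    return math.sqrt(mse(a, f))

def mape(a, f):
    """Mean absolute percentage error, scale-free."""
    return 100 * sum(abs(x - y) / x for x, y in zip(a, f)) / len(a)

print(mad(actual, forecast))   # (0 + 15 + 20 + 15 + 20) / 5 = 14.0
print(mse(actual, forecast))   # (0 + 225 + 400 + 225 + 400) / 5 = 250.0
print(round(rmse(actual, forecast), 2))
print(round(mape(actual, forecast), 2))
```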
36. D. Trend Line
This technique fits a trend line to a series of historical data points and
then projects the line into the future for medium- to long-range
forecasts.
If the time series exhibits an increasing or decreasing trend, then
trend analysis is appropriate.
A trend line defines the relationship between the forecast value and the
time period by the following equation:
ŷ = a + bX
where ŷ is the computed value of the variable to be predicted (called the
dependent variable) and X is the time period.
37. X is the independent variable and Y is the dependent
variable, since the forecast value depends on the time
period.
The least squares method is used to develop the trend line.
a = Y-axis intercept
b = slope of the trend line (the rate of change in Y
for a given change in X)
X = the independent variable (which in this case is time)
38. Now we calculate the values of the constants a and b
as:
b = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² = Cov(X, Y) / Var(X)
and
a = Ȳ − b × X̄
39. In the equation, Y = a + bX, a is the intercept on the Y-
axis. a gives the value of variable Y, when X = 0.
The slope of the line is b which gives the change in the
value of dependent variable Y for a unit change in the
value of X.
The “Intercept” and “Slope” functions in Excel are used to
calculate a and b respectively.
40. Example: Consider the demand data given in the table
below.
Project the trend line.

Time, independent variable (X):   1   2   3   4   5   6   7   8   9  10
Demand, dependent variable (Y):   9  15  32  48  52  60  39  65  90  93
42. Solution (manual method):
For any given time period, the difference between the forecast
(values on the dashed line) and the actual demand (values on the
zigzag line) gives the error in that period.
The trend analysis method minimizes the sum of the squares of
these errors in calculating the values of a and b.
44. The Excel functions give b = 8.65 and a = 2.73.
Use them in the equation ŷ = a + bX to make a forecast.
For example, for period 11 (X = 11),
Forecast = 2.73 + 8.65 × 11 = 97.87.
Similarly, for period 12,
Forecast = 2.73 + 8.65 × 12 = 106.52.
(These forecasts use Excel's unrounded values of a and b; the rounded
values give 97.88 and 106.53.)
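The constants can be verified directly from the least squares formulas b = Cov(X, Y)/Var(X) and a = Ȳ − b × X̄; a sketch using the demand data above (variable names are mine):

```python
# Demand data from the example: periods 1..10 and observed demand.
X = list(range(1, 11))
Y = [9, 15, 32, 48, 52, 60, 39, 65, 90, 93]

n = len(X)
x_bar = sum(X) / n
y_bar = sum(Y) / n

# b = sum of (X - X̄)(Y - Ȳ) over sum of (X - X̄)²; a = Ȳ - b·X̄.
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / \
    sum((x - x_bar) ** 2 for x in X)
a = y_bar - b * x_bar

print(round(b, 2), round(a, 2))  # 8.65 2.73, matching the Excel results
print(round(a + b * 11, 2))      # forecast for period 11: 97.87
print(round(a + b * 12, 2))      # forecast for period 12: 106.52
```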
45. 3. Explanatory/causal methods.
Explanatory/causal models, often called econometric
models, seek to identify factors that explain statistically
the patterns observed in the variable being forecast,
usually with regression analysis.
46. Regression Analysis
Regression Analysis establishes a relationship between two sets of numbers
that are time series.
For example, when a series of Y numbers (such as the monthly sales of
cameras over a period of years) is causally connected with the series of X
numbers (the monthly advertising budget), then it is beneficial to establish
a relationship between X and Y in order to forecast Y.
In regression analysis X is the independent variable and Y is the dependent
variable.
47. The regression analysis gives the relationship between X
and Y by the following equation.
Y = a + bX,
where, a is the intercept on the Y-axis
(value of the variable Y when X = 0); and b is the slope of the line
which gives the change in the value of variable Y for a unit change
in the value of X.
The “Intercept” function in Excel calculates a and the
“Slope” function in Excel is used to find the value of b.
48. Example: Use the data given in the following table for ten pairs of X
and Y.

Observation number:        1    2    3    4    5    6    7    8    9   10
Independent variable (X): 10   12   11    9   10   12   10   13   14   12
Dependent variable (Y):  400  600  700  500  800  700  500  700  800  600

o The Excel functions give b = 50.23 and a = 62.44.
o Use them in the equation Y = a + bX to forecast.
o Suppose X = 15; then
Forecast = 62.44 + 50.23 × 15 = 815.84.
49. The forecasts (values on straight line) and the actual demand
data points have been plotted in the following figure.
50. For any given time period, the difference between the
forecast values and the actual demand gives the error in that
period.
The regression analysis minimizes the sum of the squares of
these errors in calculating the values of a and b.
An assumption generally made in regression analysis
is that the relationship between the correlated pairs is linear.
However, if nonlinear relations are hypothesized, there are
powerful, but more complex, methods for performing nonlinear
regression analysis.
51. Coefficient of Determination
The coefficient of determination (r²), where r is the
coefficient of correlation, is a measure of the variability in the
dependent variable that is accounted for by the regression line.
The coefficient of determination always falls between 0 and 1.
For example, if r = 0.8, the coefficient of determination is r2 = 0.64
meaning that 64% of the variation in Y is due to variation in X.
The remaining 36% variation in the value of Y is due to other variables.
If the coefficient of determination is low, multiple regression analysis
may be used to account for additional variables affecting the dependent
variable Y.
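As a check on the regression example, this sketch re-derives b, a, and the X = 15 forecast from the slide-48 data, and also computes r and r² (the r values are computed here, not given in the slides; variable names are mine):

```python
import math

# Ten (X, Y) pairs from the regression example.
X = [10, 12, 11, 9, 10, 12, 10, 13, 14, 12]
Y = [400, 600, 700, 500, 800, 700, 500, 700, 800, 600]

n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n

# Sums of squares and cross products about the means.
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))
sxx = sum((x - x_bar) ** 2 for x in X)
syy = sum((y - y_bar) ** 2 for y in Y)

b = sxy / sxx            # slope
a = y_bar - b * x_bar    # intercept
r = sxy / math.sqrt(sxx * syy)  # coefficient of correlation

print(round(b, 2), round(a, 2))  # 50.23 62.44, matching the Excel results
print(round(a + b * 15, 2))      # forecast at X = 15: 815.84
print(round(r, 3), round(r * r, 3))  # r and the coefficient of determination
```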