IBM401 Lecture 12



Slide 1: Quantitative Analysis for Business
Lecture 12 – Final Exam Review
October 4th, 2010
Slide 2: Final exam
- Number of questions: 3
- Duration: 3 hours
- Grade: 50% of final grade
- Topics: everything
Slide 3: Linear regression
- Identifying the independent and dependent variables
- The sample correlation coefficient for two variables
- The form of the linear regression model
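The sample correlation coefficient mentioned above can be computed directly from its definition; a minimal sketch in plain Python, with made-up data:

```python
# Sample correlation coefficient r = cov(x, y) / (s_x * s_y),
# computed from sums of squared deviations. Data is illustrative only.
from math import sqrt

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # independent variable
y = [2.1, 3.9, 6.2, 8.0, 9.8]   # dependent variable

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)

r = sxy / sqrt(sxx * syy)
print(round(r, 4))   # close to 1: this data is almost perfectly linear
```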
Slide 4: Linear regression
The assumptions of the linear regression model are the following:
1. A linear relation exists between the dependent variable and the independent variable.
2. The independent variable is not random.
3. The expected value of the error term is 0.
4. The variance of the error term is the same for all observations.
5. The error term is uncorrelated across observations.
6. The error term is normally distributed.
Slide 5: Linear regression
- In the regression model Yi = b0 + b1Xi + εi, once we know the estimated parameters b̂0 and b̂1, the predicted value of the dependent variable for any value X of the independent variable is Ŷ = b̂0 + b̂1X.
- In simple linear regression, the F-test tests the same null hypothesis as the t-test of the statistical significance of b1:
  H0: b1 = 0
  H1: b1 ≠ 0
  If F > Fc, reject H0.
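As a sketch of the estimation step, the least-squares formulas b̂1 = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and b̂0 = ȳ − b̂1x̄ can be applied directly; the data below is contrived so the fit is exact:

```python
# Simple linear regression by ordinary least squares, then a prediction.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]          # lies exactly on y = 1 + 2x

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

b1 = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
     / sum((xi - mean_x) ** 2 for xi in x)   # slope
b0 = mean_y - b1 * mean_x                    # intercept

y_hat = b0 + b1 * 6                          # predicted Y at X = 6
print(b0, b1, y_hat)                         # 1.0 2.0 13.0
```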
Slide 6: Multiple regression
The multiple regression model:
Y = b0 + b1X1 + b2X2 + … + bkXk + ε
- Intercept (b0) – the value of the dependent variable when the independent variables are all equal to zero
- Each slope coefficient (bj) – the estimated change in the dependent variable for a one-unit change in that independent variable, holding the other independent variables constant
Slide 7: Hypothesis testing of regression coefficients
The t-statistic is used to test the significance of an individual coefficient in a multiple regression. It has n − k − 1 degrees of freedom:
t = (estimated regression coefficient − hypothesized value) / (coefficient standard error of bj)
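A numeric sketch of that ratio; the coefficient estimate, hypothesized value, and standard error below are assumed numbers, not figures from the lecture:

```python
# t-statistic for one coefficient: t = (b_hat - b_null) / se_b,
# with n - k - 1 degrees of freedom. All inputs are hypothetical.
b_hat = 0.5     # estimated coefficient (assumed)
b_null = 0.0    # hypothesized value under H0
se_b = 0.125    # coefficient standard error (assumed)
n, k = 40, 3    # observations, independent variables

t_stat = (b_hat - b_null) / se_b
df = n - k - 1
print(t_stat, df)   # 4.0 36
```

A t of 4.0 with 36 degrees of freedom would exceed the usual critical values, so H0 would be rejected at conventional significance levels.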
Slide 8: F-statistic
- The F-test assesses how well the set of independent variables, as a group, explains the variation in the dependent variable.
- The F-statistic is used to test whether at least one of the independent variables explains a significant portion of the variation in the dependent variable.
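Given sums of squares from an ANOVA table, the F-statistic is the ratio of the mean square for regression to the mean square error; the SSR, SSE, n, and k below are made-up numbers for illustration:

```python
# F = (SSR / k) / (SSE / (n - k - 1)); compare against the critical
# value Fc for (k, n - k - 1) degrees of freedom. Inputs are assumed.
SSR = 180.0     # regression sum of squares (assumed)
SSE = 60.0      # error sum of squares (assumed)
n, k = 25, 3

msr = SSR / k               # mean square regression
mse = SSE / (n - k - 1)     # mean square error
F = msr / mse
print(round(F, 2))          # 21.0
```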
Slide 9: Coefficient of determination (R²)
The multiple coefficient of determination, R², can be used to test the overall effectiveness of the entire set of independent variables in explaining the dependent variable.
Slide 10: Adjusted R²
- Unfortunately, R² by itself may not be a reliable measure of the multiple regression model: R² almost always increases as variables are added to the model, so we need to take new variables into account.
- Adjusted R²:
  Ra² = 1 − (1 − R²)(n − 1) / (n − k − 1)
  where
  n = number of observations
  k = number of independent variables
  Ra² = adjusted R²
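The adjustment is a one-line calculation; the R², n, and k values below are invented to show the penalty at work:

```python
# Adjusted R²: penalizes R² for the number of independent variables.
def adjusted_r2(r2, n, k):
    """Ra² = 1 - (1 - R²)(n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With 30 observations and 3 variables (made-up figures):
print(round(adjusted_r2(0.90, 30, 3), 4))   # 0.8885

# Adding a 4th variable that nudges R² up to only 0.901 still
# lowers the adjusted value: the penalty outweighs the gain.
print(round(adjusted_r2(0.901, 30, 4), 4))  # 0.8852
```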
Slide 11: Adjusted R²
- Whenever there is more than one independent variable, Ra² is less than or equal to R².
- Adding new variables to the model will increase R² but may increase or decrease Ra².
- Ra² may be less than 0 if R² is low enough.
Slide 12: Time-Series Models
- Time-series models attempt to predict the future based on the past.
- Common time-series models are:
  - Moving average
  - Exponential smoothing
  - Trend projections
  - Decomposition
- Regression analysis is used in trend projections and in one type of decomposition model.
Slide 13: Decomposition of a Time-Series
A time series typically has four components:
- Trend (T) – the gradual upward or downward movement of the data over time
- Seasonality (S) – a pattern of demand fluctuations above or below the trend line that repeats at regular intervals
- Cycles (C) – patterns in annual data that occur every several years
- Random variations (R) – "blips" in the data caused by chance and unusual situations
Slide 14: Weighted Moving Averages
- Weighted moving averages use weights to put more emphasis on recent periods.
- They are often used when a trend or other pattern is emerging.
- Mathematically:
  Weighted moving average = Σ(wi × observation i) / Σwi
  where wi = weight for the ith observation

Slide 15: Exponential Smoothing
- Exponential smoothing is easy to use and requires little record keeping of data. It is a type of moving average.
- New forecast = Last period's forecast + α(Last period's actual demand − Last period's forecast)
- where α is a weight (or smoothing constant) with a value between 0 and 1 inclusive.
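The weighted moving average can be sketched as follows; the weights and demand figures are invented:

```python
# Weighted moving average: recent periods get heavier weights.
demand = [100, 104, 110, 118]    # oldest ... most recent (made up)
weights = [1, 2, 3]              # weight 3 goes to the most recent period

def wma_forecast(series, weights):
    recent = series[-len(weights):]   # last k observations
    return sum(w * y for w, y in zip(weights, recent)) / sum(weights)

print(wma_forecast(demand, weights))  # (1*104 + 2*110 + 3*118) / 6 = 113.0
```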
Slide 16: Exponential Smoothing
Mathematically:
Ft+1 = Ft + α(Yt − Ft)
where
Ft+1 = new forecast (for time period t + 1)
Ft = previous forecast (for time period t)
α = smoothing constant (0 ≤ α ≤ 1)
Yt = previous period's actual demand
- The idea is simple – the new estimate is the old estimate plus some fraction of the error in the last period.

Exponential Smoothing with Trend Adjustment
- Like all averaging techniques, exponential smoothing does not respond to trends.
- A more complex model can be used that adjusts for trends.
- The basic approach is to develop an exponential smoothing forecast and then adjust it for the trend:
  Forecast including trend (FITt) = New forecast (Ft) + Trend correction (Tt)
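The smoothing update Ft+1 = Ft + α(Yt − Ft) runs as a short loop; seeding the first forecast with the first actual is one common convention, and the demand numbers are made up:

```python
# Simple exponential smoothing with alpha = 0.3.
actual = [100, 110, 105, 115]   # demand history (illustrative)
alpha = 0.3

f = actual[0]                   # seed: first forecast = first actual
for y in actual:
    f = f + alpha * (y - f)     # F[t+1] = F[t] + alpha * (Y[t] - F[t])

print(round(f, 3))              # 107.02, the forecast for the next period
```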
Slide 17: Exponential Smoothing with Trend Adjustment
The equation for the trend correction uses a new smoothing constant β. Tt is computed by:
Tt+1 = (1 − β)Tt + β(Ft+1 − Ft)
where
Tt+1 = smoothed trend for period t + 1
Tt = smoothed trend for the preceding period
β = trend smoothing constant that we select
Ft+1 = simple exponential smoothed forecast for period t + 1
Ft = forecast for the previous period
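Combining the level update and the trend update gives a Holt-style trend-adjusted forecast; α, β, and the demand series below are arbitrary choices for illustration:

```python
# Exponential smoothing with trend adjustment: FIT = F + T.
alpha, beta = 0.3, 0.4
actual = [100, 108, 116, 124]   # steadily trending demand (made up)

F, T = actual[0], 0.0           # seed forecast and smoothed trend
for y in actual:
    F_next = F + alpha * (y - F)               # smooth the level
    T = (1 - beta) * T + beta * (F_next - F)   # smooth the trend
    F = F_next

fit = F + T                     # forecast including trend
print(round(fit, 3))            # 115.163
```

Because the plain smoothed forecast lags the rising series, the trend term pushes the forecast upward toward the actual trajectory.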
Slide 18: Seasonal Variations with Trend
- When both trend and seasonal components are present, the forecasting task is more complex.
- Seasonal indices should be computed using a centered moving average (CMA) approach.
- There are four steps in computing CMAs:
  1. Compute the CMA for each observation (where possible).
  2. Compute the seasonal ratio = Observation / CMA for that observation.
  3. Average the seasonal ratios to get the seasonal indices.
  4. If the seasonal indices do not add to the number of seasons, multiply each index by (Number of seasons) / (Sum of indices).
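The four steps above can be sketched for quarterly data; the two years of demand below are invented, so seasons = 4:

```python
# Seasonal indices via centered moving averages (CMA), 4 seasons.
demand = [80, 120, 100, 60, 88, 132, 110, 66]   # Q1..Q4 for two years
seasons = 4
half = seasons // 2

# Step 1: CMA for each observation with a full window. With an even
# number of seasons, center by averaging two adjacent moving averages.
cma = {}
for t in range(half, len(demand) - half):
    ma1 = sum(demand[t - half:t + half]) / seasons
    ma2 = sum(demand[t - half + 1:t + half + 1]) / seasons
    cma[t] = (ma1 + ma2) / 2

# Step 2: seasonal ratio = observation / CMA for that observation.
ratios = {t: demand[t] / cma[t] for t in cma}

# Step 3: average the ratios season by season.
indices = []
for s in range(seasons):
    vals = [r for t, r in ratios.items() if t % seasons == s]
    indices.append(sum(vals) / len(vals))

# Step 4: rescale so the indices sum to the number of seasons.
scale = seasons / sum(indices)
indices = [i * scale for i in indices]
print([round(i, 3) for i in indices])   # Q2 above 1, Q4 below 1
```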