This working paper describes using a recursive least squares estimation method with exponential forgetting to estimate coefficients of a state space model of the US macroeconomy. The model uses 12 state variables including consumption, investment, industrial production, and interest rates. Out-of-sample forecasts show lower root mean square errors than OLS forecasts, indicating the adaptive method provides better tracking. Sensitivity analysis found a forgetting factor of 0.96411 produced the most accurate in-sample and out-of-sample forecasts compared to other values tested.
Many business and economic applications of forecasting involve time series data. Regression models can be fit to monthly, quarterly, or yearly data using the techniques described in previous chapters. However, because data collected over time tend to exhibit trends, seasonal patterns, and so forth, observations in different time periods are related or autocorrelated. That is, for time series data, the sample of observations cannot be regarded as a random sample. Problems of interpretation can arise when standard regression methods are applied to observations that are related to one another over time. Fitting regression models to time series data must be done with considerable care.
Forecasting With An Adaptive Control Algorithm
WORKING PAPER SERIES
Forecasting with an Adaptive Control Algorithm
Donald S. Allen
Yang Woo Kim
Meenakshi Pasupathy
Working Paper 1996-009A
http://research.stlouisfed.org/wp/1996/96-009.pdf
FEDERAL RESERVE BANK OF ST. LOUIS
Research Division
411 Locust Street
St. Louis, MO 63102
______________________________________________________________________________________
The views expressed are those of the individual authors and do not necessarily reflect official positions of
the Federal Reserve Bank of St. Louis, the Federal Reserve System, or the Board of Governors.
Federal Reserve Bank of St. Louis Working Papers are preliminary materials circulated to stimulate
discussion and critical comment. References in publications to Federal Reserve Bank of St. Louis Working
Papers (other than an acknowledgment that the writer has had access to unpublished material) should be
cleared with the author or authors.
Photo courtesy of The Gateway Arch, St. Louis, MO. www.gatewayarch.com
FORECASTING WITH AN ADAPTIVE CONTROL ALGORITHM
July 1996
ABSTRACT
We construct a parsimonious model of the U.S. macro economy using a state space representation and recursive estimation. At the core of the estimation procedure is a prediction/correction algorithm based on a recursive least squares estimation with exponential forgetting. The algorithm is a Kalman filter-type update method which minimizes the sum of discounted squared errors. This method reduces the contribution of past errors in the estimate of the current period coefficients and thereby adapts to potential time variation of parameters. The root mean square errors of out-of-sample forecasts of the model show improvement over OLS forecasts. One period ahead in-sample forecasts showed better tracking than OLS in-sample forecasts.
JEL CLASSIFICATION: E17, E27
KEYWORDS: Forecasting, Time-varying parameters, Adaptive Control
Donald S. Allen
Economist, Research Department
Federal Reserve Bank of St. Louis
411 Locust Street
St. Louis, MO 63102

Yang Woo Kim
Visiting Scholar, Federal Reserve Bank of St. Louis
The Bank of Korea, Institute for Monetary and Economic Research

Meenakshi Pasupathy
Visitor, Federal Reserve Bank of St. Louis
Baruch College of the City University of New York
Acknowledgement: Rob Dittmar provided programming and numerical methods assistance for
this article.
1. Introduction

Implicit in the use of ordinary least squares loss functions in econometric estimation is the assumption of time invariance of system parameters. Intuition suggests that in a macroeconomic environment fundamental relationships among aggregate variables change over time for various reasons such as the Lucas critique or compositional effects of heterogeneity. Dynamic systems with time-varying properties present a fundamental problem in control and signal processing. Sargent (1993) suggests that adaptation is an important part of economic dynamics when considering issues of learning and bounded rationality. Recursive estimation methods play a key role in adaptation and tracking of time-varying dynamics. A good survey of the basic techniques used to derive and analyze algorithms for tracking time-varying systems is provided in Ljung and Gunnarsson (1990).
The purpose of this paper is to use a recursive method to estimate coefficients of a parsimonious model of the U.S. macro economy and compare forecast results to those of a simple OLS model. We use the state space representation approach, where state variables (X) are used to define the state of the economy at each period in time, and the control variable (u) is assumed to be exogenously determined. If we believe that the parameters of the system change slowly over time, the ordinary least squares method can be extended by weighting current information more heavily than past information. In this model we use a recursive least squares with exponential forgetting algorithm to estimate the coefficients of the state space model.

The resulting "transition" matrix is used for a short run forecast of the state variables, assuming time invariance after the end of the sample period. The root mean square of the forecast errors is better for this model than for a simple OLS model. The rest of the paper is organized as follows: Section 2 briefly discusses the state space representation and the recursive least squares algorithm. Section 3 discusses the choice of variables and their transformations and the parameters of choice for the algorithm. Section 4 presents the results of the model and Section 5 concludes the paper.
2. State Space Representation and Recursive Estimation
The state space representation of systems¹ is characterized by the choice of n state
variables (usually denoted by X) which are assumed to fully describe the system at any point in time, a control variable (denoted by u) which is assumed to be exogenously determined and can be used to move the system from its current state to another state, and a measured or output variable (denoted by y) which is of particular interest. The system can then be described by a triplet A, b, and c as in equations (1) and (2) below.²

X_t = A X_{t-1} + b u_t    (1)

y_t = c^T X_t    (2)

¹ For this paper we assume a single-input single-output (SISO) system, which means that our output and control variables are scalars.
² The system described is fully deterministic. Stochastic errors can be assumed to enter either additively or in the coefficients.
X is an n x 1 vector of state variables which describes the economy,³ A is an n x n matrix of coefficients, u is a control variable (scalar), b is an n x 1 vector of coefficients, y is the output variable (of interest), and c is an n x 1 vector of coefficients. In a stochastic environment we assume that the measurements of the variables are noisy and uncertain and that the noise components are independent, identically normally distributed disturbances. For this paper we will focus only on estimating the state transition equation (1).
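To make the notation concrete, equations (1) and (2) can be simulated directly. The matrices below are illustrative values chosen for this sketch, not the paper's estimates:

```python
import numpy as np

# Toy single-input system in the form of equations (1) and (2):
#   X_t = A X_{t-1} + b u_t   (state transition)
#   y_t = c^T X_t             (output)
# A, b, c here are made-up illustrative values, not the paper's estimates.
n = 2
A = np.array([[0.8, 0.1],
              [0.0, 0.9]])          # n x n transition matrix
b = np.array([0.5, 0.2])            # n x 1 control loading
c = np.array([1.0, 0.0])            # n x 1 output loading

X = np.zeros(n)                     # initial state
states = []
for t in range(50):
    u = 1.0                         # exogenous control input, held constant
    X = A @ X + b * u               # equation (1)
    y = c @ X                       # equation (2)
    states.append(X.copy())
states = np.array(states)

# With a stable A (eigenvalues inside the unit circle) and a constant u,
# the state converges to (I - A)^{-1} b u.
steady = np.linalg.solve(np.eye(n) - A, b * 1.0)
```

With these values the state settles near `steady = [3.5, 2.0]`, which is one way to sanity-check a candidate A and b before using them for forecasting.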
Recursive Least Squares with exponential forgetting:

Recursive Least Squares (RLS) estimation is a special case of the Kalman filter which can be used to avoid the numerical difficulties of matrix inversion present in ordinary least squares (OLS) estimation. OLS is applied to the first k observations of the data sample to determine a starting point for parameter estimates. Each additional observation is used to update coefficient estimates recursively, thus avoiding the need for further matrix inversion. With proper choice of the initial conditions, the final estimator at the end of the sample period is equal to the OLS estimator. Harvey (1993) summarizes the method. The RLS method with exponential forgetting used here modifies the basic RLS updating algorithm to weigh new information more heavily. The method is appealing for cases where time-varying parameters are suspected.
For an equation of the form

z(t) = φ^T(t) Θ    (3)

where Θ is a vector of model parameters and φ(t) is a set of explanatory variables, the usual quadratic loss function is replaced by a discounted loss function of the form

V(Θ, t) = (1/2) Σ_{s=1}^{t} λ^(t-s) (z(s) - φ^T(s) Θ)²    (4)

where λ is a number less than or equal to one and is referred to as the forgetting factor. The recursive algorithm is given by

Θ̂(t) = Θ̂(t-1) + K(t) (z(t) - φ^T(t) Θ̂(t-1))
K(t) = P(t) φ(t) = P(t-1) φ(t) (λ I + φ^T(t) P(t-1) φ(t))^(-1)    (5)
P(t) = (I - K(t) φ^T(t)) P(t-1) / λ

The essential feature of the algorithm is that the t-1 estimates of Θ are adjusted with new information by a transformation of the error in predicting z using Θ̂(t-1) and the current φ's. The adjustment to the error, K(t), is called the Kalman gain; it is a function of the rate of change in the errors and is weighted by the discount factor λ. P(t) is the covariance matrix at time t. Both K(t) and the moment matrix P(t) are updated recursively. In the state space model of equations (1) and (2), z(t) is X_t, φ(t) is X_{t-1}, and Θ(t) is A(t); or φ(t) can be [X_{t-1}, u_t] and Θ(t) would correspond to [A(t), b(t)].

³ The accepted format is a first order difference equation. If additional lags of particular variables are desired then the list of variables is expanded appropriately by defining lagged values of these variables as X's. It can be shown that an ARMA representation can be modeled by this first order difference equation model.
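The updating equations in (5) can be implemented in a few lines. The function and the synthetic data below are our own sketch for illustration, not code from the paper:

```python
import numpy as np

def rls_forgetting(Z, Phi, lam=0.96411, theta0=None, P0=None):
    """Recursive least squares with exponential forgetting, per equations (3)-(5).

    Z   : (T,) observations z(t)
    Phi : (T, k) regressors phi(t)
    lam : forgetting factor lambda <= 1 (lam = 1 recovers ordinary RLS)
    """
    T, k = Phi.shape
    theta = np.zeros(k) if theta0 is None else np.asarray(theta0, float).copy()
    P = np.eye(k) if P0 is None else P0.copy()
    for t in range(T):
        phi = Phi[t]
        err = Z[t] - phi @ theta                      # prediction error
        K = P @ phi / (lam + phi @ P @ phi)           # Kalman gain, equation (5)
        theta = theta + K * err                       # coefficient update
        P = (np.eye(k) - np.outer(K, phi)) @ P / lam  # covariance update
    return theta, P

# Demo on synthetic data with known coefficients:
rng = np.random.default_rng(1)
Phi = rng.normal(size=(500, 2))
Z = Phi @ np.array([0.7, -0.3]) + 0.01 * rng.normal(size=500)
theta_hat, _ = rls_forgetting(Z, Phi)   # theta_hat should approach [0.7, -0.3]
```

In the paper's setting each row of the state equation is estimated this way, with φ(t) = [X_{t-1}, u_t] as the regressors.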
3. Data and Transformations Used
The choice of state variables was based on a variety of factors including availability on a monthly frequency, explanatory capacity, timeliness of data release and available sample size. From an initial list of 18 variables we decided that the U.S. macro economy can be parsimoniously described by twelve variables (transformed appropriately). We define the state variable vector X as the following: consumption (CBM), investment (CPB), industrial production (IP), changes in manufacturing inventory (MIM), changes in retail inventory (TRIT), manufacturing inventory/sales ratio (MR1), retail inventory/sales ratio (TSRR), urban CPI (PCU), total employment (LE), 3-month Treasury Bill interest rate (FTB3M), and M2 monetary aggregate (FM2). We choose the control variable u as the Fed Funds rate (FFED). New construction is used as a proxy for investment. The data are of monthly frequency, and the sample period considered is January 1981 through December 1995.

The variable names are listed in Table 1 and the following transformations were made: log levels of CBM, CPB, IP, LE, FM2 and PCU; log differences of MIM and TRIT; MR1, TSRR, FTB3M, and FFED were untransformed.
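The transformations just listed can be written down mechanically. The synthetic series below stand in for the actual data; only the variable names come from the paper:

```python
import numpy as np

# Sketch of the stated transformations on synthetic monthly series.
# The values are made up for illustration; only the names are from the paper.
T = 180  # January 1981 through December 1995, monthly

raw = {name: np.linspace(100.0, 200.0, T) for name in
       ["CBM", "CPB", "IP", "LE", "FM2", "PCU", "MIM", "TRIT",
        "MR1", "TSRR", "FTB3M", "FFED"]}

transformed = {}
for name in ["CBM", "CPB", "IP", "LE", "FM2", "PCU"]:
    transformed[name] = np.log(raw[name])            # log levels
for name in ["MIM", "TRIT"]:
    transformed[name] = np.diff(np.log(raw[name]))   # log differences
for name in ["MR1", "TSRR", "FTB3M", "FFED"]:
    transformed[name] = raw[name]                    # untransformed
```

Note that differencing shortens a series by one observation, so in practice the transformed series must be re-aligned before stacking them into the state vector.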
First-differencing and deflation of data. The objective of the forecasting model is to track
variables that may be changing over time. Despite evidence of unit roots in all variables except
MR1, TSRR and the 3-month T-Bill rate (FTB3M), detrending of data via first differencing or
filtering was not deemed necessary for tracking purposes.4 Inventory data was differenced for
two reasons: first, because change in inventory is a component of Gross Domestic Product (GDP)
and second, because including first-differences in inventories (instead of levels) appeared to
improve the model’s ability to capture turning points in the business cycle. Nominal values were
used for all variables along with the consumer price index as one of the state variables to observe
the effect of changes in fed funds rate on inflation.
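The transformations described above can be sketched in code. This is an illustration under the assumption that the raw series arrive as monthly arrays keyed by the Table 1 mnemonics; the function name and data layout are ours.

```python
import numpy as np

# Groupings follow the transformations stated in the text
LOG_LEVELS = ["CBM", "CPB", "IP", "LE", "FM2", "PCU"]
LOG_DIFFS = ["MIM", "TRIT"]
UNTRANSFORMED = ["MR1", "TSRR", "FTB3M", "FFED"]

def transform_states(raw):
    """raw: dict mapping mnemonic -> 1-D numpy array of monthly levels.

    Returns a dict of transformed series, all aligned to the same dates
    (the first observation is dropped because log differencing loses it).
    """
    out = {}
    for k in LOG_LEVELS:
        out[k] = np.log(raw[k])[1:]        # log levels
    for k in LOG_DIFFS:
        out[k] = np.diff(np.log(raw[k]))   # log first differences
    for k in UNTRANSFORMED:
        out[k] = raw[k][1:]                # left as-is
    return out
```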
4. The existence of a trend does not adversely impact the performance of tracking algorithms. Results of unit
root tests are not reported here but are available.
Estimation Procedure
The algorithm shown in equation 5 was used to estimate the coefficients for each variable
individually. The right hand side variables are the one period lags of all the state variables plus
the current period value of the control variable. The final period estimate of coefficients was
assembled into the equivalent A matrix and b vector for forecasting.
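The per-equation procedure can be sketched as a batch computation: exponentially discounted least squares per state variable, which is what the recursion delivers at the final period once the influence of the initial P matrix has decayed. The function and variable names are ours, not the paper's.

```python
import numpy as np

def estimate_state_space(X, u, lam=0.96411):
    """Estimate x_i(t) = A[i,:] X(t-1) + b[i] u(t), equation by equation.

    X : (T, n) array of state variables; u : (T,) control variable.
    Uses exponentially discounted (weighted) least squares per equation.
    Returns the assembled A matrix and b vector.
    """
    T, n = X.shape
    A = np.zeros((n, n))
    b = np.zeros(n)
    # regressors: one-period lags of all states plus the current control
    Phi = np.column_stack([X[:-1], u[1:]])       # (T-1, n+1)
    W = lam ** np.arange(T - 2, -1, -1)          # forgetting weights
    sw = np.sqrt(W)
    for i in range(n):
        z = X[1:, i]
        coef, *_ = np.linalg.lstsq(Phi * sw[:, None], z * sw, rcond=None)
        A[i] = coef[:n]
        b[i] = coef[n]
    return A, b
```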
Choice of λ, θ0, and P0. The P matrix was initialized as the identity matrix and the θ's were
initialized as an AR(1) process with coefficient 0.8. The forgetting factor, λ, was chosen to be
0.96411. This value of the forgetting factor reduces the weight of the error after 63 months (a
period equivalent to the average length of postwar business cycles) to 10 percent. Lower values
of λ, which represent higher rates of forgetting, led to improvement in tracking but resulted in a
higher variance of the estimated θ's over time. The model converges relatively quickly and
initial values affect only the early estimates in the sample. Using VAR coefficient estimates as
starting values for the parameters did no better than using unit roots as initial starts. Starting
coefficients of 0.8 on an AR(1) model were chosen to avoid any biases toward a unit root. High
variance on the initial P matrix, which is equivalent to a diffuse prior, resulted in higher variance
of θ̂ and exaggerated the "turning points" in the forecast. Estimates of the A matrix were
nonsingular and had stable eigenvalues with the "typical" assumptions for λ, θ0, P0.
4. Results
The model is distinguished by the recursive technique which updates past estimates of
coefficients using the error in the one period ahead forecast. As a first test of the model’s
performance, the one period ahead forecast of the model for each state variable was recorded
for each period in the sample and compared to the actual values. Since the initial values, θ0,
of the parameters were chosen arbitrarily, and the data used to adjust errors are limited
initially,5 the estimates took a few periods to converge. Once convergence was achieved, the
estimates tracked the actual values very closely. The Root Mean Square Error (RMSE) of the
in-sample forecasts of consumption, industrial production, employment, and CPI for the most
recent 60 month period (AKP) were monitored. The model was also estimated using OLS for
the period February, 1981 through December, 1995. In-sample forecasts from OLS and the
one-period ahead forecasts obtained using the recursive least squares procedure were
considered for the last sixty sample periods (91:01 - 95:12) and the percentage RMSE were
computed for the four variables of interest.6 Table 2 shows the comparison of the RMSE to
the OLS in-sample forecast for the same period. For all four variables the RLS with
forgetting had lower RMSE than the OLS.
Out-of-sample forecasts for 5-months (January, 1996 to May, 1996), were made using
the recursive least squares and OLS estimation procedures. Two different RLS out-of-sample
forecasts were made for comparison purposes. One assumed no new information was
available for updating the coefficients, the other assumed that coefficients were updated each
period. The 5-month forecast using the constant parameter estimates (A matrix) at the end of
5. The recursive technique by definition uses only information from the past. Hence early
estimates use limited data.
6. The RMSE were computed after transforming the data back to their original form by
exponentiating the forecasted values. The % RMSE was computed using the formula

    % RMSE = 100 × √[ (1/N) Σ ((x̂(t) − x(t)) / x(t))² ]
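The percent-RMSE calculation described in footnote 6 can be written out as follows; the function name is ours, and the series are assumed to be in log levels as in the model.

```python
import numpy as np

def pct_rmse(forecast_log, actual_log):
    """Percent RMSE after exponentiating log-level series back to levels.

    Computes 100 * sqrt( mean( ((forecast - actual) / actual)^2 ) ),
    a sketch of the formula stated in footnote 6.
    """
    f = np.exp(np.asarray(forecast_log, dtype=float))
    a = np.exp(np.asarray(actual_log, dtype=float))
    return np.sqrt(np.mean(((f - a) / a) ** 2)) * 100.0
```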
95:12 (without update) is called AKP5. The second set of out-of-sample one period ahead
forecasts using the updated coefficients is called AKP1. The percentage RMSE from the
out-of-sample OLS forecasts and the recursive least squares procedure forecasts are compared in
Table 3. For all variables except industrial production using the AKP1 method, the recursive
least squares model had a lower % RMSE than OLS. These results suggest that the recursive
least squares procedure provides better tracking and forecasting (both in-sample and out-of-
sample) than OLS for the model developed in this paper. Figures 1 - 4 represent the one
period ahead in-sample forecast for the last 24 months of the sample, the 5 period forecast
(AKP5) and the one period ahead forecast (AKP1) compared to actual values (for the last 29
periods) for the four variables of interest. As the figures show, the tracking is quite close in
all four cases.
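The two out-of-sample schemes differ only in whether the coefficient recursion keeps running. An AKP5-style forecast, which freezes A and b at their end-of-sample values and iterates the state equation forward, can be sketched as follows (assuming the state equation X(t) = A X(t-1) + b u(t); the names are ours):

```python
import numpy as np

def forecast_fixed_coeffs(A, b, x_last, u_path):
    """Multi-step forecast with coefficients frozen at end of sample.

    x_last : (n,) last observed state vector
    u_path : assumed path of the control variable over the horizon
    Iterates x(t+1) = A x(t) + b u(t+1) with no further coefficient updates.
    """
    x = np.asarray(x_last, dtype=float)
    path = []
    for u in u_path:
        x = A @ x + np.asarray(b, dtype=float) * u  # one step of the state equation
        path.append(x.copy())
    return np.array(path)
```

An AKP1-style forecast would instead re-run the coefficient update on each new observation before forecasting the next single period.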
Sensitivity to λ: The estimation and forecasting results discussed above are obtained
by setting the forgetting factor equal to 0.96411. Sensitivity of the results to changes in the
rate at which past errors are discounted was studied by setting λ equal to 0.8 and 1.0. The
percentage RMSE for the in-sample and out-of-sample forecasts of industrial production for
the three different choices of λ are given in Table 4. RMSE is the lowest for the choice
λ=0.96411 in almost all cases. Figures 5 and 6 compare the OLS forecasts with the recursive
least squares procedure (AKP5) forecasts for two different values of λ. The graphs show that
the forecasts obtained using λ=0.96 perform much better than OLS forecasts, while the
forecasts corresponding to λ=1.0 are worse than OLS forecasts. Figures 7 - 10 compare the
two sensitivity cases with the actual and the λ=0.96 case. As expected, forecasts are more
volatile with lower λ. This is because lower values of λ weigh recent errors more heavily
and the correction tends to be sharper in forecasts. For similar reasons we expect forecasts
with higher values of λ to be smoother, and the figures show that the forecasts are indeed
smoother when λ=1.0. Although our choice of λ was not an attempt to optimize, it does
better than the alternatives used in the sensitivity tests.
5. Summary and Conclusions
The purpose of this exercise was to develop a relatively parsimonious macroeconomic
model which might be useful for short term forecasts and would recognize the potential for
time varying parameters. Using a measure of percent RMSE of in-sample and out-of-sample
forecasts, the model does better than a simple OLS model. Because there was little attempt
to steep the model in theoretical microfoundations, we would not recommend it at this time
for use in long-term forecasting. However, it is parsimonious enough to be used in the
decision making process.
Table 1 - List of Variables Used

Name    Variable
IP      Industrial Production
CPB     Total New Construction
MIM     Manufacturers Inventories
TRIT    Retailers Inventories
MR1     Manufacturers Inventory/Sales Ratio
TSRR    Retailers Inventory/Sales Ratio
FM2     M2 Money Stock
FTB3M   3-Month Treasury Bill (Auction Average)
PCU     Urban Consumers Price Index (All Items)
LE      Number of Civilians over 16 Employed
CBM     Personal Consumption Expenditure
FFED    Fed Funds Rate
Table 2 - % RMSE for (60 Period) In-Sample Forecasts

        IP      CONS    LE      PCU
AKP     0.418   0.485   0.230   0.132
OLS     0.738   0.828   0.248   0.374
Table 3 - % RMSE for (5 Period) Out-of-Sample Forecasts

        IP      CONS    LE      PCU
AKP1    0.652   0.468   0.214   0.137
AKP5    0.608   0.389   0.429   0.301
OLS     0.626   0.808   0.582   0.393
Table 4 - % RMSE for Industrial Production for Different Values of Lambda

          AKP5                            AKP1
          λ=0.8     λ=0.96    λ=1.0      λ=0.8     λ=0.96    λ=1.0
RMSE60    0.699984  0.418171  0.521724   0.699984  0.418171  0.521724
RMSE5     0.649919  0.608086  1.481615   0.413427  0.651874  0.749956
Figure 1: Actual vs. Forecast of Consumption
[chart: Jan/94 - Jan/96; series: Actual, AKP5, AKP1]

Figure 2: Actual vs. Forecast of Industrial Production
[chart: Jan/94 - Jan/96; series: Actual, AKP5, AKP1]
Figure 3: Actual vs. Forecast of Employment
[chart: Jan/94 - Jan/96; series: Actual, AKP5, AKP1]

Figure 4: Actual vs. Forecast of CPI
[chart: Jan/94 - Jan/96; series: Actual, AKP5, AKP1]
Figure 5: AKP5 (lambda=0.96) and OLS Forecasts for IP
[chart: Jan-94 - Jul-96; series: Actual, AKP5, OLS]

Figure 6: AKP5 (lambda=1.0) and OLS Forecasts for IP
[chart: Jan-94 - Jul-96; series: Actual, Lambda=1.0, OLS]
Figure 7: Forecasts (AKP5) of IP for Different Values of Lambda
[chart: Jan-94 - Jul-96; series: Actual, Lambda=0.96, Lambda=1.0]

Figure 8: Forecasts (AKP5) of IP for Different Values of Lambda
[chart: Jan-94 - Jul-96; series: Actual, Lambda=0.96, Lambda=0.8]
Figure 9: Forecasts (AKP1) of IP for Different Values of Lambda
[chart: Jan-94 - Jul-96; series: Actual, Lambda=0.96, Lambda=1.0]

Figure 10: Forecasts (AKP1) of IP for Different Values of Lambda
[chart: Jan-94 - Jul-96; series: Actual, Lambda=0.96, Lambda=0.8]