This slide deck describes how CBO used a Bayesian vector autoregression model to assess the uncertainty of the economic forecast presented in CBO’s Current View of the Economy in 2023 and 2024 and the Budgetary Implications (November 2022).
Presentation by U. Devrim Demirel, CBO's Fiscal Policy Studies Unit Chief, and James Otterson at the 28th International Conference of The Society for Computational Economics.
Estimating the Uncertainty of the Economic Forecast Using CBO’s Bayesian Vector Autoregression Model
1. Estimating the Uncertainty of the Economic Forecast Using CBO’s Bayesian Vector Autoregression Model

January 2023
2. CBO’s View of the Economy as of November 2022

In November 2022, the Congressional Budget Office was asked about its view of the economy. From the fourth quarter of 2022 to the fourth quarter of 2023, CBO estimated, there is a two-thirds chance that growth in economic output—specifically, gross domestic product (GDP) adjusted to remove the effects of inflation, or real GDP—will be between −2.0 percent and 1.8 percent.

For details about the analysis, see Congressional Budget Office, CBO’s Current View of the Economy in 2023 and 2024 and the Budgetary Implications (November 2022), www.cbo.gov/publication/58757. In the figure, real GDP growth is calculated for a given quarter relative to four quarters earlier. Q = quarter.
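A two-thirds prediction interval like the one above can be read off a set of simulated growth outcomes as the 16.7th and 83.3rd percentiles, so that one-sixth of the draws fall in each tail. A minimal Python sketch; the simulated draws here are synthetic stand-ins, not output from CBO’s models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 100 simulated four-quarter real GDP growth
# rates, in percent (CBO's actual draws come from its simulation models).
simulated_growth = rng.normal(loc=-0.1, scale=1.9, size=100)

# A two-thirds interval leaves one-sixth of the draws in each tail.
lo, hi = np.percentile(simulated_growth, [100 / 6, 500 / 6])
print(f"two-thirds interval: {lo:.1f} to {hi:.1f} percent")
```

With draws from the actual simulation step, the same percentile calculation yields an interval of the kind reported on this slide.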
3. CBO’s Analytic Method for Estimating Uncertainty

The analysis of economic uncertainty was conducted in three main steps:
▪ Preliminary economic projections provided central estimates for each variable;
▪ 100 simulations of the rates of unemployment, inflation, and interest were jointly estimated around the central estimates, reflecting asymmetric dynamics and relating the variables through an expectations-augmented Phillips curve and an inertial Taylor rule; and
▪ Forecasts conditional on those rates were estimated using symmetric distributions in which economic output and other variables were synchronized with the simulations of unemployment, inflation, and interest rates.

This document focuses on the third step, which used a Bayesian vector autoregression (BVAR) model.

For discussion of the first step, see Robert W. Arnold, How CBO Produces Its 10-Year Economic Forecast, Working Paper 2018-02 (Congressional Budget Office, February 2018), www.cbo.gov/publication/53537. For discussion of the second step, see Congressional Budget Office, “Estimating the Uncertainty of the Economic Forecast Using CBO’s Expanded Markov-Switching Model” (January 2023), www.cbo.gov/publication/58884. For related discussion, see Mark Lasky, The Congressional Budget Office’s Small-Scale Policy Model, Working Paper 2022-08 (Congressional Budget Office, September 2022), www.cbo.gov/publication/57254.
4. 3
The BVAR model draws on historical correlations between macroeconomic variables
to produce conditional forecasts.
Key conditions are the simulations of the rates of unemployment, inflation, and
interest, which are estimated using an expanded version of CBO’s Markov-switching
model with asymmetric dynamics in which the unemployment rate rises rapidly in
some periods and falls gradually in others and interest rates do not fall below zero.
The projections of economic output and other variables are synchronized—using
symmetric distributions—with the simulations of unemployment, inflation, and interest
rates. Additional variables that can be simulated with symmetric distributions can be
easily incorporated.
The historical correlations between macroeconomic variables used in the model may
be less predictive of future outcomes in the event of extreme changes in economic
conditions.
Historical Dynamics Reflected in the Modeling
5. 4
Inputs include central forecasts from CBO’s large-scale macroeconometric model for
26 variables used to analyze effects of economic conditions on the federal budget.
Additional inputs are 100 simulations of six variables from CBO’s expanded Markov-
switching model: the unemployment rate, two inflation rates, and three interest rates.
For each calendar quarter, CBO uses those inputs in the BVAR model to project
values for the remaining 20 variables.
After 100 simulations are generated, the values are calibrated so that their average
equals CBO’s central forecast.
The parameters of the model are estimated using data from 1959 through 2022.
How CBO Uses the Model
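The recentering step described above, in which the simulations are shifted so that their average equals the central forecast, can be sketched as follows (a minimal illustration with made-up numbers; the variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
central_forecast = 2.0                               # hypothetical central estimate
sims = central_forecast + rng.standard_normal(100)   # 100 raw simulated values

# Shift every simulation by the same amount so that the
# simulation average equals the central forecast exactly.
calibrated = sims - sims.mean() + central_forecast
```

Because every simulation is shifted by the same constant, the spread of the simulations around their mean is unchanged; only the center moves.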
6. 5
The unemployment rate projection is taken as a set of conditions.
CBO uses the BVAR model to project the following:
▪ Payroll employment,
▪ The number of people in the labor force,
▪ Hours of work, and
▪ Wages and salaries.
Labor Market Variables Projected Using the Model
7. 6
Two rates of inflation are taken as sets of conditions—the overall rate as measured
by the personal consumption expenditures price index and that rate excluding food
and energy prices.
CBO uses the BVAR model to project inflation as measured by:
▪ The GDP price index,
▪ The consumer price index for all urban consumers,
▪ The consumer price index for food at home, and
▪ The consumer price index for medical care.
Inflation Rates Projected Using the Model
8. 7
Three interest rates are taken as sets of conditions—the federal funds rate (the rate
that financial institutions charge each other for overnight loans of their monetary
reserves), the 3-month Treasury bill rate, and the 10-year Treasury note rate.
CBO uses the BVAR model to project the following:
▪ The 5-year Treasury note rate,
▪ The corporate Aaa bond rate, and
▪ The corporate Baa bond rate.
Interest Rates Projected Using the Model
9. 8
CBO uses the BVAR model to project the following:
▪ Real GDP,
▪ Real personal consumption expenditures,
▪ Real nonresidential fixed investment,
▪ Real exports,
▪ Real imports,
▪ Total factor productivity,
▪ Real potential GDP,
▪ Nominal gross national product, and
▪ Nominal private nonresidential fixed investment in equipment.
Output Variables Projected Using the Model
10. 9
See Richard K. Crump and others, A Large Bayesian VAR of the United States Economy, Staff Report 976 (Federal Reserve Bank of New York, August 2021),
http://newyorkfed.org/research/staff_reports/sr976.html. CBO’s modeling follows this approach closely, and this staff report provides more details than are included here.
CBO adapted its approach to conditional forecasting from that used by the staff of the
Federal Reserve Bank of New York.
Bayesian techniques are particularly well suited to estimating parameters in a large
system of equations given a limited amount of data.
The modeling is structured so that a projection of a variable at a given point in time is
more likely to be influenced by recent data than by older data. That structure keeps the
estimation from explaining historical data well but forecasting poorly beyond the data
used for estimation, which is what would happen if the estimation process overfit the
parameters.
The approach is flexible, and the staff of the Federal Reserve Bank of New York
found that it generated reasonable conditional forecasts.
How the Model Works
11. 10
CBO used the following equation:
yt = c + B1yt-1 + ⋯ + Bpyt-p + ϵt ,  ϵt ~ N(0, Σ)
where
▪ yt is a vector of m economic variables at time t (t = 1, … , T),
▪ Bs (s = 1, … , p) is an (m × m) matrix of parameters of lagged variables, and
▪ ϵt is an error term that follows a normal distribution with covariance matrix Σ.
The model has many parameters to estimate when the number of variables is large.
Because m equals 26 and p equals 6 in the model, the set of Bs (s = 1, … , p) has 4,056
parameters (26 × 26 × 6). In such cases, traditional vector autoregression techniques are
vulnerable to overfitting and tend to show poor out-of-sample forecasting accuracy.
A Bayesian procedure addresses the overfitting issue by automatically selecting the degree
of shrinkage, using tighter priors when the number of unknown coefficients relative to
available data is high and looser priors otherwise.
The Bayesian Vector Autoregression Model
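The parameter count on this slide, and the structure of the VAR equation itself, can be checked with a small sketch (the 26 × 26 × 6 count comes from the slide; the simulated system uses toy sizes and made-up parameter values):

```python
import numpy as np

# Parameter count for the lag matrices in CBO's specification
m, p = 26, 6
n_lag_params = m * m * p          # 26 x 26 x 6 = 4,056

# Simulate a small stable VAR(1), y_t = c + B1 y_{t-1} + eps_t,
# to illustrate the equation (toy parameter values)
rng = np.random.default_rng(0)
k = 3                             # number of variables in the toy VAR
c = np.zeros(k)
B1 = 0.5 * np.eye(k)              # stable: eigenvalues inside the unit circle
chol = 0.1 * np.eye(k)            # Cholesky factor of the error covariance
T = 200
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = c + B1 @ y[t - 1] + chol @ rng.standard_normal(k)
```

Even this toy VAR(1) with 3 variables has 9 lag parameters; the quadratic growth in m is what makes shrinkage necessary at m = 26.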
12. 11
For discussion of the Minnesota prior, see Robert Litterman, Techniques of Forecasting Using Vector Autoregressions, Working Paper 115 (Federal Reserve Bank of Minneapolis,
November 1979), www.minneapolisfed.org/research/working-papers/techniques-of-forecasting-using-vector-autoregressions.
CBO used the Minnesota prior, under which each variable follows an independent random
walk process with potential drift. The prior sets the mean of each Bs as
E[(Bs)ij] = 1 if i = j and s = 1, and E[(Bs)ij] = 0 otherwise,
where (Bs)ij is the row i, column j element of Bs.
The Minnesota prior sets tighter distributions for the parameters corresponding to longer
lags. Variances and covariances between elements of B are set as
Cov[(Bs)ij, (Br)kl] = (λ²/s²)(Σik/ψj) if l = j and r = s, and 0 otherwise,
where
▪ λ determines the general tightness of the prior distribution of B,
▪ Σik is the row i, column k element of Σ, and
▪ ψj is an estimate of the variation of variable j in yt.
CBO estimated λ using a hierarchical Bayesian approach.
The Bayesian Prior Distribution
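A sketch of how the Minnesota prior moments could be assembled (toy sizes and placeholder values for λ, Σ, and ψ; the variance formula follows the standard Minnesota-prior form, with the prior tightening as 1/s² in the lag):

```python
import numpy as np

m, p = 4, 2            # toy sizes; the CBO model uses m = 26, p = 6
lam = 0.2              # overall tightness lambda (placeholder value)
Sigma = np.eye(m)      # placeholder residual covariance
psi = np.ones(m)       # placeholder scale estimate for each variable

# Prior mean: each variable is a random walk, so (B1)_ii = 1 and
# every other element of every Bs has prior mean 0.
B_mean = np.zeros((p, m, m))
B_mean[0] = np.eye(m)

# Prior variance of (Bs)_ij: (lam^2 / s^2) * Sigma_ii / psi_j,
# so the prior is tighter for longer lags s.
B_var = np.zeros((p, m, m))
for s in range(1, p + 1):
    B_var[s - 1] = (lam**2 / s**2) * np.outer(np.diag(Sigma), 1.0 / psi)
```

With these placeholder values, every second-lag variance is one-quarter of the corresponding first-lag variance, which is the shrinkage pattern the prior imposes.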
13. 12
The posterior distribution of the parameters was calculated via Bayes’ rule as
p(θ│YT )∝ p(θ)p(YT |θ)
where
▪ θ is the vector of the parameters in the BVAR (θ = (c, B1, … , Bp, Σ));
▪ YT is the vector of the historical values of all the variables or YT = (y1, … , yT);
▪ p(θ) is the prior distribution (the Minnesota prior); and
▪ p(YT |θ) is the likelihood function of the BVAR.
CBO used a Markov chain Monte Carlo (MCMC) algorithm to generate the draws of θ
(or θ(g) for g = 1, … , G) from the posterior distribution.
The Posterior Distribution and Draws of Parameters
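For the conjugate BVAR posterior, draws can often be generated directly, but the general idea of MCMC sampling from p(θ|YT) can be illustrated with a toy random-walk Metropolis sampler for a single AR(1) coefficient (everything here, including the data-generating values, is a made-up illustration, not CBO's sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: AR(1) with known error variance 1 and true coefficient 0.6
T, phi_true = 500, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

def log_post(phi):
    # Log posterior under a flat prior (conditional likelihood, sigma = 1)
    resid = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(resid**2)

# Random-walk Metropolis: propose phi' = phi + noise and accept with
# probability min(1, exp(log_post(phi') - log_post(phi)))
G, phi, draws = 5000, 0.0, []
lp = log_post(phi)
for g in range(G):
    prop = phi + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        phi, lp = prop, lp_prop
    draws.append(phi)
```

After a burn-in period, the retained draws approximate the posterior distribution of the coefficient, which is the role the draws θ(g) play in the BVAR.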
14. 13
CBO projected all the variables in the BVAR using Bayesian inference by computing
the predictive density, defined as
p(yT + 1, … ,yT + h│YT ) = ∫ p(yT + 1, … ,yT + h | YT, θ)p(θ│YT)dθ
where h is the forecasting horizon. In practice, the future value is projected for each
MCMC draw of the parameters (θ(g)) using the structure of the BVAR. The whole set
of projected values is the predictive density.
CBO also generated conditional forecasts using conditional predictive densities,
defined as
p(yT + 1, … ,yT + h│YT, Ch) = ∫ p(yT + 1, … ,yT + h | YT, Ch, θ)p(θ│YT )dθ
where Ch is a set of given conditions for a scenario. The conditions can be imposed
on any of the variables in the BVAR for any time.
Forecasting
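Computing the predictive density amounts to iterating the VAR forward once per posterior draw and collecting the resulting paths. A toy sketch (stand-in parameter draws rather than a real posterior):

```python
import numpy as np

rng = np.random.default_rng(0)
k, h, G = 2, 8, 50       # toy sizes: variables, horizon, number of draws
y_T = np.zeros(k)        # last observed value of the toy system

paths = np.empty((G, h, k))
for g in range(G):
    # Stand-ins for a posterior draw theta^(g) = (c, B1, Sigma)
    c = 0.01 * rng.standard_normal(k)
    B1 = 0.5 * np.eye(k) + 0.05 * rng.standard_normal((k, k))
    chol = 0.1 * np.eye(k)                 # Cholesky factor of the draw's Sigma
    y = y_T
    for t in range(h):
        y = c + B1 @ y + chol @ rng.standard_normal(k)
        paths[g, t] = y

# The whole collection of simulated paths approximates the predictive density.
```

Quantiles across the G paths at each horizon then summarize forecast uncertainty, with both parameter uncertainty (across draws) and shock uncertainty (within draws) reflected.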
15. 14
To implement conditional forecasting, CBO cast the BVAR model into a linear
state-space model:
yt* = Gtxt
xt = Fxt-1 + ut
where
▪ yt* contains the conditioned future values of some variables in yt (t = T + 1, … , T + H),
▪ xt is (yt, yt-1, … , yt-p+1, c)',
▪ Gt is a matrix identifying the conditioned future values in xt,
▪ F is a matrix representing the dynamics of xt, and
▪ ut = (ϵt, 0, … , 0)'.
Then, CBO applied the Kalman filter and smoother to generate conditional forecasts.
The approach is equivalent to estimating unobservable variables (or missing values)
while treating the conditions as observable variables (or nonmissing values).
Conditional Forecasting
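The equivalence between conditional forecasting and missing-value smoothing can be seen in a minimal scalar example: a random walk whose endpoints are treated as observed conditions and whose interior values are treated as missing (a hand-rolled Kalman filter and smoother for illustration, not CBO's implementation). The smoothed means interpolate between the conditions:

```python
import numpy as np

# State: x_t = x_{t-1} + w_t, w_t ~ N(0, 1)  (random walk)
# "Conditions" are exact observations at t = 0 and t = 4.
T, q = 5, 1.0
obs = {0: 0.0, 4: 4.0}                   # conditioned values

x_f = np.zeros(T); P_f = np.zeros(T)     # filtered means / variances
x_p = np.zeros(T); P_p = np.zeros(T)     # one-step predictions
x, P = 0.0, 1e6                          # diffuse start
for t in range(T):
    x_p[t], P_p[t] = x, P                # store the prediction
    if t in obs:                         # exact observation: Kalman gain = 1
        x, P = obs[t], 0.0
    x_f[t], P_f[t] = x, P
    x, P = x, P + q                      # predict the next step

# Rauch-Tung-Striebel smoother (transition coefficient is 1)
x_s = x_f.copy()
for t in range(T - 2, -1, -1):
    C = P_f[t] / P_p[t + 1]              # smoother gain
    x_s[t] = x_f[t] + C * (x_s[t + 1] - x_p[t + 1])
```

For a random walk conditioned on its endpoints, the smoothed means form a straight line between the two conditions (0, 1, 2, 3, 4 here), which is the bridge that the conditions imply.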
16. 15
For discussion of CBO’s historical forecasting errors, see Congressional Budget Office, CBO’s Economic Forecasting Record: 2021 Update (December 2021),
www.cbo.gov/publication/57579.
The variance of the paths generated by conditional forecasting is related to the way
economic conditions have changed over time and to the simulations of the six
variables from CBO’s expanded Markov-switching model that are taken as sets of
conditions.
To create simulations for the purpose of communicating uncertainty about the central
estimates in the economic forecast, CBO calibrated their variance. Specifically, CBO
used the paths generated by conditional forecasting to create 100 simulations of the
20 variables discussed above (including real GDP growth), which incorporated
correlations with the six variables from CBO’s expanded Markov-switching model.
The variance of real GDP growth was calibrated by considering alternative ways to
average multiple paths to form a single simulation.
For this analysis, CBO used the simple average of two paths to form each simulation
because the range of the middle two-thirds of the distribution of those simulations
matched the middle two-thirds of CBO’s historical forecasting errors over two years.
Calibrating the Variance of Simulations
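Averaging k independent paths scales the variance of the resulting simulation by 1/k, which is the lever used in this calibration. A toy check with made-up standard-normal "paths":

```python
import numpy as np

rng = np.random.default_rng(0)
paths = rng.standard_normal(10000)        # toy forecast deviations, variance 1

# Form each simulation as the simple average of two paths:
sims = paths.reshape(-1, 2).mean(axis=1)

# Var(average of 2 independent paths) = 1/2 of the path variance,
# so averaging pairs narrows the simulated distribution.
```

Choosing how many paths to average therefore lets the spread of the simulations be matched to a target, such as the middle two-thirds of CBO's historical forecasting errors.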
17. 16
This document was prepared to enhance the transparency of CBO’s work and to
encourage external review of that work. In keeping with CBO’s mandate to provide
objective, impartial analysis, the document makes no recommendations.
Byoung Hark Yoo prepared the document with guidance from Sebastien Gay. Robert
Arnold, Mark Lasky, and Michael McGrane provided comments.
Mark Hadley and Jeffrey Kling reviewed the document. Christine Browne edited it
and R. L. Rebach created the graphics. The document is available at
www.cbo.gov/publication/58883.
CBO seeks feedback to make its work as useful as possible. Please send comments
to communications@cbo.gov.
About This Document