This document discusses methods for estimating the output gap and decomposing it into observable components. It provides a unified framework by representing most estimation methods as linear filters. This allows the output gap estimate to be expressed as a weighted average of observed macroeconomic data over time. The document demonstrates how to decompose an output gap estimate into the contributions made by different data series, like output, inflation, and unemployment. It also shows how to analyze how estimates are revised as new data is incorporated. Understanding estimates as linear filters provides insight into which data drives the estimate and how sensitive it is to data revisions. The document applies these concepts to specific estimation techniques, including univariate filters, multivariate filters, VAR models, and DSGE models.
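To make the linear-filter view concrete, here is a minimal numerical sketch in Python: the gap estimate is a weighted average of the observed series, so it splits exactly into per-series contributions. The series values and weights are invented for illustration; in the document's framework the weights would come from the particular estimator (univariate filter, multivariate model, VAR, DSGE).

```python
import numpy as np

# Illustrative only: a "linear filter" view of an output gap estimate.
# Weights below are made up for the sketch; in practice they come from
# the chosen estimation method.
T = 40
rng = np.random.default_rng(0)
data = {                       # observed series (hypothetical values)
    "output":       rng.normal(0.0, 1.0, T),
    "inflation":    rng.normal(2.0, 0.5, T),
    "unemployment": rng.normal(5.0, 0.7, T),
}
weights = {                    # one weight vector per series (hypothetical)
    "output":       np.full(T, 0.8 / T),
    "inflation":    np.full(T, 0.15 / T),
    "unemployment": np.full(T, -0.05 / T),
}

# The estimate is a weighted average of the observed data ...
gap_estimate = sum(weights[k] @ data[k] for k in data)

# ... so it decomposes exactly into per-series contributions.
contributions = {k: weights[k] @ data[k] for k in data}
print(gap_estimate, contributions)
```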
A SYSTEM FOR VISUALIZATION OF BIG ATTRIBUTED HIERARCHICAL GRAPHS - IJCNCJournal
Information visualization is the process of transforming large and complex abstract information
into visual forms, strengthening users' cognitive abilities and helping them make better
decisions. A graph is an abstract structure widely used to model complex information for
visualization. In the paper, we consider a system that supports the visualization of large
amounts of complex information based on attributed hierarchical graphs.
This document summarizes the properties of two maximum likelihood estimators of the mean of a truncated exponential distribution. The estimators are based on either a random sample from the full exponential distribution or from the truncated exponential distribution. A simulation study with 50,000 trials evaluates the moment properties of the estimators. The results show that the estimator based on the full exponential distribution has lower variance and mean squared error compared to the estimator based on the truncated distribution, making it more efficient. The relative efficiency of the truncated estimator approaches 1 as the truncation point increases.
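A minimal Monte Carlo sketch of such a comparison, assuming an exponential mean of 2, truncation point T = 5, samples of size 100 and 50,000 trials (the paper's exact design may differ). The full-sample MLE of the mean is the sample mean; the truncated-sample MLE solves the first-order condition x̄ = θ − T/(e^{T/θ} − 1).

```python
import numpy as np
from scipy.optimize import brentq

# Monte Carlo sketch comparing the two ML estimators of the exponential mean.
# theta, T (truncation point), n and the trial count are assumptions for the demo.
rng = np.random.default_rng(1)
theta, T, n, trials = 2.0, 5.0, 100, 50_000

def trunc_mle(xbar, T, hi=1e6):
    # Solve xbar = theta - T/(exp(T/theta) - 1), the ML first-order condition
    # for a sample from the exponential right-truncated at T.
    g = lambda th: th - T / np.expm1(T / th) - xbar
    if g(hi) <= 0:            # sample mean >= T/2: no finite root exists
        return hi
    return brentq(g, 0.01, hi)

est_full, est_trunc = np.empty(trials), np.empty(trials)
for i in range(trials):
    full = rng.exponential(theta, n)
    est_full[i] = full.mean()                       # MLE from the full sample
    u = rng.random(n)                               # inverse-CDF draw, truncated at T
    trunc = -theta * np.log1p(-u * (-np.expm1(-T / theta)))
    est_trunc[i] = trunc_mle(trunc.mean(), T)

for name, e in [("full", est_full), ("truncated", est_trunc)]:
    print(name, "var:", e.var(), "MSE:", ((e - theta) ** 2).mean())
```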
Optimum Algorithm for Computing the Standardized Moments Using MATLAB 7.10(R2... - Waqas Tariq
A fundamental task in many statistical analyses is to characterize the location and variability of a data set; further characterization includes skewness and kurtosis. This paper addresses the real-time computational problem for the rth standardized moments in general, and for skewness and kurtosis in particular. It has therefore been important to derive an optimum computational technique for the standardized moments. A new algorithm has been designed for evaluating the standardized moments, and its error analysis is discussed. The new algorithm saves approximately 99.95% of the computational effort of previously published algorithms.
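The paper's optimized algorithm is not reproduced here, but as a reference point, the rth standardized moment is E[(X − μ)^r]/σ^r, with r = 3 giving skewness and r = 4 kurtosis. A direct NumPy implementation:

```python
import numpy as np

def standardized_moment(x, r):
    """r-th standardized moment: E[(X - mu)^r] / sigma^r."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()                 # population standard deviation
    return np.mean((x - mu) ** r) / sigma ** r

x = np.random.default_rng(2).normal(size=10_000)
print(standardized_moment(x, 3))    # skewness, near 0 for normal data
print(standardized_moment(x, 4))    # kurtosis, near 3 for normal data
```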
AN IMPROVED DECISION SUPPORT SYSTEM BASED ON THE BDM (BIT DECISION MAKING) ME... - ijmpict
Based on the BDM (Bit Decision Making) method, the present work makes two contributions: first,
it illustrates the use of the technique known as SOP (Sum Of Products) to systematize the
process of obtaining the correlation function for the sub-system's mathematical modelling; and second, it provides the capacity to manage not just a binary but a finite, discrete set of possible subjective qualifications of suppliers at any criterion.
Human’s facial parts extraction to recognize facial expression - ijitjournal
Real-time facial expression analysis is an important yet challenging task in human computer interaction.
This paper proposes a real-time person independent facial expression recognition system using a
geometrical feature-based approach. The face geometry is extracted using the modified active shape
model. Each part of the face geometry is effectively represented by the Census Transformation (CT) based
feature histogram. The facial expression is classified by the SVM classifier with exponential chi-square
weighted merging kernel. The proposed method was evaluated on the JAFFE database and in a
real-world environment. The experimental results show that the approach yields a high
recognition rate and is applicable to real-time facial expression analysis.
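A rough sketch of the classification stage under stated assumptions: scikit-learn's chi2_kernel computes an exponential chi-square kernel over non-negative histograms, which can be fed to an SVM as a precomputed kernel. The CT-histogram extraction and the paper's specific weighted merging kernel are not reproduced; the data here are random stand-ins.

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_train = rng.random((60, 32))          # 60 feature histograms, 32 bins each
y_train = rng.integers(0, 3, 60)        # 3 expression classes
X_test = rng.random((10, 32))

K_train = chi2_kernel(X_train, gamma=0.5)          # exp(-gamma * chi2 distance)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

K_test = chi2_kernel(X_test, X_train, gamma=0.5)   # rows: test, cols: train
print(clf.predict(K_test))
```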
This paper analyzes the swap rates issued under the China Inter-bank Offered Rate (CHIBOR) and
selects one-year FR007 daily data from January 1st, 2019 to June 30th, 2019 as a sample. To fit the data,
we conduct Monte Carlo simulation with several typical continuous short-term swap rate models, such as the
Merton model, the Vasicek model, and the CIR model. These models include both linear and nonlinear
forms, each with drift and diffusion terms. After empirical analysis, we obtain the parameter
values in the Euler-Maruyama scheme and the relevant statistical characteristics of each model. The results show that
most of the short-term swap rate models can fit the swap rates and reflect the change of trend, while the CKLS
model performs best.
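For instance, the Vasicek model dr = κ(θ − r)dt + σ dW discretizes under the Euler-Maruyama scheme as r_{t+Δ} = r_t + κ(θ − r_t)Δ + σ√Δ ε. A simulation sketch with illustrative parameter values (not the paper's FR007 estimates):

```python
import numpy as np

# Euler-Maruyama sketch of the Vasicek short-rate model
#   dr = kappa*(theta - r) dt + sigma dW
# Parameter values are illustrative only.
rng = np.random.default_rng(4)
kappa, theta, sigma = 0.8, 0.025, 0.01
r0, dt, n_steps, n_paths = 0.026, 1 / 250, 125, 10_000

r = np.full(n_paths, r0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    r = r + kappa * (theta - r) * dt + sigma * dW

print("mean:", r.mean(), "std:", r.std())   # terminal-rate statistics
```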
The document discusses using topological data analysis (TDA) to analyze complex data by focusing on the underlying shape or structure. TDA generates topological summaries that are invariant to coordinate changes and deformations, providing a compressed representation of the data. These topological summaries can help reduce bias and discover patterns in the data without imposing assumptions on what patterns should exist.
Mathematical modeling: models, analysis and applications (pdf drive) - UsairamSheraz
The document describes a textbook on mathematical modeling that covers modeling with all types of differential equations, including ordinary, partial, delay, and stochastic equations. It is a comprehensive textbook that addresses modeling techniques used in analysis. It incorporates MATLAB and Mathematica and includes examples and exercises that can be used for projects. The book is intended for engineers, scientists, and others who use modeling of discrete and continuous systems.
Hybrid medical image compression method using quincunx wavelet and geometric ... - journalBEEI
The purpose of this article is to find an efficient and optimal compression method that reduces file size while retaining the information needed for good-quality processing and credible pathological reports, based on extracting the characteristic information contained in medical images. In this article, we propose a novel medical image compression method that combines a geometric active contour model and the quincunx wavelet transform. The method first localizes the region of interest, using the level set to capture the parts containing pathology for an optimal reduction; the quincunx wavelet is then coupled with the set partitioning in hierarchical trees (SPIHT) algorithm. After testing several algorithms, we observed that the proposed method gives satisfactory results. The comparison of experimental results is based on evaluation parameters.
Binary dependent variable classification model in the context of large databases: interpretation via visual tools such as partial dependency plots for 1, 2, 3, and 4 variables, and other plots. The presentation focuses on overall rather than individual-observation interpretation, and is still a work in progress.
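A minimal sketch of one- and two-variable partial dependence using scikit-learn on synthetic data (the presentation's own data set and model are not available here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)

pd_1 = partial_dependence(clf, X, features=[0])          # 1-variable PDP
pd_2 = partial_dependence(clf, X, features=[(1, 2)])     # 2-variable PDP
print(pd_1["average"].shape, pd_2["average"].shape)
```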
Histogram expansion: a technique of histogram equalization - eSAT Journals
Abstract
In this paper I describe histogram expansion, a technique of histogram equalization, covering three different expansion techniques: dynamic range expansion, linear contrast expansion and symmetric range expansion. Each has its specific uses and advantages; for colored images, linear contrast expansion is used. All of these methods make histograms easier to study and help in image enhancement.
Index Terms: Histogram expansion, Dynamic range expansion, Linear contrast expansion, Symmetric range expansion
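As an illustration of one of the three techniques, linear contrast expansion stretches the observed intensity range [min, max] linearly onto the full output range; a NumPy sketch:

```python
import numpy as np

def linear_contrast_expansion(img, out_min=0, out_max=255):
    """Linearly stretch pixel intensities to the full output range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.uint8)
    out = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return out.astype(np.uint8)

img = np.random.default_rng(6).integers(80, 150, (4, 4))  # low-contrast image
print(linear_contrast_expansion(img))
```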
Representing Uncertainty in Situation Maps for Disaster Management - hje
1. The situation map is an important tool for emergency staff to gain situational awareness. It must distinguish between certain and uncertain information to properly plan emergency responses. Current methods of displaying uncertainty consume too many cognitive resources.
2. The researchers analyzed existing uncertainty visualization techniques and proposed two new methods: thin lines and dotted lines to depict uncertain information more efficiently.
3. An experiment found that the dotted line technique allowed emergency staff to classify certain and uncertain information faster than existing methods, reducing the cognitive load. The new techniques provide more effective displays of situational uncertainty.
This document discusses and compares different methods for solving assignment problems. It begins with an abstract that defines assignment problems as optimally assigning n objects to m other objects in an injective (one-to-one) fashion. It then provides an introduction to the Hungarian method and a new proposed Matrix Ones Assignment (MOA) method. The body of the document provides details on modeling assignment problems with cost matrices, formulations as linear programs, and step-by-step explanations of the Hungarian and MOA methods. It includes an example solved using the Hungarian method.
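In practice, the optimal injective assignment that the Hungarian method computes is available off the shelf: SciPy's linear_sum_assignment solves the same cost-matrix problem. A small example with an invented cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```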
The document discusses staffing problems for call centers using queueing models. It outlines three main methods - exact, approximation, and simulation - for addressing the Erlang-A queueing model. The Erlang-A model incorporates customer abandonment, which is an important factor for call centers. The document implements the exact and approximation methods using MATLAB and designs a new simulation method for comparing results. Computational results from the three methods are presented and compared to evaluate their effectiveness in modeling call center performance measures like abandonment probability and waiting times.
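The Erlang-A model is a birth-death chain, which makes a simulation method straightforward to sketch: with state = number in system, arrivals occur at rate λ, services at rate μ·min(state, n), and abandonments at rate θ·max(state − n, 0). The sketch below (illustrative parameters, not the document's MATLAB implementation) estimates the abandonment probability:

```python
import numpy as np

# Minimal CTMC simulation of the Erlang-A (M/M/n + M) queue: Poisson arrivals
# (lam), n agents with service rate mu, exponential patience with rate theta.
rng = np.random.default_rng(7)
lam, mu, theta, n = 10.0, 1.0, 0.5, 11

state, t_end, t = 0, 10_000.0, 0.0
arrivals = abandons = 0
while t < t_end:
    rate_arr = lam
    rate_srv = mu * min(state, n)
    rate_abn = theta * max(state - n, 0)
    total = rate_arr + rate_srv + rate_abn
    t += rng.exponential(1.0 / total)           # time to next event
    u = rng.random() * total                    # which event occurred?
    if u < rate_arr:
        state += 1; arrivals += 1
    elif u < rate_arr + rate_srv:
        state -= 1
    else:
        state -= 1; abandons += 1

print("P(abandon) ~", abandons / arrivals)
```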
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R... - ijaia
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations for a set of shape classes; a least-squares method is used to estimate these modes. We then train these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
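A sketch of the basic building block, fitting a 2nd-order polynomial regression curve to one shape's landmarks by least squares (the EM training across shape classes is omitted; the landmarks are synthetic):

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 20)                        # landmark x-coordinates
y = 0.5 - 2.0 * (x - 0.5) ** 2 + rng.normal(0, 0.02, x.size)

coeffs = np.polyfit(x, y, deg=2)                 # least-squares coefficients
residual = y - np.polyval(coeffs, x)             # landmark displacement errors
print(coeffs, "sum of squared displacement:", (residual ** 2).sum())
```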
This document discusses correlation analysis in time and space using random fields and field meta-models. It provides examples of:
1) Parameterizing a dynamic process using random fields to model variations over time with only a few parameters.
2) Using field meta-models to perform sensitivity analysis on signals and identify which inputs most influence variation at different points in time.
3) Applying these methods to spatial variations, such as modeling geometric imperfections based on laser scans to generate random designs for robustness analysis.
Evaluation of 6 noded quarter point element for crack analysis by analytical... - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
HANDWRITTEN CHARACTER RECOGNITION USING STRUCTURAL SHAPE DECOMPOSITION - csandit
This paper presents a statistical framework for recognising 2D shapes which are represented as
an arrangement of curves or strokes. The approach is a hierarchical one which mixes geometric
and symbolic information in a three-layer architecture. Each curve primitive is represented
using a point-distribution model which describes how its shape varies over a set of training
data. We assign stroke labels to the primitives and these indicate to which class they belong.
Shapes are decomposed into an arrangement of primitives and the global shape representation
has two components. The first of these is a second point distribution model that is used to
represent the geometric arrangement of the curve centre-points. The second component is a
string of stroke labels that represents the symbolic arrangement of strokes. Hence each shape
can be represented by a set of centre-point deformation parameters and a dictionary of
permissible stroke label configurations. The hierarchy is a two-level architecture in which the
curve models reside at the nonterminal lower level of the tree. The top level represents the curve
arrangements allowed by the dictionary of permissible stroke combinations. The aim in
recognition is to minimise the cross entropy between the probability distributions for geometric
alignment errors and curve label errors. We show how the stroke parameters, shape-alignment
parameters and stroke labels may be recovered by applying the expectation maximization (EM)
algorithm to the utility measure. We apply the resulting shape-recognition method to Arabic
character recognition.
This document provides an overview of multidimensional scaling (MDS) and overall similarity perceptual maps. It discusses the challenges with attribute rating perceptual maps and how MDS can help address these challenges by mapping similarities and dissimilarities among items based on consumer perceptions. The document outlines the history, methodology, implementation steps, and example application of MDS for creating a perceptual map of search engines. Key considerations for using MDS include having a large sample size of subjects and brands/products.
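A minimal sketch of the implementation step with scikit-learn's MDS, assuming a small hypothetical brand dissimilarity matrix in place of real consumer data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical averaged consumer judgments of pairwise brand dissimilarity.
dissim = np.array([[0.0, 2.0, 5.0, 6.0],
                   [2.0, 0.0, 4.5, 5.5],
                   [5.0, 4.5, 0.0, 1.5],
                   [6.0, 5.5, 1.5, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)      # one (x, y) map point per brand
print(coords, "stress:", mds.stress_)
```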
PCA and DCT Based Approach for Face Recognition - ijtsrd
Face recognition, i.e. recognizing the identity of a target, has great theoretical value, involving pattern recognition, image processing, computer vision, machine learning, physiology and more, and it is highly correlated with other biometric recognition methods. In recent years, face recognition has been one of the most active and challenging problems in pattern recognition and artificial intelligence. Face recognition has advantages that other biometric methods lack, being non-aggressive, friendly and convenient, so it has prospective applications such as criminal identification, security systems, file management and entrance guard systems. Research in face recognition has made considerable progress in recent years. Among the most cited techniques are those that optimize the size of the data in order to obtain a representation that makes recognition possible. For these methods, face images are viewed as points in a space of very high dimension. The face space is defined by eigenfaces, which are eigenvectors of the set of faces. In the DCT approach, we transform the image into the frequency domain and extract features from it, using two approaches: in the first, we take the DCT of the whole image and extract the features from it; in the second, we divide the image into sub-images, take the DCT of each, and extract the feature vector from them. Manish Varyani | Pallavi Narware | Lokendra Singh Banafar "PCA and DCT Based Approach for Face Recognition" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23283.pdf
Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/23283/pca-and-dct-based-approach-for-face-recognition/manish-varyani
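A rough sketch of the paper's second approach under stated assumptions: a random stand-in image, 16x16 blocks, and the first 6 coefficients per block as a crude low-frequency selection (the paper's exact block size and coefficient selection may differ).

```python
import numpy as np
from scipy.fft import dctn

img = np.random.default_rng(9).random((64, 64))  # stand-in for a face image
block, keep = 16, 6                              # 16x16 blocks, 6 coeffs each

features = []
for i in range(0, img.shape[0], block):
    for j in range(0, img.shape[1], block):
        coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
        features.extend(coeffs.flatten()[:keep]) # low-frequency coefficients
print(len(features))                             # 16 blocks * 6 = 96 features
```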
Debate on the FTT at the Ecofin (22.06.12) - ManfredNolte
The document summarizes positions from various European countries on implementing a financial transaction tax (FTT) either at the EU27 level or through enhanced cooperation of some countries. While some countries strongly support the FTT, others have concerns or oppose it. There is no consensus for implementing it across the EU27. Some countries are open to an enhanced cooperation process (ECP) among a smaller group of countries that want the tax, while others say all alternatives at the EU27 level need to be exhausted first before allowing an ECP.
This document provides an economic outlook and projections from the OECD. It summarizes projections for real GDP growth, inflation, unemployment, trade growth, fiscal balances, and interest rates for major economies from 2012-2014. It finds that the global economy is weakening again due to lack of policy responses to issues like the fiscal cliff and eurozone crisis. Failure to take sufficient action now could push the global economy into recession. A positive policy response based on monetary, fiscal, and structural policies is needed to avoid downside risks and support more sustainable growth.
Evaluation of Oxfam's campaign on the FTT - ManfredNolte
This document provides an evaluation report of Oxfam's Financial Transaction Tax (FTT) campaign. Some key points:
- The campaign aims to introduce an FTT that would raise funds for international development, climate change, and domestic poverty/social issues. It is coordinated by Oxfam GB and Oxfam International but countries take different approaches.
- The evaluation assessed the campaign's impact and effectiveness through interviews with Oxfam staff, partners, external stakeholders, and an online survey of UK supporters.
- While the underlying interest in financial sector taxation predated the campaign, there is evidence it increased public awareness of the FTT and influenced policymaking, especially in France where officials acknowledged the campaign
Financial transaction taxes: skimming the froth | The Economist - ManfredNolte
The summary analyzes the early effects of France's 0.2% tax on transactions in stocks of companies with market capitalizations over 1 billion euros, implemented in August 2012. According to analysts, French equity trading volumes dipped initially but recovered somewhat. However, trading activity in French equities across Europe fell 16% in the three months after the tax compared to prior months, a larger drop than for other European stocks. Trading fell more for lower-capitalized stocks subject to the tax, while trading rose 19% for stocks not subject to it. This suggests the tax has negatively affected mid-sized firms. Countries considering a financial transaction tax want to learn from France's experience but remain skeptical due to potential loopholes and negative historical experiences with such taxes.
A time for Christians to engage with the world (ft.com) - ManfredNolte
Pope Benedict XVI uses the story of Jesus being asked about paying taxes to the Roman empire to argue that Christians should engage with the world while maintaining their faith-based values and priorities. The birth of Jesus challenged people to reassess their priorities and way of life, focusing on humility, poverty, and simplicity rather than wealth and power. Christians should fight poverty and work for justice because of the dignity of every human being, yet they render obedience to God rather than any earthly authority like Caesar.
Spain is emerging from doubts about its economy and is poised to return to growth. It has implemented substantial fiscal adjustments and reforms without excessive negative economic impact. Spain remains competitive and is attractive to foreign investors due to its dynamic industries and leading companies. Continued reforms will help Spain succeed by promoting innovation and high value-added sectors.
The document summarizes discussions from a meeting of G20 Finance Ministers and Central Bank Governors. Key points include:
1) The global economy has avoided major risks but growth remains too weak and unemployment too high in many countries. Further policy actions are needed to strengthen recovery.
2) Countries committed to developing medium-term fiscal strategies and presenting them at the next meeting. Maintaining fiscal sustainability in advanced economies is essential.
3) Reforms to the IMF are needed to enhance its credibility, including ratifying the 2010 quota reform and completing the 15th General Quota Review by January 2014.
Double taxation row as Brussels unveils transactions levy - ManfredNolte
The European Commission unveiled plans for a financial transaction tax (FTT) backed by 11 EU countries that would place a 0.1% levy on bonds and shares and 0.01% on derivatives. However, the tax faces criticism over potential double taxation issues, as traders could be taxed by both their domestic tax rules and the new FTT. Specifically, a UK trader buying or selling with a German institution would face both the UK's stamp duty and the new FTT. Critics argue this amounts to unacceptable double or multiple taxation on some products and transactions. The Commission believes the new tax will generate €30-35 billion annually for the 11 countries, but details on revenue disbursement are still unclear.
1) The recent global financial crisis demonstrated that the financial sector can impose significant costs on the broader economy through risky behavior like excessive leverage and reliance on short-term funding.
2) Many European governments have introduced taxes on the financial sector to recover costs from bailing out the sector during the crisis. Proponents argue these taxes can reduce risky behavior and make the sector bear the social costs it imposes.
3) Potential financial sector taxes being considered include taxes on an institution's balance sheet size, volatile wholesale funding sources, and high-frequency trading to encourage more stable funding and reduce financial system risks.
The G20 Leaders Declaration discusses actions to promote global economic growth and job creation. Key points include:
1) G20 leaders agreed to a coordinated action plan to strengthen recovery, restore confidence, and support job creation.
2) They committed to fiscal and monetary policies to support demand and recovery, while ensuring financial stability.
3) Leaders also pledged to pursue structural reforms to boost growth, employment, and global rebalancing.
Bank for International Settlements: 2012 report - ManfredNolte
The document is the 82nd Annual Report of the Bank for International Settlements (BIS). It contains six chapters that analyze the state of the global economy and policy challenges. Chapter I discusses structural challenges persisting from the financial crisis, the heavy burden on central banks, deteriorating fiscal outlooks, changes in the financial sphere, and issues facing European monetary union. Chapter II reviews the slowing global recovery in 2011 and issues like high commodity prices and the intensifying euro area sovereign debt crisis. Chapter III examines the need for structural adjustment and rebalancing of growth. Chapter IV analyzes monetary policy limits and challenges of prolonged accommodation. Chapter V focuses on restoring fiscal sustainability and sovereigns regaining their risk-free status. Finally, Chapter VI
This document examines aid instruments that donors can use to assist developing countries in strengthening their tax systems. It draws on literature reviews, surveys of aid agencies, and six case studies. The case studies provide real-world examples of how different aid modalities have worked in practice to support tax reforms. This report aims to provide guidance to donors on effective approaches. It also analyzes how tax reforms can strengthen governance and presents principles for international engagement on taxation issues developed by the OECD and other groups. The analysis in this report has already informed the work of OECD and other international initiatives on taxation and development.
The document is a report from the World Commission on Environment and Development that outlines their findings and recommendations. It begins with an introduction from the Chairman discussing the urgent need for coordinated global action on environmental issues. It then provides a table of contents that outlines the various sections of the report, which cover topics like population, food security, energy, urban development, and proposals for institutional and legal reforms to promote sustainable development. The overall report calls for higher global cooperation and ambitious political action to address common environmental challenges and work towards a sustainable future.
This document is the final report of the High-level Expert Group on reforming the structure of the EU banking sector, chaired by Erkki Liikanen. The report provides an analysis of EU bank sector developments and business models. It finds that no single business model fared particularly well or poorly in the crisis, and that excessive risk-taking and reliance on short-term funding were issues. The report reviews ongoing regulatory reforms and evaluates whether additional structural reforms are needed. It considers two options - one based on recovery and resolution plans and another based on mandatory separation of risky trading activities. The report does not conclusively recommend one approach but provides analysis to support further reform efforts.
This dissertation develops a path generating engine for pricing path dependent options under the Heston stochastic volatility model. It calibrates the Heston model to S&P 500 volatility surface data to price Asian options. Problems with the Heston characteristic function for short maturities are acknowledged, and superior performance of the Lewis-Lipton formulation is confirmed empirically. MATLAB code is provided to allow replication of the results, including two Excel files with calibrated Asian option prices.
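The thesis's calibration and the Lewis-Lipton pricing formulas are not reproduced here, but the path-generating idea can be sketched: a full-truncation Euler scheme for Heston dynamics, used to Monte Carlo price an arithmetic-average Asian call. All parameter values are illustrative, not the thesis's calibrated ones.

```python
import numpy as np

# Full-truncation Euler scheme for the Heston model
#   dS = r*S dt + sqrt(v)*S dW1,  dv = kappa*(vbar - v) dt + xi*sqrt(v) dW2,
# with corr(dW1, dW2) = rho, pricing an arithmetic-average Asian call.
rng = np.random.default_rng(10)
S0, v0, r = 100.0, 0.04, 0.02
kappa, vbar, xi, rho = 1.5, 0.04, 0.5, -0.7
T, n_steps, n_paths, K = 1.0, 250, 20_000, 100.0
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
running_sum = np.zeros(n_paths)
for _ in range(n_steps):
    z1 = rng.normal(size=n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)
    vp = np.maximum(v, 0.0)                      # full truncation of variance
    S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa * (vbar - vp) * dt + xi * np.sqrt(vp * dt) * z2
    running_sum += S

asian_payoff = np.maximum(running_sum / n_steps - K, 0.0)
print("Asian call ~", np.exp(-r * T) * asian_payoff.mean())
```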
The document discusses using econometric analysis to identify factors that influence imports to Germany from other countries. As an example, it looks at import data from 2004 for 54 countries and explores whether there is a relationship between imports and GDP of the exporting country. A scatter plot of the data suggests a positive relationship. A simple linear regression model is proposed to quantify the relationship, with the goal of estimating parameters that describe how imports change with GDP. This highlights how econometrics can be used to empirically analyze economic relationships and models using data.
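A sketch of that simple linear regression, imports_i = β0 + β1·GDP_i + u_i, on synthetic data standing in for the 54-country 2004 sample:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
gdp = rng.uniform(10, 2000, 54)                    # GDP, billions (hypothetical)
imports = 5 + 0.03 * gdp + rng.normal(0, 8, 54)    # imports to Germany

X = sm.add_constant(gdp)                           # adds the intercept column
model = sm.OLS(imports, X).fit()
print(model.params)                                # estimated b0, b1
print(model.summary().tables[1])
```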
This document presents a study on estimating parameters of a jump-diffusion model and applying it to option pricing on the Dar es Salaam Stock Exchange. It begins by introducing jump-diffusion models as an alternative to the Black-Scholes model that can account for features like jumps, heavy tails, and skewness seen in real market data. The maximum likelihood approach is shown to be invalid for parameter estimation in jump-diffusion models. The document then focuses on the Merton jump-diffusion model and derives an expectation maximization procedure for consistent parameter estimation. Model parameters are estimated using stock price data from the Dar es Salaam Stock Exchange and used to price options, with results compared to the Black-
This document provides an introduction to the book "An Introduction to Applied Multivariate Analysis with R" by Brian Everitt and Torsten Hothorn. It discusses the contents and organization of the book, which covers multivariate analysis techniques including principal components analysis, exploratory factor analysis, multidimensional scaling, cluster analysis, structural equation modeling, and linear mixed-effects models. The document also acknowledges support provided during writing and provides information on how to access R code used in the examples in the book.
This document is a thesis submitted by Tokelo Khalema to the University of the Free State in partial fulfillment of the requirements for a B.Sc. Honors degree in Mathematical Statistics. The thesis compares the Gaussian linear model, two Bayesian Student-t regression models, and the method of least absolute deviations through a Monte Carlo simulation study. The study aims to evaluate how soon and how severely the least squares regression model starts to lose optimality against these robust alternatives under violations of its assumptions. The document includes sections on robust statistical procedures, literature review of the models considered, research methodology, results and applications of the simulation study, and closing remarks.
An intro to applied multi stat with R by Everitt et al - Razzaqe
This document provides an introduction to the book "An Introduction to Applied Multivariate Analysis with R" by Brian Everitt and Torsten Hothorn. It discusses the contents of the book, which focuses on teaching core multivariate analysis techniques using examples in R. The book assumes a basic understanding of statistics and familiarity with R. It contains 8 chapters covering topics like principal components analysis, exploratory factor analysis, cluster analysis, and linear mixed models. Code used in the examples is available in the MVA package for R.
This thesis examines determinants of sovereign bond spreads using Bayesian model averaging (BMA). It considers 44 potential explanatory variables for bond spreads in 47 OECD countries from 1980-2010. Most variables previously suggested provide low inclusion probabilities, while unemployment and government consumption rank highly. These results are robust to different parameter and model priors. The author employs BMA to address uncertainty about which variables best explain spreads by averaging thousands of possible models.
The document summarizes a seminar report on robust regression methods. It discusses the need for robust regression when the classical linear regression model is contaminated by outliers in the data. It introduces concepts such as residuals, outliers, leverage, influence, and rejection points that are important for understanding robust regression. It outlines desirable properties for robust regression estimators including qualitative robustness, infinitesimal robustness, and quantitative robustness. The report aims to lay out properties, strengths, and weaknesses of robust regression estimators and specifically discuss M-estimators.
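A small illustration of why M-estimators matter, assuming synthetic data: the Huber M-estimator (via statsmodels RLM) downweights outliers that pull ordinary least squares off target.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)
y[:5] += 40                                   # contaminate with outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimation
print("OLS:  ", ols.params)                   # pulled toward the outliers
print("Huber:", rlm.params)                   # stays near (1, 2)
```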
This document provides an introduction to computational cubical homology. It begins by summarizing simplicial homology, including definitions of simplicial complexes, chains, and the boundary operator. It then introduces cubical homology, defining k-cubes, chains, and the cubical boundary operator. The document describes how computational homology uses linear algebra and the Smith normal form algorithm to compute homology groups. It concludes by discussing computational tools for homology and applications to image analysis and data science.
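A minimal sketch of the linear-algebra step: over the rationals, Betti numbers follow from ranks of boundary matrices, b_k = dim ker ∂_k − rank ∂_{k+1} (the Smith normal form is needed only to detect torsion over the integers). The example complex is the boundary of a square, i.e. a circle:

```python
import numpy as np

d1 = np.array([[-1,  0,  0, -1],      # vertex-by-edge boundary matrix
               [ 1, -1,  0,  0],
               [ 0,  1, -1,  0],
               [ 0,  0,  1,  1]])
d2 = np.zeros((4, 0))                 # no 2-cells in this complex

n0, n1 = d1.shape
r1 = np.linalg.matrix_rank(d1)
r2 = np.linalg.matrix_rank(d2) if d2.size else 0
b0 = n0 - r1                          # connected components (ker d_0 is all of C_0)
b1 = (n1 - r1) - r2                   # independent loops
print("b0 =", b0, "b1 =", b1)         # 1 component, 1 loop: a circle
```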
Pricing and hedging of defaultable models - Marta Leniec
This technical report presents a master's thesis on pricing and hedging financial derivatives with credit risk. The thesis first reviews the theory of modeling default risk using the intensity-based and density-based approaches. It then derives explicit formulas for pricing European call options in a Black-Scholes market model that includes the possibility of default. The model considers a default-free market and a defaultable market created by adding a defaultable asset. Default time is defined as the first time the defaultable asset's price crosses a barrier. The thesis prices the options for both a regular investor who only observes default-free prices and a special investor with information about the default time.
The document discusses dimensionality reduction techniques. It begins with an introduction describing the reasons for dimensionality reduction such as computational efficiency, data visualization, and statistical motivations. It then outlines and compares various linear and non-linear dimensionality reduction methods, including Principal Component Analysis (PCA), Multi-Dimensional Scaling (MDS), Locally Linear Embedding (LLE), and Diffusion Maps. Various applications of these techniques to image processing, financial data analysis, and clustering are also discussed.
An Introduction To Mathematical Modelling - Joe Osborn
This document provides an introduction to mathematical modeling. It discusses what mathematical modeling is, the objectives it can achieve, and common classifications of models. The main stages of modeling are described as building models, studying models, testing models, and using models. Building models involves systems analysis through making assumptions, flow diagrams, and choosing mathematical equations. Studying models includes analyzing dimensionless form, asymptotic behavior, sensitivity, and modeling output. Testing models evaluates assumptions, structure, prediction, parameters, and comparing models. Models can then be used for predictions and decision support.
This thesis examines potential output, output gaps, and their relationship to inflation in the eurozone. It is divided into two parts. The first part evaluates three methods for estimating potential output and output gaps in the euro area since 1998: the Hodrick-Prescott filter, a production function approach, and a structural vector autoregression model. It finds that while ex-post estimates can describe past economic behavior, real-time estimates are highly uncertain. The second part analyzes the uncertainty and inflation forecasting power of output gap estimates. It finds little added value from including output gaps in inflation forecasting models, as autoregressive models perform comparably or better for euro area inflation forecasts in the medium and short term.
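The first of the three methods is easy to sketch: statsmodels ships the Hodrick-Prescott filter, with λ = 1600 the conventional smoothing value for quarterly data. Synthetic log-GDP stands in for the euro-area data:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(13)
trend = 0.005 * np.arange(120)                      # 30 years of quarters
log_gdp = trend + 0.02 * np.sin(np.arange(120) / 6) + rng.normal(0, 0.005, 120)

cycle, trend_est = hpfilter(log_gdp, lamb=1600)     # HP decomposition
output_gap_pct = 100 * cycle                        # gap in percent of trend
print(output_gap_pct[-4:])                          # most recent quarters
```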
This document provides an introduction to statistical modeling of financial time series. It begins with concepts like arithmetic and geometric returns that are used to analyze changes in financial prices over time. It then discusses common time series models like the random walk model and autoregressive models. Subsequent sections cover modeling volatility with GARCH models, analyzing return distributions, building multivariate models, and applications like forecasting and risk management. The overall aim is to help practitioners apply statistical methods to quantitatively analyze and model financial time series data.
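A sketch of the opening building blocks, assuming a synthetic price series: arithmetic returns P_t/P_{t−1} − 1, geometric (log) returns Δlog P_t, and a simple AR(1) fit:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(14)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 500)))

arith = prices[1:] / prices[:-1] - 1          # arithmetic returns
log_ret = np.diff(np.log(prices))             # geometric (log) returns

ar1 = AutoReg(log_ret, lags=1).fit()          # r_t = c + phi * r_{t-1} + e_t
print("phi:", ar1.params[1])                  # near 0 for a random walk
```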
The Effectiveness of interest rate swapsRoy Meekel
This master's thesis analyzes the effectiveness of interest rate swaps for hedging interest rate risk in a pension fund portfolio. The author, Roy Meekel, uses yield curve simulation to evaluate how well an interest rate swap portfolio hedges the interest rate risk arising from a duration mismatch between a fictional pension fund's assets and liabilities. Three models for simulating yield curves are analyzed: a basic model, an adjusted-lambda model, and a modified data model that incorporates an ultimate forward rate to reduce volatility of rates at long maturities. The results of 10,000 yield curve simulations for each model are used to assess how effectively the interest rate swaps hedge interest rate risk for the pension fund.
This document summarizes a BSc thesis that explores using the Fast Fourier Transform (FFT) to efficiently calculate convolutions. It first provides theoretical background on direct convolution calculations and Fourier analysis. It then describes implementing a 2D convolution using the Cooley-Tukey FFT algorithm and analyzing its time complexity advantages over direct convolution. The document evaluates the implementation's correctness, benchmarks its performance against other methods, and discusses potential optimizations and improvements.
EMPIRICAL PROJECTObjective to help students put in practice w.docxSALU18
EMPIRICAL PROJECT
Objective: * to help students put in practice what they have learned in Econometrics I
* to teach students how to write an “economic paper”.
Steps
a) Selecting a topic
Topic areas: Macroeconomics: consumption function, investment function, demand
function, the Phillips curve…
Microeconomics: estimating production, cost, supply and demand. Data
are hard to obtain here.
Urban and Regional Economics: demand for housing, transportation…
International Economics: estimating import and export functions,
estimating purchasing power parity, estimating capital mobility…
Development Economics: measuring the determinants of per-capita
income, testing the per-capita output convergence among nations…
Labor Economics: testing theories of unionization, estimating labor force
participation, estimating wage differential among women, minorities…
Resource and Environmental Economics: estimating water pollution,
estimating the determinants of toxic emissions…
The resource journal is JEL (Journal of Economic Literature) + Internet EconLit .
b) Statement of the Problem
State clearly the problem that you are interested in (what are you trying
to achieve)
c) Review of literature
Point out (critically) what others have done concerning the topic of interest.
d) Formulation of a general model
The final model can be derived in several ways: utility maximization,
profit maximization, cost minimization, etc. The review of literature is
generally helpful to accomplish this task. In the course of deriving the model,
one must sort out clearly the dependent variable and the independent
variables. After transforming the economic model in econometric model, one
writes up the hypotheses to be tested: expected signs of the parameters and
magnitudes. To elaborate a bit, let use the following demand for some good:
Q
P
P
Y
u
be
be
o
=
+
+
+
+
a
b
g
d
where
Q
P
P
Y
and
u
be
be
o
,
,
,
represent the quantity of good of interest, the price
of that good, the price of another good (pork, etc), income and the error term,
respectively. Here
b
g
<
<>
0
0
,
depending on the nature of the good: >0
if substitute and <0 if complementary. The size of
b
depends on the nature of
product. Thus if the product is a necessity, price and income elasticities are
expected to be small.
e) Collecting Data
Sources: international, national, regional
primary or secondary.
Notes.
f) Empirical Analysis
Data analysis: outliers, level of variation…
Model estimation and hypothesis testing
g) Writing a Report
Statement of the problem: describe the problem you have studied,
the questi ...
Fall 2018 Statics Mid-Term Exam 3 Take-Home Name Please .docxmecklenburgstrelitzh
Fall 2018 Statics Mid-Term Exam 3 Take-Home Name:
Please show all free body diagrams and the corresponding equilibrium equations that you use. Neat
freehand sketches are fine. Write the general forms of the equilibrium equations (ΣFX = 0, ΣFX = 0, ΣMA =
0) first before writing out the forces and moments specific to that problem. The paper is out of 100
points, and there are 25 bonus points including the extra credit question.
1) If a 200 N force is applied on the cutting tool as shown, determine the corresponding force
acting at point E. (Hint 1: Remember that each component of a machine is a rigid body and
every component must be in equilibrium, Hint 2: Write out all the equilibrium equations for
each component first, that will direct you at how to solve for the unknown forces, Hint 3: Use
equilibrium equations that you did not use for solving as a check). (20 points)
2) Solve for all the joint forces in the following frame. The suspended bob has a mass of 100 kg.
Note that member ABDF is one monolithic member. (20 points)
3) For the beam shown below, draw the bending moment and shear force diagrams. You could
use either the short procedure shown in class, or the full calculation, either is okay. Either way,
please label the values of the bending moments and shear forces at points where the graph
changes shape. (30 points)
4) For the cable given below, the total length is given to be 35 feet. Determine the reactions at the
supports A and B, and the tension values in each of its segments. (Hint: Since the total length of
the beam is given, use pythogorean triplets to figure out the coordinates of point C, at which the
load acts). (20 points)
5) Draw the free body diagram for one simple structure (machine, frame, truss etc.) that you use in
daily life directly or indirectly. Make sure to reduce it to the most basic form possible, showing
only required geometry and joints. (2D idealization would be fine, 3D is okay too). Show the
free body diagram for the entire structure as well as the free body diagrams for each of the
component members. Make sure to include applied loads. (Examples: pliers, idealized frame of
your apartment/house, door frame, wall-mount frame for TV/Pictures etc., dining table). (20
points)
Extra Credit: Using the reactions obtained in problem 2, draw the axial force diagram, shear force
diagram, and bending moment diagram for members ABDF and ECD. (15 points)
25 kips
9 m 16 m
A B
C
The Validity of Company Valuation
Using Discounted Cash Flow Methods
Florian Steiger
1
Seminar Paper
Fall 2008
Abstract
This paper closely examines theoretical and practical aspects of the widely used discounted
cash flows (DCF) valuation method. It assesses its potentials as well as several weaknesses. A
special emphasize is being put on the valuation of companies using the DCF method.
This master's thesis explores designing, analyzing, and experimentally evaluating a distributed community detection algorithm. Specifically:
- A distributed version of the Louvain community detection method is developed using the Apache Spark framework. Its convergence and quality of detected communities are studied theoretically and experimentally.
- Experiments show the distributed algorithm can effectively parallelize community detection.
- Graph sampling techniques are explored for accelerating parameter selection in a resolution-limit-free community detection method. Random node selection and forest fire sampling are compared.
- Recommendations are made for choice of sampling algorithm and parameter values based on the comparison.
This document is a thesis submitted by Milan Bouda in partial fulfillment of the requirements for a Doctor of Philosophy degree. The thesis is dedicated to Bayesian estimation of DSGE models. It first outlines the history and development of DSGE modeling in the Czech Republic and worldwide. It then describes the comprehensive DSGE framework and provides details on specifying and estimating DSGE models within this framework. The thesis contains two empirical studies - the first estimates a New Keynesian DSGE model for the Czech Republic using Bayesian techniques, and the second develops a Small Open Economy DSGE model for the Czech Republic with a housing sector.
La Comisión europea informa sobre el progreso social en la UE.ManfredNolte
Bruselas confirma que el progreso social varía notablemente entre las regiones de la Unión Europea, y que los países nórdicos tienen un desempeño consistentemente mejor que el resto de los Estados miembros.
EL MERCADO LABORAL EN EL SEMESTRE EUROPEO. COMPARATIVA.ManfredNolte
Hoy repasaremos a uña de caballo otro reciente documento de la Comisión (SWD-2024) que lleva por título ‘Análisis de países sobre la convergencia social en línea con las características del Marco de Convergencia Social (SCF)’.
PIB,OKUN Y PARO ESTRUCTURAL: RELACIONES DIRECTAS E INVERSASManfredNolte
Me refiero a las ‘Previsiones económicas de primavera’ de la Comisión europea, que se han constituido la semana pasada en panegírico de nuestras bondades y que, como es natural, han sido aprovechadas por el Gobierno para el autobombo.
LOS MIMBRES HACEN EL CESTO: AGEING REPORT.ManfredNolte
El Informe sobre el envejecimiento concentra un ejercicio único en el sentido de que proporciona proyecciones para los Estados miembros de la UE y Noruega hasta 2070 basadas en datos supuestos y metodologías comunes. El informe suministra un amplio conjunto de datos comparables e internos para 28 países. Dan una idea del momento en que se produce el envejecimiento de la población, sus implicaciones económicas y los desafíos presupuestarios asociados.
Empresarios privados y públicos: ¿adversarios o aliados?ManfredNolte
La reciente notificación de la Sociedad Estatal de Participaciones Industriales (SEPI), acerca de la toma de un porcentaje relevante en el Capital de Telefónica, ha reabierto la recurrente polémica sobre la figura del Estado como Empresario público, su conveniencia, su oportunidad y su eficiencia
CARE ECONOMY: LA VIEJA Y NUEVA ECONOMIA DE LOS CUIDADOS.ManfredNolte
La economía del cuidado entiende del reconocimiento y valoración de todas las actividades que contribuyen a la atención de las personas, incluido el trabajo no remunerado realizado en los hogares, así como el trabajo remunerado que involucra el cuidado de niños, personas mayores, personas con discapacidades y aquellas que necesitan cualquier tipo de atención especial.
DEUDA PUBLICA Y CONVENIENCIA FISCAL: LLAMADOS AL ACUERDO.ManfredNolte
En su conjunto y en rasgos generales, el progreso de la economía, es decir el de su PIB, depende de dos fuentes básicas de alimentación: el aumento de sus factores productivos y el incremento de su productividad.
DESIGUALDAD PERMANENTE: EL ESTANCAMIENTO DE LA DISTRIBUCIÓN DE LA RIQUEZA.ManfredNolte
La teoría del ‘derrame’ postula atenuar la presión sobre las rentas de los grupos sociales con mayor propensión al ahorro, esto es, los sectores de mayores ingresos, sobre la base de su capacidad de ahorrar e invertir,
COYUNTURA ECONOMICA Y SUS SOMBRAS: INFORME TRIMESTRAL DEL BANCO DE ESPAÑA.ManfredNolte
h
Hay que recordar, que los fotos puntuales, aun cuando salgan bien, no pueden encubrir las carencias, las flaquezas de fondo, que en distintos flancos acechan a nuestra economía.
DESVELANDO LA REALIDAD SOCIAL: ENCUESTA DE CONDICIONES DE VIDA EN ESPAÑA.ManfredNolte
Junto a la de la tolerancia hacia los Paraísos fiscales, último vertedero de la evasión fiscal y del crimen organizado, la pobreza se constituye probablemente en la mayor de las grandes vergüenzas que se confinan en los búnkeres de la economía de mercado.
¿FIN DEL CRIPTOINVIERNO?: ASI HABLAN LOS MAXIMOS.ManfredNolte
Hay creencias firmes, impertérritas, capaces de sobrevivir a cualquier duda o adversidad, ajenas a las opiniones contrarias o simplemente nuevas, ciegas y sordas a cualquier idea o consejo que las desvíe de su camino.
CONOCIMIENTO INTERIOR BRUTO, la obsolescencia del PIB.ManfredNolte
El PIB no es un indicador exhaustivo del progreso económico y tampoco de bienestar social; Además el índice está ofreciendo registros descorazonadores.
LA AGROSFERA, DE NUEVO LA REBELIÓN DEL CAMPO.ManfredNolte
La reciente explosión de los agricultores -una más de una larga cadena histórica- es un suceso emocional y espontaneo y como tal no responde a un enunciado claro de reivindicaciones como podrían constar en un documento unificado de propuestas del sector.
TAMAÑO DEL ESTADO Y BIENESTAR EN LA OCDE.ManfredNolte
Hay un valor entendido, un tópico que circula en amplias capas de la opinión económica, incluso de la habitualmente informada, acerca de la existencia de un antagonismo de raíz entre los conceptos de libre mercado e intervención gubernamental.
MAS ALLA DE LA INCERTIDUMBRE:DESAFIOS DE LA ECONOMIA ESPAÑOLA.ManfredNolte
Un reciente informe de la OCDE (Economic Policy Papers, No. 33), avanza que la economía española retrocederá diez posiciones en la clasificación mundial de países por PIB per cápita, pasando desde la posición 23 en la actualidad a la posición 33 en 2060 .
Este documento describe la frustración de un columnista económico ante la dominación de la política en la agenda pública y los medios, relegando a un segundo plano los temas económicos. El autor se siente atrapado entre su deber de analizar aspectos económicos y el ambiente político polarizado que dificulta abordar cualquier tema. Advierte que la insoportable polarización está orillando el consenso sobre medidas económicas coherentes y que los peligros que enfrentamos son demasiados y gravísimos.
DAVOS: EL PESO Y EL CONSEJO DE UN PODER SOCIALIZADOR.ManfredNolte
Al pie de la montaña mágica de Tomas Mann, enero en la elite es ya sinónimo de esta especial convención, entendiendo por especial no solo el ‘espíritu de Davos’, sino también la naturaleza de sus invitados. Durante cinco días la apacible estación de esquí invernal se transforma en la más selecta y cosmopolita feria del planeta.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
I. Introduction
This paper discusses several popular methods to estimate the ‘output gap’ using available
macroeconomic data, provides a unified, natural concept for the analysis, and demonstrates
how to decompose the output gap into contributions of observed data on output, inflation,
unemployment, and other variables. A simple bar-chart of contributing factors, in the case
of multi-variable methods, sharpens the intuition behind the estimates and ultimately shows
‘what is in your output gap.’ The paper demonstrates how to interpret effects of data revisions
and new data releases for output gap estimates (news effects) and how to obtain more insight
into real-time properties of estimators.
The unifying approach is the theory of linear filters. I demonstrate that most methods for output gap estimation can be represented as a moving average of observed data – as a linear filter. Such a representation provides insight into which variables contribute to the estimate of the unobserved variables, and at what frequencies. Knowing this provides a better understanding of the estimate and of its revision properties.
Which output gap estimation approaches can be analyzed as linear filters? As demonstrated below, these range from (i) univariate or multivariate statistical filters to (ii) simple multivariate filters with some economic theory (a Phillips curve or an IS curve), including also (iii) the production function approach, and even (iv) state-of-the-art DSGE (dynamic stochastic general equilibrium) models with tight theory restrictions.[1]

One thing that this paper does not intend to discuss is which method of output gap estimation is the most sensible or optimal one. It needs to be understood that the concept of the output gap is meaningful only when properly defined, before being embedded into an empirical model. Nevertheless, the importance of the output gap as a concept in economic policy requires a thorough understanding of model-based estimates when they are used for monetary or fiscal policy.

[1] Calculations and analysis analogous to this paper have been used since 2007 together with the Czech National Bank DSGE core projection model; see Andrle and others (2009b).

Example. To frame the discussion below, consider a very simplified, stylised example that illustrates a decomposition into observables. The multivariate model of the 'extended Hodrick-Prescott' filter, as in principle suggested in the paper by Laxton and Tetlow (1992), features a simple aggregate demand determination of the output gap, $x_t$, and a backward-looking Phillips curve to determine the deviation of inflation from the target, $\hat{\pi}_t$. Aggregate output, $y_t$, is composed of the output gap and potential output, $\tau_t$. For simplicity, I assume that potential output growth follows a driftless random walk.
The state-space form of the simple model is as follows:
\[ y_t = x_t + \tau_t \tag{1} \]
\[ \tau_t - \tau_{t-1} = \tau_{t-1} - \tau_{t-2} + \varepsilon^{\tau}_t \tag{2} \]
\[ x_t = \rho\, x_{t-1} - \kappa\, \hat{\pi}_t + \varepsilon^{x}_t \tag{3} \]
\[ \hat{\pi}_t = \lambda\, \hat{\pi}_{t-1} + \theta\, x_t + \varepsilon^{\pi}_t. \tag{4} \]
Given observed data for $\{y_t, \hat{\pi}_t\}$, the problem is to estimate the decomposition of output into its unobserved components, $\{x_t, \tau_t\}$. For a state-space form, the estimates are easily available using the well-known Kalman filter and smoother algorithms.
Equivalently, the implied penalized least-squares formulation is
\[ \min_{\{\tau_t\}_{t=0}^{T}} \; \sum_{t=0}^{T} \frac{1}{\sigma^{2}_{x}}\big[\varepsilon^{x}_t\big]^2 + \frac{1}{\sigma^{2}_{\tau}}\big[\varepsilon^{\tau}_t\big]^2 + \frac{1}{\sigma^{2}_{\pi}}\big[\varepsilon^{\pi}_t\big]^2. \tag{5} \]
The goal of the analysis is, however, the implied moving-average representation of the model, i.e. the linear filter representation, given as follows:
\[ x_{t|\infty} = A(L)\, y_t + B(L)\, \hat{\pi}_t = \sum_{i=-\infty}^{\infty} A_i\, y_{t+i} + \sum_{i=-\infty}^{\infty} B_i\, \hat{\pi}_{t+i}, \tag{6} \]
where $A(L)$ and $B(L)$ are two-sided linear filters and $L$ is the lag operator, such that $L^{j} x_t := x_{t-j}$. The weights of these filters are completely determined by the structure of the economic model and its parameters.[2]
Under rather general conditions, estimates using the state-space form (1)–(4), the penalized least-squares problem (5), and the filter specification (6) are equivalent. Simply put, the unifying approach makes use of the equivalence between the methods of (i) penalized least squares, (ii) Wiener-Kolmogorov filtering, and (iii) the Kalman filter associated with these model representations. See e.g. Gomez (1999) for a lucid discussion. Each of the three approaches (least squares, Wiener-Kolmogorov, or Kalman filters) has its benefits and limitations.

[2] See Appendix C for details.
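To make this equivalence concrete, the following minimal sketch solves the penalized least-squares problem (5) directly, as one stacked linear regression in the trend $\{\tau_t\}$. The parameter values, shock standard deviations, and simulated series are illustrative assumptions, not values from the paper; the function name gap_estimate is hypothetical.

```python
# A minimal sketch (not from the paper): the penalized least-squares problem (5)
# solved as one stacked linear regression in the trend tau. Parameter values,
# shock standard deviations, and the toy data are illustrative assumptions.
import numpy as np

def gap_estimate(y, pi, rho=0.8, kappa=0.3, lam=0.5, theta=0.2,
                 s_x=1.0, s_tau=0.2, s_pi=0.5):
    """Minimize (5) over {tau_t} given data {y_t, pi_t}; return the gap y - tau."""
    T = len(y)

    def unit(t):
        e = np.zeros(T)
        e[t] = 1.0
        return e

    rows, rhs = [], []
    # eq. (2): trend growth is a driftless random walk; residual tau_t - 2 tau_{t-1} + tau_{t-2}
    for t in range(2, T):
        rows.append((unit(t) - 2.0 * unit(t - 1) + unit(t - 2)) / s_tau)
        rhs.append(0.0)
    # eq. (3): demand residual, with x_t = y_t - tau_t
    for t in range(1, T):
        rows.append((-unit(t) + rho * unit(t - 1)) / s_x)
        rhs.append(-(y[t] - rho * y[t - 1] + kappa * pi[t]) / s_x)
    # eq. (4): Phillips-curve residual
    for t in range(1, T):
        rows.append(theta * unit(t) / s_pi)
        rhs.append(-(pi[t] - lam * pi[t - 1] - theta * y[t]) / s_pi)

    tau, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs), rcond=None)
    return y - tau

rng = np.random.default_rng(0)
T = 120
y = np.cumsum(0.2 + 0.1 * rng.standard_normal(T)) + rng.standard_normal(T)
pi = 0.5 * rng.standard_normal(T)
x_hat = gap_estimate(y, pi)   # smoothed output gap estimate x_{t|T}
```

Because the stacked problem is linear in the data, the same function is reused in later sketches to illustrate weights, decompositions, and revisions.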
The key ingredient for obtaining flexible decompositions of unobservables in terms of observed data, and for the analysis of revision properties, is the linear filter representation (6). For the model above, expression (6) clearly indicates what portion of the output gap is identified by the observations on output, $y_t$, and what portion by the deviation of inflation from the inflation target, $\hat{\pi}_t$. Expression (6) assumes a doubly-infinite sample and is the starting point for the analysis of finite-sample implementations. Extending the sample size will lead to revised estimates – the well-known 'end-point' problem, or 'news effect'. The example provides the general result that is analysed in greater detail in the rest of the paper.
Despite the extensive literature on output gap and potential output estimation using multivariate filters, beginning with the aforementioned contribution of Laxton and Tetlow (1992), to my knowledge an analysis in terms of the decomposition into observables and a filter analysis of output gaps have not been presented before. Also, the idea of putting most of the estimation methods into a common framework of linear filters has not been explored explicitly before and is a novel approach for comparing estimates obtained by different methods.
The roadmap of the paper is as follows. The Introduction has motivated a filter representation of output gap estimation and its decomposition into observables, showing 'what is in your output gap'. The Methods section demonstrates how to formulate output gap estimation methods as linear filters, how to decompose the output gap into observables, and the benefits of the filter representation for understanding the revision properties of real-time estimates. The subsequent Applications section gives a simple extension of the Hodrick-Prescott filter, which proves useful for multivariate models, and illustrates the main ideas of the paper using a semi-structural model and a fully structural DSGE model for output gap estimation.
II. Methods
This section focuses on formulating output gap estimation methods as linear filters and decomposing the estimates into observables. State-space models, univariate statistical filters, structural vector autoregressions, and the production function approach are considered. Subsequently, the revision properties of real-time estimates of the output gap are discussed, exploiting the filter representation.
Before delving into the details of the various estimation methods and calculations, it is crucial to understand that the goal of obtaining a linear filter representation serves practical purposes. It allows analysts to understand the weighting scheme behind the estimate, to chart the output gap as a function of the underlying data, and to quantify the sources of revised estimates when the sample is extended or revised. A formalized analysis lowers the burden of excessive experimenting.
The benefits of the filter representation are numerous, and this section focuses on a small subset only. In particular, an explicit frequency-domain analysis is omitted. It is important, though, that knowledge of the filter transfer function allows one to design structural economic models as optimal filters.
Output gap estimates using the methods discussed below can be expressed in a (multivariate) linear filter representation,
\[ x_{t|T} = \sum_{i=t_0}^{T} w_{1,i}\, y_{1,t-i} + \cdots + \sum_{i=t_0}^{T} w_{n,i}\, y_{n,t-i} = \sum_{k=1}^{n} \xi_k, \tag{7} \]
where the particular unobservable variable $x_{t|T}$ – the output gap in this case – is expressed as a weighted average of the observed sample of series $y_k$, $k = 1,\dots,n$, finite or infinite when $t_0 \to -\infty$, $T \to \infty$. The estimate $x_{t|T}$ is thus decomposed into the contributions of $n$ factors, $\xi_k$.
A multivariate version of the moving average, in the case of multiple unobservables and a doubly-infinite sample, takes the form
\[ X_t = \sum_{i=-\infty}^{\infty} W_i\, Y_{t-i} = B(L)\, Y_t, \tag{8} \]
which is the starting point for a theoretical analysis. Practical calculations, however, are not restricted to an infinite amount of data, nor to time-invariant weights.
The model-implied multivariate moving average, a filter $B(z) = \sum_i B_i z^i$, can be analyzed in the time or frequency domain, as is the case for univariate filters – in terms of the gain, coherence, or phase shifts between variables, and the overall frequency-response function characteristics.
The subsequent subsections provide a detailed treatment of output gap estimation methods and their conversion to a filter representation analogous to (7) or (8), which answers the question 'what is in the output gap'. Although the estimation methods are different, the principle is always the same, which allows for a direct comparison of the results.
A. Formulating Potential Output Estimates as Linear Filters
1. State-Space Forms – Semi-structural and DSGE Models
Formulating potential output estimators in a state-space form as linear filters is surprisingly
simple. This is the case since the celebrated Kalman filter, see e.g. Kalman (1960) or Whittle
(1983), originates from the Wiener filtering theory and deals with an important special class
of stochastic processes.
The state-space formulation of potential output estimation has become very popular, partly due to its flexibility; see e.g. Kuttner (1994) or Laubach and Williams (2003), inter alios. A state-space model is easy to formulate and modify, easily handles missing data or non-stationary dynamics, and is a natural representation for linearized recursive dynamic economic models.
The missing piece in the literature on multivariate model analysis is the explicit acknowledgement and use of the fact that the Kalman filter and smoother[3] are actually just that – filters. As demonstrated above, an explicit linear filter formulation is useful for obtaining a decomposition into observables. The formulation of the state-space model as a linear filter, the filter weights, and a very practical implementation of the decomposition into observables for state-space models follow.
Filter representation. For the purpose of the analysis, it is assumed that the model takes the following state-space form:
\[ X_t = T X_{t-1} + G \varepsilon_t \tag{9} \]
\[ Y_t = Z X_t + H \varepsilon_t. \tag{10} \]
Here $\varepsilon_t \sim N(0, \Sigma_\varepsilon)$ with $\Sigma_\varepsilon = I$, with no loss of generality. The vector of transition, or state, variables is denoted by $X_t$, whereas the observed variables are denoted by $Y_t$. By imposing the restriction $G \Sigma_\varepsilon H' = 0$, it is guaranteed that the measurement errors and the structural shocks are uncorrelated.
The state-space model (9)–(10) can be used to estimate the unobserved states and shocks $\{X_t, \varepsilon_t\}$ from the available observables $\{Y_t\}$. The output gap is one of the elements of $X_t$. I shall focus mainly on the 'smoothing' case, i.e. when the estimates of $X_t$ are based on the observations available for $t = 0,\dots,T$, using the notation $E[X_t | Y_T,\dots,Y_0] = X_{t|T}$.

[3] The Kalman filter is a one-sided, causal estimate of the state $X_t$ based on information up to period $t$, i.e. $[t_0,\dots,t]$. The Kalman smoother is a two-sided, non-causal filter that uses all available information, $[t_0,\dots,T]$, to estimate the state, $X_{t|T}$.
In the case of multivariate models with multiple observables, the possibilities for analysis are richer than in the case of univariate models. If meaningful, the exploration of impulse-response and transfer functions provides insights regarding the model properties, together with the popular structural shock decomposition. In other words, if the shocks have some structural interpretation, one can express the observed data (and unobserved states) as cumulative effects of past structural shocks,
\[ Y_t = \big[ Z A(L) + H \big] \hat{\varepsilon}_t, \qquad X_{t|\infty} = A(L)\, \hat{\varepsilon}_t = \sum_{j=0}^{\infty} T^{j} G\, \hat{\varepsilon}_{t-j}, \tag{11} \]
where $A(L) = (I - TL)^{-1} G$, and $\hat{\varepsilon}_t$, $X_{t|\infty}$ denote the mean-square (Kalman smoother) estimates of the structural shocks, $\varepsilon_t$, and the state variables, $X_t$.[4] Expression (11) is frequently used in DSGE analysis for storytelling and the interpretation of macroeconomic data.

[4] A semi-infinite sample size is assumed for simplicity only; the finite-sample analysis is trivial.
Now is the time to reverse the logic and ask, 'What observed data drive each particular unobserved structural shock and state variable?' That is the purpose of this paper – to draw closer attention to the presentation of the unobserved state estimates as a function of the observed data. In the case of a doubly-infinite sample, the model can be expressed as
\[ X_{t|\infty} = \Omega(L)\, Y_t = \sum_{i=-\infty}^{\infty} \Omega_i\, Y_{t+i}. \tag{12} \]
In real-world applications, where the sample is always finite, the optimal finite-sample implementation of (12) leads – or at least should lead – to a multivariate linear filter with time-varying weights,
\[ X_{t|T} = \sum_{\tau=t_0}^{T} \Omega_{\tau,t}\, Y_\tau + \Omega_{0,t}\, X_0. \tag{13} \]
Here the weight sequence varies with every time period $t$. That is because the Kalman smoother carries out an optimal mean-square approximation of the infinite filter $\Omega(z) = \sum_{j=-\infty}^{\infty} \Omega_j z^j$ with a finite-length filter $\Omega_t(z) = \sum_{j=t_0}^{T} \Omega_{j,t} z^j$, so as to minimize the distance $\| X_{t|\infty} - X_{t|T} \|^2$. It operates under the assumption that the model (9)–(10) is the data-generating process for the data. More on this follows in the discussion of the real-time properties of output gap estimates below.
To decompose the output gap into observables and to analyze data revisions using (13), it remains either to calculate the time-varying weights of the filter or to reformulate the problem so that the computation of the weights can be avoided. Luckily, both options are readily available, and their description follows.
Weights of the Filter. The weights of the filter $\Omega(L)$ are not data-dependent and are a function of the model specification only.[5] In the case of the doubly-infinite sample, the Wiener-Kolmogorov formula, see Whittle (1983), implies that $\Omega(z) = \Gamma_{XY}(z)\, \Gamma_{Y}(z)^{-1}$, where $\Gamma_{Y}(z)$ and $\Gamma_{XY}(z)$ stand for the auto-covariance and cross-covariance generating functions of the model:
\[ \Gamma_{XY}(z) = (I - Tz)^{-1} G \Sigma_\varepsilon G' \,(I - T' z^{-1})^{-1} Z' \tag{14} \]
\[ \Gamma_{YY}(z) = Z (I - Tz)^{-1} G \Sigma_\varepsilon G' \,(I - T' z^{-1})^{-1} Z' + H \Sigma_\varepsilon H'. \tag{15} \]
The transfer function of the model, $\Omega(z)$, is the key ingredient for a frequency-domain analysis of the filter. In the Applications section below, I explore transfer function gains for a semi-structural model of the output gap, which indicate the most relevant frequencies of the observed time series for the output gap estimate. The core of the analysis is in the time domain, and thus the weights are needed.
For all but very simple and small models, it is difficult to obtain an analytical description of the weights in (12) using the transfer function of the model. The weights can, however, always be obtained numerically. An inefficient but operational way would be to compute the inverse Fourier transform of (12). Koopman and Harvey (2003) provide a recursive way to calculate the time-varying weights in (13) for general state-space models, and a lucid paper by Gomez (2006) provides time-domain formulas to calculate the weights in (12).
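As a concrete illustration of the numerical route – a minimal sketch reusing the hypothetical gap_estimate function and the toy data from the earlier example – the linearity of the estimator means that filtering unit impulses recovers the time-varying weight matrices in (13) column by column:

```python
# Numerical filter weights by filtering unit impulses (reuses gap_estimate,
# T, y, and pi from the earlier sketch). Column j of W_y is the response of
# the whole gap path to a unit impulse in y_j; row t collects the weights
# the estimate x_{t|T} places on each observation, cf. (13).
import numpy as np

W_y = np.column_stack([gap_estimate(e, np.zeros(T)) for e in np.eye(T)])
W_pi = np.column_stack([gap_estimate(np.zeros(T), e) for e in np.eye(T)])
assert np.allclose(W_y @ y + W_pi @ pi, gap_estimate(y, pi))
```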
In particular, Gomez (2006) shows that for the model (9)–(10) the weights $\Omega_j$, adjusted to the model notation above, follow as
\[ \Omega_0 = P \,\big( Z' \Sigma^{-1} - L' R_{|\infty} K \big) \tag{16} \]
\[ \Omega_j = (I - P R_{|\infty})\, L^{-j-1} K, \qquad j < 0 \tag{17} \]
\[ \Omega_j = P \,(L')^{j} \big( Z' \Sigma^{-1} - L' R_{|\infty} K \big), \qquad j > 0, \tag{18} \]
where $L \equiv T - KZ$, $K$ denotes the steady-state Kalman gain, and $P$ is the steady-state solution for the state error covariance given by the standard discrete-time algebraic Riccati equation (DARE) associated with the steady-state solution of the Kalman filter. $R_{|\infty}$ is the solution to the Lyapunov equation $R_{|\infty} = L' R_{|\infty} L + Z' (Z P Z' + H \Sigma_\varepsilon H')^{-1} Z$, associated with the steady-state Kalman smoother solution. $R_{|\infty}$ is the steady-state variance of the process $r_{t|\infty}$ in the backward recursion $X_{t|\infty} = X_{t|t-1} + P\, r_{t|\infty}$, where in finite-data smoothing $r_{t-1}$ is a weighted sum of the innovations (prediction errors) coming after period $t-1$. Finally, $\Sigma = Z P Z' + H \Sigma_\varepsilon H'$.[6]

[5] When the parameters of the model are estimated using the data, the weights become, indirectly, a function of the particular dataset.
The relationship between the time-invariant weights of the filter and the time-varying weights used in the finite-sample implementation by the Kalman smoother is unique and is discussed below in the section devoted to real-time properties and news effects. The intuition behind the re-weighting is simple, though: the Kalman smoother implementation implicitly provides optimal linear forecasts and backcasts of the sample and applies the convergent time-invariant weights.
Practical Implementation of the Decomposition into Observables. To compute the observable decomposition, one can always calculate the weights using the Koopman and Harvey (2003) recursions and implement the moving-average calculations. That requires calculating and storing large objects, pre-programmed tools, or a bit of advanced knowledge of state-space modeling. Sometimes time constraints might prevent analysts from using these tools; having a shortcut is thus beneficial.

A particularly simple and accurate way is to view the Kalman smoother as a linear function of multiple inputs, denoted by $X = F(Y)$, where $X$ and $Y$ are $(T \times n)$ and $(T \times m)$ matrices. The great thing about this function is that it is linear. For stationary processes, the Kalman smoother provides the least-squares estimates of the form $X = \Omega Y$, where $\Omega$ is the matrix of time-varying filter coefficients. Trivially, for two different sets of observables, $\{Y^A, Y^B\}$, one obtains $X^A - X^B = \Omega\,(Y^A - Y^B)$. By an appropriate non-overlapping grouping of the differences in inputs, one can easily obtain the effects of a change in the measurements on all estimated unobservables and carry out the decomposition analysis. This method works for any model with two different sets of observables and a common initial state, unless the change in the initial state is treated as well. There is no need to know the values in $\Omega$; the whole decomposition of the deviations between the two estimates can be obtained by successive runs of the Kalman smoother with different inputs.
[6] The case of non-stationary models is more difficult, but for detectable and stabilizable models the Kalman filter/smoother converges to a steady state since, despite the infinite variance of the states, the distance of the state from its estimate is stationary with finite variance. In the case of the Wiener-Kolmogorov filter, the formulas apply if interpreted as a limit of the minimum mean-square estimator; see Gomez (2006) or Bell (1984).
The two data input structures can take many forms. The only requirement is that the structure of the observations in both datasets be identical. This setup is very feasible for data revision analysis and for exploring the effects of new observations, as shown below. The counter-factual dataset can also take the form of a steady state (balanced growth path), unconditional forecasts, etc., depending on the goals of the analysis. The decomposition into observables in this paper is consistent with datasets featuring missing observations and with the direct-observation implementation of linear restrictions, often used for imposing 'expert judgement'.
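A minimal sketch of this shortcut follows, reusing the hypothetical gap_estimate function and the toy data from the earlier example; the zero baseline is an assumption, and any counter-factual dataset would work the same way.

```python
# Decomposition into observables by successive filter runs: with a linear
# estimator and a zero baseline, the contributions of the two observables
# add up exactly to the total estimate (reuses gap_estimate, y, pi).
import numpy as np

contrib_y = gap_estimate(y, np.zeros_like(pi))    # part identified from output
contrib_pi = gap_estimate(np.zeros_like(y), pi)   # part identified from inflation
assert np.allclose(gap_estimate(y, pi), contrib_y + contrib_pi)
```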
2. Univariate filters – Band-Pass, Hodrick-Prescott, etc.
True, the decomposition into observables is not an issue in a univariate setting, as the only variable entering the output gap estimation is output itself. The analysis of real-time properties and news effects, however, follows the same principles as in the case of multivariate filters. A proper understanding of frequently used univariate filters, such as band-pass filters or the Hodrick-Prescott filter, is important, as these often form parts of multivariate models.
Univariate filters are specified either directly in terms of their weights in the time domain, directly in terms of their transfer function in the frequency domain, as a state-space model, or as a penalized least-squares problem – e.g. the Hodrick-Prescott filter or the exponential smoothing filter. Specified in any of these ways, they have a time-domain filter representation
\[ x_t = F(L)\, y_t = \sum_{i=-\infty}^{\infty} w_i\, y_{t+i}, \tag{19} \]
where $w_i$ are the weights of the filter. This fact is well known and is restated just for clarity and completeness. Univariate filters are usually discussed in terms of their spectral properties, implied by $F(z)$, but the weights of the filter are sometimes discussed as well; see Harvey and Trimbur (2008), among others.
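For a concrete finite-sample case – a minimal sketch, with the sample length and smoothing constant as illustrative assumptions – the Hodrick-Prescott trend solves $\min_{\tau} \|y - \tau\|^2 + \lambda \|D\tau\|^2$ with $D$ the second-difference operator, so the weights in (19) are available in closed form as rows of the smoother matrix:

```python
# Finite-sample HP filter as an explicit weight matrix: tau = W y with
# W = (I + lambda D'D)^{-1}, so the gap weights in (19) are rows of (I - W).
import numpy as np

T, hp_lambda = 101, 1600.0
D = np.zeros((T - 2, T))                  # second-difference operator
for t in range(T - 2):
    D[t, t:t + 3] = [1.0, -2.0, 1.0]
W = np.linalg.inv(np.eye(T) + hp_lambda * (D.T @ D))

mid_weights = (np.eye(T) - W)[T // 2]     # nearly symmetric, two-sided weights
end_weights = (np.eye(T) - W)[-1]         # one-sided weights: the end-point problem
```

Comparing the middle row with the last row shows directly how the weights become one-sided at the end of the sample, which is the source of the end-point revisions discussed below.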
Most contributions to the literature have focused on designing or testing univariate filters. They are concerned with (i) the approximation of the ideal band-pass filter or (ii) the revision properties of the filters as the sample size increases. The ideal band-pass filter, with a perfectly rectangular gain function, is often considered the natural benchmark against which univariate statistical filters are judged in terms of their 'sharpness' – i.e. leakage, the strength of the Gibbs effect, or the ease of finite-sample implementation.
The class of linear filters is large, including a variety of band-pass and high-pass filters. Apart from the Hodrick-Prescott/Leser filter, the Butterworth filters analyzed in Gomez (2001), the rational square wave filters suggested by Pollock (2000), and even multiresolution Haar scaling filters fit the representation (19).
Given a possibly infinite filter $F(z)$, the revision properties depend on the quality of its finite-sample approximation, which in turn depends crucially on the data-generating process of the data; see Christiano and Fitzgerald (2003) or Schleicher (2003). The revision properties and the optimal finite-sample implementation are discussed below, since univariate filters are often part of semi-structural multivariate models or of production function estimates of the output gap.
3. Structural VARs
Structural VARs have a natural moving-average, linear filter representation. It therefore seems very desirable to express potential output and the output gap as a linear combination of the data inputs. A thorough analysis of the contributions of the observed series and of the frequency transfer function of SVAR estimates is crucial, as these estimates often are outliers in comparison with other methods; see McNellis and Bagsic (2007), Cayen and van Norden (2005), or Scott (2000), among others. Although SVAR models may seem to be used less frequently for output gap estimation, they certainly belong in the toolbox of many central banks and applied economists.
Assume that an estimated reduced-form VAR model of order $p$ is available, that is,
\[ A(L)\, Y_t = \Big( I - \sum_{j=1}^{p} A_j L^j \Big) Y_t = \varepsilon_t, \qquad E[\varepsilon_t \varepsilon_t'] = \Sigma, \tag{20} \]
where the residuals $\varepsilon_t$ (reduced-form shocks) are linked to 'structural' shocks $\eta_t$ via an invertible transformation, $\varepsilon_t = Q \eta_t$. I assume that the dimension of $Y_t$ is $n$. The identification often imposes long-run restrictions following Blanchard and Quah (1989) to tell apart the transitory and permanent components of output; see e.g. Claus (1999). The structural VAR model is then expressed as $Y_t = B(L) Q Q^{-1} \varepsilon_t = S(L)\, \eta_t$, with $B(L) = A(L)^{-1}$.
Assume that the $j$-th component of the data vector $Y_t$ is GDP growth, $Y_{j,t} = \Delta y_t$; then
\[ \Delta y_t = S_{11}(L)\,\eta_{1,t} + S_{12}(L)\,\eta_{2,t} + \cdots + S_{1n}(L)\,\eta_{n,t} \tag{21} \]
and the output gap, $x_t$, is the part of GDP not affected by permanent shocks,
\[ x_t = S_{12}(L)\,\eta_{2,t} + \cdots + S_{1n}(L)\,\eta_{n,t}, \tag{22} \]
assuming $\sum_{k=0}^{\infty} S_{12}(k)\,\eta_{2,t-k} + \cdots + \sum_{k=0}^{\infty} S_{1n}(k)\,\eta_{n,t-k} = 0$. The structural shocks are estimated from the reduced-form VAR residuals using $\eta_t = Q^{-1}\varepsilon_t = Q^{-1}A(L)Y_t$. One can thus recover the estimated output gap as a function of the observations.
The identification scheme itself can be very case-specific, yet it is clear that for SVAR estimates of the output gap the concurrent and final estimates coincide, unless an extended sample is used for parameter re-estimation. Investigating the spectral properties of $S(z)$ is advisable, since ex ante it is not clear which frequencies of the observed series are used for the estimation, and SVAR estimates often stand out as outliers. The decomposition of the output gap into observables can be done using the expressions above, where the output gap is a function of the structural shocks, which are themselves a function of the observed data.
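A compact sketch of the Blanchard-Quah long-run identification follows, in a bivariate VAR(1); the placeholder data, the lag order, and the variable ordering are assumptions for illustration only.

```python
# Blanchard-Quah long-run identification in a bivariate VAR(1), then the
# output gap as the accumulated transitory part of output growth, cf. (22).
import numpy as np
from numpy.linalg import inv, cholesky, matrix_power, lstsq

rng = np.random.default_rng(1)
data = rng.standard_normal((400, 2))      # placeholder for [d(log GDP), u]

Y, X = data[1:], data[:-1]
A1 = lstsq(X, Y, rcond=None)[0].T         # VAR(1) coefficient matrix
eps = Y - X @ A1.T                        # reduced-form residuals
Sigma = np.cov(eps.T)

C1 = inv(np.eye(2) - A1)                  # long-run MA matrix, C(1) = A(1)^{-1}
S1 = cholesky(C1 @ Sigma @ C1.T)          # lower-triangular long-run impact S(1)
Q = inv(C1) @ S1                          # eps_t = Q eta_t
eta = eps @ inv(Q).T                      # structural shocks

h = len(eta)
gap_growth = np.zeros(h)                  # transitory component of output growth
for k in range(h):
    Sk = matrix_power(A1, k) @ Q          # impulse responses S(k) = A1^k Q
    gap_growth[k:] += Sk[0, 1] * eta[:h - k, 1]
gap = np.cumsum(gap_growth)               # output-gap estimate in levels
```

The lower-triangular long-run impact matrix imposes that the second shock has no permanent effect on output, so its accumulated contribution to output growth yields the gap estimate.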
4. Production Function Approach
Even the production function (PF) approach can often be expressed as a filtering scheme. Its real-time revision properties then crucially depend on the filter representation, as in the case of the other methods. Many practitioners are perhaps aware that the production function approach depends very much on the underlying filters used in the various steps of the method. This section provides an explicit formulation of the production function output gap estimate as a linear filter, along with its structure and decomposition into observables.
Assume that value added is produced using a Cobb-Douglas production function. Denoting the logarithms of individual variables by lower-case letters, one gets
\[ y_t = a_t + (1-\alpha)\, k_t + \alpha\, l_t. \tag{23} \]
Here $a_t$ is the 'Solow residual' and $k_t$ and $l_t$ denote the actual levels of the capital stock and hours worked in the economy. It is common for a trend total factor productivity, $a^{*}_t$, to be identified from the Solow residual using some smoothing procedure, often a variant of the Hodrick-Prescott or another symmetric moving-average filter. Denoting the smoothing filter by $A(L)$, it is clear that
\[ a^{*}_t = A(L)\, a_t = A(L)\, y_t - (1-\alpha) A(L)\, k_t - \alpha A(L)\, l_t. \tag{24} \]
The next step is usually the determination of an 'equilibrium' or trend level of hours worked, or employment, and of the capital stock. I will consider only the estimate of equilibrium employment, which is often cast as a filtering problem for the NAIRU, frequently in terms of inflation, capacity utilization, or other variables. Importantly, this sub-problem is most often a linear filter.
With only a little loss of generality, I assume that the equilibrium employment is given by the trend component of employment, obtained using a univariate filter as $l^{*}_t = E(L)\, l_t$. The output gap, $x_t$, can then be expressed as
\[ x_t = y_t - y^{*}_t = y_t - \big( a^{*}_t + (1-\alpha)\, k^{*}_t + \alpha\, l^{*}_t \big) \tag{25} \]
\[ \quad\;\: = \big(1 - A(L)\big)\, y_t - (1-\alpha)\big(1 - A(L)\big)\, k_t - \alpha\,\big(E(L) - A(L)\big)\, l_t \tag{26} \]
\[ \quad\;\: = \sum_{i=-\infty}^{\infty} w_{y,i}\, y_{t+i} + \sum_{i=-\infty}^{\infty} w_{l,i}\, l_{t+i} + \sum_{i=-\infty}^{\infty} w_{k,i}\, k_{t+i}, \tag{27} \]
which is a version of a multivariate linear filter with three observables, or signals.[7] It is interesting that in this simple case, if the trend components of the Solow residual and of employment are obtained using the same procedure, then $E(L) = A(L)$ and the contribution of the observed employment data is eliminated. Further, given that the capital stock usually has little variance at business-cycle frequencies – it is a slow-moving variable – its 'gaps' tend to be small. The smaller they become, the more the production function approach to output gap estimation approaches a simple univariate filter estimate of the output gap.[8]

In the case where $A(L)$ and $K(L)$ are transfer functions of the Hodrick-Prescott filter, which is quite usual, the production function approach results tend to be quite similar to HP filter estimates and suffer from most of the problems usually associated with the HP filter approach; see Epstein and Macciarelli (2010) as an example.
When the production function approach estimate can feasibly be expressed as a linear function of its inputs (e.g. output, labor, the capital stock, etc.), providing such a decomposition is highly desirable. A finite-sample version of (25) is easy to obtain as long as the process is linear. The production function estimates are often decomposed into the contributions of total factor productivity, equilibrium employment, and the capital stock, which are not all directly observable.

[7] In the case of an optimal finite-sample implementation of the filter, the filter will be time-varying, e.g. $A(z) = A_t(z)$. A decomposition into observables in a finite sample is equally simple.

[8] One can consider more involved procedures, but the principle remains the same. Assume that employment is determined by the working-age population, the participation rate, and the employment rate, $l_t = pop_t + pr_t + er_t$. If the 'equilibrium' levels of the employment rate, that is $(1 - nairu_t)$, and of the participation rate are determined by time-invariant filters, the equilibrium employment can be expressed as $l^{*}_t = pop_t + P(L)\, pr_t + E(L)\, er_t$, and one can proceed by substituting these expressions into the production function as in the simpler case.
Incorporating any non-causal filter into the production function calculations affects their real-time, revision properties. The linear filter analysis of revision properties and news effects thus also applies to the production function approach to potential output estimation.
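A minimal sketch of equations (25)–(27) follows; the labor share, the smoothing constants, and the toy series are illustrative assumptions. It verifies numerically that the direct production-function computation and the filter form coincide.

```python
# Production-function gap as a multivariate linear filter, eqs. (25)-(27).
import numpy as np

def hp_matrix(T, hp_lambda):
    """Finite-sample HP smoother matrix (I + lambda D'D)^{-1}."""
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    return np.linalg.inv(np.eye(T) + hp_lambda * (D.T @ D))

T, alpha = 160, 0.35
rng = np.random.default_rng(2)
y = np.cumsum(0.5 + rng.standard_normal(T))          # log value added (toy)
k = np.cumsum(0.3 + 0.1 * rng.standard_normal(T))    # log capital stock (toy)
l = rng.standard_normal(T)                           # log hours worked (toy)

A = hp_matrix(T, 1600.0)    # smoother for the Solow residual, A(L)
E = hp_matrix(T, 6400.0)    # smoother for trend employment, E(L)
I = np.eye(T)

# Direct route (25), taking k* = k as in (26):
a_star = A @ (y - (1 - alpha) * k - alpha * l)
gap_direct = y - (a_star + (1 - alpha) * k + alpha * (E @ l))

# Filter route (26): explicit weights on each observable, as in (27).
gap_filter = (I - A) @ y - (1 - alpha) * (I - A) @ k - alpha * (E - A) @ l
assert np.allclose(gap_direct, gap_filter)
```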
B. Analyzing Revision Properties – Data Revisions and News Effects
Revision properties of output gap estimators are often among the crucial criteria for evaluating a particular method. Policymakers need accurate information for their decisions, and revisions of output gap estimates increase the uncertainty associated with policy implementation. Output gap estimates get revised due to (i) historical data revisions and (ii) new information – new data that become available and affect the interpretation of past estimates. Revisions of the output gap need not be a bad thing per se, but excessively unreliable real-time estimates may lead to large policy errors or render the concept of the output gap irrelevant.
There are many contributions to the literature pointing out the real-time unreliability of many output gap estimation methods and offering various remedies; see e.g. Cayen and van Norden (2005) or Orphanides and van Norden (2002). The contribution of this section is both conceptual and practical. Conceptually, it is crucial to thoroughly understand the sources of revisions and real-time unreliability; here the linear filter framework is the most natural approach. From a practical point of view, the decomposition of revisions into the contributions of new data is a useful analytical result, which allows researchers to assess the informativeness of the available observations.
The linear filter representation allows an analysis of revisions in a very tractable way for all the estimators considered. Recall that, with a proper finite-sample implementation of the filters, the penalized least-squares, Kalman filter/smoother, and Wiener-Kolmogorov approaches yield equivalent results.
The present analysis can be used both for (i) data revisions and (ii) news effects – the arrival of new information. It should not come as a surprise that the key concept for the analysis is
\[ X_{t|T} = \sum_{j=t_0}^{T} \Omega_{j|t}\, Y_j, \tag{28} \]
a finite-sample implementation of $\Omega(z)$. Data revisions are discussed first, followed by a discussion of news effects, where the characteristics of the process $Y_t$ are crucial for the optimal finite-sample implementation of $\Omega(z)$.
1. Decomposing Effects of Data Revision
By data revisions, only revisions of past data releases should be understood; most often these concern GDP data. The treatment of data revisions is very simple as long as the filtering problem is applied to the same sample and the same set of observed data series – revised and old, say $Y^A$ and $Y^B$. The revision and its decomposition into factors is then given by
\[ R_{t|T} = X^{A}_{t|T} - X^{B}_{t|T} = \sum_{\tau=t_0}^{T} \Omega_{\tau|t}\, \big[ Y^{A}_{\tau} - Y^{B}_{\tau} \big]. \tag{29} \]
Again, it is quite useful to think of (29) in stacked form. For the Kalman filter, which is a solution to a least-squares problem, one has $X = \Omega Y$ and thus, trivially, $X^A - X^B = \Omega\,(Y^A - Y^B)$. The only requirement is to keep the structure of $\Omega$ fixed, for which an identical structure of observations (cross-section and time) is needed. This stacked, matrix representation of the filter is also useful for investigating news effects and, after a simple modification, the treatment of missing variables and filter tunes (linear constraints); see below.
The data revision decomposition is useful not just for output gap estimates but also for understanding revised estimates of technology, preference, and other structural shocks in a forecasting framework based on a DSGE model; see Andrle and others (2009b). For instance, the interpretation of inflationary pressures in the economy changes when domestic absorption is revised, even in the context of unchanged estimates of CPI inflation.
2. News Effects and End-Point Bias
Implementation of a doubly-infinite, non-causal filter Ω(z) is problematic, since all economic
applications feature finite samples. Newly available data lead to some revision of past
estimates of the output gap or other unobserved variables. Even in the case of an optimal finite
sample approximation of the infinite filter, increasing the sample size leads to revisions.
All non-causal filters suffer from the finite-sample problem, and state-space models using the
Kalman smoother are no exception to the rule. The statement by Proietti and Musso (2007) that
their state-space signal extraction techniques 'do not suffer from what is often referred to as
end-of-sample bias' is thus incorrect.
Still, the doubly-infinite sample formulation of the problem is the best starting point for
the analysis of news effects, restated here for convenience:

$$X_{t|\infty} = \sum_{i=-\infty}^{\infty} \Omega_i\, Y_{t+i}. \qquad (30)$$
The revision due to the availability of new observations after period T (chronologically only)
is then

$$N_{t|T} = \sum_{j=T}^{\infty} \Omega_j\,\big[Y_j - Y_{j|T}\big], \qquad (31)$$

which is just a weighted average of prediction errors, conditioned on the information set up to
period T, given the constant-weights filter. See Pierce (1980) for a discussion of revision
variance in time series models. The population revision variance can be computed given the
filter and the data-generating process, which determines the prediction errors.
Intuitively, (i) the smaller the weight on future observations and (ii) the better the
predictions of future values, the smaller the revision variance. The discussion below takes the
model as given and does not offer advice on how to design a better filter with different
weights.
All output gap estimation methods considered in the paper adopt a solution to the infinite-
sample problem, either an explicit or an implicit one. Implicitly, all provide forecasts and
backcasts for the actual sample, if needed. A simple truncation of the filter weights can be
interpreted as zero-mean forecasts, which would be suitable only for an uncorrelated zero-
mean stationary process. The optimal finite sample implementation of the filter is a solution
to the following approximation problem:
$$\min_{\Omega_{t,j},\; j\in[t_0,\dots,T]} \; \big\| X_{t|\infty} - X_{t|T} \big\|^2 \qquad (32)$$
$$= \Big\| \sum_{j=-\infty}^{\infty} \Omega_j\, Y_j \;-\; \sum_{j=t_0}^{T} \Omega_{j|t}\, Y_j \Big\|^2 \qquad (33)$$
$$= \int_{-\pi}^{\pi} \big| \Omega(e^{-i\omega}) - \Omega_t(e^{-i\omega}) \big|^2\, S_Y(\omega)\, d\omega, \qquad (34)$$
see Koopmans (1974), Christiano and Fitzgerald (2003), or Schleicher (2003) for details. The
solution delivers a sequence of filters $\Omega_t(z)$ whose weights are, depending on the
position t in the sample, those of the time-invariant filter Ω(z), adjusted by a factor derived
from the auto-covariance function of the data.9
Importantly, the solution to the problem in (32) is equivalent to applying the time-invariant
infinite filter to the available sample padded with j-step-ahead forecasts and backcasts, with
j chosen such that the filter weights have converged. The forecasts are uniquely pinned down
by the auto-covariance function of the data-generating process. Hence, a heuristic solution
adopted by practitioners is actually the optimal one.10
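The padding argument is easy to demonstrate numerically. The sketch below applies the finite-sample HP/Leser solution to a sample extended with backcasts and forecasts; a naive random-walk-with-drift extension is used purely for illustration, whereas the optimal extension would come from the series' own auto-covariance (ARIMA) structure, as the text explains. All function names and values are illustrative:

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Finite-sample HP/Leser trend via penalized least squares:
    solve (I + lam * D'D) trend = y, with D the second-difference matrix."""
    T = len(y)
    D = np.diff(np.eye(T), n=2, axis=0)
    return np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

def hp_trend_padded(y, lam=1600.0, h=40):
    """Same filter applied to a sample padded with naive backcasts/forecasts."""
    drift = (y[-1] - y[0]) / (len(y) - 1)
    fore = y[-1] + drift * np.arange(1, h + 1)   # naive forecasts
    back = y[0] - drift * np.arange(h, 0, -1)    # naive backcasts
    ext = np.concatenate([back, y, fore])
    return hp_trend(ext, lam)[h:-h]              # keep only the original span
```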
State-space models with the Kalman smoother and penalized least-squares formulations
implicitly provide the optimal finite sample approximation to Ω(z) by re-weighting the
time-invariant weights. This is equivalent to providing backcasts and forecasts and padding the
sample. The crucial point is that the forecasts are based exclusively on the covariance
generating function of the underlying model, which may not always represent well the covariance
structure of the data-generating process. A mismatch between the model and the data results
in poor forecasting properties. When the weights of the filter are spread out over many periods,
poor forecasting properties translate into revision variance and the so-called end-point bias.
It is often the case that the filter puts a large weight on observations at the end of the
sample. This is simply a consequence of the fact that, for most models, the forecasting formula
puts a larger weight on the most recent observations. When a doubly-infinite filter is applied
to a padded sample, the end-of-sample observations are implicitly counted many times due to
the chain rule of projections, effectively receiving a larger weight.
Example Consider the Hodrick-Prescott/Leser filter. The filter is often mentioned for its poor
revision properties, or its large 'end-point' bias. Unless one provides a factorization of the
filter representation, like Kaiser and Maravall (1999), the filter takes many
9 It is a projection problem, matching the auto-covariance of $X_{t|T}$ and $X_{t|\infty}$ as
closely as possible. For stationary processes, the Toeplitz structure of the autocovariance
generating matrix allows for an efficient recursive implementation. The system of equations is
not specified in full, as the paper focuses on the equivalence with forecasts and backcasts.
10 Sometimes there are ways to make the implementation more robust. For instance, Kaiser and
Maravall (1999) factor the filter Ω(z) = A(z)A(z⁻¹) and use the Burman-Wilson algorithm to
implement a two-pass estimate of the HP filter, which requires only four periods of
back/fore-casts to implement the infinite-order filter, using an ARIMA model for the time
series at hand. The same principle is an element of the X12-ARIMA seasonal adjustment
procedure, for instance. By lowering the number of predictions needed, the procedure is
simplified and made more robust.
periods to converge – more than 20 data points on both sides, see Fig. 1. Viewing the HP/Leser
filter as a desirable filter (a smooth-transition low-pass filter), the optimal finite sample
implementation then suggests re-weighting the filter weights by the auto-covariance function of
the data or, equivalently, extending the sample with best linear backcasts and forecasts.
Viewing the HP/Leser filter as a model, it is clear that output is assumed to follow an
ARIMA(0,2,2) process, where the output gap is uncorrelated white noise and potential output
growth follows a random walk. Based on economic theory and econometric analysis, such a model
is highly implausible as the data-generating process for any country's data. Yet this is
exactly the model that a state-space formulation (via the Kalman smoother) and the penalized
least-squares formulation would use, yielding equivalent results.
Practical News Effect Decomposition for State-Space Models New observations create
‘news effects’ only if they carry some new information, information not predictable from the
past data. The representation (31) gives a simple and practical way of calculating and decom-
posing news effects into components of newly observed data.
The unavailable (or missing) data estimates are simply expected values conditional on the
original information set. Padding (filling in) the sample data with these estimates does not
change anything, since the information set is identical and there is no new information.
The problem of different sample sizes is easy to convert into a problem of identical sample
sizes by padding the data with a model-based forecast and using (29) to carry out the
decomposition simply by successive runs of the Kalman smoother.11 The easiest way to see that
'padding' the data with conditional expectations or projections does not change the estimates
is to consider the structure of the Kalman filter updating step:
$X_{t+1|t+1} = T X_{t|t} + K\,(Y_{t+1} - Y_{t+1|t})$. In the case of data padded by forecasts,
the prediction error (the information) is zero. The information sets of the original and padded
data sets are identical.
This simple and practical approach enables the analyst to investigate a judgement-free fore-
cast of the model and contrast it to actual data. The prediction error is then distributed into the
revision of the past unobserved shocks, see e.g. Andrle and others (2009b) for examples using
11 The implementation is simple, requires very little coding, and allows the analyst to use a
standard, existing Kalman filter routine. In comparison with computing the weights explicitly,
the approach is also usually faster, and it is easy to code the decompositions for flexible
groupings of variables, etc. In the case of non-stationary models, the situation is a little
more involved, depending on the treatment of initial conditions, though the main principles
introduced above hold. Further, using explicit time-varying weights, as in Koopman and Harvey
(2003), works whenever the Kalman smoother is applicable.
a DSGE model. The decomposition easily accounts for judgement imposed on the filter using
a dummy-observations approach.
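The successive-runs idea from footnote 11 admits a very compact implementation. The following hedged sketch assumes a linear estimator `estimate_gap` mapping a T x N data matrix to an output gap path (e.g. a wrapper around any Kalman smoother routine); `Y_padded` is the old sample padded with model forecasts and `Y_new` the sample with actual releases. All names are illustrative assumptions, not an existing API:

```python
import numpy as np

def news_decomposition(estimate_gap, Y_padded, Y_new):
    """Decompose the news effect by switching on one series' news at a time.
    For a linear filter, the per-series contributions sum exactly to the
    total revision between the padded and the updated information sets."""
    base = estimate_gap(Y_padded)            # estimate on the old information set
    T, N = Y_padded.shape
    contrib = np.zeros((T, N))
    for i in range(N):                       # replace padding with series i's news
        Yi = Y_padded.copy()
        Yi[:, i] = Y_new[:, i]
        contrib[:, i] = estimate_gap(Yi) - base
    total = estimate_gap(Y_new) - base       # equals contrib.sum(axis=1) if linear
    return contrib, total
```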
III. Applications – Decomposition into Observables & News Effects
This section demonstrates applications of the methods discussed in the first part of the paper. A
simple extension of the Hodrick-Prescott/Leser filter is followed by an illustration of how to
decompose the output gap into observables using semi-structural and structural DSGE mod-
els. Both applications thus use a state-space representation of the model.
A. Variants of Hodrick-Prescott/Leser Filter and Local Linear Trend Models
The Hodrick-Prescott/Leser filter, see Leser (1961) and Hodrick and Prescott (1997), is an
undeniably popular method to estimate the output gap. Most economists either love it or hate
it. Due to its important role in applied work and in the development of many multivariate
models, I discuss the filter in a little more detail, despite its univariate nature. However,
the focus will be mostly on issues not dealt with in the literature and relevant for the
analysis in this paper – most importantly the assumption of steady-state growth of output.12
(a) Hodrick-Prescott/Leser Filter An often-used specification of the Hodrick-Prescott
filter13 is the penalized least squares (PLS) form

$$\min_{\{\bar{y}_t\}_{t=-\infty}^{\infty}} \;\sum_{t=-\infty}^{\infty} (y_t - \bar{y}_t)^2 \;+\; \lambda \sum_{t=-\infty}^{\infty} \big[(\bar{y}_t - \bar{y}_{t-1}) - (\bar{y}_{t-1} - \bar{y}_{t-2})\big]^2. \qquad (35)$$
It is easy to see, e.g. King and Rebelo (1993), that the doubly-infinite sample model (35)
implies a reduced form ARIMA(0,2,2) model for yt.
12This paper only scratches the surface of all properties of the HP filter, see Kaiser and Maravall (2001) for
many details from a frequency domain and filtering point of view.
13 See the original paper by Leser (1961) for exactly the same idea. Variants of the filter
have been around in the engineering community since the 1940s.
The output gap estimate, $x_t$, can then be formulated as a linear time-invariant filter with
transfer function C(L) and weights $w_{x,k}$, given by

$$x_t = C(L)\, y_t = \frac{\lambda(1-L)^2(1-L^{-1})^2}{1+\lambda(1-L)^2(1-L^{-1})^2}\; y_t = \sum_{k=-\infty}^{\infty} w_{x,k}\, y_{t-k}, \qquad (36)$$

where L denotes the lag operator, $L y_t = y_{t-1}$. For details, see King and Rebelo (1993),
inter alios.
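As a small numerical illustration (a sketch, not part of the original analysis), the weights $w_{x,k}$ can be recovered from (36) by sampling the transfer function on the unit circle and taking an inverse FFT; the grid size is an arbitrary illustrative choice:

```python
import numpy as np

# Recover the time-domain weights w_{x,k} in (36) numerically.
n, lam = 4096, 1600.0
omega = 2 * np.pi * np.fft.fftfreq(n)           # frequency grid on [-pi, pi)
z = np.exp(-1j * omega)                          # z = e^{-i omega}
num = lam * (1 - z) ** 2 * (1 - 1 / z) ** 2      # equals lam * |1 - z|^4, real-valued
C = num / (1 + num)                              # transfer function C(z) in (36)
w = np.real(np.fft.ifft(C))                      # w[k] = w_{x,k}; symmetric in k
```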
In terms of unobserved components (UC) models, as also mentioned in Hodrick and Prescott
(1997), the infinite-sample version of the Hodrick-Prescott filter can be rewritten intuitively
as

$$y_t = \bar{y}_t + x_t \qquad (37)$$
$$x_t = \varepsilon^x_t, \qquad \varepsilon^x_t \sim N(0,\sigma^2_x) \qquad (38)$$
$$\bar{y}_t - \bar{y}_{t-1} = \bar{y}_{t-1} - \bar{y}_{t-2} + \varepsilon^g_t, \qquad \varepsilon^g_t \sim N(0,\sigma^2_g), \qquad (39)$$
which clearly provides a model-based interpretation of the HP filter. The output gap is assumed
to be non-persistent random noise and potential output growth is assumed to follow a random
walk. As is well known, the signal-to-noise ratio is $\sigma_g/\sigma_x = 1/\sqrt{\lambda}$.
The state-space representation of the HP filter or its modifications and extensions can easily
be written in a stationary form, where only the growth rates of output are observed. One simply
defines $\Delta y_t = \Delta x_t + g_t$, where $g_t = g_{t-1} + \varepsilon^g_t$, coupled with
the simple identity $x_t - x_{t-1} = \Delta x_t$. A stationary state-space form is easily
initialized with the unconditional mean and variance of the model, avoiding the need to deal
with the many ways of initializing non-stationary models using variants of a diffuse Kalman
filter/smoother. The weights of the HP filter that operates on growth rates of GDP can be
obtained using the integration filter transfer function, i.e. $x_t = [C(L)/(1-L)]\,\Delta y_t$
in terms of (36).
(b) Modified Hodrick-Prescott Filter A simple but useful modification of the HP filter is
the incorporation of more realistic processes for the output gap and for the potential output.
The literature is rich in these extensions, see e.g. Proietti (2009).
I will consider only a simple extension, useful for a better understanding of the frequent
treatment of potential output in semi-structural models used for policy analysis, e.g. the
multivariate filters of Benes and N'Diaye (2004) and Benes and others (2010) or the
semi-structural forward-looking models of Carabenciov and others (2008) or Andrle and others
(2009a). This representation is a common building block of more complex multivariate filters.14
The least complex model assumes that the output gap is a simple stationary AR(1) process
and that potential output is subject to (i) level and (ii) growth rate shocks. Importantly,
potential output growth is a mean-reverting, persistent process, not a random walk.
For many economies, the assumption of mean-reverting potential output growth is quite
plausible, and it is this feature of the model that stands behind its improved revision
properties. This is an important aspect of the estimates, though most of the statistical or
econometric literature, e.g. Proietti (2009), does not work with explicit steady states. Steady
states are used neither in the SVAR literature nor in the early literature on multivariate
filters, e.g. Laxton and Tetlow (1992) or Conway and Hunt (1997), but they are present in Benes
and N'Diaye (2004), for instance.
The model, then, is as follows:

$$y_t = \bar{y}_t + x_t \qquad (40)$$
$$x_t = \rho_x\, x_{t-1} + \varepsilon^x_t, \qquad \varepsilon^x_t \sim N(0,\sigma_x) \qquad (41)$$
$$\bar{y}_t = \bar{y}_{t-1} + g_t + \varepsilon^{\bar{y}}_t, \qquad \varepsilon^{\bar{y}}_t \sim N(0,\sigma_{\bar{y}}) \qquad (42)$$
$$g_t = \rho_g\, g_{t-1} + (1-\rho_g)\, g_{ss} + \varepsilon^g_t, \qquad \varepsilon^g_t \sim N(0,\sigma_g). \qquad (43)$$
Given this data-generating process for the GDP of a particular country, it is obviously
possible, if of interest, to design a parametrization of a modified Hodrick-Prescott filter
that keeps its gain function as close to that of the HP filter as possible but lowers the
revision variance.
(c) Example: US output gap and revision properties As a simple example, I parametrize
the modified HP filter as $\rho_1 = 0.70$, $\rho_g = 0.95$, $\sigma_{\bar{y}} = 0$,
$\sigma_x = 1/(1-\rho_1)$ and $\sigma_g = \sqrt{1/\lambda}\times[1/(1-\rho_g)]$, with
$\lambda = 1600$, and apply this simple heuristic model to the US output gap for the sample
1967:1–2010:3. The steady-state growth of potential output is assumed to be 2 percent. Fig. 1
demonstrates the difference between the estimated output gaps and the time-invariant weights
implied by the two filters. There cannot be any claim of optimality for this filter; far from
that, it is only
14 I work mainly with a state-space representation but, as explained above, penalized least
squares, state-space, and linear filter methods are equivalent. The early literature on
multivariate filters for output gap estimation, e.g. Laxton and Tetlow (1992), Conway and Hunt
(1997) or de Brouwer (1998), seems to contrast penalized least squares problems and 'unobserved
components' as different methods, often comparing markedly different model specifications,
e.g. a white noise versus an AR(2) process for the output gap.
a demonstration. Revision properties are judged by the standard deviation of the difference
between the final output gap estimate and the real-time estimates. For the standard HP filter,
the standard deviation of the revision process is 1.489, whereas for the modified filter it is
0.858, which is essentially only 57.6 percent of the former. In this case, both the filter
weights and the forecasting properties contribute to the result.
As is well known, the standard Hodrick-Prescott/Leser filter, without any priors or
modifications, is ill-suited for real-time estimation of the output gap or any cyclical
features of the data, due to its very poor revision properties. This is well known, but the
reason is often poorly understood.
Figure 1. HP filter vs. Modified HP filter – estimate & weights
[Left panel: estimated output gaps, 1966:1–2010:3. Right panel: time-invariant filter weights at lags −20 to +20.]
B. Output Gap Estimation using a Multivariate Semi-Structural Filter
This part of the paper discusses the results of a state-of-the-art multivariate model (filter)
for output gap estimation for the US economy, developed by Benes and others (2010).15 This
15The replication materials are publicly available and can be freely downloaded from
www.douglaslaxton.com
particular model structure has been and is being used in policy analysis in many instances,
see Cheng (2011), Scott and Weber (2011) or Babihuga (2011), among others. This paper
suggests some additional angles for viewing the model properties that practitioners could use
to gain more insight into the model as a filtering device, alongside its economic structure.
I analyze how the application of the model to the US economy makes use of the observed
data, and I provide an elementary frequency domain analysis of the implied filter. The model
features an exceptionally small revision variance, together with very good forecasting
properties, see Benes and others (2010), hence these will not be discussed in detail.
The model is specified by equations (44)–(55). The authors formulate a simple backward-looking
output gap equation, a Phillips curve, Okun's law, and also use the capacity utilization
series as an additional measurement of the cyclical signal in the economy. A deviation
of year-over-year inflation from long-term inflation expectations contributes to the output
gap negatively, as a supply shock and an implicit tightening of the monetary policy stance. On
the other hand, a positive output gap increases inflation due to excess demand pressures.
The output gap, $y_t$, is linked to the capacity utilisation gap, $c_t$, and the unemployment
gap, $u_t$, via a simple measurement relationship and Okun's law, in contrast to the somewhat
more structural relationship between the output gap and the Phillips curve. Capacity
utilisation, unemployment, and GDP feature a trend-component specification essentially
identical to (40)–(43). An interesting aspect of the model is that year-over-year inflation,
$\pi^4_t$, follows a unit root process, though it is anchored by a long-term inflation
expectations process, $\pi^{4,LTE}_t$.16 The authors consider the model to be a simple and
pragmatic way to obtain a measure of the output gap that has outstanding revision properties
and is thus suitable for real-time policy making.17
16 The specification of inflation in year-over-year terms also has structural implications.
First, the year-over-year filter $(1-L^4)$ attenuates high and seasonal frequencies, as
high-frequency noise is hardly expected to be related to the output gap. Second, the filter
implies a phase delay of around 5.5 months, since it essentially is a one-sided geometric
moving average. Inflation developments thus propagate only very gradually to the output gap.
17 The authors also suggest that more complex, forward-looking models are the subject of their
further research.
The model is as follows:

$$y_t = Y_t - \bar{Y}_t, \qquad c_t = C_t - \bar{C}_t, \qquad u_t = \bar{U}_t - U_t \qquad (44)$$
$$y_t = \rho\, y_{t-1} - \tilde{\rho}_2\,(\pi^4_{t-1} - \pi^{4,LTE}_{t-1}) + \varepsilon^y_t \qquad (45)$$
$$\pi^4_t = \pi^4_{t-1} + \beta\, y_t + \Omega\,(y_t - y_{t-1}) + \varepsilon^{\pi 4}_t \qquad (46)$$
$$u_t = \phi_1\, u_{t-1} + \phi_2\, y_t + \varepsilon^u_t \qquad (47)$$
$$c_t = \kappa_1\, u_{t-1} + \kappa_2\, y_t + \varepsilon^c_t \qquad (48)$$
$$\pi^{4,LTE}_t = \pi^{4,LTE}_{t-1} + \varepsilon^{\pi 4 LTE}_t \qquad (49)$$
$$\bar{Y}_t = \bar{Y}_{t-1} + G^{\bar{Y}}_t/4 + \theta\,(\bar{U} - \bar{U}_{t-1}) - (1-\theta)\,(\bar{U} - \bar{U}_{t-20})/19 + \varepsilon^{\bar{Y}}_t \qquad (50)$$
$$G^{\bar{Y}}_t = \tau\, G^{\bar{Y}}_{SS} + (1-\tau)\, G^{\bar{Y}}_{t-1} + \varepsilon^{G\bar{Y}}_t \qquad (51)$$
$$\bar{U}_t = \bar{U}_{t-1} + G^{\bar{U}}_t - \tilde{\omega}\, y_{t-1} - \tilde{\lambda}\,(\bar{U}_{t-1} - U_{SS}) + \varepsilon^{\bar{U}}_t \qquad (52)$$
$$G^{\bar{U}}_t = (1-\alpha)\, G^{\bar{U}}_{t-1} + \varepsilon^{G\bar{U}}_t \qquad (53)$$
$$\bar{C}_t = \bar{C}_{t-1} + G^{\bar{C}}_t + \varepsilon^{\bar{C}}_t \qquad (54)$$
$$G^{\bar{C}}_t = (1-\delta)\, G^{\bar{C}}_{t-1} + \varepsilon^{G\bar{C}}_t, \qquad (55)$$
where all innovation processes $\varepsilon^i_t$ are uncorrelated and follow a Gaussian
distribution with zero mean and variances specified in Benes and others (2010). Obviously, the
standard deviations of all innovations are a crucial part of the model's transfer function,
even though the impulse-response functions remain unaffected.
In terms of the shock decomposition, the model's output gap can be expressed only as a
function of its own output gap innovations and innovations to inflation or inflation
expectations; the decomposition is thus not very interesting, though it cannot be omitted from
the analysis. A more interesting and non-standard analysis uses the filter representation and
provides the decomposition into observables. Such an analysis is given below.
1. Decomposition into Observables
An interesting question is how individual observables (GDP, y/y inflation, capacity
utilization, or unemployment) contribute to the final estimate of the output gap. This is
easily answered by carrying out the decomposition into observables of the model's state-space
form. This complements the analysis in Benes and others (2010) and provides an example use of
the methods discussed above.
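Because any smoother of this kind is linear in the data, the decomposition can be computed without ever forming the weights explicitly, by re-running the estimator with one observable switched on at a time. The sketch below is a hedged illustration; `estimate_gap` stands for any assumed linear estimator (e.g. a Kalman smoother wrapper), and the toy filter is purely for demonstration:

```python
import numpy as np

def observable_decomposition(estimate_gap, Y):
    """Contribution of each observed series to a linear estimate of the gap."""
    T, N = Y.shape
    base = estimate_gap(np.zeros_like(Y))     # constant/steady-state term, if any
    contrib = np.zeros((T, N))
    for i in range(N):                        # keep one observable at a time
        Yi = np.zeros_like(Y)
        Yi[:, i] = Y[:, i]
        contrib[:, i] = estimate_gap(Yi) - base
    return contrib                            # contributions + base = full estimate

# toy linear 'filter': moving average of a weighted combination of two series
toy = lambda Y: np.convolve(Y @ np.array([0.7, 0.3]), np.ones(5) / 5, mode="same")
Y = np.random.default_rng(1).normal(size=(200, 2))
contrib = observable_decomposition(toy, Y)
assert np.allclose(contrib.sum(axis=1), toy(Y))   # exact for a linear filter
```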
The contributions of all observables to the output gap are depicted in Fig. 2. The first thing
to notice is that the contribution of GDP growth to the 'Great Recession' estimates, starting
in 2007, is smaller than in the largest recession within the sample (in terms of the output
gap) in 1980–1985, whereas the contribution of the unemployment series is the greatest ever.
Second, the information extracted from observing the inflation data is rather limited. The
fact that the parameterization of the model attributes a negligible role to inflation in the
identification of the output gap could be controversial. It may also undermine any potential
interpretation of the output gap with respect to non-accelerating-inflation or New Keynesian
theories.

Figure 2. Output-Gap Observable Decomposition of Benes et al. (2010) model
[Stacked contributions of inflation, GDP growth, capacity utilization, unemployment, and the rest to the output gap estimate, 1970:1–2010:1.]
The contribution of unemployment is large and more persistent than the contributions of GDP
and capacity utilization. That is quite consistent with the jobless recoveries that the US
economy has been experiencing since the 1990s, where GDP and capacity utilization recover
faster than the unemployment rate and inflation expectations are well anchored. The model
is parametrized to imply that the NAIRU takes more time to change than potential output. Both
the NAIRU and potential output growth vary less than in most models based on a definition of
the cycle with a frequency of 6–32 periods, which contributes to realistic and interpretable
magnitudes of the output gap.
Empirically, unemployment lags output at business cycle frequencies, with lower amplitude
and high coherence. This is one of the most robust stylised facts across most developed
economies. Fig. 10 depicts output, capacity utilisation and unemployment (shifted to lead by
one quarter, i.e. with a phase shift) after applying a low-pass filter (the HP filter with
λ = 1600, for simplicity18) and scaled to the output gap volatility. The tight co-movement is
important for robust signal extraction and the lower revision variance of the model, see the
Appendix for details.
These results complement the analysis in Benes and others (2010). The small role of the
inflation signal can easily be understood by looking at the parameter estimates. Importantly,
the loading coefficient of the 'inflation gap' in the output gap equation is only
$\tilde{\rho}_2 = 0.005$, and the parameters determining the effect of the output gap on
inflation, β and Ω, are also small.
Figure 3. Transfer function gains, Beneš et al. (2010) model
[Four panels showing the filter gain of the output gap Y with respect to GDP growth (GROWTH_), year-over-year inflation (PIE4_), unemployment (UNR_), and the change in capacity utilisation (D_CAPU_), over frequencies 0 to π.]
18 There is nothing magical about the value of 1600; arguments can be found as to why such a
value is inappropriate when the recovered cycle is to be used as an output gap concept.
The most influential observable in the model turns out to be unemployment. This can easily be
seen in Fig. 6, where the output gap estimates using all observables and using unemployment
as the only input are contrasted. Adding the capacity utilization series brings the model
closer to the final estimates, but the series alone performs worse after the 1990s. Using the
GDP growth observations alongside unemployment and capacity utilisation modifies the
estimates just a tiny bit; inflation and inflation expectations carry close to no information
in the model framework.
Figure 6. Output-Gap Estimates from the Beneš et al. (2010) model
[Output gap estimates, 1966:4 onwards, from the full model and from a version observing only unemployment.]
Analysis of the transfer function of the model As in the case of univariate filters, it is
feasible to analyze the transfer function of the model in the frequency domain. Fig. 3 presents
the gain of the model's transfer function for the output gap, Y, with respect to four key
observables in the model – GDP growth, inflation, unemployment, and the first difference of
capacity utilisation. The interpretation of the gain is standard, as in the univariate filter
analysis: it demonstrates which frequencies of the observed variables spill over into the
output gap estimate. One can see that the gain from inflation is rather flat across the whole
spectrum and does not distinguish between business cycle and high-frequency dynamics; it is
also very small. The gain of the output gap with respect to GDP growth and changes in capacity
utilization indicates that the filter places a large weight on lower frequencies, some weight
on business cycle frequencies (6–32 quarters), and little weight on high frequencies. One has
to take into account, however, that the first-difference filter itself boosts high frequencies.
The analysis of the output gain for the level of the observed variables requires a
straightforward application of the integration filter, the inverse of the first-difference
filter, with squared gain $1/(2-2\cos\omega)$.
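For reference, the two squared gains involved are one-liners to compute; a tiny illustrative sketch:

```python
import numpy as np

# Squared gain of the first-difference filter (1 - L) and of its inverse,
# the integration filter, over 0 < omega <= pi.
omega = np.linspace(1e-3, np.pi, 500)
gain_diff = 2 - 2 * np.cos(omega)  # |1 - e^{-i omega}|^2: boosts high frequencies
gain_int = 1 / gain_diff           # integration filter: boosts low frequencies
```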
The gain profile going from the level of unemployment to the output gap is of great interest,
given the result on the importance of unemployment observations for the determination of
the gap in the model. The weight on low frequencies and the only gradual decline of the gain
as the periodicity increases suggest a rather large spillover of longer cycles into the output
gap. This is also clear from the spectral density of the output gap implied by the model.
To frame the discussion within the time domain, one can search for a specification of the
simple Hodrick-Prescott filter, or a frequency cutoff of Christiano and Fitzgerald's band-pass
filter, that would minimize the distance to the output gap estimated by the model. The model
draws more cyclical information from the unemployment series than from GDP; unemployment is
used to back out the output gap so as to match the model's estimate. Fig. 4 demonstrates that
a univariate approximation using the HP filter with a large value of the smoothing parameter
λ, i.e. a smooth trend, is quite successful. The results are not surprising, given the
importance of the unemployment series and the weight on lower frequencies (longer cycles)
explained above.
2. News Effects Decomposition
Despite its excellent revision properties, the model can be used to illustrate the news effect
and a decomposition of output gap revisions into the relevant observables. The illustration
focuses on 2007Q2 and 2009Q1, both interesting periods with respect to the 'Great Recession'.
The results are depicted in Fig. 5. As can be seen, the revision properties of the model are
quite favourable. A news effect is defined as a projection error, and thus it can easily be
quantified and decomposed into components.
The data arriving in 2009Q1 resulted in a further deepening of the output gap estimate. The
drop in the output gap was complete news, as the model dynamics would have implied the start
of a recovery and a closing of the gap. The largest contribution to the news is due to the new
data observation for GDP growth, followed by capacity utilization and unemployment numbers.
Consistent with the findings above, observed data on inflation or inflation expectations
contribute only modestly and revise the estimate in a persistent way.
Clearly, to properly interpret the contribution graph in Fig. 5 it is important to understand
not only the sensitivity of the filter to the new data, but also the size of the news itself,
i.e. the difference between the forecast of the observables and the actual outcome. In the
2009Q1 example, GDP growth, capacity utilization, and unemployment were all lower than the
model would have predicted. The more complex the model, the greater the value of a formal and
automated approach.
C. Natural Output Gap in a DSGE Model
This section decomposes the 'flexible-price equilibrium output gap' into observables using
the model of Smets and Wouters (2007), an estimated medium-sized DSGE model of the US
economy.19 The model uses an output gap concept consistent with the theoretical definition of
the 'natural rate of output' in the absence of nominal wage and price rigidities and with no
'mark-up' shocks to wages and prices.
The output gap in the model is defined as the deviation of the 'natural rate of output' from
actual output. Despite a very different definition and modeling framework from most empirical
measures of the output gap, all tools discussed in the paper continue to apply. The
decomposition of the flexible-price output gap is depicted in Fig. 7.
In a closed economy model, the output gap is usually analyzed only in terms of the shock
decomposition, which provides a structural interpretation of historical developments. The
decomposition into observables is essentially the reverse process, indicating which observables
are being used by the structure of the model to identify latent variables – technology,
preference, and other shocks. One can observe that some variables are important at high and
business cycle frequencies (hours worked, consumption, and output), whereas inflation or
interest rates contribute mainly to the low-frequency dynamics of the output gap in this case.
The model thus extracts the information about the natural rate of output without placing much
weight on the observed inflation data, similarly to the semi-structural model above. The small
weight on inflation would be easy to explain if the observations of real wages contributed
significantly to the business cycle dynamics of the output gap, yet that is not the case.
19 I would like to thank the authors for making available the codes for replicating their
work. As I have ported the codes from Dynare, I retain the blame for any errors.
Figure 7. Output Gap Observable Decomposition from a DSGE Model
[Stacked contributions of inflation, the interest rate, wages, consumption, investment, output, and labor to the flexible-price output gap, 1967:3 onwards.]
IV. Conclusions
This paper suggests a simple and useful method for exploring 'what is in the output gap.' A
decomposition of the unobservable output gap in terms of observed inputs, e.g. output,
inflation, or the unemployment rate, is provided. The procedure answers the question of which
observed variables, and at which periodicities, contribute to the estimate of an unobservable
quantity – the output gap. Importantly, the decomposition into observables also allows
researchers to quantify the 'news effects' caused by newly available data, which generate
revisions of the output gap estimates. A better understanding of the role that new data play
in changing the estimate is easier to obtain using the decomposition into observables.
The paper demonstrates that the most frequently used methods for potential output estimation
can be cast in terms of linear filter theory. This enables both frequency- and time-domain
analysis and provides insights into the nature of revisions of the unobserved variable
estimates. The analysis in the paper applies to simple multivariate filters, semi-structural
models, production function method estimates, and fully articulated DSGE models.
The method is illustrated using a semi-structural multivariate filter and a fully articulated
DSGE model. Using the multivariate filter for potential output estimation, the paper demon-
strates that new insight is obtained due to the decomposition into observables. Both models
considered attribute very low weight to observed data on inflation when identifying the output
gap.
Revision properties and the 'end-point bias' of individual approaches can be better understood
as properties of the two-sided moving average, or filter, representation. The more spread out
the weights and the worse the forecasting properties of the filter-implied model, the larger
the real-time revision variance of the estimate. For a particular data-generating process, a
population or analytical exploration of the revision error's stationary stochastic process can
easily be performed.
The paper also shows that, a priori, there is no reason to expect that multivariate filters,
expressed in state-space form, should feature better revision properties than univariate
filters. The key is the structure of the model, which provides the link between economic
theory and optimal signal extraction principles.
References
Andrle, M., 2009, “DSGE Filters for Forecasting and Policy Analysis,” Techn. rep., Czech
National Bank/European Central Bank.
Andrle, M., Ch. Freedman, R. Garcia-Saltos, D. Hermawan, D. Laxton, and H. Munandar,
2009a, “Adding Indonesia to the Global Projection Model,” Working Paper 09/253, Interna-
tional Monetary Fund, Washington DC.
Andrle, M., T. Hlédik, O. Kameník, and J. Vlček, 2009b, “Implementing the New Structural
Model of the Czech National Bank,” Working paper no. 2, Czech National Bank.
Babihuga, R., 2011, “How Large is Sweden’s Output Gap,” Sweden: 2011 Article IV
Consultation – Staff Report; Public Information Notice on the Executive Board Discussion;
and Statement by the Executive Director for Sweden, IMF Country Report No. 11/171,
International Monetary Fund, Washington DC.
Bell, W.R., 1984, “Signal extraction for nonstationary time series,” Annals of Statistics,
Vol. 12, pp. 646–664.
Benes, J., K. Clinton, R. Garcia-Saltos, M. Johnson, D. Laxton, P. Manchev, and T. Mathe-
son, 2010, “Estimating Potential Output with a Multivariate Filter,” Working Paper 10/285,
International Monetary Fund, Washington DC.
Benes, J., and P. N’Diaye, 2004, “A Multivariate Filter for Measuring Potential Output and
the Nairu,” Working Paper 04/45, International Monetary Fund, Washington DC.
Blanchard, O., and D. Quah, 1989, “The dynamic effects of aggregate supply and demand
disturbances,” American Economic Review, Vol. 79, pp. 655–673.
Carabenciov, I., I. Ermolaev, Ch. Freedman, M. Juillard, O. Kamenik, D. Korshunov, and
D. Laxton, 2008, “A Small Quarterly Projection Model of the US Economy,” Working Paper
08/278, International Monetary Fund, Washington DC.
Cayen, J.-P., and S. van Norden, 2005, “The Reliability of Canadian Output-Gap Estimates,”
The North American Journal of Economics and Finance, Vol. 16, pp. 373–393.
Cheng, K. C., 2011, “France’s Potential Output during the Crisis and Recovery,” France:
Selected Issues Paper, IMF Country Report No. 11/212, International Monetary Fund,
Washington DC.
Christiano, L.J., and T.J. Fitzgerald, 2003, “The Bandpass Filter,” International Economic
Review, Vol. 44, No. 2, pp. 435–465.
Claus, I., 1999, “Estimating potential output for New Zealand: a structural VAR approach,”
Discussion Paper 2000/03, Reserve Bank of New Zealand.
Conway, P., and B. Hunt, 1997, “Estimating Potential Output: A Semi-Structural Approach,”
Discussion Paper G97/9, Reserve Bank of New Zealand.
de Brouwer, G., 1998, “Estimating Output Gaps,” Research Discussion Paper 9809, Reserve
Bank of Australia.
Epstein, N., and C. Macciarelli, 2010, “Estimating Poland’s Potential Output: A Production
Function Approach,” Working Paper 10/15, International Monetary Fund, Washington DC.
Gomez, V., 1999, “Three Equivalent Methods for Filtering Finite Nonstationary Time Series,”
Journal of Business & Economic Statistics, Vol. 17, No. 1, pp. 109–116.
———, 2001, “The Use of Butterworth Filters for Trend and Cycle Estimation in Economic
Time Series,” Journal of Business & Economic Statistics, Vol. 19, pp. 365–373.
———, 2006, “Wiener-Kolmogorov Filtering and Smoothing for Multivariate Series with
State-Space Structure,” Journal of Time Series Analysis, Vol. 28, No. 3, pp. 361–385.
Harvey, A., and T. Trimbur, 2008, “Trend Estimation and the Hodrick-Prescott Filter,” Jour-
nal of Japan Statistical Society, Vol. 38, No. 1, pp. 41–49.
Hodrick, R., and E. Prescott, 1997, “Post-War Business Cycles: An Empirical Investigation,”
Journal of Money, Credit and Banking, Vol. 29, No. 1, pp. 1–16.
Kaiser, R., and A. Maravall, 1999, “Estimation of the Business Cycle: A Modified Hodrick-
Prescott Filter,” Spanish Economic Review, Vol. 1, pp. 175–206.
———, 2001, Measuring Business Cycles in Economic Time Series (New York: Springer-
Verlag, Lecture Notes on Statistics 154).
Kalman, R.E., 1960, “A new approach to linear filtering and prediction problems,”
Transactions of the ASME, Series D, Journal of Basic Engineering, Vol. 82, pp. 35–45.
King, R.G., and S.T. Rebelo, 1993, “Low frequency filtering and real business cycles,” Jour-
nal of Economic Dynamics and Control, Vol. 17, No. 1–2, pp. 207–231.
Koopman, S.J., and A. Harvey, 2003, “Computing observation weights for signal extraction
and filtering,” Journal of Economic Dynamics and Control, Vol. 27, pp. 1317–1333.
Koopmans, L.H., 1974, The Spectral Analysis of Time Series (San Diego, CA: Academic
Press).
Kuttner, K., 1994, “Estimating Potential Output as a Latent Variable,” Journal of Business
and Economic Statistics, Vol. 12, No. 3, pp. 361–368.
Laubach, T., and J.C. Williams, 2003, “Measuring the Natural Rate of Interest,” The Review
of Economics and Statistics, Vol. 85, No. 4, pp. 1063–1070.
Laxton, D., and R. Tetlow, 1992, “A Simple Multivariate Filter for the Measurement of the
Potential Output,” Technical Report 59 (June), Bank of Canada.
Leser, C.E.V., 1961, “A Simple Method of Trend Construction,” Journal of the Royal Statisti-
cal Society, Series B (Methodological), Vol. 23, No. 1, pp. 91–107.
McNellis, P.D., and C.B. Bagsic, 2007, “Output Gap Estimation for Inflation Forecasting: the
Case of the Philippines,” Techn. rep., Bangko Sentral ng Pilipinas.
Orphanides, A., and S. van Norden, 2002, “The Unreliability of Output-Gap Estimates in
Real Time,” Review of Economics and Statistics, Vol. 84, pp. 569–583.
Pierce, D.A., 1980, “Data revisions in moving average seasonal adjustment procedures,”
Journal of Econometrics, Vol. 14, No. 1, pp. 95–114.
Pollock, D.S.G., 2000, “Trend Estimation and De-trending via Rational Square-wave Filters,”
Journal of Econometrics, Vol. 98, No. 1-3, pp. 317–334.
Proietti, T., 2009, “On the Model-Based Interpretation of Filters and the Reliability of Trend-
Cycle Estimates,” Econometric Reviews, Vol. 28, No. 1-3, pp. 186–208.
Proietti, T., and A. Musso, 2007, “Growth Accounting for the Euroarea – a Structural
Approach,” Working Paper 804, European Central Bank.
Schleicher, Ch., 2003, “Wiener-Kolmogorov Filters for Finite Time Series,” Techn. rep., Uni-
versity of British Columbia.
Scott, A., 2000, “Stylised Facts from Output Gap Measures,” Discussion Paper DP2000/07,
Reserve Bank of New Zealand, Wellington.
Scott, A., and S. Weber, 2011, “Potential Output Estimates and Structural Policy,” Kingdom
of The Netherlands – Netherlands: Selected Issues and Analytical Notes, IMF Country Report
No. 11/143, International Monetary Fund, Washington DC.
Smets, F., and R. Wouters, 2007, “Shocks and Frictions in US Business Cycles: A Bayesian
DSGE Approach,” American Economic Review, Vol. 97, No. 3, pp. 586–606.
Whittle, P., 1983, Prediction and Regulation by Linear Least-Square Methods, Second Ed.
(Minneapolis: University of Minnesota Press).
Appendix A. Parameter Estimates from Beneš et al. (2010)
The model by Benes and others (2010), as specified by equations (44)–(55), is econometrically
estimated using United States data for the period 1967:1–2010:2. The approach is
Bayesian-likelihood, more specifically a 'regularized maximum likelihood' – a method popular
in engineering. The method is equivalent to a likelihood estimation with an independent joint
prior on the parameters coming from a truncated-Normal distribution, as upper and lower bounds
for the parameters are imposed.
Appendix B. Not for publication: Difference
between two representations of the HP filter
The HP filter state-space form is often represented in the following form:

$$y_t = \bar{y}_t + x_t \qquad (56)$$
$$x_t = \varepsilon^x_t, \qquad \varepsilon^x_t \sim N(0,\sigma^2_x) \qquad (57)$$
$$\bar{y}_t = \bar{y}_{t-1} + \beta_{t-1} \qquad (58)$$
$$\beta_t = \beta_{t-1} + \varepsilon^g_t, \qquad \varepsilon^g_t \sim N(0,\sigma^2_g), \qquad (59)$$

which is equivalent to the HP filter state-space representation included in the text, and the
results from both state-space implementations are identical. Moreover, these state-space
representations are identical to the standard matrix formulas of HP filter implementations.
Appendix C. Example: Simple Multivariate Filter – Three Representations
In this section I provide a simple example, beyond the univariate HP filter, of a
semi-structural multivariate filter represented as (i) a state-space model, (ii) a
Wiener-Kolmogorov filter and (iii) penalized least squares.
The state-space form of the model is

$$y_t = x_t + \tau_t \qquad (60)$$
$$\tau_t - \tau_{t-1} = \tau_{t-1} - \tau_{t-2} + \varepsilon^\tau_t \qquad (61)$$
$$x_t = \rho\, x_{t-1} - \kappa\, \pi^{gap}_t + \varepsilon^x_t \qquad (62)$$
$$\pi^{gap}_t = \lambda\, \pi^{gap}_{t-1} + \theta\, x_t + \varepsilon^\pi_t \qquad (63)$$
and can be cast as a penalized least squares problem

$$\min_{\{\tau_t\}_{t=0}^{T}} \; \sum_{t=0}^{T} \left\{ \frac{1}{\sigma^2_x}\,[\varepsilon^x_t]^2 + \frac{1}{\sigma^2_\tau}\,[\varepsilon^\tau_t]^2 + \frac{1}{\sigma^2_\pi}\,[\varepsilon^\pi_t]^2 \right\}, \qquad (64)$$

which is in the form of a 'multivariate Hodrick-Prescott filter', as suggested by Laxton and
Tetlow (1992); for ρ = κ = θ = 0 and λ = (σ_x/σ_τ)² it is equivalent to the penalized least
squares formulation of the HP filter. The problem (64) is easy to solve by finding the
first-order conditions with respect to $\{\tau_t\}_{t=0}^{T}$.
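Since the residuals (61)–(63) are linear in the trend, the first-order conditions amount to a stacked linear least-squares system. The sketch below solves (64) under illustrative parameter values; the function name and defaults are assumptions for demonstration, not those of any cited model:

```python
import numpy as np

def mvf_trend(y, pi_gap, rho=0.8, kappa=0.1, lam=0.7, theta=0.1,
              sx=1.0, stau=0.1, spi=1.0):
    """Solve the PLS problem (64) for {tau_t} by stacked least squares."""
    T = len(y)
    S = np.eye(T, k=-1)                      # lag operator as a matrix
    D2 = np.diff(np.eye(T), n=2, axis=0)     # eps_tau = D2 @ tau, eq. (61)
    # eps_x = (I - rho S)(y - tau) + kappa * pi_gap, eq. (62): linear in tau
    A1 = -(np.eye(T) - rho * S)
    a1 = (np.eye(T) - rho * S) @ y + kappa * pi_gap
    # eps_pi = pi_gap - lam S pi_gap - theta (y - tau), eq. (63)
    A3 = theta * np.eye(T)
    a3 = pi_gap - lam * S @ pi_gap - theta * y
    # stack weighted residual blocks and minimize ||A tau + a||^2
    A = np.vstack([A1 / sx, D2 / stau, A3 / spi])
    a = np.concatenate([a1 / sx, np.zeros(T - 2), a3 / spi])
    tau = np.linalg.lstsq(A, -a, rcond=None)[0]
    return tau
```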
The implied Wiener-Kolmogorov estimate of the output gap $x_t$, given by the multivariate
filter in terms of the output level and inflation (or output growth and inflation), is as
follows:

$$x_t = \big(1-W_y(L)\big)\, y_t - W_\pi(L)\, \pi^{gap}_t = \frac{1-W_y(L)}{1-L}\, \Delta y_t - W_\pi(L)\, \pi^{gap}_t, \qquad (65)$$

where

$$W_y(L) \equiv \frac{(\lambda_1 + \lambda_2\alpha^2 + \rho^2\lambda_1) - \lambda_1\rho\,(L - L^{-1})}{(\lambda_1 - 2\lambda_3 + \alpha^2\lambda_2 + \rho^2\lambda_1) - (4\lambda_3 + \lambda_1\rho)(L + L^{-1}) + \lambda_3(L^2 + L^{-2})} \qquad (66)$$

$$W_\pi(L) \equiv \frac{-(\lambda_1\kappa + \alpha\lambda_2) + \alpha\lambda_2\phi L + \rho\lambda_1\kappa L^{-1}}{(\lambda_1 - 2\lambda_3 + \alpha^2\lambda_2 + \rho^2\lambda_1) - (4\lambda_3 + \lambda_1\rho)(L + L^{-1}) + \lambda_3(L^2 + L^{-2})} \qquad (67)$$
express the z-transform of the two-sided filter weights. The time-domain weight profile can
easily be obtained either from a state-space representation using the theory outlined in the
paper, or from (66) and (67) by computing the inverse z-transform, either numerically or
analytically by factorizing the formulas.
Appendix D. Not for publication: Variance Reduction
via Common Component and Multiple Measures
In this section I briefly review the STAT-101 intuition about revision variance reduction from
adding multiple relevant measures of an underlying signal. When more noisy but relevant
measurements are available, (i) the variance of the estimates is lowered and, equivalently,
(ii) the weights of the filter are less spread out, lowering the revision variance.
The simplest example is an estimate of a deterministic signal μ from one or two noisy
signals: $z_1 = \mu + u_1$ and $z_2 = \mu + u_2$, with $u_1 \sim N(0,\sigma^2_1)$ and
$u_2 \sim N(0,\sigma^2_2)$. In the case of just one signal, $z_1$, the estimate is simply
$\hat{\mu} = z_1$, with variance of the estimate $\sigma^2_1$. When both measurements are
available, the estimate is given by

$$\hat{\mu} = \frac{\sigma^2_2}{\sigma^2_1+\sigma^2_2}\, z_1 + \frac{\sigma^2_1}{\sigma^2_1+\sigma^2_2}\, z_2, \qquad \mathrm{MSE}_{\hat{\mu}} = \left[\frac{1}{\sigma^2_1} + \frac{1}{\sigma^2_2}\right]^{-1}. \qquad (68)$$
It is clear from (68) that as long as the second measurement is available, i.e.
$\sigma^2_2 < \infty$, the precision of the estimate increases. This is a principle that
carries over into more complex, dynamic models analyzed using Kalman or Wiener-Kolmogorov
filtering. The larger the precision of the estimate, ceteris paribus, the less spread out the
weights of dynamic models are.
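A quick Monte Carlo check of (68), with purely illustrative values of μ and the noise standard deviations:

```python
import numpy as np

# Verify numerically that the precision-weighted combination attains the MSE in (68).
rng = np.random.default_rng(0)
mu, s1, s2, n = 1.0, 0.8, 1.2, 200_000
z1 = mu + rng.normal(0, s1, n)
z2 = mu + rng.normal(0, s2, n)
w1 = s2**2 / (s1**2 + s2**2)               # weight on the first measurement
mu_hat = w1 * z1 + (1 - w1) * z2
mse_theory = 1.0 / (1.0 / s1**2 + 1.0 / s2**2)
print(mu_hat.var(), mse_theory)            # should be close
```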
I can also analyze a more realistic model. Still, for simplicity, I will ignore the trend
components of the signal. Let us assume a model of an AR(1) signal $x_t$ and two available
measurements $y_1$ and $y_2$:

$$x_t = \rho\, x_{t-1} + \nu_t \qquad (69)$$
$$y_{1,t} = x_t + \varepsilon_{1,t} \qquad (70)$$
$$y_{2,t} = \phi\, x_t + \varepsilon_{2,t}, \qquad (71)$$
where the parameter φ indicates the degree of relevance of the signal, together with the
variances of the error terms, $\sigma^2_\nu$, $\sigma^2_1$ and $\sigma^2_2$. The signal
extraction differs dramatically for various parameter values of ρ and the variances.
This is a rather simple and well-understood problem with an analytical solution. The weights
of the two-sided filter will be symmetric and follow the scheme

$$x_{t|\infty} = c_1 \sum_{i=-\infty}^{\infty} \lambda^{|i|}\, X_{t-i}, \qquad (72)$$

where the variable X denotes a convex combination of both observables, following essentially
the weighting scheme (68). The parameter λ can be recovered from the solution of a quadratic
equation associated with the transfer function of the filter. For clarity of exposition,
numerical examples are provided below.
(D.0.0.1) Single measurement In the case of a single measurement only or, equivalently,
φ = 0, the estimate of $x_t$ using the doubly-infinite sample is given by

$$x_t = \frac{q}{q + |1-\rho L|^2}\; y_{1,t}, \qquad (73)$$

which implies a symmetric two-sided filter with weights decaying in an exponential way. The
higher the persistence ρ, the slower the decay and the larger the revision variance. For
ρ = 0, only the concurrent value of $y_{1,t}$ is used:
$x_t = [q/(1+q)]\, y_{1,t} = [\sigma^2_x/(\sigma^2_x+\sigma^2_1)]\, y_{1,t}$.
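The weight profile implied by (73) is easy to compute numerically; a small sketch with illustrative values of ρ and q:

```python
import numpy as np

# Two-sided weights of (73) via inverse FFT of the transfer function
# q / (q + |1 - rho z|^2) evaluated on the unit circle.
rho, q, n = 0.5, 1.0, 1024
omega = 2 * np.pi * np.fft.fftfreq(n)
z = np.exp(-1j * omega)
H = q / (q + np.abs(1 - rho * z) ** 2)
w = np.real(np.fft.ifft(H))   # symmetric weights, decaying like lambda^|i|
```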
(D.0.0.2) Two measurements of the signal The easiest case to analyze is when there are no
dynamics, ρ = 0, as the estimator then uses only current-period values of the observables
$y_{1,t}$, $y_{2,t}$. The estimation with dynamics and two observables yields the two-sided
filter of the form

$$x_t = \frac{q_1}{(q_1 + \phi q_2) + |1-\rho L|^2}\; y_{1,t} + \frac{\phi q_2}{(q_1 + \phi q_2) + |1-\rho L|^2}\; y_{2,t}. \qquad (74)$$
In the case ρ = 0, the problem is trivial and solved above. Note that the weights for both
observables will be symmetric and proportional, only rescaled by an appropriate factor
reflecting their relative informativeness, given by the variance of the measurement errors and
the cross-correlation φ. Fig. 8 depicts the problem with ρ = 0.50 and ρ = 1.0, given φ = 1 and
$\sigma_{y_1} = \sigma_{y_2} = 0.9$.
Figure 8. Weights of the AR(1) model
[Filter weights at lags −5 to +5 for y1 and y2, base versus alternative parametrizations: ρ = 1.0 versus 0.5, and measurement error std. 0.9 versus 1.9.]