1. Eurostat tools for Benchmarking and Seasonal Adjustment: JDemetra+ and JEcotrim
by
Dario BUONO, Ph.D.
Eurostat, European Commission, Luxembourg
Macroeconomic Imbalances Procedure team
IASC satellite conference on "Big data and computational statistics"
August 22nd 2013, 13:45, Seoul, Korea
SESSION SS2R5: Practical Issues in Chain Linking and Benchmarking
2. Objective
Inform about the Eurostat IT tools available for Seasonal Adjustment, Benchmarking and Temporal Disaggregation
No methodological issues will be addressed
Summary information on JDemetra+, with a briefing on the ESS Guidelines on Seasonal Adjustment
Summary information on JEcotrim
User support provided by Eurostat
Q & A
3. Content
Recall of the seasonal adjustment issue;
An overview of the seasonal adjustment software;
IT solutions for SA software;
The role of Eurostat;
JDemetra+;
– Aims of the project, functionalities, advantages;
JEcotrim: a plug-in of JDemetra+
4. What is Seasonal Adjustment?
Seasonality: fluctuations observed during the year (each month, each quarter) which appear to repeat themselves on a more or less regular basis from one year to the other
Seasonal Adjustment: remove seasonality
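As an illustrative formalisation only (notation chosen here, not taken from the slides): the observed series is decomposed into trend-cycle, seasonal and irregular components and the seasonal one is removed, e.g. $X_t = T_t + S_t + I_t$ with seasonally adjusted series $SA_t = X_t - S_t$ in the additive case, or $X_t = T_t \cdot S_t \cdot I_t$ with $SA_t = X_t / S_t$ in the multiplicative case.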
5. What happens after SA?
[Chart: original series, shown against the seasonally adjusted series]
6. What happens after SA?
[Chart: seasonally adjusted series – the series has been cleaned]
8. What happens after SA?
Growth rates of the seasonally adjusted series:
$G_t = \dfrac{X_t}{X_{t-1}} - 1$
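For example, if the seasonally adjusted series moves from $X_{t-1} = 100$ to $X_t = 102$, then $G_t = 102/100 - 1 = 0.02$, i.e. a period-on-period growth rate of 2%.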
9. Leading seasonal adjustment software – a quick review
The main SA programs are:
– TSW – the Windows application, developed by the Bank of Spain, that integrates the TRAMO and SEATS programs;
– X-12-ARIMA and X-13ARIMA-SEATS – the programs produced by the U.S. Census Bureau, which include the X-12-ARIMA method
• (X-13ARIMA-SEATS can also generate ARIMA model-based SA).
Both are written in FORTRAN.
10. Seasonal adjustment software from an IT perspective
The algorithms, originally written in FORTRAN, can be applied to solve time-series-related issues, but are not designed for reusability;
When new functionality is introduced, the existing programs themselves have to be modified;
Uncertain future of the FORTRAN language;
– Lack of developers;
– Not a strictly object-oriented language.
11. Eurostat’s scope in the area of SA
Eurostat aims to:
– Promote the idea of seasonal adjustment;
– Ease access for non-specialists to TRAMO/SEATS and X-12-ARIMA (X-13ARIMA-SEATS);
– Converge towards a harmonised process for seasonal and calendar adjustment practices.
12. SA software promoted by Eurostat
Demetra (2002);
– Initially successful because of its user-friendliness;
Demetra+ (2010);
– Implementation of the ESS Guidelines on SA;
– Provides a graphical interface and common input/output diagnostics for TRAMO/SEATS and X-12-ARIMA;
– Includes complex technical solutions; cannot be used under IT environments other than Windows (.NET) technology;
JDemetra+ (2012);
– FORTRAN code re-written in Java using NetBeans.
JEcotrim (2013), as a plug-in of JDemetra+
13. What is JDemetra+?
JDemetra+ is a new tool for Seasonal and Calendar Adjustment developed by the NBB and Eurostat
Identify more components:
Trend-Cycle Component
Outliers
Irregular Component
14. New tool, new issues!
Maintenance of the tool in the long term;
Integration of the libraries in the IT environments of many institutions (portability issue of the Demetra+ .NET version);
Re-use of the modules/algorithms for other purposes.
15. Aims of the current project
Provide a tool for SA which:
– Is flexible, i.e.:
• encompasses the leading SA algorithms;
• could evolve independently when improvements or alternative methods appear.
– Is versatile, i.e. can be:
• used through a rich graphical interface (JDemetra+ itself);
• integrated in other (in-house) developments.
– Consists of modules that can be reused in other circumstances;
– Is open source, and therefore may increase the transparency of statistical computation and contribute to a better sharing of statistical knowledge.
16. JDemetra+ functionalities
SA methods:
– TRAMO/SEATS;
– X-13ARIMA-SEATS;
– X-12-ARIMA;
– Structural models;
– Mixed Airline;
– Generalised Airline.
SA tools:
– Seasonality tests;
– Direct/indirect comparison;
– Calendars with weights on holidays.
Other tools:
– Benchmarking (JEcotrim);
– Temporal disaggregation (JEcotrim);
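For reference only (a standard textbook formulation, not a statement about any particular JDemetra+ implementation), the classical airline model that the "Mixed" and "Generalised" airline variants extend is the ARIMA$(0,1,1)(0,1,1)_s$ model $(1-B)(1-B^s)X_t = (1-\theta B)(1-\Theta B^s)\varepsilon_t$, where $B$ is the backshift operator and $s$ the seasonal period.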
17. Advantages of JDemetra+
Efficient processing of large datasets;
A user-friendly graphical interface;
Possibility for different teams to progressively take over the software or contribute to its evolution;
Core engines rewritten in Java, on the NetBeans platform:
– supported by almost all operating systems;
– allows for easy extensions (plug-ins) and improvements.
18. Future plans
Modification and extension of the code, e.g.:
– Modification of existing functionalities;
– New data providers thanks to the generic serialization functionality;
– Additional diagnostics and output;
– New seasonal adjustment methods (using batch processing);
– Plug-in for revision analysis;
– Plug-in for business cycle analysis
19. ESS Guidelines on SA
Introduced in 2009
Chapters subdivided into specific items describing different steps of the SA process
Items presented in a standard structure providing:
1. Description of the issue
2. List of options which could be followed to perform the step
3. Prioritized list of three alternatives, from the most recommended one to the one to avoid (A, B and C)
4. Concise list of main references
Added value:
1. Conceptual framework and practical implementation steps
2. Both for experienced users and beginners
http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/K
20. JEcotrim: an update of Ecotrim
Ecotrim contains procedures for temporal disaggregation, benchmarking, reconciliation of low frequency series and matrix balancing, based on complex mathematical and statistical methods.
Ecotrim was developed in C++ (for Windows) by Eurostat
JEcotrim can be defined simply as an upgrade of Ecotrim:
– Correction of some bugs
– New methods
– Plugged into JDemetra+
21. Definition
Temporal Disaggregation
– The process of deriving high frequency data from low frequency data and, if available, related high frequency information
22. Temporal Disaggregation techniques are useful in compiling short-term statistics:
Quarterly National Accounts (QNA)
– Give a quarterly breakdown of the figures in the annual accounts
Flash estimates
– Use the available information in the best possible way, combining, in the framework of a statistical model, the short-term available information and the low frequency data in a coherent way
Monthly indicators of GDP
– The monthly estimates are derived from the available information, respecting coherence with the quarterly data
23. Basic principles
Distribution
– When annual data are either sums or averages of quarterly data (e.g., GDP, consumption, indexes, and in general all flow variables and all average stock variables)
Interpolation
– When the annual value equals by definition that of the fourth (or first) quarter (e.g., population at the end of the year, money stock, and all stock variables)
Extrapolation
– When estimates of quarterly data are made while the relevant annual data are not yet available
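Purely as an illustration of these principles (notation chosen here, not taken from the slides): writing $Y_T$ for the annual figure and $y_{T,q}$ for the quarterly estimates, distribution imposes $\sum_{q=1}^{4} y_{T,q} = Y_T$ for flows (or $\tfrac{1}{4}\sum_{q} y_{T,q} = Y_T$ for averages), while interpolation imposes $y_{T,4} = Y_T$ under an end-of-year stock convention.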
24. Estimates have to be consistent and coherent
Temporal consistency
– Quarterly values have to match annual values (for example, the sum of the quarterly values of GDP must be equal to the annual value)
Accounting coherence
– Quarterly components of an account should respect the accounting constraints (for example, the sum of the quarterly values of the GDP expenditure-side components should be equal to the corresponding quarterly value of GDP)
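In the same illustrative notation, temporal consistency is the distribution constraint above, while accounting coherence requires that, in every quarter, the components of an account add up to the aggregate, e.g. $\sum_{i} C_{i,T,q} = \mathrm{GDP}_{T,q}$ for the expenditure-side components $C_i$.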
25. Temporal Disaggregation and Benchmarking available in JEcotrim
Univariate approach (temporal benchmarking)
– Modified Denton
– Chow-Lin, Fernández, Litterman
Multivariate approach (accounting benchmarking)
– RAS-PM
– Two-step reconciliation
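As a reminder of the kind of model behind the regression-based methods listed above (a textbook formulation, not a description of the JEcotrim code itself): Chow-Lin posits a high-frequency relationship $y_t = x_t'\beta + u_t$ with AR(1) errors $u_t = \rho u_{t-1} + \varepsilon_t$, estimated subject to the constraint that the quarterly estimates aggregate to the observed annual values; Fernández corresponds to random-walk errors and Litterman to random-walk errors with AR(1) increments.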
26. Webpages on JDemetra+ and JEcotrim
You can download the latest version of the tools at http://www.cros-portal.eu/content/seasonal-adjustment
Together with:
ESS Guidelines on Seasonal Adjustment
User Manual and other documents
Help-Desk e-mail: estat-methodology@ec.europa.eu
Information about the EUROSTAT ESTP Training Course