Network and risk spillovers: a multivariate GARCH perspective (SYRTO Project)
M. Billio, M. Caporin, L. Frattarolo, L. Pelizzon: “Network and risk spillovers: a multivariate GARCH perspective”.
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Bayesian inference for mixed-effects models driven by SDEs and other stochast... (Umberto Picchini)
An important, and well studied, class of stochastic models is given by stochastic differential equations (SDEs). In this talk, we consider Bayesian inference based on measurements from several individuals, to provide inference at the "population level" using mixed-effects modelling. We consider the case where dynamics are expressed via SDEs or other stochastic (Markovian) models. Stochastic differential equation mixed-effects models (SDEMEMs) are flexible hierarchical models that account for (i) the intrinsic random variability in the latent state dynamics, (ii) the variability between individuals, and (iii) measurement error. This flexibility gives rise to methodological and computational difficulties.
Fully Bayesian inference for nonlinear SDEMEMs is complicated by the typical intractability of the observed-data likelihood, which motivates the use of sampling-based approaches such as Markov chain Monte Carlo. A Gibbs sampler is proposed to target the marginal posterior of all parameters of interest. The algorithm is made computationally efficient through careful use of blocking strategies, particle filters (sequential Monte Carlo) and correlated pseudo-marginal approaches. The resulting methodology is flexible, general and able to deal with a large class of nonlinear SDEMEMs [1]. In more recent work [2], we also explored ways to make inference even more scalable to an increasing number of individuals, while also dealing with state-space models driven by stochastic dynamic models other than SDEs, e.g. Markov jump processes and nonlinear solvers typically used in systems biology.
[1] S. Wiqvist, A. Golightly, A. T. McLean, U. Picchini (2020). Efficient inference for stochastic differential mixed-effects models using correlated particle pseudo-marginal algorithms. Computational Statistics & Data Analysis. https://doi.org/10.1016/j.csda.2020.107151
[2] S. Persson, N. Welkenhuysen, S. Shashkova, S. Wiqvist, P. Reith, G. W. Schmidt, U. Picchini, M. Cvijovic (2021). PEPSDI: Scalable and flexible inference framework for stochastic dynamic single-cell models. bioRxiv, doi:10.1101/2021.07.01.450748
Approximate Bayesian Computation (ABC) can be used as a new empirical Bayes approach when the likelihood function is not available in closed form. ABC replaces the intractable likelihood with a non-parametric approximation and summarizes data with insufficient statistics. ABC has opened opportunities for new inference machines that are legitimate but different from classical Bayesian approaches, raising questions about how closely ABC relates to Bayesian inference. ABC originated in population genetics where likelihoods are often intractable, and population geneticists have contributed significantly to ABC methodology.
Information topology, Deep Network generalization and Consciousness quantific... (Pierre BAUDOT)
1. The document discusses using information topology and cohomology to quantify patterns and statistical interactions in complex data.
2. It introduces information cohomology, which defines information functions on simplicial complexes and studies their additive and multiplicative properties.
3. Information functions include entropy, mutual information, and conditional mutual information. Independence of random variables can be characterized topologically as certain information functions equalling zero.
Entropy and systemic risk measures
M. Billio, R. Casarin, M. Costola, A. Pasqualini
Ca’ Foscari Venice University
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time... (SYRTO Project)
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time Series Models. Andre Lucas. Amsterdam, June 25, 2015. European Financial Management Association 2015 Annual Meetings.
Scalable inference for a full multivariate stochastic volatility (SYRTO Project)
Scalable inference for a full multivariate stochastic volatility
P. Dellaportas (UCL, London), A. Plataniotis (AUEB, Athens) and M. Titsias (AUEB, Athens)
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
This document discusses Bayesian model choice and alternatives. It covers several key topics:
1. The Bayesian framework for inference which conditions on observations using Bayes' theorem and the posterior distribution. This provides a coherent way to incorporate new information.
2. Choosing between models or testing models using Bayesian methods. The posterior distribution is compared for different models given the data.
3. Some criticisms of Bayesian inference including that noninformative priors are difficult to define and may not truly represent no information. Improper priors are also controversial but can be justified in some cases.
4. Alternatives to traditional Bayesian hypothesis testing are discussed to address criticisms of the Bayesian approach to model selection and hypothesis testing.
Inference via Bayesian Synthetic Likelihoods for a Mixed-Effects SDE Model of... (Umberto Picchini)
- The document describes a talk given by Umberto Picchini on Bayesian inference for a mixed-effects stochastic differential equation (SDE) model of tumor growth.
- The model introduces a state-space model for tumor growth in mice with dynamics driven by an SDE and formulates a mixed-effects SDE model to estimate population parameters.
- The talk aims to show how to perform approximate Bayesian inference for the mixed-effects SDE model using synthetic likelihoods.
The document discusses Approximate Bayesian Computation (ABC), a computational technique for Bayesian inference when the likelihood function is intractable. ABC allows sampling from the likelihood and making inferences based on simulated data without calculating the actual likelihood. The technique originated in population genetics models where likelihoods for genetic polymorphism data cannot be calculated in closed form. ABC is presented as both an inference machine with its own legitimacy compared to classical Bayesian approaches, as well as a way to address computational issues with intractable likelihoods.
A Dynamic Factor Model: Inference and Empirical Application. Ioannis Vrontos (SYRTO Project)
The document describes a dynamic factor model to analyze how financial risks are interconnected within the Eurozone. It uses the model to examine risk dynamics using sovereign CDS and equity returns from 2007-2009 covering the US financial crisis and pre-sovereign crisis in Europe. The model relates asset returns to latent sector factors, macro factors, and covariates. Bayesian inference is applied using MCMC to estimate the time-varying parameters and latent factors.
1) Mathematicians use statistical models to predict future trends and values based on historical data. Both continuous and discrete univariate and multivariate models are explored.
2) Specific models examined include the Ornstein-Uhlenbeck process, Euler-Maruyama and Milstein schemes for numerical approximations of continuous processes, and autoregressive AR(p) models for discrete processes.
3) The models are fitted to inflation rate data to predict future inflation values based on parameter estimation techniques like maximum likelihood estimation. Model outputs like predicted values and distributions are examined.
A short introduction to statistical learning (tuxette)
This document provides an introduction to statistical learning methods. It begins with background information on statistical learning problems and discusses concepts like underfitting, overfitting, and consistency. It then summarizes decision trees and random forests, describing how they are learned from data and make predictions. Support vector machines and neural networks are also briefly mentioned. Key goals of statistical learning methods include accuracy on training data as well as generalization to new data.
Wavelet-based Reflection Symmetry Detection via Textural and Color Histograms (Mohamed Elawady)
This document presents a methodology for wavelet-based reflection symmetry detection using textural and color histograms. It extracts multiscale edge segments using Log-Gabor filters and measures symmetry based on edge orientations, local texture histograms, and color histograms. Evaluation on public datasets shows it outperforms previous methods in detecting single and multiple symmetries, with quantitative and qualitative results presented. Future work could improve the detection using continuous maximal-seeking.
1. The document discusses various importance sampling methods for Bayesian model comparison and discrimination between embedded models.
2. It compares regular importance sampling, bridge sampling, mixtures to bridge sampling, harmonic means, Chib's solution, and the Savage-Dickey ratio as solutions for approximating Bayes factors for model comparison.
3. As an example, it applies regular importance sampling to test whether a diabetes pedigree function variable is significant in a probit model for predicting diabetes in Pima Indian women, using a g-prior specification.
This document discusses an online EM algorithm and some extensions. It begins by outlining the goals of maximum likelihood estimation, good scaling, processing data incrementally without storage, and simple implementation. It then provides an overview of the topics covered, which include the EM algorithm in exponential families, the limiting EM recursion, the online EM algorithm, using online EM for batch maximum likelihood estimation, and extensions. The document uses a Poisson mixture model as a running example to illustrate the E and M steps of the EM algorithm.
WSC 2011, advanced tutorial on simulation in Statistics (Christian Robert)
This document discusses recent advances in simulation methods for statistics. It motivates the use of such methods by explaining how latent variable models can make inference computationally difficult. It introduces Monte Carlo integration and the Metropolis-Hastings algorithm as two important simulation techniques. The document also discusses how Bayesian analysis provides a framework to combine prior information with data, but computing the posterior distribution can be challenging for complex models. Simulation methods are presented as a way to approximate solutions to these computationally difficult statistical problems.
The document discusses adaptive Markov chain Monte Carlo (MCMC) for Bayesian inference of spatial autologistic models. It notes that standard MCMC cannot be implemented when the likelihood function is unavailable or the completion step is too costly due to high dimensionality. Adaptive MCMC is proposed as an alternative that bypasses computation of the normalizing constant. Questions are raised about how to combine adaptations of the proposal distribution, tuning parameters, and sample sizes to improve the method.
Introduction to the guide of uncertainty in measurement (Maurice Maeck)
This document provides an introduction to measurement uncertainty and the Guide to the Expression of Uncertainty in Measurement (GUM). It discusses key concepts such as measurement uncertainty, types of uncertainty evaluation, and the stages of the uncertainty evaluation process. The formulation stage involves expressing the measurement mathematically in terms of input quantities and their estimates and uncertainties. The calculation stage determines the measurement result, its combined uncertainty, and expanded uncertainty. Examples are provided on probability distribution functions, the central limit theorem, and the t-distribution.
This document discusses various methods for summarizing and displaying univariate and multivariate data, including measures of central tendency (mean, median), dispersion (variance, standard deviation), histograms, box plots, stem-and-leaf diagrams, and scatter plots. It provides the definitions and steps for constructing these various graphs and analyzing metrics. The goal is to concisely describe datasets through visualization and numerical measures.
Mathematicians use univariate and multivariate analyses to predict the future. For univariate analysis, they use Ornstein-Uhlenbeck and autoregressive models to analyze time series data. For multivariate analysis, they use linear regression to analyze correlations between multiple time series and predict values. Their analyses generate forecasts, confidence bands around predictions, and evaluations of prediction errors. The conclusion indicates the methods provide useful predictions and that inflation rates are correlated across measures.
Chapter 2. Multivariate Analysis of Stationary Time Series (Chengjun Wang)
This document discusses multivariate time series analysis and vector autoregressive (VAR) models. It covers simulation and estimation of VAR models in R, as well as diagnostic testing, forecasting, impulse response analysis, and structural VAR (SVAR) models which impose restrictions to identify structural shocks. Methods for SVAR models include the A-model which restricts the A matrix and the B-model which restricts the B matrix. Impulse response functions and forecast error variance decompositions are used to analyze the dynamic effects of shocks in SVAR models.
The document discusses analyzing multivariate time series of five energy futures (crude oil, ethanol, gasoline, heating oil, natural gas) using vector autoregressive (VAR) and vector error correction (VEC) models. It finds the futures are cointegrated using Johansen and Engle-Granger tests, indicating they share a common stochastic trend. A VAR(1) model is estimated and found stable. The VEC model captures the error correction behavior as futures return to their long-run equilibrium. Forecasts are generated and limitations of the Engle-Granger approach discussed.
A researcher assigns 33 subjects to 3 groups receiving dietary information via different modes: an online website, a nurse practitioner, or a video. The researcher measures 3 dependent variables related to the presentation: difficulty, usefulness, and importance. MANOVA is an appropriate analysis to determine if the modes of presentation have a significant effect on a combination of the dependent variables, while accounting for correlations between them. The researcher can use MANOVA to test whether the interactive website is superior to the other modes in conveying the information in a comprehensive yet cost-effective manner.
In Part II of the Anomaly Detection Series, we discuss the challenges in analyzing Temporal datasets and discuss methods for outlier analysis. We focus on single time series and discuss point outlier and sub-sequence methods.
This document provides an overview of remote sensing through a seminar presented by Ashwathy Babu Paul. It defines remote sensing as obtaining information about an object without physical contact through electromagnetic radiation. It describes the basic components and process of remote sensing systems including energy sources, sensor recording, transmission and processing. Various sensors and platforms are discussed along with advantages and applications in fields like agriculture, natural resource management, national security, geology, meteorology, and more. Challenges are addressed but advantages of remote sensing are said to far outweigh these.
Clustering in dynamic causal networks as a measure of systemic risk on the eu... (SYRTO Project)
Clustering in dynamic causal networks as a measure of systemic risk on the euro zone
M. Billio, H. Gatfaoui, L. Frattarolo, P. de Peretti
IESEG / Université Paris 1 Panthéon-Sorbonne / University Ca' Foscari
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
A review of two decades of correlations, hierarchies, networks and clustering... (Gautier Marti)
Opinionated review of two decades of correlations, hierarchies, networks and clustering in financial markets, presented at Ton Duc Thang University in Ho Chi Minh City, Vietnam.
Dependent processes in Bayesian Nonparametrics (Julyan Arbel)
This document summarizes dependent processes in Bayesian nonparametrics. It motivates the need for dependent random probability measures to accommodate temporal dependence structures beyond the exchangeability assumption. It describes modeling collections of random probability measures indexed by time as either discrete-time or continuous-time processes. The diffusive Dirichlet process is introduced as a dependent Dirichlet process with Dirichlet marginal distributions at each time point and continuous sample paths. Simulation and estimation methods are discussed for this model.
Kernel methods for data integration in systems biology (tuxette)
This document provides an overview of a seminar presentation on kernel methods for data integration in systems biology. It begins with short biographies of the presenter, who is trained as a mathematician and statistician and applies their skills to research in human health and animal genomics using various omics data types. Examples are given of the presenter's past work inferring networks and integrating gene expression and lipid data, as well as expression and 3D DNA location data. The talk will discuss how to integrate multiple omics data from different sources and types using kernels. Kernels allow reducing high-dimensional data to similarity matrices and are not restricted to numeric data. They also allow embedding expert knowledge and provide a framework for statistical learning.
This document introduces the plm package for panel data econometrics in R. It discusses panel data models and estimation approaches, describes the software approach and functions in plm, and compares plm to other R packages for longitudinal data analysis. Key features of plm include functions for estimating linear panel models with fixed effects, random effects, and GMM, as well as data management and diagnostic testing capabilities for panel data. The package aims to make panel data analysis intuitive and straightforward for econometricians in R.
The document discusses panel data models and issues related to missing data and measurement errors. It provides a general specification for panel data models that includes time effects, individual effects, and time-varying and time-invariant explanatory variables. It also discusses assumptions required for different estimation methods and how to address endogeneity and heterogeneity. The document outlines challenges related to unbalanced panels, attrition, and missing data, distinguishing between missing at random and missing completely at random.
This document outlines the author's PhD research, which bridges probabilistic modelling and deep learning. It first provides an overview of the author's work moving from probabilistic modelling to deep learning. It then provides a deep dive into a probabilistic model for temporal interaction data using point processes. Finally, it discusses open questions around bridging statistical methods and deep learning by making each approach more effective.
Quarterly economic accounts: methodological advances and prospects for innovation
Seminar
Rome, 21 April 2016
Istat, Aula Magna
Via Cesare Balbo, 14
Mini useR! in Melbourne https://www.meetup.com/fr-FR/MelbURN-Melbourne-Users-of-R-Network/events/251933078/
MelbURN (Melbourne useR group) https://www.meetup.com/fr-FR/MelbURN-Melbourne-Users-of-R-Network
July 16th, 2018
Melbourne, Australia
The document discusses the path from randomness and uncertainty to information, thermodynamics, and intelligence of an observer. It proposes that random interactions from an observable process can be integrated into information processes by an observer. The observer applies "impulse controls" that cut off portions of the observable process, converting uncertainty to certainty and extracting information. This establishes an "information path" where the observer sequentially decreases entropy and maximizes information. The information is encoded in an "information network" within the observer that exhibits logical computation and leads to the development of intelligence. In this way, the model aims to unite uncertainty, information dynamics, and the emergence of an intelligent observer.
New Data Association Technique for Target Tracking in Dense Clutter Environme... (CSCJournals)
This manuscript discusses improving the data association process for target tracking by increasing the probability of detecting valid data points (measurements obtained from a radar/sonar system) in the presence of noise. We develop a novel filtering-gate algorithm for target tracking in dense clutter environments. The algorithm is less sensitive to false alarms (clutter) within the gate than conventional approaches such as the probabilistic data association filter (PDAF), whose data association begins to fail as the false alarm rate increases or the probability of target detection drops. The new filtered-gate selection method combines a conventional threshold-based algorithm with a geometric distance measure, building on adaptive clutter suppression methods. An adaptive search based on a distance threshold is then used to detect valid filtered data points for target tracking. Simulation results demonstrate its effectiveness and better performance compared to the conventional algorithm.
This document summarizes a paper that evaluates methods for distinguishing causes from effects using only observational data. It reviews two families of causal discovery methods called Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). It presents a benchmark dataset of 100 cause-effect pairs from various domains, and reports results evaluating the performance of these methods on both real and simulated data. A key finding is that certain ANM methods can distinguish causes from effects with above-chance accuracy on real-world data, though more data is needed to draw strong conclusions. The paper also proves the consistency of the original ANM method proposed by Hoyer et al. (2009).
This document compares different methods for disaggregating low-frequency economic time series into higher-frequency data: Chow-Lin, Fernandez, and Litterman (static models) and Santos Silva and Cardoso (dynamic model). The Chow-Lin, Fernandez, and Litterman models are static, while the Santos Silva and Cardoso approach uses a dynamic regression model. The models were used to disaggregate annual private consumption expenditure data into monthly data. Results showed that all methods produced a high correlation between the original and disaggregated data at the annual level. At the monthly level, the Santos Silva and Cardoso method performed best with the lowest standard deviation, while Litterman performed worst.
Metaheuristic Optimization: Algorithm Analysis and Open Problems (Xin-She Yang)
This document analyzes metaheuristic optimization algorithms and discusses open problems in their analysis. It reviews convergence analyses that have been done for simulated annealing and particle swarm optimization. It also provides a novel convergence analysis for the firefly algorithm, showing that it can converge for certain parameter values but also exhibit chaos which can be advantageous for exploration. The document outlines the need for further mathematical analysis of convergence and efficiency in metaheuristics.
Data fusion is the process of combining data from different sources to enhance the utility of the combined product. In remote sensing, input data sources are typically massive, noisy, and have different spatial supports and sampling characteristics. We take an inferential approach to this data fusion problem: we seek to infer a true but not directly observed spatial (or spatio-temporal) field from heterogeneous inputs. We use a statistical model to make these inferences, but like all models it is at least somewhat uncertain. In this talk, we will discuss our experiences with the impacts of these uncertainties and some potential ways of addressing them.
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION (sipij)
The aim of this paper is to reformulate the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). The proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers, owing to a judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods for global image thresholding. The paper also proposes a performance criterion for comparing thresholding algorithms that does not depend on any ground-truth image; this criterion is used to compare the proposed algorithm with the most-cited global thresholding algorithms in the literature.
A new quantile-based fuzzy time series forecasting model (Cemal Ardil)
The document presents a new quantile-based fuzzy time series forecasting model. It begins by reviewing existing fuzzy time series forecasting methods and their applications. It then proposes a new method that bases forecasts on predicting future trends in the data using third-order fuzzy relationships. The method converts statistical quantiles into fuzzy quantiles using membership functions, and uses a fuzzy metric and trend forecast to calculate future values. The method is applied to TAIFEX index forecasting. Results show the proposed method compares favorably with other fuzzy time series methods in terms of complexity and forecasting accuracy.
Similar to Blind Signal Separation & Multivariate Non-Linear Dependency Measures - Peter Addo. December 15, 2014
Predicting the economic public opinions in Europe (SYRTO Project)
Predicting the economic public opinions in Europe
Maurizio Carpita, Enrico Ciavolino, Mariangela Nitti
University of Brescia & University of Salento
SYRTO Project Final Conference, Paris – February 19, 2016
Results of the SYRTO Project
Roberto Savona - Primary Coordinator of the SYRTO Project
University of Brescia
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Comment on: Risk Dynamics in the Eurozone: A New Factor Model for Sovereign C... (SYRTO Project)
Comment on: Risk Dynamics in the Eurozone: A New Factor Model for Sovereign CDS and Equity Returns, by Dellaportas, Meligkotsidou, Savona, Vrontos. Andre Lucas. Amsterdam, June 25, 2015. European Financial Management Association 2015 Annual Meetings.
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of N... (SYRTO Project)
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of Network Connectivity on Factor Exposures, Asset pricing and Portfolio Diversification” by Billio, Caporin, Panzica and Pelizzon. Arjen Siegmann. Amsterdam, June 25, 2015. European Financial Management Association 2015 Annual Meetings.
Spillover dynamics for systemic risk measurement using spatial financial time... (SYRTO Project)
Spillover dynamics for systemic risk measurement using spatial financial time series models. Julia Schaumburg, Andre Lucas, Siem Jan Koopman, and Francisco Blasques. ESEM, Toulouse, August 25-29, 2014.
http://www.eea-esem.com/eea-esem/2014/prog/viewpaper.asp?pid=1044
Sovereign credit risk, liquidity, and the ECB intervention: deus ex machina? ... (SYRTO Project)
Sovereign credit risk, liquidity, and the ECB intervention: deus ex machina? - Loriana Pelizzon, Marti Subrahmanyam, Davide Tomio, Jun Uno. June 5, 2014. First International Conference on Sovereign Bond Markets.
Measuring the behavioral component of financial fluctuation. An analysis bas... (SYRTO Project)
This document summarizes a study that measures the behavioral component of financial market fluctuations using a model with two types of investors - rational investors who maximize expected utility, and behavioral investors who have S-shaped utility functions. The model blends the asset selections of these two investor types using a Bayesian approach, with the rational investor preferences as the prior and behavioral investor preferences as the conditional. An empirical analysis is conducted using the S&P 500 to estimate the optimal weighting parameter between the two investor types that maximizes past cumulative returns.
Measuring the behavioral component of financial fluctuation: an analysis bas... (SYRTO Project)
Measuring the behavioral component of financial fluctuation: an analysis based on the S&P500 - Caporin M., Corazzini L., Costola M. June, 27 2013. IFABS 2013 - Posters session.
The microstructure of the European sovereign bond market. Loriana Pelizzon. ... (SYRTO Project)
This study analyzes the microstructure of the European sovereign bond market during the Eurozone crisis between 2011-2012. It finds that credit risk, as measured by CDS spreads, is non-linearly related to market liquidity, as higher credit risk leads to much greater illiquidity. Market makers temporarily stopped participating when CDS spreads widened significantly. ECB interventions successfully reduced solvency concerns and improved liquidity. The analysis uses a unique high-frequency dataset of order and trade data from the Italian sovereign bond market, the largest in the Eurozone, to examine changes in liquidity measures like bid-ask spreads and quote quantities around periods of financial stress.
Time-Varying Temporal Dependence in Autoregressive Models - Francisco Blasques... (SYRTO Project)
Time-Varying Temporal Dependence in Autoregressive Models - Francisco Blasques, Siem Jan Koopman, Andre Lucas. June 2014. International Association for Applied Econometrics Annual Conference
OJP data from firms like Vicinity Jobs have emerged as a complement to traditional sources of labour demand data, such as the Job Vacancy and Wages Survey (JVWS). Ibrahim Abuallail, PhD Candidate, University of Ottawa, presented research relating to bias in OJPs and a proposed approach to effectively adjust OJP data to complement existing official data (such as from the JVWS) and improve the measurement of labour demand.
1. Elemental Economics - Introduction to mining.pdf (Neal Brewster)
After this first lecture you should: understand the nature of mining; have an awareness of the industry's boundaries, corporate structure and size; appreciate the complex motivations and objectives of the industry's various participants; and know how mineral reserves are defined and estimated, and how they evolve over time.
2. Elemental Economics - Mineral demand.pdf (Neal Brewster)
After this second lecture you should be able to: explain the main determinants of demand for any mineral product, and their relative importance; recognise and explain how demand for any product is likely to change with economic activity; recognise and explain the roles of technology and relative prices in influencing demand; and explain the differences between the rates of growth of demand for different products.
Seminar: Gender Board Diversity through Ownership Networks (GRAPE)
Seminar on gender diversity spillovers through ownership networks at FAME|GRAPE, presenting novel research: studies in economics and management using econometric methods.
Lecture slide titled Fraud Risk Mitigation, Webinar Lecture Delivered at the Society for West African Internal Audit Practitioners (SWAIAP) on Wednesday, November 8, 2023.
Blind Signal Separation & Multivariate Non-Linear Dependency Measures - Peter Addo. December 15, 2014
1. Blind Signal Separation & Multivariate Non-Linear Dependency Measures
SYstemic Risk TOmography: Signals, Measurements, Transmission Channels, and Policy Interventions
Peter Martey Addo (Speaker) (Centre National de la Recherche Scientifique (CNRS), Université Paris 1 Panthéon-Sorbonne)
Hayette Gatfaoui (NEOMA Business School)
Philippe de Peretti (Université Paris 1 Panthéon-Sorbonne)
CSRA research meeting, December 15, 2014
2. Outline: a package of two methodologies
1. Part I: Causal dependencies in multivariate time series: time series graphs & an information-theoretic measure
2. Part II: Tracking dependency in large multivariate financial systems: coupling & decoupling information
3. Part I: Causal dependencies in multivariate time series. Time series graphs & information-theoretic measure
Part I: Multivariate Dependency Measures
"The kiss of information theory that captures systemic risk tomography" (joint with Philippe de Peretti)
- Detection and quantification of causal dependencies in multivariate time series.
- How do we create a formal measure of systemic risk that adequately captures complex linkages in the financial system?
- Granger causality & transfer entropy?
4. Introduction
Drawbacks of Granger causality & transfer entropy
- Transfer entropy is the information-theoretic analogue of Granger causality; it reduces to Granger causality for vector autoregressive processes.
- It is advantageous for the analysis of non-linear signals, where the model assumption of Granger causality might not hold.
- However, transfer entropy is not uniquely determined by the interaction of the two components alone; it depends on misleading effects such as autodependency and interaction with other processes.
- It requires arbitrary truncation during estimation, and it usually requires more samples for accurate estimation.
- It can lead to false interpretation, since it is not lag-specific.
5. Introduction
Our offer
- A novel, model-free, information-theoretic approach to understanding systemic risk.
- Detects and quantifies causal dependencies from multivariate time series.
- Gives similar scores to equally noisy dependencies.
- Is uniquely determined by the interaction of the two components alone, autonomously of their interaction with the remaining process.
- Excludes the misleading influence of autodependency within a process.
- Is lag-specific, enhancing interpretation.
- Only requires that the multivariate time series be stationary.
6. Overview of approach
Methodology
Let X be a multivariate time series with a set of subprocesses V at each time $t \in \mathbb{Z}$, and let directional links be defined in E. Then

$$G = (V \times \mathbb{Z},\, E)$$

is the time series graph of X, where the set of nodes in the graph is made up of V.
- Like graphical models (Lauritzen 1996), time series graphs (TSGs) are based on the concept of conditional independence.
- Note that the time-dependence in the time series is used to define the directional links in the graph.
7. Overview of approach
Definition (Lag-specific directed link)
We say that the nodes $A_{t-\tau} \in G$ and $B_t \in G$ are connected by a lag-specific directed link "$A_{t-\tau} \xrightarrow{\tau} B_t$" pointing forward in time if and only if $\tau > 0$ and

$$I^{\mathrm{link}}_{A \to B}(\tau) \equiv I\big(A_{t-\tau};\, B_t \,\big|\, X^-_t \setminus \{A_{t-\tau}\}\big) > 0. \qquad (1)$$

Thus, $A_{t-\tau}$ and $B_t$ are connected if they are not independent conditionally on the past of the whole process excluding $\{A_{t-\tau}\}$ (the exclusion denoted by $\setminus$), which implies a lag-specific causality with respect to X. $I(\,\cdot\,;\,\cdot\,|\,\cdot\,)$ denotes conditional mutual information.
If $A \neq B$, then "$A_{t-\tau} \xrightarrow{\tau} B_t$" represents a coupling at lag $\tau$; an autodependency at lag $\tau$ corresponds to $A = B$.
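The link criterion in Eq. (1) needs an estimator of conditional mutual information, which the slides do not prescribe. As a minimal sketch, assuming jointly Gaussian series (for which $I(X;Y \mid Z) = -\tfrac{1}{2}\ln(1-\rho^2_{XY\cdot Z})$ holds exactly) and truncating the infinite past $X^-_t$ to a finite horizon max_lag, a link test could look like the following; the function and parameter names are illustrative choices, not from the talk.

```python
import numpy as np

def partial_corr(x, y, Z):
    """Partial correlation of x and y given the columns of Z (shape (n, k))."""
    Z1 = np.column_stack([np.ones(len(x)), Z])           # add an intercept
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]  # residual of x | Z
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]  # residual of y | Z
    return np.corrcoef(rx, ry)[0, 1]

def gaussian_cmi(x, y, Z):
    """I(X; Y | Z) for jointly Gaussian data, via the partial correlation."""
    r = partial_corr(x, y, Z)
    return -0.5 * np.log(1.0 - r ** 2)

def link_cmi(X, a, b, tau, max_lag):
    """Estimate I(A_{t-tau}; B_t | past of X excluding A_{t-tau}), as in
    Eq. (1), truncating the past to lags 1..max_lag of every component
    (requires tau <= max_lag). X has shape (T, d)."""
    T, d = X.shape
    rows = list(range(max_lag, T))
    x = X[[t - tau for t in rows], a]                    # A_{t-tau}
    y = X[rows, b]                                       # B_t
    cond = [X[[t - l for t in rows], j]                  # truncated past...
            for j in range(d) for l in range(1, max_lag + 1)
            if not (j == a and l == tau)]                # ...minus A_{t-tau}
    return gaussian_cmi(x, y, np.column_stack(cond))
```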
8. Overview of approach
Definition (Undirected contemporaneous link)
The nodes $A_t \in G$ and $B_t \in G$ are connected by an undirected contemporaneous link "$A_t - B_t$" if and only if

$$I^{\mathrm{link}}_{A - B} \equiv I\big(A_t;\, B_t \,\big|\, X^-_{t+1} \setminus \{A_t, B_t\}\big) > 0. \qquad (2)$$

Definition (Parents and neighbors of subprocesses)
Given the nodes $A_t \in G$ and $B_t \in G$, the parents $\mathcal{P}_{B_t}$ and neighbors $\mathcal{N}_{B_t}$ of node $B_t$ are defined as

$$\mathcal{P}_{B_t} \equiv \{A_{t-\tau} : A \in X,\ \tau > 0,\ A_{t-\tau} \to B_t\} \qquad (3)$$
$$\mathcal{N}_{B_t} \equiv \{A_t : A \in X,\ A_t - B_t\} \qquad (4)$$

Definition (Causal Markov Condition)
Any node $B_t \in G$ in the time series graph is conditionally independent of $X^-_t \setminus \mathcal{P}_{B_t}$ given its parents $\mathcal{P}_{B_t}$.
9. Overview of approach
Definition (Momentary Information Transfer (MIT) links)
For two subprocesses A, B of a stationary multivariate discrete-time process X with parents $\mathcal{P}_{A_t}$ and $\mathcal{P}_{B_t}$ in the associated time series graph and $\tau > 0$, the general information-theoretic measure between nodes $A_{t-\tau}$ and $B_t$ is given by

$$I^{\mathrm{MIT}}_{A \to B}(\tau) \equiv I\big(A_{t-\tau};\, B_t \,\big|\, \mathcal{P}_{B_t} \setminus \{A_{t-\tau}\},\ \mathcal{P}_{A_{t-\tau}}\big) = H\big(B_t \,\big|\, \mathcal{P}_{B_t} \setminus \{A_{t-\tau}\},\ \mathcal{P}_{A_{t-\tau}}\big) - H\big(B_t \,\big|\, \mathcal{P}_{B_t},\ \mathcal{P}_{A_{t-\tau}}\big) > 0,$$

and the contemporaneous MIT is defined by

$$I^{\mathrm{MIT}}_{A - B} \equiv I\big(A_t;\, B_t \,\big|\, \mathcal{P}_{B_t},\ \mathcal{P}_{A_t},\ \mathcal{N}_{A_t} \setminus \{B_t\},\ \mathcal{N}_{B_t} \setminus \{A_t\},\ \mathcal{P}_{\mathcal{N}_{A_t} \setminus \{B_t\}},\ \mathcal{P}_{\mathcal{N}_{B_t} \setminus \{A_t\}}\big), \qquad (5)$$

where $H(X)$ is Shannon's entropy and $H(X \mid Y)$ denotes conditional Shannon entropy.
The parents of all subprocesses in X, together with the contemporaneous links, form the time series graph.
10. Overview of approach
Definition (Coupling strength)
The partial-correlation MIT measure, denoted $\rho^{\mathrm{MIT}}$ and associated with the MIT measure above, for the strength of the coupling mechanism between $A_{t-\tau}$ and $B_t$, is given by

$$\rho^{\mathrm{MIT}}_{A \to B}(\tau) \equiv \rho\big(A_{t-\tau};\, B_t \,\big|\, \mathcal{P}_{B_t} \setminus \{A_{t-\tau}\},\ \mathcal{P}_{A_{t-\tau}}\big). \qquad (6)$$

- The measure $\rho^{\mathrm{MIT}}$ quantifies how much the variability in A at the exact lag $\tau$ directly influences B, irrespective of the past of $A_{t-\tau}$ and $B_t$.
- $\rho^{\mathrm{MIT}}$ is the cross-correlation of the residuals after $A_{t-\tau}$ and $B_t$ have been regressed on the parents of both $A_{t-\tau}$ and $B_t$.
- The contemporaneous MIT in the linear case is equivalent to the partial correlation of the errors after regressing each process on its parents.
- Unlike classical statistics, interactions in the information-theoretic framework are viewed as transfers of information; the approach is thus model-free.
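The residual cross-correlation characterisation above makes $\rho^{\mathrm{MIT}}$ straightforward to compute once the parent sets are known. A minimal sketch of the linear case (the array layout of the parent sets is an assumed convention; estimating the parents themselves is the Step 1 problem of the next slide):

```python
import numpy as np

def rho_mit(a_lagged, b_now, parents_b_minus_a, parents_a):
    """rho^MIT of Eq. (6) in the linear case: the correlation of the
    residuals of A_{t-tau} and B_t after regressing both on the parents of
    B_t (excluding A_{t-tau}) and the parents of A_{t-tau}. All arguments
    are arrays aligned on the same time index t."""
    Z = np.column_stack([np.ones(len(b_now)), parents_b_minus_a, parents_a])
    beta_a = np.linalg.lstsq(Z, a_lagged, rcond=None)[0]
    beta_b = np.linalg.lstsq(Z, b_now, rcond=None)[0]
    return np.corrcoef(a_lagged - Z @ beta_a, b_now - Z @ beta_b)[0, 1]
```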
11. Overview of approach
Estimation: summary
- In the first step, the PC algorithm (Spirtes et al. [SSCR 1991]) is used to estimate the parents of each process, i.e., as a variable-selection method. Unlike graphical models, only undirected links are inferred, and the second step of the PC algorithm is omitted. This first step determines the existence or absence of a link, which also provides useful information on the causality between lagged components of the multivariate process.
- In the second step, MIT is used and all possible links are tested again. In this step, the problem of serial dependencies is drastically reduced using MIT (Runge et al. [Physical Review E, 2012]).
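To make the two-step structure concrete, here is a deliberately simplified caricature of Step 1 as a greedy forward selection of parents; the actual PC algorithm iterates over conditioning subsets and is more careful than this single pass. It reuses gaussian_cmi from the earlier sketch, and the fixed threshold thresh stands in for a proper significance test.

```python
import numpy as np

def select_parents(X, b, max_lag, thresh):
    """Greedy stand-in for Step 1: keep each lagged candidate (component j,
    lag l) whose conditional mutual information with B_t, given the parents
    selected so far, exceeds thresh. Returns a list of (j, l) pairs."""
    T, d = X.shape
    rows = list(range(max_lag, T))
    y = X[rows, b]
    parents, cond = [], []
    for j in range(d):
        for l in range(1, max_lag + 1):
            x = X[[t - l for t in rows], j]
            Z = np.column_stack(cond) if cond else np.zeros((len(rows), 0))
            if gaussian_cmi(x, y, Z) > thresh:
                parents.append((j, l))
                cond.append(x)
    return parents
```

Step 2 would then re-test each surviving link with the MIT conditioning sets built from these parent lists, for instance via rho_mit above.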
12. Overview of approach
Simulation exercise
Consider 1000 simulated points of a stationary multivariate autoregressive process made up of four subprocesses $\{X_t, Y_t, Z_t, W_t\}$ defined by

$$X_t = a X_{t-1} + c Z_{t-4} + \varepsilon_x \qquad (7)$$
$$Y_t = k X_{t-1} + h Y_{t-1} + \varepsilon_y \qquad (8)$$
$$Z_t = d Y_{t-2} + b Z_{t-1} + f W_{t-1} + \varepsilon_z \qquad (9)$$
$$W_t = e Y_{t-3} + g W_{t-1} + \varepsilon_w \qquad (10)$$

and the innovation covariance matrix given by

$$\Sigma_\varepsilon = \begin{pmatrix} 1 & 0 & d & 0 \\ 0 & 1 & 0 & d \\ d & 0 & 1 & 0 \\ 0 & d & 0 & 1 \end{pmatrix},$$

where $a = 0.6$, $b = 0.4$, $c = 0.3$, $d = -0.3$, $e = -0.6$, $f = 0.2$, $g = 0.4$, $k = 0.3$, and $h = 0.6$.
Notice that the lagged causal chain for this process is $X \xrightarrow{1} Y \xrightarrow{2} Z$ with feedback $Z \xrightarrow{4} X$, and $Y \xrightarrow{3} W \xrightarrow{1} Z$, plus contemporaneous links $X - Z$ and $Y - W$.
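Equations (7)-(10) and $\Sigma_\varepsilon$ fully specify the data-generating process, so the exercise is easy to reproduce. A sketch (the random seed, burn-in length, and variable names are my own choices):

```python
import numpy as np

# Coefficients from Eqs. (7)-(10)
a, b, c, d = 0.6, 0.4, 0.3, -0.3
e, f, g, k, h = -0.6, 0.2, 0.4, 0.3, 0.6

Sigma = np.array([[1, 0, d, 0],
                  [0, 1, 0, d],
                  [d, 0, 1, 0],
                  [0, d, 0, 1]], dtype=float)   # innovation covariance

T, burn, p = 1000, 200, 4                       # sample size, burn-in, max lag
rng = np.random.default_rng(0)
eps = rng.multivariate_normal(np.zeros(4), Sigma, size=T + burn + p)

X = np.zeros((T + burn + p, 4))                 # columns: X, Y, Z, W
for t in range(p, T + burn + p):
    X[t, 0] = a * X[t-1, 0] + c * X[t-4, 2] + eps[t, 0]                   # (7)
    X[t, 1] = k * X[t-1, 0] + h * X[t-1, 1] + eps[t, 1]                   # (8)
    X[t, 2] = d * X[t-2, 1] + b * X[t-1, 2] + f * X[t-1, 3] + eps[t, 2]   # (9)
    X[t, 3] = e * X[t-3, 1] + g * X[t-1, 3] + eps[t, 3]                   # (10)

X = X[burn + p:]                                # discard the burn-in
```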
13. Overview of approach
[Figure: MIT plot for the simulated process, with panels for each pair of the subprocesses X, Y, Z, W over lags $\tau = 0, \ldots, 6$; colour scales show the auto-MIT and cross-MIT values of $\rho^{\mathrm{MIT}}$, and the significant lags for the simulated process are marked.]
The results indicate the lagged causal chain $X \xrightarrow{1} Y \xrightarrow{2} Z$ with feedback $Z \xrightarrow{4} X$, and $Y \xrightarrow{3} W \xrightarrow{1} Z$, plus contemporaneous links $X - Z$ and $Y - W$.
14. Overview of approach
Research in progress
- Develop a nonlinear equivalent of the coupling-strength measure.
- Empirical application: causal dependencies in financial institutions.
- Further applications: sovereign CDS, credit risk, etc.
15. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
Part II: Blind Signal Separation
- Track dependency in large multivariate financial systems.
- Study the time-varying information coupling and decoupling.
- Deduce measures of excess dependency, frailty of the market, or multivariate financial risk.
- Deduce early-warning indicators.
- Build dependency measures for multivariate systems that exhibit: rapidly changing dynamics (i.e., a high degree of non-linearity), non-Gaussianity, and non-stationarity.
16. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
Independent Component Analysis (ICA)
Assume that the dynamics of the multivariate signal are driven by latent independent non-Gaussian signals. Let $\{X_t\}_{t=1}^{T}$ be a multivariate process with $d \in \mathbb{Z}^+$ components such that

$$X_t = A S_t \qquad (11)$$

where $X_t$ is the observed process and $S_t$ are the unobserved independent signals. ICA recovers the signals $S_t$, as well as the unmixing matrix $W$:

$$S_t = W X_t.$$

Thus $W = A^{-1}$ will contain all relevant information about the dependency structure of the system, revealed by its off-diagonal elements.
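As a sketch of how $W$ might be estimated and summarised in practice: the talk does not commit to an algorithm, so FastICA from scikit-learn and the scalar coupling index below are illustrative choices of mine. Since ICA identifies $W$ only up to row permutation, sign and scale, the rows are greedily aligned before reading off the off-diagonal mass.

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmixing_matrix(X):
    """Estimate W in S_t = W X_t with FastICA; X has shape (T, d)."""
    ica = FastICA(whiten="unit-variance", random_state=0)
    ica.fit(X)
    return ica.components_               # rows are estimated unmixing vectors

def align_rows(W):
    """Greedily permute rows so each row's largest |entry| sits on the
    diagonal (ICA fixes W only up to row permutation, sign and scale)."""
    d = W.shape[0]
    perm, used = [-1] * d, set()
    for i in np.argsort(-np.abs(W).max(axis=1)):         # strongest rows first
        j = max((c for c in range(d) if c not in used),
                key=lambda c: abs(W[i, c]))
        perm[j] = i
        used.add(j)
    return W[perm]

def coupling_index(W):
    """Toy scalar summary of information coupling: relative mass of the
    off-diagonal entries of the aligned W (0 would mean full decoupling).
    This particular normalisation is an illustrative choice."""
    A = np.abs(align_rows(W))
    return (A.sum() - np.trace(A)) / A.sum()
```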
17. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
- The idea is to build information-coupling measures entirely based on W: there is full information decoupling if W is the identity matrix, and information coupling between some components if some off-diagonal elements in its rows are non-zero.
- Given these measures, use rolling windows to study how the coupling information evolves over time, and use it to develop new risk measures or early warnings.
- The mixing process A might change with time: a dynamic mixing process can arise due to regime changes or structural changes.
- Note that the mixing of the latent signals does not need to be linear.
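A rolling-window version of the same idea, reusing unmixing_matrix and coupling_index from the previous sketch (the window and step sizes are tuning choices):

```python
import numpy as np

def rolling_coupling(X, window=250, step=5):
    """Re-estimate W on each window and track the coupling index over time,
    giving a crude time-varying coupling/decoupling indicator."""
    idx = []
    for start in range(0, len(X) - window + 1, step):
        W = unmixing_matrix(X[start:start + window])
        idx.append(coupling_index(W))
    return np.array(idx)
```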
18. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
Research continues
"If you're not prepared to be wrong, you'll never come up with anything original." (Sir Ken Robinson at TED 2006)
19. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
EU's Seventh Framework Programme (SYRTO Project)
This project has received funding from the European Union's Seventh Framework Programme (FP7-SSH/2007-2013) for research, technological development and demonstration under grant agreement no. 320270 (SYRTO).
20. Part II: Tracking dependency in large multivariate financial systems. Coupling & decoupling information
Thank you for your attention
21. This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement n° 320270.
www.syrtoproject.eu
This document reflects only the author's views. The European Union is not liable for any use that may be made of the information contained therein.