Scalable inference for a full multivariate stochastic volatility model
P. Dellaportas, A. Plataniotis and M. Titsias — UCL (London), AUEB (Athens), AUEB (Athens)
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time Series Models — SYRTO Project
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time Series Models. Andre Lucas. Amsterdam, June 25, 2015. European Financial Management Association 2015 Annual Meetings.
Clustering in dynamic causal networks as a measure of systemic risk on the euro zone — SYRTO Project
Clustering in dynamic causal networks as a measure of systemic risk on the euro zone
M. Billio, H. Gatfaoui, L. Frattarolo, P. de Peretti
IESEG / Université Paris 1 Panthéon-Sorbonne / University Ca' Foscari
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Network and risk spillovers: a multivariate GARCH perspective — SYRTO Project
M. Billio, M. Caporin, L. Frattarolo, L. Pelizzon: “Network and risk spillovers: a multivariate GARCH perspective”.
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Entropy and systemic risk measures
M. Billio, R. Casarin, M. Costola, A. Pasqualini
Ca’ Foscari Venice University
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
This document outlines the agenda for the second part of a lecture on Approximate Bayesian Computation (ABC). It begins with a discussion of simulation-based methods in econometrics like simulated method of moments. Next, it discusses the genetic origins and applications of ABC in population genetics, including coalescent theory. The document then covers using indirect inference to provide summary statistics for ABC and estimating demographic parameters from genetic data when the likelihood is intractable.
Quarterly economic accounts: methodological advances and prospects for innovation
Seminar
Rome, 21 April 2016
Istat, Aula Magna
Via Cesare Balbo, 14
Approximate Bayesian Computation (ABC) can be used as a new empirical Bayes approach when the likelihood function is not available in closed form. ABC replaces the intractable likelihood with a non-parametric approximation and summarizes data with insufficient statistics. ABC has opened opportunities for new inference machines that are legitimate but different from classical Bayesian approaches, raising questions about how closely ABC relates to Bayesian inference. ABC originated in population genetics where likelihoods are often intractable, and population geneticists have contributed significantly to ABC methodology.
Approximate Bayesian computation (ABC) is a computational technique for Bayesian inference when the likelihood function is intractable or impossible to compute directly. ABC approximates the likelihood by simulating data under different parameter values and comparing simulated and observed data using summary statistics. ABC produces a parameter sample without evaluating the full likelihood function, thus allowing Bayesian inference when likelihoods are unavailable or difficult to compute.
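The simulate-compare-accept loop just described can be sketched as a minimal rejection-ABC sampler. The model (a normal mean with a flat prior), the sample-mean summary statistic, and the tolerance are all illustrative assumptions; here the likelihood is actually tractable, which is what lets us sanity-check the answer.

```python
# Rejection-ABC sketch: infer the mean mu of Normal(mu, 1) data using the
# sample mean as summary statistic. Illustrative setup, not from the slides.
import random
import statistics

random.seed(1)
observed = [random.gauss(2.0, 1.0) for _ in range(50)]
s_obs = statistics.mean(observed)      # summary statistic of the observed data

accepted = []
epsilon = 0.1                          # tolerance on the summary distance
for _ in range(20000):
    mu = random.uniform(-5, 5)                        # draw from a flat prior
    sim = [random.gauss(mu, 1.0) for _ in range(50)]  # simulate a data set
    if abs(statistics.mean(sim) - s_obs) < epsilon:   # compare summaries
        accepted.append(mu)                           # keep mu if they match

posterior_mean = statistics.mean(accepted)
print(len(accepted), round(posterior_mean, 2))
```

The accepted draws approximate the posterior of `mu` given the summary statistic: no likelihood is ever evaluated, only simulated.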
This document discusses computational issues that arise in Bayesian statistics. It provides examples of latent variable models like mixture models that make computation difficult due to the large number of terms that must be calculated. It also discusses time series models like the AR(p) and MA(q) models, noting that they have complex parameter spaces due to stationarity constraints. The document outlines the Metropolis-Hastings algorithm, Gibbs sampler, and other methods like Population Monte Carlo and Approximate Bayesian Computation that can help address these computational challenges.
This document provides an overview of a course on forecasting time series using state space methods and unobserved components models. The course covers introduction to univariate component models, state space methods, forecasting different time series components, and exercises for practical forecasting applications with examples. Key topics include white noise processes, random walk processes, the local level model, and simulated data from a local level model.
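The local level model named above is easy to simulate: a random-walk level observed through white noise. The variances below are illustrative choices, not values from the course material.

```python
# Local level model: y_t = mu_t + eps_t, with mu_t = mu_{t-1} + eta_t.
import random

random.seed(42)
n, sigma_eps, sigma_eta = 200, 1.0, 0.3

mu = 0.0
level, obs = [], []
for _ in range(n):
    mu += random.gauss(0.0, sigma_eta)             # state equation: random walk
    level.append(mu)
    obs.append(mu + random.gauss(0.0, sigma_eps))  # observation equation
print(len(obs))
```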
This document describes a collapsed dynamic factor analysis model for macroeconomic forecasting. It summarizes that multivariate time series models can more accurately capture relationships between economic variables compared to univariate models. The document then presents a collapsed dynamic factor model that relates a target time series (yt) to unobserved dynamic factors (Ft) estimated from related macroeconomic data (gt). Out-of-sample forecasting experiments on US personal income and industrial production data demonstrate the model achieves more accurate point forecasts than univariate benchmarks like random walk or AR(2) models.
This document discusses portfolio optimization and the role of financial correlations. It begins by outlining the importance of considering financial correlations when constructing optimal investment portfolios. Various methods for portfolio optimization using correlation and covariance matrices are described, including Markowitz theory and the efficient frontier. However, it is noted that empirical correlation matrices are often dominated by noise, making direct application of these methods problematic. The document then discusses various noise filtering and cleaning techniques that can be applied to empirical correlation matrices to address this issue, including eigenvalue filtering, power mapping, and other methods.
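One common cleaning step mentioned above, eigenvalue filtering, compares the eigenvalues of an empirical correlation matrix with the Marchenko-Pastur bulk expected from pure noise and flattens the in-bulk ones. The sample eigenvalues below are made-up numbers, and replacing the noise eigenvalues by their average is one convention among several.

```python
# Marchenko-Pastur eigenvalue filtering sketch (illustrative numbers).
import math

def mp_bounds(n_assets, n_obs):
    """Edges of the Marchenko-Pastur noise bulk for q = N/T."""
    q = n_assets / n_obs
    return (1 - math.sqrt(q)) ** 2, (1 + math.sqrt(q)) ** 2

def clean_eigenvalues(eigs, n_assets, n_obs):
    """Keep out-of-bulk eigenvalues (signal), average the in-bulk ones (noise)."""
    lo, hi = mp_bounds(n_assets, n_obs)
    noise = [e for e in eigs if lo <= e <= hi]
    avg = sum(noise) / len(noise) if noise else 0.0
    return [avg if lo <= e <= hi else e for e in eigs]

eigs = [9.5, 1.8, 1.1, 0.9, 0.7]       # one "market mode" plus a noisy bulk
cleaned = clean_eigenvalues(eigs, n_assets=5, n_obs=50)
print(cleaned)
```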
The comparative study of finite difference method and Monte Carlo method f… — Alexander Decker
This document compares the finite difference method and Monte Carlo method for pricing European options. It provides an overview of these two primary numerical methods used in financial modeling. The Monte Carlo method simulates asset price paths and averages discounted payoffs to estimate option value. It is well-suited for path-dependent options but converges slower than finite difference. The finite difference method solves the Black-Scholes PDE by approximating it on a grid. Specifically, it discusses the Crank-Nicolson scheme, which is unconditionally stable and converges faster than Monte Carlo for standard options.
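The Monte Carlo side of that comparison fits in a few lines: simulate terminal prices under Black-Scholes dynamics, average the discounted payoffs, and check against the closed-form price. Parameters are illustrative; this is a sketch of the method, not the paper's implementation.

```python
# Monte Carlo pricing of a European call, checked against Black-Scholes.
import math
import random

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes call price (via the error function)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def mc_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Average discounted payoff over simulated terminal prices."""
    rng = random.Random(seed)
    drift, vol = (r - 0.5 * sigma**2) * t, sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0, 1))
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
estimate = mc_call(100, 100, 0.05, 0.2, 1.0, n_paths=200_000)
print(round(exact, 2), round(estimate, 2))
```

The Monte Carlo error here shrinks only as O(n^-1/2), which is the slow convergence the document contrasts with Crank-Nicolson.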
The document discusses Approximate Bayesian Computation (ABC), a computational technique for Bayesian inference when the likelihood function is intractable. ABC allows sampling from the likelihood and making inferences based on simulated data without calculating the actual likelihood. The technique originated in population genetics models where likelihoods for genetic polymorphism data cannot be calculated in closed form. ABC is presented as both an inference machine with its own legitimacy compared to classical Bayesian approaches, as well as a way to address computational issues with intractable likelihoods.
Sequential quasi-Monte Carlo (SQMC) is a quasi-Monte Carlo (QMC) version of sequential Monte Carlo (or particle filtering), a popular class of Monte Carlo techniques used to carry out inference in state space models. In this talk I will first review the SQMC methodology as well as some theoretical results. Although SQMC converges faster than the usual Monte Carlo error rate its performance deteriorates quickly as the dimension of the hidden variable increases. However, I will show with an example that SQMC may perform well for some "high" dimensional problems. I will conclude this talk with some open problems and potential applications of SQMC in complicated settings.
This document discusses market states and correlations between financial time series. It begins by introducing the Epps effect, where measured correlations decrease with smaller time intervals, and describes compensating for this using asynchronity and tick size corrections. Non-Gaussian dependencies are then covered, showing correlations can misrepresent relationships. Market states are identified using similarity measures between correlation matrices over time. Eight distinct market states are found for the US market between 1992-2010 based on industrial sector correlations.
1. The document provides guidelines and syllabus for mathematics classes 11-12 in India.
2. It outlines 5 units of study for class 11 including sets and functions, algebra, coordinate geometry, calculus, and mathematical reasoning.
3. It outlines 6 units of study for class 12 including relations and functions, algebra, calculus, vectors, three-dimensional geometry and linear programming.
Computational Tools and Techniques for Numerical Macro-Financial Modeling — Victor Zhorin
A set of numerical tools used to create and analyze non-linear macroeconomic models with a financial sector is discussed. New methods and results for computing Hansen-Scheinkman-Borovička shock-price and shock-exposure elasticities for a variety of models are presented.
Spectral approximation technology (chebfun):
- numerical computation with Chebyshev functions; piece-wise smooth functions
- breakpoint detection
- rootfinding
- functions with singularities
- fast adaptive quadratures; continuous QR, SVD, least squares; linear operators
- solution of linear and non-linear ODEs
- Fréchet derivatives via automatic differentiation; PDEs in one space variable plus time
Stochastic processes:
- (quasi) Monte Carlo simulations, polynomial expansion (gPC), finite differences (FD), non-linear IRFs
- Borovička-Hansen-Scheinkman shock-exposure and shock-price elasticities; Malliavin derivatives
Many states (curing the curse of dimensionality):
- low-rank tensor decomposition
- sparse Smolyak grids
This document discusses state space methods for time series analysis and forecasting. It begins by introducing the basic state space model framework, which represents a time series using unobserved states that evolve over time according to a state equation and generate observations according to an observation equation. The document then provides examples of how various time series models, such as regression models with time-varying coefficients, ARMA models, and univariate component models can be expressed as state space models. Finally, it introduces the Kalman filter algorithm, which provides a recursive means of estimating the unobserved states from the observations.
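For the local level model, the Kalman recursion described above reduces to a few scalar predict/update lines. The data and noise variances below are simulated illustrative values; the check is that filtering recovers the hidden state better than the raw observations do.

```python
# Scalar Kalman filter for the local level model (illustrative simulation).
import random

random.seed(7)
var_eta, var_eps = 0.1, 1.0           # state and observation noise variances

# simulate: the state mu_t is a random walk, y_t observes it with noise
mu, ys, states = 0.0, [], []
for _ in range(300):
    mu += random.gauss(0, var_eta ** 0.5)
    states.append(mu)
    ys.append(mu + random.gauss(0, var_eps ** 0.5))

# filter: predict then update the state mean a and variance p
a, p = 0.0, 10.0                      # vague initial state
filtered = []
for y in ys:
    p += var_eta                      # predict: variance grows by state noise
    k = p / (p + var_eps)             # Kalman gain
    a += k * (y - a)                  # update mean with the innovation
    p *= (1 - k)                      # update variance
    filtered.append(a)

rmse_raw = (sum((y - s) ** 2 for y, s in zip(ys, states)) / 300) ** 0.5
rmse_filt = (sum((f - s) ** 2 for f, s in zip(filtered, states)) / 300) ** 0.5
print(round(rmse_raw, 2), round(rmse_filt, 2))
```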
On clustering financial time series - A need for distances between dependent … — Gautier Marti
This document discusses clustering financial time series data using distances between dependent random variables. It notes that traditional clustering based only on correlation can lead to spurious clusters, as correlation does not fully capture dependence. The paper proposes a distance measure that combines information about both the correlation and distribution of random variables. It tests this distance measure on synthetic data from a hierarchical block model and real credit default swap market data, finding it performs better than distances based only on correlation or distribution individually. Some open questions are also discussed, such as how to select the optimal weighting of correlation vs distribution information.
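The idea of mixing correlation information with distribution information can be sketched as a convex combination d = theta·d_corr + (1-theta)·d_dist. The specific forms below (d_corr from Pearson correlation, d_dist as an L1 gap between sorted samples) and theta = 0.5 are our assumptions for illustration, not the paper's exact recipe.

```python
# A toy dependence+distribution distance between two return series.
import random

random.seed(13)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

def dist(xs, ys, theta=0.5):
    d_corr = ((1 - pearson(xs, ys)) / 2) ** 0.5   # dependence term, in [0, 1]
    qx, qy = sorted(xs), sorted(ys)               # compare marginal quantiles
    d_dist = sum(abs(a - b) for a, b in zip(qx, qy)) / len(xs)
    return theta * d_corr + (1 - theta) * d_dist

# two strongly dependent series with similar marginals, and one independent
z = [random.gauss(0, 1) for _ in range(500)]
x = [v + 0.1 * random.gauss(0, 1) for v in z]
y = [random.gauss(0, 1) for _ in range(500)]
print(round(dist(z, x), 3), round(dist(z, y), 3))
```

A correlation-only distance would already separate these two pairs; the distribution term matters when series are correlated but have very different marginals.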
The document discusses adaptive Markov chain Monte Carlo (MCMC) for Bayesian inference of spatial autologistic models. It notes that standard MCMC cannot be implemented when the likelihood function is unavailable or the completion step is too costly due to high dimensionality. Adaptive MCMC is proposed as an alternative that bypasses computation of the normalizing constant. Questions are raised about how to combine adaptations of the proposal distribution, tuning parameters, and sample sizes to improve the method.
Short course at CIRM, Bayesian Masterclass, October 2018 — Christian Robert
Markov Chain Monte Carlo (MCMC) methods generate dependent samples from a target distribution using a Markov chain. The Metropolis-Hastings algorithm constructs a Markov chain with a desired stationary distribution by proposing moves to new states and accepting or rejecting them probabilistically. The algorithm is used to approximate integrals that are difficult to compute directly. It has been shown to converge to the target distribution as the number of iterations increases.
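The propose/accept-or-reject recursion can be made concrete with a random-walk Metropolis chain targeting a standard normal. The step size and chain length are illustrative choices, and the target is chosen so the output is easy to check.

```python
# Random-walk Metropolis sketch targeting N(0, 1).
import math
import random

random.seed(3)

def log_target(x):
    return -0.5 * x * x              # log density of N(0,1), up to a constant

x, chain, accepts = 0.0, [], 0
for _ in range(50_000):
    prop = x + random.gauss(0, 1.0)          # symmetric proposal
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x, accepts = prop, accepts + 1       # accept the move
    chain.append(x)                          # on rejection, repeat current x

mean = sum(chain) / len(chain)
var = sum(c * c for c in chain) / len(chain) - mean ** 2
print(round(mean, 2), round(var, 2))
```

The samples are dependent (the chain repeats states after rejections), but their long-run moments match the target distribution.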
A 3hrs intro lecture to Approximate Bayesian Computation (ABC), given as part of a PhD course at Lund University, February 2016. For sample codes see http://www.maths.lu.se/kurshemsida/phd-course-fms020f-nams002-statistical-inference-for-partially-observed-stochastic-processes/
Predicting the economic public opinions in Europe — SYRTO Project
Predicting the economic public opinions in Europe
Maurizio Carpita, Enrico Ciavolino, Mariangela Nitti
University of Brescia & University of Salento
SYRTO Project Final Conference, Paris – February 19, 2016
A Dynamic Factor Model: Inference and Empirical Application. Ioannis Vrontos — SYRTO Project
The document describes a dynamic factor model to analyze how financial risks are interconnected within the Eurozone. It uses the model to examine risk dynamics using sovereign CDS and equity returns from 2007-2009 covering the US financial crisis and pre-sovereign crisis in Europe. The model relates asset returns to latent sector factors, macro factors, and covariates. Bayesian inference is applied using MCMC to estimate the time-varying parameters and latent factors.
Results of the SYRTO Project
Roberto Savona - Primary Coordinator of the SYRTO Project
University of Brescia
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
Comment on: Risk Dynamics in the Eurozone: A New Factor Model for Sovereign C… — SYRTO Project
Comment on: Risk Dynamics in the Eurozone: A New Factor Model for Sovereign CDS and Equity Returns, by Dellaportas, Meligkotsidou, Savona, Vrontos. Andre Lucas. Amsterdam, June 25, 2015. European Financial Management Association 2015 Annual Meetings.
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of N…” — SYRTO Project
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of Network Connectivity on Factor Exposures, Asset pricing and Portfolio Diversification” by Billio, Caporin, Panzica and Pelizzon. Arjen Siegmann. Amsterdam - June, 25 2015. European Financial Management Association 2015 Annual Meetings.
The document discusses a stochastic volatility model that incorporates jumps in volatility and the possibility of default. It describes the dynamics of the model and how it can be used to price volatility and credit derivatives. Analytical and numerical methods are presented for solving the pricing problem. As an example application, the model is fit to data on General Motors to analyze the implications.
On estimating the integrated co-volatility using… — kkislas
This document proposes a method to estimate the integrated co-volatility of two asset prices using high-frequency data that contains both microstructure noise and jumps.
It considers two cases - when the jump processes of the two assets are independent, and when they are dependent. For the independent case, it proposes an estimator that is robust to jumps. For the dependent case, it proposes a threshold estimator that combines pre-averaging to remove noise with a threshold method to reduce the effect of jumps. It proves the estimators are consistent and establishes their central limit theorems. Simulation results are also presented to illustrate the performance of the proposed methods.
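A bare-bones version of the thresholding idea is easy to demonstrate: sum the products of high-frequency returns, but discard increments larger than a cutoff proportional to the square root of the sampling interval so that jumps do not contaminate the estimate. This sketch omits the paper's pre-averaging (noise-removal) step entirely; the data, jump, and cutoff multiplier are illustrative assumptions.

```python
# Jump-robust realized covariance via truncation (toy version, no noise step).
import math
import random

random.seed(11)
n, dt = 5000, 1.0 / 5000
rho, vol = 0.6, 1.0                   # true integrated covariance = rho = 0.6

r1, r2 = [], []
for i in range(n):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    a = vol * math.sqrt(dt) * z1
    b = vol * math.sqrt(dt) * z2
    if i == 2500:                     # inject one large common jump
        a += 0.5
        b += 0.5
    r1.append(a)
    r2.append(b)

naive = sum(x * y for x, y in zip(r1, r2))
cutoff = 4 * vol * math.sqrt(dt)      # keep only "diffusive-sized" moves
robust = sum(x * y for x, y in zip(r1, r2)
             if abs(x) < cutoff and abs(y) < cutoff)
print(round(naive, 3), round(robust, 3))
```

The naive sum absorbs the jump product while the truncated sum stays close to the true co-volatility.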
Generalized and subset integrated autoregressive moving average bilinear t… — Alexander Decker
This document proposes generalized integrated autoregressive moving average bilinear (GBL) time series models and subset generalized integrated autoregressive moving average bilinear (GSBL) models to achieve stationarity for all nonlinear time series. It presents the models' formulations and discusses their properties, including stationarity, convergence, and parameter estimation. An algorithm is provided to fit the one-dimensional models. The generalized models are applied to Wolfer sunspot numbers, and the GBL model is found to perform better than the GSBL model.
International Journal of Engineering and Mathematical Modelling, vol. 2, no. 1, 2015 — IJEMM
Our efforts are mostly concentrated on improving the convergence rate of the numerical procedures, from the viewpoint of both cost-efficiency and accuracy, by handling the parametrization of the shape to be optimized. We employ nested parameterization supports of either the shape or the shape deformation, and the classical process of degree elevation, resulting in exact geometrical data transfer from coarse to fine representations. The algorithms mimic classical multigrid strategies and are found very effective in terms of convergence acceleration. In this paper, we analyse and demonstrate the efficiency of the two-level correction algorithm, which is the basic building block of a more general multilevel strategy.
This document compares different methods for disaggregating low-frequency economic time series into higher-frequency data: Chow-Lin, Fernandez, and Litterman, which use static regression models, and Santos Silva-Cardoso, which uses a dynamic regression model. The methods were used to disaggregate annual private consumption expenditure into monthly data. All methods produced high correlation between the original and disaggregated data at the annual level. At the monthly level, Santos Silva-Cardoso performed best with the lowest standard deviation, while Litterman performed worst.
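The simplest form of temporal disaggregation is pro-rata: split each annual total across months in proportion to a monthly indicator series. This naive benchmark is not the Chow-Lin/Fernandez/Litterman GLS estimators compared in the document, and the numbers below are made up, but it shows the constraint every method must respect: the monthly values must sum back to the annual benchmark.

```python
# Pro-rata temporal disaggregation (toy benchmark, illustrative numbers).
annual = {2014: 1200.0, 2015: 1320.0}
indicator = {2014: [8, 9, 10, 10, 11, 12, 13, 12, 11, 10, 9, 8],
             2015: [9, 10, 11, 11, 12, 13, 14, 13, 12, 11, 10, 9]}

monthly = {}
for year, total in annual.items():
    weights = indicator[year]
    s = sum(weights)
    # distribute the annual total proportionally to the monthly indicator
    monthly[year] = [total * w / s for w in weights]

# the disaggregated series respects the annual benchmark by construction
print({y: round(sum(m), 1) for y, m in monthly.items()})
```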
The document discusses pricing interest rate derivatives using the one factor Hull-White short rate model. It begins with an introduction to short rate models and the Hull-White model specifically. It describes how the Hull-White model can be calibrated to market prices by relating its parameter θ to the market term structure. The document then discusses implementing the Hull-White model using trinomial trees and pricing constant maturity swaps.
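The Hull-White dynamics dr = (θ(t) - a·r) dt + σ dW can be simulated with a plain Euler scheme. The document calibrates θ(t) to the market term structure and prices on trinomial trees; here θ is held constant and all parameter values are illustrative, so this is only a sketch of the model's dynamics.

```python
# Euler-discretized Hull-White short-rate path (constant theta, toy values).
import math
import random

random.seed(9)
a, sigma, theta = 0.1, 0.01, 0.003   # mean reversion, volatility, drift level
r, dt, steps = 0.02, 1.0 / 252, 252 * 5   # start at 2%, 5 years of daily steps

path = [r]
for _ in range(steps):
    # drift pulls r toward theta / a (= 3% here); diffusion adds Gaussian noise
    r += (theta - a * r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    path.append(r)

print(len(path), round(path[-1], 4))
```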
This document provides an overview of generalized linear models (GLMs) and maximum likelihood estimation (MLE).
It discusses the exponential family distribution framework for GLMs, which allows the use of the same tools of inference across different distributions. It presents examples of link functions and canonical forms for the Gaussian and Poisson distributions.
The document also covers likelihood theory and how to calculate parameter estimates and their uncertainty through taking derivatives of the log-likelihood function. It introduces MLE as a method for finding the parameter values that maximize the likelihood of observing the data. Computational estimation of GLMs is performed through an iterative least squares method using weights from the distributions' Fisher information.
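The score/information machinery above is easiest to see in the simplest Poisson "GLM": an intercept-only model with log link, fitted by Newton-Raphson (here identical to Fisher scoring) in the canonical parameter θ = log(λ). The counts are made-up data; the check is that the fitted λ equals the sample mean, the known MLE.

```python
# Newton-Raphson MLE for an intercept-only Poisson model with log link.
import math

counts = [2, 3, 0, 4, 1, 2, 3, 5, 1, 2]
n, total = len(counts), sum(counts)

theta = 0.0                              # start at lambda = exp(0) = 1
for _ in range(50):
    lam = math.exp(theta)
    score = total - n * lam              # d/dtheta of sum(y*theta - exp(theta))
    info = n * lam                       # Fisher information in theta
    theta += score / info                # Newton / Fisher-scoring step

print(round(math.exp(theta), 3))
```

The iteration converges to λ = mean(counts) = 2.3; with covariates, the same step becomes the iterative weighted least squares update the document mentions.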
The document discusses modeling a flood in a river using 1D Saint-Venant equations. It presents the mass and momentum equations, describes common routing methods varying in complexity, and provides details on explicit and implicit numerical schemes to solve the equations. It also explains the HEC-1 hydrologic routing method and provides sample input parameters and results from a test model run.
This document summarizes a master's thesis that implemented a continuous sequential importance resampling (CSIR) algorithm to estimate predictive densities in stochastic volatility (SV) models. The thesis began with an introduction to relevant econometrics concepts. It then explained SV models and particle filtering approaches. The thesis described implementing and testing functions to develop an R package for CSIR estimation in SV models. Diagnostics and parameter estimates from simulated and real stock return data were reported. The thesis concluded by discussing the package's applications and potential for future development.
The smile calibration problem is a mathematical conundrum in finance that has challenged quantitative analysts for decades. Through his research, Aitor Muguruza has discovered a novel resolution to this classic problem.
Basic concepts and how to measure price volatility
Presented by Carlos Martins-Filho at the AGRODEP Workshop on Analytical Tools for Food Prices
and Price Volatility
June 6-7, 2011 • Dakar, Senegal
For more information on the workshop or to see the latest version of this presentation visit: http://www.agrodep.org/first-annual-workshop
The document summarizes key concepts from Chapter 8 of the textbook "Fundamentals of Multimedia" on lossy compression algorithms. It introduces lossy compression and discusses distortion measures, rate-distortion theory, quantization techniques including uniform, non-uniform, and vector quantization. It also covers transform coding techniques such as the discrete cosine transform and its use in image compression standards to remove spatial redundancies by transforming pixel values into frequency coefficients.
My talk entitled "Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation", that I gave at the "International Conference on Computational Finance (ICCF)", Wuppertal June 6-10, 2022. The talk is related to our recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874) and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708). In these two works, we introduce the numerical smoothing technique that improves the regularity of observables when approximating expectations (or the related integration problems). We provide a smoothness analysis and we show how this technique leads to better performance for the different methods that we used (i) adaptive sparse grids, (ii) Quasi-Monte Carlo, and (iii) multilevel Monte Carlo. Our applications are option pricing and density estimation. Our approach is generic and can be applied to solve a broad class of problems, particularly for approximating distribution functions, financial Greeks computation, and risk estimation.
Pricing average price advertising options when underlying spot market prices ...Bowei Chen
Advertising options have been recently studied as a special type of guaranteed contracts in online advertising, which are an alternative sales mechanism to real-time auctions. An advertising option is a contract which gives its buyer a right but not obligation to enter into transactions to purchase page views or link clicks at one or multiple pre-specified prices in a specific future period. Different from typical guaranteed contracts, the option buyer pays a lower upfront fee but can have greater flexibility and more control of advertising. Many studies on advertising options so far have been restricted to the situations where the option payoff is determined by the underlying spot market price at a specific time point and the price evolution over time is assumed to be continuous. The former leads to a biased calculation of option payoff and the latter is invalid empirically for many online advertising slots. This paper addresses these two limitations by proposing a new advertising option pricing framework. First, the option payoff is calculated based on an average price over a specific future period. Therefore, the option becomes path-dependent. The average price is measured by the power mean, which contains several existing option payoff functions as its special cases. Second, jump-diffusion stochastic models are used to describe the movement of the underlying spot market price, which incorporate several important statistical properties including jumps and spikes, non-normality, and absence of autocorrelations. A general option pricing algorithm is obtained based on Monte Carlo simulation. In addition, an explicit pricing formula is derived for the case when the option payoff is based on the geometric mean. This pricing formula is also a generalized version of several other option pricing models discussed in related studies.
IRJET- Analytic Evaluation of the Head Injury Criterion (HIC) within the Fram...IRJET Journal
This document presents an analytic evaluation of the Head Injury Criterion (HIC) within the framework of constrained optimization theory. The HIC is a weighted impulse function used to predict the probability of closed head injury based on measured head acceleration. Previous work analyzed the unclipped HIC function, but the clipped HIC formulation used in practice limits the evaluation window duration. The author develops analytic relationships for determining the window initiation and termination points to maximize the clipped HIC function. Example applications illustrate the general solutions for when head acceleration is defined by a single function or composite functions over the evaluation domain.
Dependent processes in Bayesian NonparametricsJulyan Arbel
This document summarizes dependent processes in Bayesian nonparametrics. It motivates the need for dependent random probability measures to accommodate temporal dependence structures beyond the exchangeability assumption. It describes modeling collections of random probability measures indexed by time as either discrete-time or continuous-time processes. The diffusive Dirichlet process is introduced as a dependent Dirichlet process with Dirichlet marginal distributions at each time point and continuous sample paths. Simulation and estimation methods are discussed for this model.
1. The document describes three exercises related to statistical methods for financial institutions. The first exercise considers portfolio returns and risk, defines the expected return and variance of a portfolio, and discusses the difference between arithmetic and continuous returns. The second exercise analyzes stock price data to estimate parameters and risks, and simulates portfolio values with Monte Carlo methods. The third exercise covers factor models, the CAPM, and APT for analyzing portfolio performance.
2. Key steps include: computing the expected return and variance of a portfolio as a linear combination of stock returns and risks; estimating means, variances, and covariances from stock price data; simulating portfolio values over time; and decomposing portfolio risk using factor models. Confidence
Spillover dynamics for sistemic risk measurement using spatial financial time...SYRTO Project
Spillover dynamics for sistemic risk measurement using spatial financial time series models. Julia Schaumburg, Andre Lucas, Siem Jan Koopman, and Francisco Blasques. ESEM - Toulouse, August 25-29, 2014
http://www.eea-esem.com/eea-esem/2014/prog/viewpaper.asp?pid=1044
Sovereign credit risk, liquidity, and the ecb intervention: deus ex machina? ...SYRTO Project
Sovereign credit risk, liquidity, and the ecb intervention: deus ex machina? - Loriana Pelizzon, Marti Subrahmanyam, Davide Tomio, Jun Uno. June, 5 2014. First International Conference on Sovereign Bond Markets.
Measuring the behavioral component of financial fluctuaction. An analysis bas...SYRTO Project
This document summarizes a study that measures the behavioral component of financial market fluctuations using a model with two types of investors - rational investors who maximize expected utility, and behavioral investors who have S-shaped utility functions. The model blends the asset selections of these two investor types using a Bayesian approach, with the rational investor preferences as the prior and behavioral investor preferences as the conditional. An empirical analysis is conducted using the S&P 500 to estimate the optimal weighting parameter between the two investor types that maximizes past cumulative returns.
Measuring the behavioral component of financial fluctuation: an analysis bas...SYRTO Project
Measuring the behavioral component of financial fluctuation: an analysis based on the S&P500 - Caporin M., Corazzini L., Costola M. June, 27 2013. IFABS 2013 - Posters session.
The microstructure of the european sovereign bond market. Loriana Pellizzon. ...SYRTO Project
This study analyzes the microstructure of the European sovereign bond market during the Eurozone crisis between 2011-2012. It finds that credit risk, as measured by CDS spreads, is non-linearly related to market liquidity, as higher credit risk leads to much greater illiquidity. Market makers temporarily stopped participating when CDS spreads widened significantly. ECB interventions successfully reduced solvency concerns and improved liquidity. The analysis uses a unique high-frequency dataset of order and trade data from the Italian sovereign bond market, the largest in the Eurozone, to examine changes in liquidity measures like bid-ask spreads and quote quantities around periods of financial stress.
Time-Varying Temporal Dependene in Autoregressive Models - Francisco Blasques...SYRTO Project
Time-Varying Temporal Dependene in Autoregressive Models - Francisco Blasques, Siem Jan Koopman, Andre Lucas. June 2014. International Association for Applied Econometrics Annual Conference
Maximum likelihood estimation for generalized autoregressive score models - A...SYRTO Project
Maximum likelihood estimation for generalized autoregressive score models - Andre Lucas, Francisco Blasques, Siem Jan Koopman. June 2014. International Association for Applied Econometrics Annual Conference
Spillover dynamics for systemic risk measurement using spatial financial time...SYRTO Project
Spillover dynamics for systemic risk measurement using spatial financial time series models - Blasques F., Koopman S.J., Lucas A., Schaumburg J. June, 12 2014. 7th Annual SoFiE (Society of Financial Econometrics) Conference
Conditional probabilities for euro area sovereign default risk - Andre Lucas,...SYRTO Project
This document presents a novel framework for modeling conditional and joint probabilities of sovereign default risk in the Euro area based on credit default swap data. The model uses a dynamic multivariate skewed-t distribution with time-varying volatility and correlations. The analysis finds that while large-scale asset purchase programs by central banks reduced joint default risks, they did not significantly impact perceived interconnectedness between countries.
Abhay Bhutada, the Managing Director of Poonawalla Fincorp Limited, is an accomplished leader with over 15 years of experience in commercial and retail lending. A Qualified Chartered Accountant, he has been pivotal in leveraging technology to enhance financial services. Starting his career at Bank of India, he later founded TAB Capital Limited and co-founded Poonawalla Finance Private Limited, emphasizing digital lending. Under his leadership, Poonawalla Fincorp achieved a 'AAA' credit rating, integrating acquisitions and emphasizing corporate governance. Actively involved in industry forums and CSR initiatives, Abhay has been recognized with awards like "Young Entrepreneur of India 2017" and "40 under 40 Most Influential Leader for 2020-21." Personally, he values mindfulness, enjoys gardening, yoga, and sees every day as an opportunity for growth and improvement.
Abhay Bhutada Leads Poonawalla Fincorp To Record Low NPA And Unprecedented Gr...Vighnesh Shashtri
Under the leadership of Abhay Bhutada, Poonawalla Fincorp has achieved record-low Non-Performing Assets (NPA) and witnessed unprecedented growth. Bhutada's strategic vision and effective management have significantly enhanced the company's financial health, showcasing a robust performance in the financial sector. This achievement underscores the company's resilience and ability to thrive in a competitive market, setting a new benchmark for operational excellence in the industry.
Seminar: Gender Board Diversity through Ownership NetworksGRAPE
Seminar on gender diversity spillovers through ownership networks at FAME|GRAPE. Presenting novel research. Studies in economics and management using econometrics methods.
Falcon stands out as a top-tier P2P Invoice Discounting platform in India, bridging esteemed blue-chip companies and eager investors. Our goal is to transform the investment landscape in India by establishing a comprehensive destination for borrowers and investors with diverse profiles and needs, all while minimizing risk. What sets Falcon apart is the elimination of intermediaries such as commercial banks and depository institutions, allowing investors to enjoy higher yields.
5 Tips for Creating Standard Financial ReportsEasyReports
Well-crafted financial reports serve as vital tools for decision-making and transparency within an organization. By following the undermentioned tips, you can create standardized financial reports that effectively communicate your company's financial health and performance to stakeholders.
BONKMILLON Unleashes Its Bonkers Potential on Solana.pdfcoingabbar
Introducing BONKMILLON - The Most Bonkers Meme Coin Yet
Let's be real for a second – the world of meme coins can feel like a bit of a circus at times. Every other day, there's a new token promising to take you "to the moon" or offering some groundbreaking utility that'll change the game forever. But how many of them actually deliver on that hype?
1. Elemental Economics - Introduction to mining.pdfNeal Brewster
After this first you should: Understand the nature of mining; have an awareness of the industry’s boundaries, corporate structure and size; appreciation the complex motivations and objectives of the industries’ various participants; know how mineral reserves are defined and estimated, and how they evolve over time.
Vicinity Jobs’ data includes more than three million 2023 OJPs and thousands of skills. Most skills appear in less than 0.02% of job postings, so most postings rely on a small subset of commonly used terms, like teamwork.
Laura Adkins-Hackett, Economist, LMIC, and Sukriti Trehan, Data Scientist, LMIC, presented their research exploring trends in the skills listed in OJPs to develop a deeper understanding of in-demand skills. This research project uses pointwise mutual information and other methods to extract more information about common skills from the relationships between skills, occupations and regions.
Scalable inference for a full multivariate stochastic volatility
1. Scalable inference for a full multivariate stochastic volatility model
SYstemic Risk TOmography: Signals, Measurements, Transmission Channels, and Policy Interventions
P. Dellaportas, A. Plataniotis and M. Titsias
UCL (London), AUEB (Athens), AUEB (Athens)
Final SYRTO Conference - Université Paris1 Panthéon-Sorbonne
February 19, 2016
2.
- An important indicator of systemic risk is instantaneous volatilities and correlations.
- N-dimensional asset returns: r_t = μ_t + ε_t,  ε_t ∼ N(0, Σ_t),  t = 1, …, T.
- The focus is shifted to modelling and predicting the covariance matrices Σ_t, so we assume that r_t ≡ ε_t.
- For realistic financial applications (portfolio allocation, systemic risk), think of N in the hundreds and T = 2000.
- Problem 1: the number of parameters in Σ_t is N(N+1)/2, which grows quadratically in N. The total number of parameters that need to be estimated is TN(N+1)/2.
- Problem 2: the N(N+1)/2 parameters of each Σ_t are restricted, since Σ_t should be positive definite.
- Problem 3: there are many missing values (about 3% in the data we looked at) and series with short lengths.
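To fix ideas on the parameter counts above, a quick back-of-the-envelope computation (Python; N = 100 and T = 2000 are illustrative values for the scale mentioned, not the dataset used later):

```python
# Free parameters of a time-varying covariance model at the scale
# discussed above: N(N+1)/2 per symmetric matrix, one matrix per t.
N, T = 100, 2000                 # illustrative values
per_matrix = N * (N + 1) // 2    # free entries of one Sigma_t
total = T * per_matrix           # over the whole sample
print(per_matrix, total)         # 5050 10100000
```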
3. 1-d Stochastic volatility model
- 1-dimensional returns
r_t ∼ N(μ_t, σ²_t),
with unobservable variances
log σ²_{t+1} = μ + φ log σ²_t + η_t,  η_t ∼ N(0, τ²).
- MCMC algorithms since 1994; sequential importance sampling, adaptive MCMC, Laplace approximations, etc.
- Compare the stochastic volatility parameter-driven models with GARCH-type observation-driven models.
4. Volatility matrices - State of the art
- Two recent review articles on multivariate stochastic volatility (Asai, McAleer, Yu, 2006; Chib, Omori, Asai, 2009); the current state of the art is parsimonious modelling of Σ_t and factor models with few independent factors, each one of them being modelled as a univariate stochastic volatility process.
- A review article on multivariate GARCH models (Bauwens, Laurent, Rombouts, 2006); the state of the art is parsimonious modelling of Σ_t and two-step estimation procedures.
- Other approaches include Wishart processes (Philipov and Glickman, 2006) and dynamic matrix-variate graphical models via inverted Wishart processes (Carvalho and West, 2007).
5. Dynamic eigenvalue and eigenvector modelling
- We decompose Σ_t = U_t Λ_t U_tᵀ and model U_t and Λ_t with an AR(1) process. Direct modelling of U_t is hard.
- Since U_t is a rotation matrix, it can be parameterised w.r.t. N(N−1)/2 Givens angles, each one belonging to a matrix G_jt:

U_t = ∏_{j=1}^{N(N−1)/2} G_jt
6. 2-Dim

Σ_t = ( cos(ω_t)  −sin(ω_t) ) ( λ_1t    0  ) ( cos(ω_t)  −sin(ω_t) )ᵀ
      ( sin(ω_t)   cos(ω_t) ) (  0   λ_2t ) ( sin(ω_t)   cos(ω_t) )

- Uniqueness: λ_1t > λ_2t,  −π/2 < ω_t < π/2
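The 2-d construction above can be checked numerically; a minimal NumPy sketch with arbitrary illustrative values for ω_t, λ_1t, λ_2t:

```python
import numpy as np

# Illustrative values (not from the slides): one Givens angle, two eigenvalues.
omega, lam1, lam2 = 0.6, 2.0, 0.5     # lam1 > lam2, -pi/2 < omega < pi/2

c, s = np.cos(omega), np.sin(omega)
U = np.array([[c, -s],
              [s,  c]])               # 2-d rotation matrix
Lam = np.diag([lam1, lam2])

Sigma = U @ Lam @ U.T                 # covariance from the spectral decomposition

# Sigma is symmetric positive definite by construction,
# with eigenvalues exactly (lam1, lam2).
print(np.allclose(Sigma, Sigma.T))
print(np.linalg.eigvalsh(Sigma))      # ascending: [lam2, lam1]
```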
7. 3-Dim
(Ignoring t): Σ = UΛUᵀ = G_12 G_13 G_23 Λ G_23ᵀ G_13ᵀ G_12ᵀ

U = ( cos(ω_12)  −sin(ω_12)  0 ) ( cos(ω_13)  0  −sin(ω_13) ) ( 1      0           0      )
    ( sin(ω_12)   cos(ω_12)  0 ) (    0       1      0      ) ( 0  cos(ω_23)  −sin(ω_23) )
    (    0           0       1 ) ( sin(ω_13)  0   cos(ω_13) ) ( 0  sin(ω_23)   cos(ω_23) )
8.

U = ∏_{j=1, k>j}^{N(N−1)/2} G_jk,

where each G_jk is the N × N identity matrix except for four entries:
G_jk[j,j] = cos(ω_jk),  G_jk[j,k] = −sin(ω_jk),  G_jk[k,j] = sin(ω_jk),  G_jk[k,k] = cos(ω_jk).

- Note the sparsity of the N-dimensional rotation matrix: it contains 4 elements with cosines and sines of the angle, ones on the diagonal, and zeroes everywhere else.
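The Givens-angle parameterisation above can be sketched in a few lines of NumPy; the function names and the (j, k) loop order are illustrative choices, not the authors' code:

```python
import numpy as np

def givens(n, j, k, omega):
    """n x n identity with a rotation by omega embedded in rows/columns (j, k)."""
    G = np.eye(n)
    c, s = np.cos(omega), np.sin(omega)
    G[j, j] = c; G[j, k] = -s
    G[k, j] = s; G[k, k] = c
    return G

def rotation_from_angles(n, omegas):
    """U as the product of the n(n-1)/2 Givens matrices, ordered by (j, k), k > j."""
    U = np.eye(n)
    idx = 0
    for j in range(n - 1):
        for k in range(j + 1, n):
            U = U @ givens(n, j, k, omegas[idx])
            idx += 1
    return U

# Example with hypothetical angles for N = 4 (N(N-1)/2 = 6 angles).
rng = np.random.default_rng(0)
omegas = rng.uniform(-np.pi / 2, np.pi / 2, size=6)
U = rotation_from_angles(4, omegas)
print(np.allclose(U @ U.T, np.eye(4)))   # U is orthogonal by construction
```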
9.
- r_t = (r_1t, …, r_Nt)ᵀ,  r_t ∼ MVN(0, U_t Λ_t U_tᵀ).
- Transformations: h_it = log λ_it,  δ_it = log((π/2 + ω_it)/(π/2 − ω_it)),  i = 1, …, N,  t = 1, …, T

h_{i,t+1} = μ^h_i + φ^h_i (h_it − μ^h_i) + σ^h_i η^h_it,  i = 1, …, N
δ_{j,t+1} = μ_j + φ_j (δ_jt − μ_j) + σ_j η_jt,  j = 1, …, N(N−1)/2

where η^h_it, η_jt ∼ N(0, 1) independently, and we denote

θ^h = (φ^h_1, …, φ^h_N, μ^h_1, …, μ^h_N, σ^h_1, …, σ^h_N)
θ^δ = (φ_1, …, φ_{N(N−1)/2}, μ_1, …, μ_{N(N−1)/2}, σ_1, …, σ_{N(N−1)/2})
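The transformations above map each eigenvalue and each Givens angle to the unconstrained real line, where the AR(1) dynamics live. A small sketch; the helper names `to_unconstrained`/`to_constrained` are hypothetical, not from the slides:

```python
import numpy as np

def to_unconstrained(lam, omega):
    """h = log(lambda); delta = logit-type map of omega from (-pi/2, pi/2) to R."""
    h = np.log(lam)
    delta = np.log((np.pi / 2 + omega) / (np.pi / 2 - omega))
    return h, delta

def to_constrained(h, delta):
    """Inverse map: back to a positive eigenvalue and an angle in (-pi/2, pi/2)."""
    lam = np.exp(h)
    omega = (np.pi / 2) * (np.exp(delta) - 1) / (np.exp(delta) + 1)
    return lam, omega
```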
10. Priors

h_{i,t+1} = μ^h_i + φ^h_i (h_it − μ^h_i) + σ^h_i η^h_it,  i = 1, …, N
δ_{j,t+1} = μ_j + φ_j (δ_jt − μ_j) + σ_j η_jt,  j = 1, …, N(N−1)/2

μ^h_i ∼ N(μ_1, σ²_1),  i = 1, …, N
φ^h_i ∼ N(μ_2, σ²_2),  i = 1, …, N
μ_j ∼ N(μ_3, σ²_3),  j = 1, …, N(N−1)/2
φ_j ∼ N(μ_3, σ²_3),  j = 1, …, N(N−1)/2

The exchangeability assumption via a hierarchical model allows borrowing strength. Partial exchangeability conditional on markets, sectors, etc. is probably more realistic.
11. A general model formulation
A more general structure is a K-factor model constructed with an N × K matrix of factor loadings B:
- r_t = B f_t + ε_t,  ε_t ∼ N(0, σ²I)
- The factor loadings matrix B has a fixed/known structure, while its non-zero elements follow a Gaussian prior distribution
- f_t ∼ N(0, Σ_t)
- Σ_t follows the multivariate stochastic volatility model with the Givens matrix construction
- We need to constrain B so that the model is identifiable
- We do NOT need this model only when N is large: it can also treat missing values; this is very important in real applications.
12. Computation
- With the Givens angles type model formulation we now deal with a non-linear likelihood plus a Gaussian process prior.
- MCMC for these problems: use an auxiliary Langevin MCMC based on an idea by Titsias in the discussion of the RSS B discussion paper by Girolami and Calderhead (2011).
- Computational complexity: it is O(d³) for Normal densities of dimension d; we achieve O(d²) even for the derivatives of the likelihood w.r.t. the Givens angles, so our MCMC algorithm has complexity O(d²).
- Missing data are treated without any problem.
13. The Sampling algorithm
Model: r_t = B f_t + ε_t, ε_t ∼ N(0, σ²I), f_t ∼ N(0, Σ_t). Denote by X all latent paths.

p(B, σ², (f_t)_{t=1}^T | rest) ∝ ( ∏_{t=1}^T N(r_t | B f_t, σ²I) N(f_t | 0, Σ_t(x_t)) ) p(B, σ²),
p(X | rest) ∝ ( ∏_{t=1}^T N(f_t | 0, Σ_t(x_t)) ) p(X | θ^h, θ^δ),
p(θ^h, θ^δ | rest) ∝ p(X | θ^h, θ^δ) p(θ^h, θ^δ).

We do not need to generate the missing data in r_t.
14. Sampling the Gaussian latent process
- Denote F = (f_1, …, f_T)
- Prior p(X) = N(X | M, Q⁻¹)
- Current state of X is X_n. Use slice Gibbs:
- Introduce auxiliary variables U that live in the same space as X:

p(U | X_n) = N(U | X_n + (δ/2) ∇log p(F | X_n), (δ/2) I)

- U injects Gaussian noise into X_n and shifts it by (δ/2) ∇log p(F | X_n)
- We cannot sample from p(X | U), so we use a Metropolis step: propose Y from the proposal q:

q(Y | U) = (1/Z(U)) N(Y | U, (δ/2) I) p(Y)
         = N(Y | (I + (δ/2)Q)⁻¹ (U + (δ/2) Q M), (δ/2) (I + (δ/2) Q)⁻¹),

where Z(U) = ∫ N(Y | U, (δ/2) I) p(Y) dY.
15.
- Accept Y with Metropolis-Hastings probability min(1, r):

r = [ p(F|Y) p(U|Y) p(Y) / p(F|X_n) p(U|X_n) p(X_n) ] · [ q(X_n|U) / q(Y|U) ]
  = [ p(F|Y) p(U|Y) p(Y) / p(F|X_n) p(U|X_n) p(X_n) ] · [ (1/Z(U)) N(X_n|U, (δ/2)I) p(X_n) / (1/Z(U)) N(Y|U, (δ/2)I) p(Y) ]
  = [ p(F|Y) N(U|Y + (δ/2) G_y, (δ/2)I) / p(F|X_n) N(U|X_n + (δ/2) G_t, (δ/2)I) ] · [ N(X_n|U, (δ/2)I) / N(Y|U, (δ/2)I) ]
  = [ p(F|Y) / p(F|X_n) ] exp{ −(U − X_n)ᵀ G_t + (U − Y)ᵀ G_y − (δ/4) (||G_y||² − ||G_t||²) }

where G_t = ∇log p(F|X_n), G_y = ∇log p(F|Y), and ||Z|| denotes the Euclidean norm of a vector Z.
- The Gaussian prior terms p(X_n) and p(Y) have been cancelled out from the acceptance probability, so their computationally expensive evaluation is not required: the resulting q(Y|U) is invariant under the Gaussian prior.
- Tune δ to achieve an acceptance rate of around 50-60%.
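The auxiliary Langevin step above can be sketched as a single update function; a minimal NumPy implementation under the slides' assumptions (Gaussian prior N(M, Q⁻¹), step size δ), with illustrative names rather than the authors' code. Note how the prior density never has to be evaluated in the acceptance ratio:

```python
import numpy as np

def aux_langevin_step(x, loglik, grad_loglik, M, Q, delta, rng):
    """One auxiliary Langevin step for a target exp(loglik(x)) * N(x | M, Q^-1).

    The Gaussian prior is absorbed exactly into the proposal, so it cancels
    from the acceptance ratio; only likelihood terms remain."""
    d = x.size
    gx = grad_loglik(x)
    # Auxiliary variable: U | x ~ N(x + (delta/2) grad, (delta/2) I)
    u = x + 0.5 * delta * gx + np.sqrt(0.5 * delta) * rng.standard_normal(d)
    # Proposal q(y|u) ∝ N(y | u, (delta/2) I) p(y), itself Gaussian
    A = np.eye(d) + 0.5 * delta * Q
    mean = np.linalg.solve(A, u + 0.5 * delta * Q @ M)
    cov = 0.5 * delta * np.linalg.inv(A)
    y = rng.multivariate_normal(mean, cov)
    gy = grad_loglik(y)
    # Acceptance: likelihood ratio times the exponential correction term
    log_r = (loglik(y) - loglik(x)
             + (u - y) @ gy - (u - x) @ gx
             - (delta / 4.0) * (gy @ gy - gx @ gx))
    if np.log(rng.uniform()) < log_r:
        return y, True
    return x, False
```

With a flat likelihood the proposal is an exact draw from the prior conditional and every step is accepted, which is a convenient sanity check.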
16. O(K²) computation for the K-factor MSV model
- f_t ∼ N(0, Σ_t),  Σ_t = U_t Λ_t U_tᵀ,  U_t = ∏_{j=1}^{K(K−1)/2} G_jt

log MSV(f_t) = −(K/2) log(2π) − (1/2) ∑_{i=1}^K h_it − (1/2) v_tᵀ v_t,   (1)

where v_t = Λ_t^{−1/2} U_tᵀ f_t, and where we used that log|Σ_t| = log|Λ_t| = ∑_{i=1}^K h_it.
- Given v_t, the above expression takes O(K) time to compute.
- G_ij(ω_ji,t)ᵀ f_t takes O(1) time to compute, since all of its elements are equal to the corresponding ones of the vector f_t apart from the i-th and j-th elements, which become f_t[i] cos(ω_ji,t) − f_t[j] sin(ω_ji,t) and f_t[i] sin(ω_ji,t) + f_t[j] cos(ω_ji,t), respectively.
- Similarly, ∇_{h_t} log MSV and ∇_{ω_ij,t} log MSV are calculated in O(K²) time.
17. O(N²) computation for the MSV model

Initialize v_t = f_t.
for i = 1 to N−1 do
  for j = i+1 to N do
    c = cos(ω_ji,t), s = sin(ω_ji,t)
    t1 = v_t[i], t2 = v_t[j]
    v_t[i] ← c·t1 − s·t2
    v_t[j] ← s·t1 + c·t2
  end for
end for
v_t = v_t ⊙ diag(Λ_t^{−1/2}) (elementwise product)
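A direct transcription of the pseudocode above into NumPy; the function name `whiten` is an illustrative choice, and sign/ordering conventions follow the pseudocode as printed. The point is that v_t is obtained in O(N²) without ever forming the N × N matrix U_t:

```python
import numpy as np

def whiten(f, omegas, h):
    """Apply the Givens rotations to f in place, then rescale by
    Lambda^{-1/2} = exp(-h/2) elementwise, as in the slide's pseudocode."""
    v = f.astype(float)
    n = v.size
    idx = 0                      # walks through the n(n-1)/2 angles in loop order
    for i in range(n - 1):
        for j in range(i + 1, n):
            c, s = np.cos(omegas[idx]), np.sin(omegas[idx])
            t1, t2 = v[i], v[j]
            v[i] = c * t1 - s * t2
            v[j] = s * t1 + c * t2
            idx += 1
    return v * np.exp(-0.5 * h)  # elementwise Lambda_t^{-1/2}
```

Each inner-loop pass touches only two entries, so the total cost is O(N²) flops per time point.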
18. The Sampling algorithm revisited
Model: r_t = B f_t + ε_t, ε_t ∼ N(0, σ²I), f_t ∼ N(0, Σ_t).
Denote by X all latent paths.

p(B, σ², (f_t)_{t=1}^T | rest) ∝ ( ∏_{t=1}^T N(r_t | B f_t, σ²I) N(f_t | 0, Σ_t(x_t)) ) p(B, σ²),
p(X | rest) ∝ ( ∏_{t=1}^T N(f_t | 0, Σ_t(x_t)) ) p(X | θ^h, θ^δ),
p(θ^h, θ^δ | rest) ∝ p(X | θ^h, θ^δ) p(θ^h, θ^δ).
19. Sampling the latent factors in O(TNK) time
- p(f_t | rest) ∝ N(r_t | B f_t, σ²I) N(f_t | 0, Σ_t) = N(f_t | σ⁻² M_t⁻¹ Bᵀ r_t, M_t⁻¹), where M_t = σ⁻² Bᵀ B + Σ_t⁻¹. To simulate from this Gaussian we need first to compute the stochastic volatility matrix Σ_t and subsequently the Cholesky decomposition of M_t. Both operations have a cost of O(K³).
- We replace the exact Gibbs step with a much faster Metropolis-within-Gibbs step that scales as O(T(NK + K²)).
- To achieve this we apply the same auxiliary Langevin scheme as before.
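For comparison, the exact O(K³) Gibbs draw that the Metropolis-within-Gibbs step replaces can be sketched as follows (illustrative NumPy, not the authors' code):

```python
import numpy as np

def sample_f_exact(r, B, sigma2, Sigma, rng):
    """Exact Gibbs draw of f_t | rest ~ N(sigma^{-2} M^{-1} B^T r, M^{-1}),
    with precision M = sigma^{-2} B^T B + Sigma^{-1}; costs O(K^3) per draw."""
    K = B.shape[1]
    M = B.T @ B / sigma2 + np.linalg.inv(Sigma)
    L = np.linalg.cholesky(M)                  # M = L L^T
    mean = np.linalg.solve(M, B.T @ r / sigma2)
    # A draw with covariance M^{-1}: mean + L^{-T} z, z ~ N(0, I)
    z = rng.standard_normal(K)
    return mean + np.linalg.solve(L.T, z)
```

The inverse and Cholesky factorisation are the O(K³) bottlenecks that the auxiliary Langevin step avoids.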
20. The Data
- 571 stocks from the Europe Stoxx 600 index
- Daily data from 08/01/2010 to 5/1/2014 (T = 2017)
- 36340 missing values, or 36340/(571 × 2017) = 3.2%
- Factor model with 30 factors: the dimension of the latent path is 2017 × 30 × 31/2 = 937,905
- Choice of the number of factors: based on predictive performance w.r.t. quadratic covariation. We tried 20, 30 and 40 factors.
24.
- January 2009: banking shares in the UK plummet as the Royal Bank of Scotland posts the biggest loss in British history. The Bank of England reduces the base rate of interest to a new historic low of 1%. The U.S. economy lost 598,000 jobs during January 2009, with unemployment rising to 7.6 percent. Bankruptcies in the United Kingdom rose during 2008 by 50 percent to an all-time high. California's Alliance Bank and Georgia's FirstBank are closed, raising the number of 2009 U.S. bank failures to eight.
- July 2012: the chairman and the Chief Executive of the British bank Barclays resign following a scandal in which the bank tried to manipulate the Libor and Euribor interest rate systems. The central banks of the European Union, Great Britain, and the People's Republic of China, in what appears to be a co-ordinated action, each loosen their respective monetary systems.
25. Discussion
- Incorporation of leverage effects, jumps
- Small N: nested Laplace approximations (PhD thesis by Plataniotis, AUEB), importance sampling based on copulas (in progress)
- Bayesian model determination for the number of factors
- Relations with other PCN proposals
26. This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement n° 320270.
www.syrtoproject.eu
This document reflects only the author's views. The European Union is not liable for any use that may be made of the information contained therein.