This document summarizes the research contributions of Arthur Charpentier in studying dependence. It outlines some of his fundamental results in areas like: 1) developing multivariate Archimax copulas to model dependence between extremes; 2) proposing nonparametric methods for estimating dependence and density functions; and 3) estimating copula densities using a probit transformation. It also provides biographical details on Charpentier's education and professional positions held from 2006 to 2016.
The document discusses provisions for outstanding claims in non-life insurance. It defines key terms like incurred but not reported (IBNR) and incurred but not paid (IBNP) claims. It also presents the chain ladder method for estimating reserves, which assumes a constant development factor between years. The chain ladder method is demonstrated on a numeric example using a triangle of cumulative paid claims.
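The development-factor mechanics of the chain ladder can be sketched in a few lines. The toy cumulative triangle below is illustrative, not the slides' numeric example:

```python
# Minimal chain-ladder sketch on a toy cumulative-claims triangle.
# Triangle values are invented for illustration.

def development_factors(triangle):
    """One factor per development period: ratio of column sums,
    using only the rows observed in both periods."""
    factors = []
    for j in range(len(triangle[0]) - 1):
        rows = [r for r in triangle if len(r) > j + 1]
        factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))
    return factors

def complete_triangle(triangle):
    """Project each row to ultimate by applying the estimated factors."""
    factors = development_factors(triangle)
    filled = []
    for row in triangle:
        row = list(row)
        for j in range(len(row) - 1, len(factors)):
            row.append(row[-1] * factors[j])
        filled.append(row)
    return filled

triangle = [
    [100, 150, 160],   # oldest occurrence year, fully developed
    [110, 168],        # one development period still unobserved
    [120],             # most recent occurrence year
]
ultimates = [row[-1] for row in complete_triangle(triangle)]
reserves = sum(u - row[-1] for u, row in zip(ultimates, triangle))
```

The reserve is simply the projected ultimate minus the latest observed cumulative amount, summed over occurrence years.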
This document discusses various modeling approaches for non-life insurance tariffication including frequency-severity models, Tweedie regression models, and high-dimensional modeling techniques like ridge regression and the LASSO. It compares individual risk and collective risk models, explores the impact of the Tweedie parameter, and applies regularization methods to insurance data.
This document summarizes Arthur Charpentier's presentation at the Rennes Risk Workshop in April 2015. It discusses extending concepts of risk from univariate to multivariate prospects, including characterizing attitudes to multivariate notions of increasing risk like the Rothschild-Stiglitz mean preserving increase in risk and Quiggin's monotone mean preserving increase in risk. It also generalizes the Bickel-Lehmann dispersion order to multivariate risks and examines its implications for risk sharing.
This document discusses modeling and estimating extreme risks and quantiles in non-life insurance. It introduces the generalized extreme value distribution, which arises as the limit of normalized maxima of iid random variables (one of three extreme value types), and the Pickands–Balkema–de Haan theorem, which justifies modeling exceedances over a high threshold with the generalized Pareto distribution. It also discusses estimators for the shape parameter of these distributions, such as the Hill estimator, and using the generalized Pareto distribution above a threshold to estimate value-at-risk quantiles. Examples are given applying these methods to Danish fire insurance loss data.
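The Hill estimator mentioned above is the average of log-spacings above a high order statistic. A minimal sketch, evaluated on deterministic Pareto(alpha = 2) quantiles standing in for data (so the true shape parameter is xi = 1/alpha = 0.5); the sample size and the choice k = 100 are illustrative:

```python
import math

def hill_estimator(sample, k):
    """Hill estimator of the tail index xi, based on the k largest
    observations: mean log-spacing above the (k+1)-th order statistic."""
    x = sorted(sample, reverse=True)
    return sum(math.log(x[i] / x[k]) for i in range(k)) / k

# Deterministic Pareto(alpha = 2) quantiles as a stand-in for iid data.
n = 1000
alpha = 2.0
sample = [(1 - (i + 0.5) / n) ** (-1 / alpha) for i in range(n)]
xi_hat = hill_estimator(sample, k=100)   # close to the true xi = 0.5
```

In practice k trades bias against variance, which is why Hill plots over a range of k are usually inspected.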
This document summarizes a presentation on testing for volatility transmission between international markets using high frequency data. It discusses using realized volatility to estimate true latent volatility processes while controlling for jumps and microstructure noise. The presentation focuses on testing for transmission of only extreme or large volatility values between markets. A quantile model is used to define extreme periods, and cross-covariances are computed to test for non-causality between markets' extreme periods using Ljung-Box statistics. Simulations are performed based on a three-regime smooth-transition model to assess the test in finite samples.
Arthur Charpentier presents a model for insurance equilibria covering natural catastrophes in heterogeneous regions. The model considers both private insurance companies with limited liability and possible government intervention. It examines a one-region model with homogeneous agents and a common-shock model for natural disaster risks. Finally, it develops a two-region model to analyze equilibria when strategic decisions between regions are taken into account.
This document discusses several nonparametric methods for estimating copula densities from data, which are useful for modeling multivariate dependence. It first provides background on copulas and density estimation. It then describes several techniques for handling boundary issues that arise when estimating densities supported on [0,1], including the mirror image method, transformed kernels, beta kernels, and averaging histograms. Examples are given comparing the performance of these different approaches. The goal is to provide flexible, data-driven estimates of copula densities without imposing a parametric copula model.
This document discusses various modeling techniques for non-life insurance ratemaking including individual and collective models, Tweedie regression, and the LASSO method. It explores using a Tweedie distribution for compound Poisson models and the relationship between individual and collective models. The document also examines issues with high-dimensional data in insurance, bias-variance tradeoffs, and regularization methods like ridge regression and the LASSO for variable selection.
This document discusses copulas and their use in modeling risk dependence. It introduces copulas as joint distribution functions with uniform margins that can be used to fully characterize dependence between random variables. Several classical copulas are described, including the independent, comonotonic, and countermonotonic copulas. Elliptical copulas like the Gaussian and Student t copulas are presented. Archimedean and extreme value copulas are also discussed. The document explores how copulas can capture dependence information that may not be reflected in correlation alone. Copulas provide flexible tools for modeling multivariate risks and dependencies.
- The document discusses nonparametric kernel estimation methods for copula density functions.
- It proposes using a probit transformation of the data to estimate the copula density on the unit square, which improves consistency at the boundaries compared to standard kernel methods.
- Two improved probit-transformation kernel copula density estimators are presented: one using a local log-linear approximation and one using a local log-quadratic approximation.
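The basic (unimproved) probit-transformation estimator can be sketched as follows: transform the pseudo-observations with the normal quantile function, run a plain Gaussian kernel density estimate on the transformed scale, and divide by the normal densities (the Jacobian) to map back to the unit square. The bandwidth and the toy pseudo-observations below are illustrative, and the local log-linear/log-quadratic refinements are not reproduced:

```python
import math

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Normal quantile function by bisection (slow but dependency-free)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def probit_copula_density(u, v, pseudo_obs, h=0.3):
    """Naive probit-transformation estimator: Gaussian KDE on the
    probit scale, divided by the normal densities (Jacobian)."""
    s, t = Phi_inv(u), Phi_inv(v)
    n = len(pseudo_obs)
    kde = sum(phi((s - Phi_inv(ui)) / h) * phi((t - Phi_inv(vi)) / h)
              for ui, vi in pseudo_obs) / (n * h * h)
    return kde / (phi(s) * phi(t))

pseudo = [(0.2, 0.3), (0.5, 0.5), (0.7, 0.6), (0.8, 0.9)]  # illustrative
c_hat = probit_copula_density(0.5, 0.5, pseudo)
```

Because the transformed data live on the whole plane, the standard kernel estimator has no boundary to struggle with, which is the point of the approach.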
This document discusses using regression models for claims reserving. Specifically, it examines using Poisson regression with incremental payments modeled as Poisson distributed, with the mean depending on occurrence year and development year factors. It provides an example of fitting such a model in R and summarizing the results. It also discusses using the model to estimate reserves and quantifying the uncertainty in those estimates through bootstrap simulations of the residuals.
This document discusses modeling and estimating extreme risks and quantiles in non-life insurance. It introduces the generalized extreme value distribution and its three limiting types (Fréchet, Gumbel, and Weibull) used to model extreme values. It also discusses estimators like the Hill estimator that are used to estimate the shape parameter of distributions modeling extreme risks. Methods for estimating value-at-risk and tail-value-at-risk based on the generalized Pareto distribution above a threshold are also presented.
This document discusses quantile estimation techniques, including parametric, semiparametric, and nonparametric approaches. Parametric estimation assumes a distribution like Gaussian and estimates quantiles based on parameters of that distribution. Semiparametric estimation uses extreme value theory to model upper tails with a generalized Pareto distribution. Nonparametric estimation estimates quantiles directly from the data without assuming a particular distribution. The document presents several techniques for quantile estimation and compares their performance.
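The nonparametric approach reduces to interpolating order statistics. A minimal sketch (the interpolation rule below matches the common "type 7" default of R and NumPy; the data are illustrative):

```python
def empirical_quantile(sample, p):
    """Nonparametric quantile: linear interpolation between order
    statistics (the usual 'type 7' rule)."""
    x = sorted(sample)
    h = (len(x) - 1) * p
    i = int(h)
    frac = h - i
    return x[i] if i + 1 == len(x) else x[i] + frac * (x[i + 1] - x[i])

data = [2.0, 7.0, 1.0, 5.0, 3.0]
# sorted: [1, 2, 3, 5, 7]; the median is the middle order statistic
assert empirical_quantile(data, 0.5) == 3.0
assert empirical_quantile(data, 0.25) == 2.0
```

For extreme p, this estimator cannot extrapolate beyond the sample maximum, which is exactly where the semiparametric (generalized Pareto) approach takes over.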
1. The document discusses quantiles and quantile regressions, which are important concepts in analyzing inequalities, risk, and other areas where conditional distributions are relevant.
2. Quantile regression models the relationship between covariates X and the conditional quantiles of the response variable Y. This generalizes ordinary least squares regression, which models the conditional mean of Y.
3. Median regression uses the 1-norm (sum of absolute deviations) instead of the 2-norm (sum of squared deviations) used in OLS. It estimates the conditional median of Y rather than the conditional mean.
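The 1-norm idea in point 3 generalizes to any quantile level via the check (pinball) loss: the tau-quantile minimizes an asymmetrically weighted absolute deviation. A small grid-search sketch with illustrative data:

```python
def pinball_loss(y, q, tau):
    """Check (pinball) loss: tau-weighted absolute deviation."""
    return sum(tau * (yi - q) if yi >= q else (1 - tau) * (q - yi) for yi in y)

y = [1.0, 2.0, 4.0, 10.0]
grid = [i / 10 for i in range(0, 120)]

# tau = 0.5 recovers the median: any point between the two middle
# values minimizes the loss.
best_median = min(grid, key=lambda q: pinball_loss(y, q, 0.5))
assert 2.0 <= best_median <= 4.0

# tau = 0.9 pulls the minimizer toward the upper tail.
best_09 = min(grid, key=lambda q: pinball_loss(y, q, 0.9))
assert best_09 == 10.0
```

Quantile regression replaces the grid search by minimizing this same loss over regression coefficients, typically via linear programming.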
This document discusses granularity issues that arise when analyzing climatic time series data. It begins by discussing the concept of the "return period" in the context of climate change. It then examines models for flood event data that account for the duration and timing of individual flood events. The document proposes a two-duration model for flood data that is analogous to models used for high-frequency financial data. Finally, it discusses long-range dependence and seasonality in climatic variables like wind speed, and methods for estimating return periods from long memory models.
The document discusses provisions for outstanding claims in non-life insurance. It defines key terms like incurred but not reported (IBNR) and incurred but not paid (IBNP) claims. It also presents the chain ladder method for estimating future claim payments based on historical payment patterns represented in triangular payment tables. The chain ladder method estimates development factors that are applied to cumulative paid amounts to project final claim costs.
The document discusses using the programming language R for actuarial science applications. It presents R as a vector-based language suitable for working with life tables and performing actuarial calculations. Examples are given of how to model life contingencies like life expectancies, annuities, and insurance values using vectors and matrices in R. The document also discusses using R to fit prospective mortality models like the Lee-Carter model to data matrices.
This document discusses quantile and expectile regressions. It begins by explaining the differences between the econometrics and machine learning approaches. It then introduces quantile and expectile regressions as generalizations of ordinary least squares regression that minimize different loss functions. Finally, it discusses properties of quantile and expectile regressions, such as the elicitability of the associated measures and how the regressions can be estimated.
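Where the quantile minimizes an asymmetric absolute loss, the expectile minimizes an asymmetric squared loss. A sketch computing an expectile by iterating its first-order condition (a weighted-mean fixed point; the data are illustrative):

```python
def expectile(y, tau, iters=100):
    """tau-expectile: fixed point of the asymmetric-least-squares
    first-order condition, solved by simple reweighting iterations."""
    e = sum(y) / len(y)
    for _ in range(iters):
        weights = [tau if yi > e else 1 - tau for yi in y]
        e = sum(w * yi for w, yi in zip(weights, y)) / sum(weights)
    return e

y = [1.0, 2.0, 4.0, 10.0]
assert abs(expectile(y, 0.5) - sum(y) / len(y)) < 1e-9   # 0.5-expectile = mean
assert expectile(y, 0.9) > expectile(y, 0.5) > expectile(y, 0.1)
```

Setting tau = 0.5 recovers the mean, just as the 0.5-quantile recovers the median, which is why expectile regression with tau = 0.5 coincides with OLS.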
This document discusses tail distribution and dependence measures, including copulas and conditional copulas. It provides an introduction to copulas and some commonly used copula families like Clayton and Gumbel copulas. It then discusses measures of dependence like tail dependence functions and conditional copulas. Conditional copulas can quantify dependence in the lower or upper tails. The document applies these concepts to analyze dependence between insurance loss and expense variables and between types of insurance.
This document discusses dynamic dependence ordering for Archimedean copulas. It begins by defining copulas and Archimedean copulas. It then shows how conditioning and ageing affect the copula for Archimedean copulas. Specifically, it demonstrates that conditioning and ageing result in copulas that are also Archimedean, with modified generators. The document also provides methods to order the tails of Archimedean copulas based on properties of the generator's derivative. Finally, it analyzes how specific Archimedean copulas, such as Frank, Clayton, and Gumbel, are affected by conditioning and ageing.
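Any Archimedean copula is built from its generator psi via C(u, v) = psi^{-1}(psi(u) + psi(v)). A sketch using the Clayton generator, one of the families analyzed above (theta = 2 is an arbitrary illustrative value):

```python
# Archimedean construction C(u, v) = psi_inv(psi(u) + psi(v)),
# illustrated with the Clayton generator (theta > 0).

def clayton_gen(t, theta):
    return (t ** (-theta) - 1.0) / theta

def clayton_gen_inv(s, theta):
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton(u, v, theta):
    return clayton_gen_inv(clayton_gen(u, theta) + clayton_gen(v, theta), theta)

theta = 2.0
# Sanity checks: uniform margins, and positive dependence for theta > 0
# (the copula lies above the independence copula uv).
assert abs(clayton(0.4, 1.0, theta) - 0.4) < 1e-9
assert clayton(0.3, 0.3, theta) > 0.3 * 0.3
```

Conditioning and ageing, as discussed above, act on exactly this generator, which is why the resulting copulas stay in the Archimedean class.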
The document discusses Archimax copulas and other copula families. It begins with an overview of copulas, defining them in dimension 2 and in higher dimensions. It then discusses some standard copula families like the independence copula and the comonotonic copula. It introduces spherical and elliptical distributions, which give rise to elliptical copulas. Finally, it defines Archimax copulas and discusses their properties, both in the bivariate case and in higher dimensions.
This document provides an overview of various classification techniques in data science, including linear discriminant analysis, logistic regression, probit regression, k-nearest neighbors, classification trees (CART), random forests, and techniques for double classification like uplift modeling. It discusses consistency of models and the risk of overfitting when the training sample size is small. Key classification algorithms like logistic regression and CART are explained in detail over multiple pages.
This document discusses the probabilistic foundations of econometrics and relationships to machine learning techniques. It describes how econometrics uses probability distributions and maximum likelihood estimation for linear regression models. It also discusses how machine learning uses loss functions and penalization methods like ridge regression to select models and avoid overfitting. Boosting techniques are mentioned as a way to sequentially learn from previous errors.
This document provides an overview and agenda for a master's level course on probability and statistics. It covers key topics like statistical models, probability distributions, conditional distributions, convergence theorems, sampling, confidence intervals, decision theory, and testing procedures. Examples of common probability distributions and functions are also presented, including the cumulative distribution function, probability density function, independence, and conditional independence. Additional references for further reading are included.
The document outlines two sessions on risk management in banking and finance. Session 9 will cover risk measures, regulatory aspects, and basic principles, including defining risk measures, academic vs accounting standards, desirable properties, and estimating risks from samples. Session 10 will cover correlations, copulas, modeling dependencies between risks, diversification effects, comparing risks under dependence vs independence, and analyzing individual risk contributions. Examples of applications to finance, environmental risks, and credit risk are also provided.
This document summarizes the use of log-Poisson regression models for claims reserving and calculating reserves. It shows how to fit a log-Poisson regression model to incremental claims payments data and use it to estimate total reserves. It also provides methods for calculating the prediction error and quantifying the uncertainty of reserve estimates, including using the bootstrap procedure to generate multiple simulated reserve estimates.
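The bootstrap step can be illustrated generically: resample, re-estimate, and take the spread of the simulated estimates as an uncertainty proxy. This sketch resamples observations directly rather than the Poisson residuals used in the slides, and the payment figures are invented:

```python
import random

def bootstrap_sd(sample, estimator, n_boot=2000, seed=42):
    """Bootstrap standard error: resample with replacement, re-estimate,
    and return the standard deviation of the simulated estimates."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        sims.append(estimator(resample))
    mean = sum(sims) / n_boot
    return (sum((s - mean) ** 2 for s in sims) / (n_boot - 1)) ** 0.5

payments = [120.0, 135.0, 118.0, 150.0, 141.0, 129.0]  # illustrative
se = bootstrap_sd(payments, lambda s: sum(s) / len(s))
```

In the reserving context, each replicate would rebuild a pseudo-triangle from resampled residuals and refit the log-Poisson model, so that the spread of simulated reserves estimates the prediction error.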
This document summarizes the results of an actuarial pricing game simulation with multiple insurance companies. It finds that:
1) In the initial game with all companies, premium levels varied widely between companies and market shares shifted significantly.
2) When additional data was provided in a second round, premium variability decreased but loss ratios were similar.
3) A smaller game with just three companies showed that the choice of pricing strategy (always cheapest vs. random between cheapest two) impacted market shares and loss ratios.
This document discusses measures of inequality in economics. It begins by examining inequality comparisons in 2-person and 3-person economies using tools like the Kolm triangle. It then explores measures of inequality for n-person economies, including the Lorenz curve, Gini coefficient, and quantile ratios. The document also discusses standard statistical measures of dispersion like variance and coefficient of variation. Finally, it introduces an axiomatic approach for evaluating inequality indices based on principles like anonymity and transfers between individuals.
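The Gini coefficient admits a direct formula via the mean absolute difference between individuals, equivalent to twice the area between the Lorenz curve and the diagonal. A sketch with two textbook edge cases:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 n^2 mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes)
    return mad / (2.0 * n * n * mean)

assert gini([1.0, 1.0, 1.0]) == 0.0                      # perfect equality
assert abs(gini([0.0, 0.0, 0.0, 10.0]) - 0.75) < 1e-12   # one person holds everything
```

Note that with one person holding everything the coefficient is (n - 1)/n, not 1, which is one of the finite-sample subtleties an axiomatic treatment has to handle.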
The document discusses dependence between extremal events and copulas. It provides definitions and examples of copulas, including Archimedean copulas like Clayton's and Gumbel's copula. It discusses how copulas can be used to model multivariate dependence, including dependence between extreme events. However, it notes that extending univariate extreme value theory to higher dimensions is challenging due to the lack of a natural order in higher dimensions.
Talk at the modcov19 CNRS workshop, in France, presenting our article "COVID-19 pandemic control: balancing detection policy and lockdown intervention under ICU sustainability".
This document discusses kernel-based estimation methods for inequality indices and risk measures. It begins with an overview of stochastic dominance and related indices like first-order, convex, and second-order stochastic dominance. It then discusses nonparametric estimation of densities and copula densities using kernel methods. Specifically, it proposes using beta kernels and transformed kernels to improve estimation at the boundaries. The document explores combining these approaches and using mixtures of distributions like beta distributions within the kernels. It concludes by discussing applications to heavy-tailed distributions.
This document models the COVID-19 pandemic using a compartmental SIDUHR+/- model that divides the population into susceptible (S), infected asymptomatic (I-), infected symptomatic (I+), recovered asymptomatic (R-), recovered symptomatic (R+), hospitalized (H), ICU (U), and dead (D) categories. Optimal lockdown policies are determined by minimizing costs related to deaths, economic impact, testing needs, and immunity while ensuring ICU sustainability. Increasing ICU capacity allows less stringent lockdown policies while achieving similar outcomes. Faster detection of asymptomatic cases through increased testing also enables more flexible lockdown policies.
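The compartmental dynamics can be sketched with a much-reduced model. The code below integrates a plain SIR system by forward Euler as a stand-in for the full SIDUHR+/- dynamics, with illustrative (not calibrated) rates:

```python
# Forward-Euler integration of a simplified compartmental model
# (plain SIR standing in for SIDUHR+/-; beta and gamma are illustrative).

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=200, dt=0.1):
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = simulate_sir()
assert abs((s + i + r) - 1.0) < 1e-9   # population shares are conserved
assert r > 0.5                         # with R0 = 3, most people are eventually infected
```

A lockdown policy would enter as a time-varying reduction of beta, and the optimization described above chooses that path subject to the ICU-occupancy constraint.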
This document discusses classification and goodness of fit in machine learning. It introduces concepts like confusion matrices, ROC curves, and measures like sensitivity, specificity, and AUC. ROC curves are constructed by plotting the true positive rate vs. false positive rate for different classification thresholds. The AUC can measure classifier performance, with higher values indicating better classification. Chi-square tests and bootstrapping are also discussed for evaluating goodness of fit.
The document discusses optimal foraging and information use in animal groups. It covers several key topics:
1) The producer-scrounger game model which examines strategies for finding food sources when foraging socially. Producers find food on their own while scroungers follow others to find food.
2) Learning rules that allow individuals to adjust their strategies based on previous payoffs. A relative payoff sum rule is described.
3) Social learning heuristics where individuals observe and copy the highest paying strategies of their neighbors.
4) Coevolutionary models where predator information use and prey grouping behavior can evolve in response to each other over time. Prey benefit from manipulating predator information
A Practical Reliability-Based Method for Assessing Soil Liquefaction PotentialCes Nit Silchar
Lecture Topic: A Practical Reliability-Based Method for Assessing Soil Liquefaction Potential
By Prof. Jin-Hung Hwang of National Central University, Taiwan.
2013.06.17 Time Series Analysis Workshop ..Applications in Physiology, Climat...NUI Galway
Professor Dimitris Kugiumtzis, Aristotle University of Thessaloniki, Greece, presented this workshop on time series analysis as part of the Summer School on Modern Statistical Analysis and Computational Methods hosted by the Social Sciences Computing Hub at the Whitaker Institute, NUI Galway on 17th-19th June 2013.
This document discusses model and variable selection in advanced econometrics. It covers topics like numerical optimization techniques, convex problems, Lagrangian functions, and the Karush–Kuhn–Tucker conditions for solving constrained optimization problems. It also references Bayesian and frequentist approaches to statistical inference and the importance of avoiding overfitting models to ensure good generalization to new data.
The document discusses quantiles and quantile regression. It begins by defining quantiles as the inverse of a cumulative distribution function. Quantile regression models the relationship between covariates and conditional quantiles, similar to how ordinary least squares regression models the conditional mean. The document also discusses median regression, which estimates relationships using the 1-norm rather than the 2-norm used in OLS. Median regression provides consistent estimates when the error term has a symmetric distribution.
This document summarizes several methods for estimating copula densities from sample data in a nonparametric way, including using kernel density estimation with different types of kernels and variable transformations. It describes the standard kernel estimate, issues with it near boundaries, a mirror kernel estimate, using beta kernels, a probit transformation of variables, and improved probit transformation estimators that use local polynomial approximations. The goal is to find estimators that are consistent along the boundaries of the copula support and improve inference about the copula density.
The Relation Between Acausality and Interference in Quantum-Like Bayesian Net...Catarina Moreira
The document summarizes a research presentation on building quantum probabilistic models for decision making under uncertainty. It discusses:
1) Current Bayesian network models require manual parameter tuning and do not scale well for complex scenarios.
2) The presentation proposes a quantum-like Bayesian network that uses quantum interference effects and converts classical probabilities to quantum amplitudes.
3) A key challenge is that the number of quantum parameters grows exponentially large, making predictions sensitive and uncertain.
Econometric Investigation into Cryptocurrency Price Bubbles in Bitcoin and Et...Siddharth Hitkari
At this stage, it is common knowledge that cryptocurrency prices are indeed, a bubble. However, does modern-day finance have the tools to detect explosive behaviour in absence of a fundamental value?
Glad to have worked with Shane Jose to release a paper in a bid to answer the aforementioned question!
An Econometric Investigation into Cryptocurrency Price Bubbles in Bitcoin and...Shane Jose
–A time-series analysis of BTC and ETH log-prices using Augmented Dickey-Fuller tests with recursive, rolling and reverse recursive windows.
–Successfully detects explosive behaviour while simultaneously linking real-world events to these bubbles.
Assessing the impact of a health intervention via user-generated Internet con...Vasileios Lampos
Assessing the effect of a health-oriented intervention by traditional epidemiological methods is commonly based only on population segments that use healthcare services. Here we introduce a complementary framework for evaluating the impact of a targeted intervention, such as a vaccination campaign against an infectious disease, through a statistical analysis of user-generated content submitted on web platforms. Using supervised learning, we derive a nonlinear regression model for estimating the prevalence of a health event in a population from Internet data. This model is applied to identify control location groups that correlate historically with the areas, where a specific intervention campaign has taken place. We then determine the impact of the intervention by inferring a projection of the disease rates that could have emerged in the absence of a campaign. Our case study focuses on the influenza vaccination program that was launched in England during the 2013/14 season, and our observations consist of millions of geo-located search queries to the Bing search engine and posts on Twitter. The impact estimates derived from the application of the proposed statistical framework support conventional assessments of the campaign.
Anomaly Detection in Sequences of Short Text Using Iterative Language ModelsCynthia Freeman
The document discusses various methods for anomaly detection in time series data. It begins by defining time series and anomalies, noting that anomaly detection is challenging due to issues like lack of labeled data and data imbalance. It then covers characteristics of time series like seasonality, trends, and concept drift, and how to detect them. Various anomaly detection methods are outlined, including STL, SARIMA, Prophet, Gaussian processes, and RNNs. Evaluation methods and factors to consider in choosing a detection method are also discussed. The document provides an overview of approaches to determining the optimal anomaly detection model for a given time series and application.
日本機械学会年次大会2016で登壇したときの資料です.
英訳:Spatial and temporal variations in epileptic discharges using coupled non-linear oscillator
連成非線形振動子のモデルパラメータを実験波形に合うように同定して,てんかん性異常脳波の時空間解析を実施しています.
Dependent processes in Bayesian NonparametricsJulyan Arbel
This document summarizes dependent processes in Bayesian nonparametrics. It motivates the need for dependent random probability measures to accommodate temporal dependence structures beyond the exchangeability assumption. It describes modeling collections of random probability measures indexed by time as either discrete-time or continuous-time processes. The diffusive Dirichlet process is introduced as a dependent Dirichlet process with Dirichlet marginal distributions at each time point and continuous sample paths. Simulation and estimation methods are discussed for this model.
Biosight: Quantitative Methods for Policy Analysis: Stochastic Dynamic Progra...IFPRI-EPTD
This document discusses stochastic dynamic programming and its applications. It covers Bellman's principle of optimality, solving stochastic dynamic programming problems using value function iteration, and applying these concepts to agroforestry and livestock herd dynamics models. It also discusses estimating intertemporal preferences using dynamic models that relax the assumption of time-additive separability and allow for risk aversion. Examples are provided of solving a resource management problem numerically using value iteration over continuous state and control variables.
This document discusses using extreme value theory and Bayesian analysis to reassess hurricane risk in Puerto Rico after Hurricane Maria. It analyzes rainfall data from San Juan to estimate return levels for extreme rainfall events using maximum likelihood estimation and Bayesian modeling. The Bayesian analysis results in slightly more precise predictions of extreme rainfall amounts compared to the maximum likelihood estimates. Hurricane Maria dropped over 36 inches of rain in some areas of Puerto Rico in September 2017, the highest rainfall amount ever recorded from a hurricane in Puerto Rico.
This document discusses how family history can impact life insurance premiums. It reviews existing literature on relationships between family members' lifespans, such as husbands and wives or parents and children. Genealogical data is used to analyze dependencies between generations, like grandchildren and grandparents. Quantities important for life insurance are calculated based on family information, showing how premiums may differ depending on how many family members are still alive. The goal is to better understand how family history can influence longevity and mortality risk factors used in life insurance underwriting.
Family History and Life Insurance (UConn actuarial seminar)Arthur Charpentier
This document discusses how family history can impact life insurance premiums. It reviews existing literature on relationships between family members' lifespans. The document analyzes a genealogical dataset to study dependencies between husbands and wives, parents and children, and grandparents and grandchildren. It finds modest but robust correlations between related individuals' lifespans. This dependency is quantified for various life insurance metrics like annuities and whole life insurance, showing family history can impact premiums.
The document discusses research on the relationship between family history and life insurance. It summarizes existing literature showing modest but robust connections between the lifespans of family members like spouses, parents and children, and grandparents and grandchildren. The document then presents analyses using a genealogical dataset, finding correlations between related individuals' lifespans. It explores how these family dependencies could impact life insurance premiums and quantities like annuities, widow's pensions, and life expectancies.
This document discusses the use of machine learning techniques in actuarial science and insurance. It begins with an overview of predictive modeling applications in insurance such as fraud detection, premium computation, and claims reserving. It then covers traditional econometric techniques like Poisson and gamma regression models and how machine learning is emerging as an alternative. The document emphasizes evaluating model goodness of fit and uncertainty, and addresses issues like price discrimination and fairness.
This document summarizes a paper on reinforcement learning in economics and finance. It introduces reinforcement learning concepts like agents, environments, actions, rewards, and states. It then discusses applications of reinforcement learning frameworks in economic problems like inventory management, consumption and income dynamics, and experiments. Finally, it notes connections between reinforcement learning and other fields like operations research, stochastic games, and finance.
The document summarizes research on using genealogical data to model dependencies in life spans between family members and quantify the impact on insurance premiums. It presents analysis of husband-wife, parent-child, and grandparent-grandchild relationships, showing dependencies exist. Mortality rates, life expectancies, and insurance quantities like annuities are estimated conditionally based on family history information.
The document discusses natural language processing techniques including word embeddings, text classification using naive Bayes classifiers, and probabilistic language models. It provides examples of part-of-speech tagging and analyzing sentiment. Key concepts covered include the bag-of-words assumption, n-gram models, and maximum likelihood estimation. Various papers on related topics are cited throughout.
This document discusses network representation and analysis. It defines networks as consisting of nodes (vertices) and edges, and describes different ways to represent networks mathematically using adjacency matrices, incidence matrices, and Laplacian matrices. It also discusses visualizing networks using multidimensional scaling and plotting them in R. Special types of networks like complete graphs and random graphs are briefly introduced.
The document discusses various techniques for classifying pictures using neural networks, including convolutional neural networks. It describes how convolutional neural networks can be used to classify images by breaking them into overlapping tiles, applying small neural networks to each tile, and pooling the results. The document also discusses using recurrent neural networks to classify videos by treating them as higher-dimensional tensors.
The document discusses using unusual data sources in insurance. It provides examples of using pictures, text, social media data, telematics, and satellite imagery in insurance. It also discusses challenges in analyzing complex and high-dimensional data from these sources and introduces machine learning tools like PCA, generalized linear models, and evaluating models using loss, risk, and cross-validation.
1) Support vector machines (SVMs) aim to find the optimal separating hyperplane that maximizes the margin between two classes of data points.
2) SVMs can be extended to non-linearly separable data using kernels to project the data into a higher dimensional feature space. Common kernels include polynomial and Gaussian radial basis function kernels.
3) The dual formulation of SVMs involves solving a quadratic programming problem to determine the support vectors, which lie closest to the separating hyperplane. These support vectors are then used to define the optimal hyperplane.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
1. Arthur CHARPENTIER, HdR: Contribution à l’étude de la dépendance
Contributions to Dependence Modeling*
A. Charpentier (Université de Rennes 1)
Habilitation à Diriger des Recherches,
Rennes, 2016.
@freakonometrics 1
2006-2016, Brief Summary
2006: PhD in Applied Mathematics
Dependencies, with Applications in Insurance and Finance
Supervised by Jan Beirlant & Michel Denuit
2006-2010: Maître de Conférences
Université de Rennes 1
2010-2014: Professeur
Université du Québec à Montréal
2014-2016: Maître de Conférences
Université de Rennes 1
‘Fundamental Results’ Dependence & Extremes
with Anne-Laure Fougères, Christian Genest & Johanna Nešlehová, Multivariate Archimax Copulas (JMVA, 2014).
Let $\ell$ be a $d$-variate stable tail dependence function and $\phi$ be the generator of a $d$-variate Archimedean copula. Then
$$C_{\phi,\ell}(u_1, \cdots, u_d) = \phi^{-1}\big[\ell\big(\phi(u_1), \cdots, \phi(u_d)\big)\big]$$
is a $d$-dimensional copula. Further, if $1 - \phi^{-1}(1/s)$ is regularly varying (at $\infty$) with index $\alpha \in (0, 1]$, then $C_{\phi,\ell} \in \mathrm{MDA}(C_{\ell,\alpha})$, where
$$C_{\ell,\alpha}(u_1, \cdots, u_d) = \exp\Big[-\big\{\ell\big(|\log u_1|^{1/\alpha}, \cdots, |\log u_d|^{1/\alpha}\big)\big\}^{\alpha}\Big].$$
See also results obtained with Johan Segers, Tails of Archimedean Copulas (JMVA).
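As a numerical illustration (not from the slides), the Archimax construction can be checked directly. The sketch below assumes a Clayton generator and the logistic (Gumbel) stable tail dependence function; with r = 1 the stdf reduces to a plain sum and the construction collapses to the ordinary Clayton copula.

```python
# Sketch: bivariate Archimax copula C(u1, u2) = phi_inv( ell(phi(u1), phi(u2)) )
# with a Clayton generator and the logistic (Gumbel) stdf -- illustrative only.

def phi(t, theta=2.0):
    """Clayton generator phi(t) = t^(-theta) - 1."""
    return t ** (-theta) - 1.0

def phi_inv(s, theta=2.0):
    """Inverse Clayton generator."""
    return (1.0 + s) ** (-1.0 / theta)

def ell_logistic(x1, x2, r=1.5):
    """Logistic (Gumbel) stable tail dependence function."""
    return (x1 ** r + x2 ** r) ** (1.0 / r)

def archimax(u1, u2, theta=2.0, r=1.5):
    return phi_inv(ell_logistic(phi(u1, theta), phi(u2, theta), r), theta)

def clayton(u1, u2, theta=2.0):
    return (u1 ** (-theta) + u2 ** (-theta) - 1.0) ** (-1.0 / theta)

# Uniform margins: ell(x, 0) = x, hence C(u, 1) = u.
assert abs(archimax(0.3, 1.0) - 0.3) < 1e-12
# With r = 1 the stdf is the sum, and the Archimax copula is Clayton.
assert abs(archimax(0.3, 0.6, r=1.0) - clayton(0.3, 0.6)) < 1e-12
```

The second assertion illustrates why Archimax families nest the Archimedean ones: the independence stdf recovers the generator's own copula.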
‘Fundamental Results’ Nonparametric Estimation (and Boundaries)
with Emmanuel Flachaire, Transformed Kernel & Inequality and Risk Indices (Actualité Économique, 2015)
[Figure: six kernel density panels, three over (−2, 2) and three over (0, 12)]
‘Fundamental Results’ Nonparametric Estimation (and Boundaries)
with Emmanuel Flachaire, Transformed Kernel & Inequality and Risk Indices (Actualité Économique, 2015)
[Figure: six kernel density panels, three over (0, 1) and three over (0, 12)]
‘Fundamental Results’ Nonparametric Estimation (and Boundaries)
with Gery Geenens & Davy Paindaveine, Copula Density Estimation and Probit Transform (Bernoulli, 2015)
From an i.i.d. $n$-sample $\{(x_i, y_i)\}$, define the normalized pseudo-sample $\{(s_i, t_i)\}$ with
$$s_i = \Phi^{-1}(u_i) = \Phi^{-1}\big(F_X(x_i)\big) \quad\text{and}\quad t_i = \Phi^{-1}(v_i) = \Phi^{-1}\big(F_Y(y_i)\big),$$
and estimate its density by
$$\hat f_{ST}(s, t) = \frac{1}{n\,|H_{ST}|^{1/2}} \sum_{i=1}^{n} K\!\left(H_{ST}^{-1/2}\binom{s - s_i}{t - t_i}\right).$$
Then
$$\hat c^{(\tau)}(u, v) = \frac{\hat f_{ST}\big(\Phi^{-1}(u), \Phi^{-1}(v)\big)}{\phi\big(\Phi^{-1}(u)\big)\,\phi\big(\Phi^{-1}(v)\big)}$$
is the so-called naive estimator...
[Figure: contour plots (levels 0.25 to 4) of the copula density estimators $\tilde c^{(\tau,2)}$, $\hat c_\beta$, $\hat c_b$ and $\hat c_p$, Loss (X) vs ALAE (Y)]
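A minimal sketch of the naive probit-transformation estimator, using scipy's `gaussian_kde` in place of the explicit bandwidth matrix $H_{ST}$, and simulated positively dependent data standing in for the Loss-ALAE sample (both substitutions are assumptions of the sketch, not the slides' setup):

```python
# Sketch of the naive probit-transformation copula density estimator:
# pseudo-observations -> probit transform -> bivariate KDE -> back-transform.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(42)

# Simulated positively dependent heavy-ish data (stand-in for Loss-ALAE).
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])
n = len(x)

# Pseudo-observations u_i = rank/(n+1), then probit transform s_i = Phi^{-1}(u_i).
u = (np.argsort(np.argsort(x)) + 1.0) / (n + 1)
v = (np.argsort(np.argsort(y)) + 1.0) / (n + 1)
s, t = norm.ppf(u), norm.ppf(v)

kde = gaussian_kde(np.vstack([s, t]))  # estimates f_ST on the probit scale

def c_naive(uu, vv):
    """Naive estimator c(u,v) = f_ST(Phi^-1(u), Phi^-1(v)) / (phi * phi)."""
    ss, tt = norm.ppf(uu), norm.ppf(vv)
    return kde([[ss], [tt]])[0] / (norm.pdf(ss) * norm.pdf(tt))

c_hat = c_naive(0.5, 0.5)  # estimate at the centre of the unit square
```

Because the probit transform sends the unit square to the whole plane, the boundary-bias problem of a direct kernel estimate on $[0,1]^2$ disappears on the transformed scale, which is the point of the construction.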
‘Fundamental Results’ Nonparametric Estimation (and Boundaries)
with Gery Geenens & Davy Paindaveine, Copula Density Estimation and Probit Transform (Bernoulli, 2015)
... with possible improvements. One can derive asymptotic normality:
$$\sqrt{n h^2}\,\Big(\tilde c^{*(\tau,2)}(u, v) - c(u, v) - h^4 B(u, v)\Big) \xrightarrow{\ L\ } \mathcal{N}\big(0, \sigma^2_{(2)}(u, v)\big) \quad\text{as } n \to \infty,$$
where
$$B(u, v) = \frac{b^{(2)}(u, v)}{\phi\big(\Phi^{-1}(u)\big)\,\phi\big(\Phi^{-1}(v)\big)}.$$
Application: Loss-ALAE dataset.
[Figure: contour plots of the estimators $\tilde c^{(\tau,2)}$, $\hat c_\beta$, $\hat c_b$ and $\hat c_p$ on the Loss (X) vs ALAE (Y) data]
‘Fundamental Results’ Multivariate Risk Aversion
with Alfred Galichon and Marc Henry, Multivariate Local Utility, (MOR, 2015)
Machina (1982): $R$ has a local utility representation if there is $U_P$ such that
$$R(P) - R(P_\varepsilon) = -\int U_P(x)\, d(P - P_\varepsilon)(x) + o\big(\|P - P_\varepsilon\|\big),$$
i.e. $U_P$ is the Fréchet derivative of $R$ at $P$.
Ex: entropic measure, $R(P) = -\frac{1}{\alpha}\log \mathbb{E}_P\big(e^{-\alpha X}\big)$, then $U_P(x) = \frac{1}{\alpha}\,\frac{e^{-\alpha x}}{\mathbb{E}_P(e^{-\alpha X})}$.
Ex: distorted measure, $R(P) = \int_0^1 F_X^{-1}(u)\,\varphi(u)\,du$, then $U_P(x) = \int^x \varphi\big(F_X(z)\big)\,dz$.
$R$ is Schur-concave if and only if $U_P$ is concave, for all $P \in L^2$.
Let $X_0 \sim P$ and $X_1 \sim Q$ be such that $\mathbb{E}(X_1 \mid X_0) = X_0$. There exists a martingale interpolation $(\tilde X_t)_{t \in [0,1]}$ with $\tilde X_0 = X_0$, $\tilde X_1 = X_1$ and $d\tilde X_t = \Sigma_t\, dB_t$, such that $R(\tilde X_s) \le R(\tilde X_t)$ for all $s < t$.
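For the entropic example, a short deterministic check (the support and probabilities below are illustrative): $R(P)$ is the certainty equivalent under CARA utility $u(x) = -e^{-\alpha x}$, and the local utility weights average to $1/\alpha$ under $P$.

```python
# Sketch: entropic measure R(P) = -(1/alpha) log E_P[exp(-alpha X)] and its
# local utility U_P(x) = (1/alpha) exp(-alpha x) / E_P[exp(-alpha X)],
# checked on a discrete distribution (illustrative values).
import math

alpha = 1.0
xs = [0.0, 1.0, 2.0]   # support (assumed for the example)
ps = [0.2, 0.5, 0.3]   # probabilities (assumed for the example)

mgf = sum(p * math.exp(-alpha * x) for x, p in zip(xs, ps))
R = -(1.0 / alpha) * math.log(mgf)

def U(x):
    """Local utility of the entropic measure at P."""
    return (1.0 / alpha) * math.exp(-alpha * x) / mgf

# R is the certainty equivalent under CARA utility u(x) = -exp(-alpha x):
assert abs(-math.exp(-alpha * R) - (-mgf)) < 1e-12
# The local utility averages to 1/alpha under P:
assert abs(sum(p * U(x) for x, p in zip(xs, ps)) - 1.0 / alpha) < 1e-12
```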
‘Applied Mathematics’ Causality and Time Series
with David Sibaï, Hurst/Gumbel and Floods (Environmetrics, 2008) or Heat Waves (CC, 2011), or with Marilou Durand, Dynamics of Earthquakes (JS, 2015)
[Figures: daily summer temperatures in °C (July to August); frequency (in %) of large earthquakes (magnitude > 7 and > 7.5) per year and per decade within 1,000 km of Tokyo, benchmark vs Gamma-Pareto and Weibull-Pareto models; distribution functions of the return period before the next heat wave (4 consecutive days exceeding 24°C; 11 consecutive days exceeding 19°C) under GARMA + Gaussian noise, ARMA + t noise and ARMA + Gaussian noise]
‘Applied Mathematics’ Causality and Time Series
with Mathieu Boudreault, Multivariate INAR (2012)
Inspired by Steutel & van Harn (1979), define a multivariate thinning operator
$$[P \circ N]_i = \sum_{j=1}^{d} p_{i,j} \circ N_j, \qquad\text{with}\quad p \circ N = \sum_{k=1}^{N} Y_k,$$
where $Y_1, Y_2, \cdots$ are i.i.d. $\mathcal{B}(p)$'s. A MINAR process is
$$X_t = P \circ X_{t-1} + \varepsilon_t,$$
where the $\varepsilon_t$ are i.i.d. Poisson random vectors.
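A simulation sketch of a bivariate MINAR(1) with binomial thinning; the thinning matrix $P$ and innovation means below are illustrative, and the sample mean is checked against the stationary mean $(I - P)^{-1}\lambda$, which solves $\mu = P\mu + \lambda$.

```python
# Sketch: simulate a bivariate MINAR(1) process X_t = P o X_{t-1} + eps_t,
# where "o" is binomial thinning applied entrywise and eps_t is Poisson.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.4, 0.1],
              [0.2, 0.3]])   # thinning probabilities p_{ij} (illustrative)
lam = np.array([1.0, 2.0])   # Poisson innovation means (illustrative)

def thin(P, N, rng):
    """[P o N]_i = sum_j p_ij o N_j with binomial thinning p o N ~ Bin(N, p)."""
    return np.array([sum(rng.binomial(Nj, pij) for pij, Nj in zip(row, N))
                     for row in P])

T = 20_000
X = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    X[t] = thin(P, X[t - 1], rng) + rng.poisson(lam)

# Stationary mean solves mu = P mu + lam.
mu_theory = np.linalg.solve(np.eye(2) - P, lam)
assert np.allclose(X[2000:].mean(axis=0), mu_theory, rtol=0.05)
```

Thinning replaces the scalar multiplication of a Gaussian VAR(1) so that the process stays integer-valued, which is what makes MINAR suitable for multivariate count series.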
[Figure: pairwise Granger causality tests among 17 series, at 3-hour and 6-hour horizons]
See also joint work with M. Toledo Bastos & Dan Mercea, Onsite & Online Protest Activity (JC, 2015).
‘Applied Mathematics’ Applications of Game Theory
with Benoit Le Maux Natural Catastrophes and Cooperation, (JPE, 2014)
$$\underbrace{\mathbb{E}\big[u(\omega - X)\big]}_{\text{no insurance}} \;\le\; \underbrace{\mathbb{E}\big[u(\omega - \alpha - \ell + I)\big]}_{\text{insurance}} = V,$$
where the indemnity $I(\cdot)$ can be a function of the proportion of the population claiming a loss. With limited liability,
$$V = U(-\alpha) - \int_0^1 x\,\big[U(-\alpha) - U(-\alpha - \ell + I(x))\big]\, f(x)\, dx.$$
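A numerical sketch of the limited-liability value $V$. The functional forms are assumptions for illustration only, not taken from the paper: CARA utility, indemnity capped by the pooled premium, $I(x) = \min(\ell, \alpha/x)$, and a Beta(2, 5) density for the claiming proportion.

```python
# Numeric sketch of the limited-liability value
#   V = U(-alpha) - int_0^1 x [U(-alpha) - U(-alpha - l + I(x))] f(x) dx,
# with assumed ingredients: CARA utility, I(x) = min(l, alpha/x), f = Beta(2,5).
import math
from scipy.integrate import quad
from scipy.stats import beta

a = 1.0          # risk aversion (assumed)
alpha_p = 0.3    # premium (assumed)
loss = 1.0       # individual loss l (assumed)

def U(w):
    return -math.exp(-a * w)   # CARA utility

def I(x):
    """Indemnity pro-rated by the premium pool when many claim."""
    return min(loss, alpha_p / x) if x > 0 else loss

def integrand(x):
    return x * (U(-alpha_p) - U(-alpha_p - loss + I(x))) * beta.pdf(x, 2, 5)

shortfall, _ = quad(integrand, 0.0, 1.0)
V = U(-alpha_p) - shortfall

# Sanity bounds: the liability cap can only lower the value, but V stays
# above the utility of paying the premium and bearing the full loss.
assert U(-alpha_p - loss) - 1e-12 <= V <= U(-alpha_p) + 1e-12
```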
See also work with Stéphane Mussard, Income Inequality Games (JEI, 2011), and with Romuald Élie.
‘Applied Mathematics’ Actuarial Science
Books: Mathématiques de l'Assurance Non-Vie (with Michel Denuit) and Computational Actuarial Science.
Articles on insurance models: Insurability of Climate Risk (GP, 2009); Claims Reserving, micro vs. macro, with Mathieu Pigeon (Risks, 2016); and Bonus-Malus Systems with Arthur David & Romuald Élie (2016).
Popular Writing / Articles in French
On-going work
• Enora Betz, Pierre-Yves Geoffard & Julien Tomas, Bodily Injury Claims in France: Court or Negotiated Settlement?
• Emmanuel Flachaire & Magali Fromont, Machine Learning & Econometrics
• Alfred Galichon & Lucas Vernet, Min-Cost Flow Models in Economics
• Amadou Barry & Karim Oualkacha, Quantile and Expectile Regression for Random Effects Models
• Antoine Ly, Classification with Unbalanced Samples
• Ewen Gallic & Olivier Cabrignac, Mortality in France and Familial Dependencies, from Genealogical Data
• Arnaud Goussebaile, Insurance of Natural Catastrophes, Risk and Ambiguity
• Ndéné Ka, Stéphane Mussard & Oumar Ndiaye, Gini Regression and Heteroskedasticity
Min-Cost Flow Models in Economics
(source: Church (2009))
Min-Cost Flow Models in Economics
Quantile and Expectile Regression for Random Effects Models
Quantile: $q(\alpha, Y) = \underset{\theta \in \mathbb{R}}{\operatorname{argmin}}\ \mathbb{E}\big(r^Q_\alpha(Y - \theta)\big)$ with $r^Q_\alpha(u) = |\alpha - \mathbf{1}(u \le 0)| \cdot |u|$.
Empirical version: $\hat q(\alpha, Y) = \underset{\theta \in \mathbb{R}}{\operatorname{argmin}}\ \frac{1}{n}\sum_{i=1}^{n} r^Q_\alpha(y_i - \theta)$.
Quantile regression: $\hat\beta^Q(\alpha, y, x) = \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}}\ \frac{1}{n}\sum_{i=1}^{n} r^Q_\alpha\big(y_i - x_i^{\mathsf{T}}\beta\big)$; see Koenker (2005).
Following Newey & Powell (1987), define expectiles as $\mu(\tau, Y) = \underset{\theta \in \mathbb{R}}{\operatorname{argmin}}\ \mathbb{E}\big(r^E_\tau(Y - \theta)\big)$ with $r^E_\tau(u) = |\tau - \mathbf{1}(u \le 0)| \cdot u^2$.
Expectile regression: $\hat\beta^E(\tau, y, x) = \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}}\ \frac{1}{n}\sum_{i=1}^{n} r^E_\tau\big(y_i - x_i^{\mathsf{T}}\beta\big)$.
Properties of the estimators are studied in the context of panel data $(y_{i,t}, x_{i,t})$.
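These definitions can be sketched numerically (data and parameters below are illustrative): the fixed-point iteration computes empirical expectiles via the asymmetric-least-squares weights $w_i = |\tau - \mathbf{1}(y_i \le \mu)|$, returning the mean when $\tau = 1/2$, and a grid search shows the empirical check loss is minimized near the empirical quantile.

```python
# Sketch: empirical quantiles and expectiles as minimizers of the
# check loss r^Q_alpha and the asymmetric squared loss r^E_tau.
import numpy as np

def expectile(y, tau, n_iter=200):
    """Fixed-point iteration: mu = sum(w*y)/sum(w), w_i = |tau - 1(y_i<=mu)|."""
    mu = y.mean()
    for _ in range(n_iter):
        w = np.where(y <= mu, 1.0 - tau, tau)
        mu = np.sum(w * y) / np.sum(w)
    return mu

rng = np.random.default_rng(1)
y = rng.exponential(size=1_000)

# tau = 1/2 weights both sides equally, so the expectile is the mean:
assert abs(expectile(y, 0.5) - y.mean()) < 1e-8

# The check loss r^Q_alpha(u) = |alpha - 1(u<=0)| * |u| is minimized
# (over a fine grid) near the empirical alpha-quantile:
alpha = 0.75
grid = np.linspace(y.min(), y.max(), 2_001)
loss = [np.mean(np.abs(alpha - (y <= t)) * np.abs(y - t)) for t in grid]
t_star = grid[np.argmin(loss)]
assert abs(t_star - np.quantile(y, alpha)) < 0.05
```

Expectiles trade the absolute value for a square, which makes them smooth in $\tau$ and easy to fit by iteratively reweighted least squares, the same device used in the regression versions above.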
Quantile and Expectile Regression for Random Effects Models
Conclusion (?)
Work on ‘fundamental’ results as well as applications (insurance, finance, economics, climate).
Work with researchers in applied mathematics and economics, involving students (undergraduate, graduate, PhD, post-doc).
Currently involved in projects:
• ANR, multivariate inequalities
• ACTINFO research chair