The document discusses provisions for outstanding claims in non-life insurance. It defines key terms like incurred but not reported (IBNR) and incurred but not paid (IBNP) claims. It also presents the chain ladder method for estimating reserves, which assumes a constant development factor between years. The chain ladder method is demonstrated on a numeric example using a triangle of cumulative paid claims.
Arthur Charpentier presents a model for insurance equilibria covering natural catastrophes in heterogeneous regions. The model considers both private insurance companies with limited liability and possible government intervention. It examines a one-region model with homogeneous agents and a common-shock model for natural disaster risks. Finally, it develops a two-region model to analyze equilibria when regions make strategic decisions.
This document discusses various modeling approaches for non-life insurance tariffication including frequency-severity models, Tweedie regression models, and high-dimensional modeling techniques like ridge regression and the LASSO. It compares individual risk and collective risk models, explores the impact of the Tweedie parameter, and applies regularization methods to insurance data.
The document discusses provisions for outstanding claims in non-life insurance. It defines key terms like incurred but not reported (IBNR) and incurred but not paid (IBNP) claims. It also presents the chain ladder method for estimating future claim payments based on historical payment patterns represented in triangular payment tables. The chain ladder method estimates development factors that are applied to cumulative paid amounts to project final claim costs.
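The chain ladder mechanics summarized above fit in a few lines of code: estimate a development factor per column as a ratio of column sums, then roll each accident year forward to ultimate. A minimal stdlib-only Python sketch (the underlying documents work in R; the three-year cumulative triangle below is made up for illustration):

```python
def chain_ladder(tri):
    """Chain ladder on a triangle of cumulative paid claims.

    tri: list of rows; row i (accident year i) holds the cumulative
    payments observed up to development year n - 1 - i.
    Returns (development factors, ultimates, reserves per accident year).
    """
    n = len(tri)
    # Development factor for column j: ratio of column sums over the
    # accident years observed in both column j and column j + 1.
    f = []
    for j in range(n - 1):
        num = sum(tri[i][j + 1] for i in range(n - 1 - j))
        den = sum(tri[i][j] for i in range(n - 1 - j))
        f.append(num / den)
    # Project each accident year to ultimate with the remaining factors.
    ult = []
    for row in tri:
        c = row[-1]
        for j in range(len(row) - 1, n - 1):
            c *= f[j]
        ult.append(c)
    reserves = [u - row[-1] for u, row in zip(ult, tri)]
    return f, ult, reserves
```

The reserve for each accident year is simply the projected ultimate minus the latest cumulative payment, and the total reserve is their sum.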
This document summarizes the research contributions of Arthur Charpentier in studying dependence. It outlines some of his fundamental results in areas like: 1) developing multivariate Archimax copulas to model dependence between extremes; 2) proposing nonparametric methods for estimating dependence and density functions; and 3) estimating copula densities using a probit transformation. It also provides biographical details on Charpentier's education and professional positions held from 2006 to 2016.
This document summarizes the results of an actuarial pricing game simulation with multiple insurance companies. It finds that:
1) In the initial game with all companies, premium levels varied widely between companies and market shares shifted significantly.
2) When additional data was provided in a second round, premium variability decreased but loss ratios were similar.
3) A smaller game with just three companies showed that the choice of pricing strategy (always cheapest vs. random between cheapest two) impacted market shares and loss ratios.
This document summarizes Arthur Charpentier's presentation at the Rennes Risk Workshop in April 2015. It discusses extending concepts of risk from univariate to multivariate prospects, including characterizing attitudes to multivariate notions of increasing risk like the Rothschild-Stiglitz mean-preserving increase in risk and Quiggin's monotone mean-preserving increase in risk. It also generalizes the Bickel-Lehmann dispersion order to multivariate risks and examines its implications for risk sharing.
This document discusses various modeling techniques for non-life insurance ratemaking including individual and collective models, Tweedie regression, and the LASSO method. It explores using a Tweedie distribution for compound Poisson models and the relationship between individual and collective models. The document also examines issues with high-dimensional data in insurance, bias-variance tradeoffs, and regularization methods like ridge regression and the LASSO for variable selection.
This document discusses modeling and estimating extreme risks and quantiles in non-life insurance. It introduces the generalized extreme value distribution and the Fisher–Tippett theorem, which states that normalized maxima of iid random variables converge to one of three extreme value distributions, together with the Pickands–Balkema–de Haan theorem, which justifies modeling exceedances above a high threshold with the generalized Pareto distribution. It also discusses estimators for the shape parameter, such as the Hill estimator, and shows how the fitted generalized Pareto distribution yields value-at-risk estimates. Examples apply these methods to Danish fire insurance loss data.
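As a concrete instance of the shape-parameter estimation mentioned above, the Hill estimator is just an average of log-spacings of the upper order statistics. A minimal sketch (the sample in the test is illustrative):

```python
import math

def hill(sample, k):
    """Hill estimator of the tail index from the k largest observations:
    the average log-ratio between the top k order statistics and the
    (k + 1)-th largest value."""
    xs = sorted(sample, reverse=True)  # descending order statistics
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
```

In practice one plots the estimate against k (the Hill plot) and looks for a stable region before reading off the tail index.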
The document is a presentation on property and casualty insurance reserving methods. It discusses provisions for outstanding claims, IBNR claims, and triangles used to estimate reserves. Triangles show incremental and cumulative claim payments by accident year and development year. The chain ladder method is presented as a common approach to estimating reserves that assumes a constant development factor at each step of the triangle. Formulas are provided and an example is worked through to demonstrate calculating development factors and estimating ultimate claim amounts.
This document discusses copulas and their use in modeling risk dependence. It introduces copulas as joint distribution functions with uniform margins that can be used to fully characterize dependence between random variables. Several classical copulas are described, including the independent, comonotonic, and countermonotonic copulas. Elliptical copulas like the Gaussian and Student t copulas are presented. Archimedean and extreme value copulas are also discussed. The document explores how copulas can capture dependence information that may not be reflected in correlation alone. Copulas provide flexible tools for modeling multivariate risks and dependencies.
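To make the Gaussian copula concrete: sampling from it only requires correlated standard normals pushed through the normal cdf. A stdlib-only Python sketch (the correlation value and sample size are arbitrary choices, not from the document):

```python
import math
import random

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) from a bivariate Gaussian copula with
    correlation rho, via the Cholesky factor of the 2x2 correlation
    matrix and the standard normal cdf (expressed with erf)."""
    rng = random.Random(seed)

    def phi(x):  # standard normal cdf
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * z2
        out.append((phi(x1), phi(x2)))
    return out
```

The marginals of (u, v) are uniform on (0, 1); all the dependence sits in the copula, which is the point made in the summary above.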
This document discusses using regression models for claims reserving. Specifically, it examines using Poisson regression with incremental payments modeled as Poisson distributed, with the mean depending on occurrence year and development year factors. It provides an example of fitting such a model in R and summarizing the results. It also discusses using the model to estimate reserves and quantifying the uncertainty in those estimates through bootstrap simulations of the residuals.
This document summarizes a presentation on testing for volatility transmission between international markets using high frequency data. It discusses using realized volatility to estimate true latent volatility processes while controlling for jumps and microstructure noise. The presentation focuses on testing for transmission of only extreme or large volatility values between markets. A quantile model is used to define extreme periods, and cross-covariances are computed to test for non-causality between markets' extreme periods using Ljung-Box statistics. Simulations are performed based on a three-regime smooth-transition model to assess the test in finite samples.
This document discusses modeling and estimating extreme risks and quantiles in non-life insurance. It introduces the generalized extreme value distribution, which unifies the three limiting distributions for maxima (Fréchet, Gumbel, and Weibull). It also discusses estimators such as the Hill estimator for the shape parameter, and presents methods for estimating value-at-risk and tail-value-at-risk based on the generalized Pareto distribution above a threshold.
This document discusses quantile estimation techniques, including parametric, semiparametric, and nonparametric approaches. Parametric estimation assumes a distribution such as the Gaussian and derives quantiles from the fitted parameters. Semiparametric estimation uses extreme value theory to model the upper tail with a generalized Pareto distribution. Nonparametric estimation reads quantiles directly from the data without assuming a particular distribution. The document presents several techniques for quantile estimation and compares their performance.
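The parametric-versus-nonparametric contrast above is easy to demonstrate. A minimal sketch; the order-statistic convention used for the empirical quantile is one choice among several:

```python
import math
from statistics import NormalDist

def gaussian_quantile(sample, p):
    """Parametric: fit a Gaussian by moments and read off its p-quantile."""
    n = len(sample)
    mu = sum(sample) / n
    sd = (sum((x - mu) ** 2 for x in sample) / (n - 1)) ** 0.5
    return NormalDist(mu, sd).inv_cdf(p)

def empirical_quantile(sample, p):
    """Nonparametric: the order statistic at level p (inverse-cdf rule)."""
    xs = sorted(sample)
    return xs[max(0, math.ceil(p * len(xs)) - 1)]
```

For heavy-tailed losses the Gaussian quantile badly understates high quantiles, which is what motivates the semiparametric, extreme-value-based alternative discussed in the document.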
This document discusses quantile and expectile regressions. It begins by explaining the differences between the econometrics and machine learning approaches. It then introduces quantile and expectile regressions as generalizations of ordinary least squares regression that minimize different loss functions. Finally, it discusses properties of quantile and expectile regressions, such as the elicitability of the underlying measures, and how the models can be estimated.
This document discusses measures of inequality in economics. It begins by examining inequality comparisons in 2-person and 3-person economies using tools like the Kolm triangle. It then explores measures of inequality for n-person economies, including the Lorenz curve, Gini coefficient, and quantile ratios. The document also discusses standard statistical measures of dispersion like variance and coefficient of variation. Finally, it introduces an axiomatic approach for evaluating inequality indices based on principles like anonymity and transfers between individuals.
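Of the inequality measures listed above, the Gini coefficient is the most direct to compute from a sorted income vector. A stdlib-only sketch using the standard mean-difference formula (the test incomes are toy values):

```python
def gini(incomes):
    """Gini coefficient from the sorted-income formula
    G = sum_i (2i - n - 1) x_(i) / (n^2 * mu), with i = 1..n."""
    xs = sorted(incomes)
    n = len(xs)
    mu = sum(xs) / n
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * n * mu)
```

Perfect equality gives G = 0, while one person holding everything in an n-person economy gives G = (n - 1) / n, consistent with the Lorenz-curve interpretation mentioned above.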
The document outlines two sessions on risk management in banking and finance. Session 9 will cover risk measures, regulatory aspects, and basic principles, including defining risk measures, academic vs accounting standards, desirable properties, and estimating risks from samples. Session 10 will cover correlations, copulas, modeling dependencies between risks, diversification effects, comparing risks under dependence vs independence, and analyzing individual risk contributions. Examples of applications to finance, environmental risks, and credit risk are also provided.
This document provides an overview and agenda for a master's level course on probability and statistics. It covers key topics like statistical models, probability distributions, conditional distributions, convergence theorems, sampling, confidence intervals, decision theory, and testing procedures. Examples of common probability distributions and key concepts are also presented, including the cumulative distribution function, the probability density function, independence, and conditional independence. Additional references for further reading are included.
- The document discusses nonparametric kernel estimation methods for copula density functions.
- It proposes using a probit transformation of the data to estimate the copula density on the unit square, which improves consistency at the boundaries compared to standard kernel methods.
- Two improved probit-transformation kernel copula density estimators are presented - one using a local log-linear approximation and one using a local log-quadratic approximation.
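The basic probit-transformation idea in the bullets above fits in a few lines: map the pseudo-observations to the normal scale, run an ordinary kernel density estimate there (where there is no boundary), and transform back. This is a naive sketch of the baseline estimator, not the improved local log-linear or log-quadratic variants; the bandwidth is a made-up choice:

```python
from statistics import NormalDist

def probit_kde_copula_density(u_sample, v_sample, u, v, h=0.3):
    """Naive probit-transformation estimator of a copula density at (u, v):
    kernel-estimate the density of (Phi^-1(U), Phi^-1(V)) on R^2 with a
    product Gaussian kernel, then divide by the normal densities to map
    back to the unit square. All u, v values must lie strictly in (0, 1)."""
    nd = NormalDist()
    s, t = nd.inv_cdf(u), nd.inv_cdf(v)
    xs = [nd.inv_cdf(ui) for ui in u_sample]
    ys = [nd.inv_cdf(vi) for vi in v_sample]
    n = len(xs)
    # Kernel density estimate of the transformed sample at (s, t).
    f = sum(nd.pdf((s - x) / h) * nd.pdf((t - y) / h)
            for x, y in zip(xs, ys)) / (n * h * h)
    # Back-transform: c(u, v) = f(s, t) / (phi(s) * phi(t)).
    return f / (nd.pdf(s) * nd.pdf(t))
```

Because the kernel smoothing happens on the full plane, the estimator remains consistent near the corners of the unit square, which is exactly the boundary problem the probit transformation is meant to fix.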
This document summarizes the use of log-Poisson regression models for claims reserving and calculating reserves. It shows how to fit a log-Poisson regression model to incremental claims payments data and use it to estimate total reserves. It also provides methods for calculating the prediction error and quantifying the uncertainty of reserve estimates, including using the bootstrap procedure to generate multiple simulated reserve estimates.
The document summarizes quantile and expectile regression models. It defines quantiles and expectiles and explains how they are estimated from sample data. It also discusses quantile and expectile regression, including extensions for fixed effects and random effects panels. Key points include:
- Quantiles minimize an asymmetric absolute loss function, while expectiles minimize an asymmetric squared loss function.
- Quantile regression parameters are estimated by minimizing the weighted sum of losses. Expectile regression parameters are estimated similarly using weighted squared losses.
- Panel data models include penalized quantile regression with fixed effects, and quantile/expectile regression with random effects and their asymptotic properties.
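The two loss functions in the first bullet can be written out explicitly, and minimizing the pinball loss over a grid of candidates recovers the sample quantile. A stdlib-only sketch (the grid-search estimator is for illustration; real fits use linear programming or iterative solvers):

```python
def quantile_loss(u, tau):
    """Asymmetric absolute (pinball) loss whose minimizer is the tau-quantile."""
    return (tau if u >= 0 else tau - 1) * u

def expectile_loss(u, tau):
    """Asymmetric squared loss whose minimizer is the tau-expectile."""
    return (tau if u >= 0 else 1 - tau) * u * u

def sample_quantile(xs, tau, grid):
    """Estimate the tau-quantile by minimizing total pinball loss on a grid."""
    return min(grid, key=lambda q: sum(quantile_loss(x - q, tau) for x in xs))
```

Replacing `quantile_loss` with `expectile_loss` in `sample_quantile` gives a grid-search expectile in exactly the same way, which is the parallel the bullets above draw.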
1. The document discusses quantiles and quantile regressions, which are important concepts in analyzing inequalities, risk, and other areas where conditional distributions are relevant.
2. Quantile regression models the relationship between covariates X and the conditional quantiles of the response variable Y. This generalizes ordinary least squares regression, which models the conditional mean of Y.
3. Median regression uses the 1-norm (sum of absolute deviations) instead of the 2-norm (sum of squared deviations) used in OLS. It estimates the conditional median of Y rather than the conditional mean.
The document discusses optimal reinsurance strategies to meet a target ruin probability. It describes proportional and nonproportional reinsurance contracts. For proportional reinsurance, the ruin probability can always be decreased by increasing the cession ratio. For nonproportional reinsurance, ruin is possible even if it did not occur without reinsurance. The document models different claim size distributions and analyzes how the optimal deductible varies to meet the target ruin probability.
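The proportional-reinsurance claim above (more cession, lower ruin probability) can be checked by simulation. A rough Monte Carlo sketch; the compound Poisson model with exponential severities, the loading, and all parameter values are illustrative choices of mine, not necessarily the document's:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplication method for a Poisson variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ruin_prob(alpha, u0=10.0, lam=1.0, mean_claim=1.0, loading=0.2,
              n_years=20, n_sims=2000, seed=1):
    """Monte Carlo finite-horizon ruin probability with proportional
    reinsurance: the insurer retains a share alpha of each claim and of
    the loaded premium; annual aggregate claims are compound Poisson
    with exponential severities."""
    rng = random.Random(seed)
    premium = alpha * (1.0 + loading) * lam * mean_claim
    ruins = 0
    for _ in range(n_sims):
        surplus = u0
        for _ in range(n_years):
            claims = sum(rng.expovariate(1.0 / mean_claim)
                         for _ in range(poisson_draw(rng, lam)))
            surplus += premium - alpha * claims
            if surplus < 0.0:
                ruins += 1
                break
    return ruins / n_sims
```

With the initial capital held fixed, shrinking the retention alpha scales down the surplus fluctuations, so the simulated ruin probability falls, matching the monotonicity stated in the summary.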
This document discusses several nonparametric methods for estimating copula densities from data, which are useful for modeling multivariate dependence. It first provides background on copulas and density estimation. It then describes several techniques for handling the boundary issues that arise when estimating densities supported on the unit square, including the mirror image method, transformed kernels, beta kernels, and averaging histograms. Examples are given comparing the performance of these different approaches. The goal is to provide flexible, data-driven estimates of copula densities without imposing a parametric copula model.
The document discusses claims reserving and quantification of uncertainty in incurred but not reported (IBNR) claims. It introduces some key concepts, including:
1) Claims reserving involves predicting future claim payments; the amounts are uncertain because of reporting and settlement delays, which makes reserving a prediction problem under uncertainty.
2) The Chain Ladder technique is commonly used to estimate future claim payments based on historical payment patterns represented in triangles.
3) Key metrics included in triangles are incremental and cumulative claim payments by accident and development year, which can be used to calculate Chain Ladder factors and estimates.
This document provides an overview of various classification techniques in data science, including linear discriminant analysis, logistic regression, probit regression, k-nearest neighbors, classification trees (CART), random forests, and techniques for double classification like uplift modeling. It discusses consistency of models and the risk of overfitting when the training sample size is small. Key classification algorithms like logistic regression and CART are explained in detail over multiple pages.
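As a minimal illustration of one of the classifiers listed above, logistic regression on a single feature can be fit by plain gradient descent on the negative log-likelihood. A toy sketch, not the document's implementation (which would typically use `glm` in R):

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Simple logistic regression p(y=1|x) = 1/(1+exp(-(b0 + b1*x))),
    fit by batch gradient descent on the negative log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient wrt intercept
            g1 += (p - y) * x    # gradient wrt slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1
```

On data where high x goes with class 1, the fitted slope is positive, so the model ranks observations the way a classification tree split on x would.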
The document discusses using the programming language R for actuarial science applications. It presents R as a vector-based language suitable for working with life tables and performing actuarial calculations. Examples are given of how to model life contingencies like life expectancies, annuities, and insurance values using vectors and matrices in R. The document also discusses using R to fit prospective mortality models like the Lee-Carter model to data matrices.
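The life-contingency quantities mentioned above reduce to cumulative products and sums over a mortality vector, which is exactly why a vector language like R fits the job. The same logic in a Python sketch (the tiny qx vectors used in the test are toys, not a real life table):

```python
def curtate_life_expectancy(qx, age):
    """Curtate life expectancy e_x = sum over k >= 1 of the k-year
    survival probabilities, from one-year death probabilities qx
    (the final entry of qx is assumed to be 1)."""
    e, surv = 0.0, 1.0
    for q in qx[age:]:
        surv *= 1.0 - q
        e += surv
    return e

def annuity_due(qx, age, i=0.02):
    """Expected present value of a whole-life annuity-due of 1 per year:
    sum over k >= 0 of v^k times the k-year survival probability."""
    v, a, surv, disc = 1.0 / (1.0 + i), 0.0, 1.0, 1.0
    for q in [0.0] + list(qx[age:]):  # payment at the start of each year alive
        surv *= 1.0 - q
        a += disc * surv
        disc *= v
    return a
```

In R these would be one-liners on vectors (`cumprod(1 - qx)` and a discounted sum), which is the point the document makes about R's vector-based style.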
This document summarizes notes from an advanced econometrics graduate course. It discusses topics like model and variable selection, numerical optimization techniques like gradient descent, convex optimization problems, and the Karush-Kuhn-Tucker conditions for solving convex problems. It also covers reducing dimensionality with techniques like principal component analysis and partial least squares, penalizing complex models, and information criteria like AIC that balance model fit and complexity.
This document discusses the probabilistic foundations of econometrics and relationships to machine learning techniques. It describes how econometrics uses probability distributions and maximum likelihood estimation for linear regression models. It also discusses how machine learning uses loss functions and penalization methods like ridge regression to select models and avoid overfitting. Boosting techniques are mentioned as a way to sequentially learn from previous errors.
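To pin down the ridge penalization mentioned above in the simplest case: with a single centered predictor the penalized least-squares estimator has a closed form, shrunk toward zero relative to ordinary least squares. A sketch (this one-predictor special case is my simplification of the general matrix formula):

```python
def ridge_simple(xs, ys, lam):
    """Ridge estimate for a single centered predictor:
    beta = sum(x * y) / (sum(x^2) + lambda).
    lam = 0 recovers the ordinary least squares slope; larger lam
    shrinks the estimate toward zero."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)
```

The shrinkage trades a little bias for reduced variance, which is the bias-variance argument for penalization made in the document.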
The document discusses provisions for outstanding claims, incurred but not reported (IBNR) claims, and triangles in non-life insurance. It introduces the concepts of provisions for outstanding claims, development of claims over time, and triangles used to analyze claims development. Triangles show incurred claims and payments by accident year and development year. Methods like Chain Ladder can be used to estimate future claims payments based on historical development patterns in triangles.
This document discusses algorithms for solving the traveling salesman problem (TSP), including the Little algorithm. The Little algorithm uses a branch and bound approach to find the optimal solution to the TSP. It works by first computing a lower bound on the cost, then iteratively considering paths with the highest "regret" or deviation from the lower bound, branching on whether to include or exclude that path until an optimal solution is found. While exact, the Little algorithm runs into computational issues as the number of cities increases due to the large search tree that must be explored.
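For contrast with Little's branch-and-bound, here is the exhaustive baseline it improves on; the (n-1)! tours enumerated below are exactly the search tree that the lower bounds prune. An illustrative sketch with a made-up distance matrix (this is plain enumeration, not Little's algorithm itself):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by exhaustive enumeration of all tours starting at
    city 0. Correct for small n, but the number of tours grows as
    (n - 1)!, which is why branch-and-bound methods prune instead."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```

Little's algorithm reaches the same optimum while typically exploring only a fraction of these tours, at the price of the bookkeeping for bounds and regrets described above.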
The document describes the simplex method for solving linear programming problems. It begins with an example problem of maximizing beer production given constraints on barley and corn supplies. It introduces slack variables to transform inequalities into equalities. The coefficients are written in a tableau and an initial basic feasible solution is chosen. Gaussian elimination is performed to introduce new basic variables while removing others. The process is repeated, moving through the feasible space, until an optimal solution is found without any negative entries in the objective function row. Duality between minimizing costs and maximizing profits is also discussed.
This document summarizes Arthur Charpentier's presentation on data science and actuarial science using R. It discusses S3 and S4 classes in R for defining custom object classes. It also covers matrices, vectors, numbers and memory management in R. The presentation references advanced R techniques and packages for insurance and data mining applications.
This document outlines an introduction to advanced R concepts taught in a Data Science for Actuaries course in March-June 2015. It covers topics like classes, matrices, numbers, and memory management in R. References for further reading on R and data mining are also provided.
Arthur Charpentier's presentation covered perspectives on predictive modeling. He discussed prediction versus estimation, parametric versus nonparametric models, linear models and least squares, modeling categorical variables, and prediction using covariates. Key points included defining prediction as estimating the expected value, providing confidence intervals to quantify uncertainty, using maximum likelihood to estimate parameters, and modeling conditional distributions based on covariates.
This document provides an overview of a course on income distributions, inequality, and poverty indices. It lists several references on these topics, including works by Atkinson, Stiglitz, Fleurbaey, Maniquet, Kolm, and Sen. The document then presents data from various sources on the distribution of wealth and incomes in countries like the US, France, UK, Canada, and Germany. It shows that perceptions of wealth distribution differ from reality. It also discusses trends in top income shares from Piketty's work, rising poverty levels in France between 2009-2010, and graphs of income distribution in France.
The document discusses actuarial models for analyzing loss development triangles in non-life insurance. It introduces the chain ladder method, which assumes a multiplicative relationship between incremental payments in successive years. The chain ladder factors are estimated using weighted least squares. The method is used to predict total payments and calculate reserves for unpaid losses. A stochastic model is also described where payments follow a Markov process with transition factors and variances estimated from the triangle.
The document discusses emerging risks from an actuarial perspective. It outlines some key challenges actuaries face in modeling emerging risks, such as lacking sufficient historical data. Emerging risks may involve events that are highly correlated or have undefined probabilities that are difficult to insure. The document also discusses how actuaries use statistical models to quantify uncertainty and assess emerging risks, but notes models have limitations and probabilities may change over time as new data becomes available.
This document discusses building technically sound simulation models in Crystal Ball. It covers:
- Common applications of simulation modeling and Crystal Ball software.
- The ModelAssist reference tool for simulation best practices.
- Key technical considerations like properly modeling multiplications as sums, distinguishing variability from uncertainty, and accounting for dependencies between variables.
- A checklist of best practices such as engaging decision-makers, keeping models simple, and clearly communicating results.
Introduction to the actuarial profession.pdftraderoptimist
- An actuary is a business professional who deals with assessing and mitigating financial risks using mathematical and statistical analysis rather than prediction.
- Actuaries help design and price insurance products like health, life, auto, and pension plans by analyzing various risk factors through statistical modeling to balance covering costs and remaining profitable.
- Becoming an actuary requires passing a series of rigorous examinations administered by actuarial organizations to earn a designation as well as relevant work experience. Students must first enroll as student members to begin taking exams.
Page rank for anomaly detection - Big Things meetup in IsraelOfer Mendelevitch
This document provides a summary of using PageRank for anomaly detection in healthcare fraud. It describes building a graph of provider similarity based on medical procedure patterns, then running a personalized PageRank algorithm to identify providers with unusually high scores given their specialty. The approach is implemented at scale using Apache Pig on Hadoop to efficiently compute similarities and detect potential fraud cases for further review.
Après la mise en production d'un système de Data science, il y a le ré-entraînement, l'interprétabilité, les évolutions... mais aussi le monitoring ! Rarement prioritaire et souvent oublié, il est pourtant nécessaire pour s'assurer que notre système fonctionne comme prévu.
Vue pragmatique et rationnelle du monitoring des systèmes de data science, accompagné d'un retour d'expérience !
Speakers : Emmanuel-Lin Toulemonde et Mehdi Houacine, consultants data @ OCTO Technology
Given an outcome, we often exaggerate our ability to predict and therefore avoid the same fate. In cybersecurity, this misconception can lead to a false sense of corporate security, or worse, bury the true causes of incidents and lead to repeated data breaches or business-disrupting cyber incidents.
This document discusses classification and goodness of fit in machine learning. It introduces concepts like confusion matrices, ROC curves, and measures like sensitivity, specificity, and AUC. ROC curves are constructed by plotting the true positive rate vs. false positive rate for different classification thresholds. The AUC can measure classifier performance, with higher values indicating better classification. Chi-square tests and bootstrapping are also discussed for evaluating goodness of fit.
Introducing R package ESG at Rmetrics Paris 2014 conferenceThierry Moudiki
The document describes the ESG package in R, which provides tools for generating economic scenarios for use in insurance valuation and capital requirements calculations. It discusses the available risk factors in ESG, including nominal interest rates, equity returns, property returns, and corporate bond returns. It also outlines the package's object-oriented structure and provides examples of how to use ESG to simulate scenarios and calculate insurance liabilities and capital requirements.
I have developed a maintenance cost optimisation model and utilised simulation technique which basically imitates the operation of the real-world processes and system over time. Simulations require the use of models that represents the key characteristics or behaviours of the selected system or processes, whereas the simulation represents the evolution of the model over time. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimising, safety engineering, testing, training, education, and video games. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in econimics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.
Skyscraper Security Mgt.- Administration Mgt. Section 1 Part VRichard Garrity
This document provides guidance on proper documentation procedures for high-rise security management. It discusses the importance of accurately documenting daily activities, incidents, and accidents. Various report forms are recommended, such as daily reports, serious incident reports, and workplace accident reports. Guidelines are given for completing these reports thoroughly yet objectively. Managers are advised to review reports for quality and provide training to staff on documentation standards. Monthly compilation of incident reports for property managers is also recommended.
Inventory Model with Different Deterioration Rates under Exponential Demand, ...inventionjournals
An inventory model with different deterioration rates under exponential demand with inflation and permissible delay in payments is developed. Holding cost is taken as linear function of time. Shortages are allowed. Numerical examples are provided to illustrate the model and sensitivity analysis is also carried out for parameters.
IHP 450 Module Four Journal Rubric Prompt Your expe.docxShiraPrater50
IHP 450 Module Four Journal Rubric
Prompt: Your expertise in healthcare finance has attracted the attention of some of the newer employees in your organization. Wanting to become managers
someday, they have asked you to share some of what you know in the online journal that serves as the company’s knowledge-sharing portal. Create a journal
assignment in which you respond to these specific topics from your colleagues:
Describe the ways in which financial statements, financial calculations, and working capital are used in the organizational budget process
Identify the main objective of managing cash flows
Explain why financial leaders are concerned about having cash on hand
Identify the three factors that set the healthcare industry apart from most other industries with regard to accounts receivable
Explain the ways in which trends within Medicare, Medicaid, and third-party payers affect working capital and the cost elements associated with the
revenue cycle
Critical Elements Proficient (100%) Needs Improvement (55%) Not Evident (0%) Value
Uses of Financial
Statements,
Financial
Calculations, and
Working Capital
Assignment accurately describes
the primary ways in which
financial statements, financial
calculations, and working capital
are used in the organizational
budget process
Assignment is missing some
important ways in which financial
statements, financial calculations,
or working capital are used in the
organizational budget process, or
some descriptions are inaccurate
Assignment does not describe
the primary ways in which
financial statements, financial
calculations, and working capital
are used in the organizational
budget process
22
Objective of
Managing Cash
Flow
Assignment accurately identifies
the main objective of managing
cash flows
Assignment does not accurately
identify the main objective of
managing cash flows
12
Concern about
Cash on Hand
Assignment accurately explains
why financial leaders are
concerned about having cash on
hand
Explanation as to why financial
leaders are concerned about
having cash on hand is
incomplete or inaccurate in
important respects
Assignment does not explain why
financial leaders are concerned
about having cash on hand
22
Factors that Set
the Healthcare
Industry Apart
Assignment accurately identifies
all of the factors that set the
healthcare industry apart from
most other industries with regard
to accounts receivable
Identification of the factors that
set the healthcare industry apart
from most other industries with
regard to accounts receivable is
incomplete or inaccurate in
important respects
Assignment does not identify the
factors that set the healthcare
industry apart from most other
industries with regard to
accounts receivable
22
Effects of Trends
within Medicare,
Medicaid, and
Third-Party
Payers
Assignment accurately explains
the ways in which trends ...
Juan Carlos Cuestas. The Great (De)leveraging in the GIIPS countries. Foreign...Eesti Pank
This document summarizes a presentation given by Juan Carlos Cuestas and Karsten Staehr on analyzing the relationship between capital inflows and private credit expansion in Greece, Ireland, Italy, Portugal, and Spain from 1998 to 2013. It begins with introducing the presenters and their background. It then reviews stylized facts about the pre-crisis boom in these countries fueled by low interest rates, a global savings glut, and willingness of international investors to take on more risk. The presentation analyzes quarterly data on private credit as a percentage of GDP and net foreign liabilities as a percentage of GDP using cointegration tests and vector autoregression/vector error correction modeling to examine the long-run relationship and causal dynamics between
Monte Carlo Simulations (UC Berkeley School of Information; July 11, 2019)Ivan Corneillet
My guest lecture on Monte Carlo simulations [or "how to be approximately right, now vs. precisely wrong, later (or never…)"] for the Managing Cyber Risk course of UC Berkeley School of Information's Cybersecurity Master.
Review Types - Testing Traveler, The Post About Review TypesCharlie Congdon
The document provides instructions for creating an account and submitting a paper writing request on the HelpWriting.net site in 5 steps:
1. Create an account by providing a password and email.
2. Complete a 10-minute order form with instructions, sources, deadline, and attach a sample if wanting the writer to imitate your style.
3. Review bids from writers and choose one based on qualifications, history, and feedback, then pay a deposit to start the assignment.
4. Review the completed paper and authorize final payment if pleased, or request free revisions.
5. Choose HelpWriting.net confidently knowing your needs will be met, and plagiarized work will be ref
The Way We Work Has Changed - Now Looking Toward 2030Manage Damage
Jillian Hamilton presents on February 25th & 26th 2020 as an Expert to the IEC MSB White Paper on the Future of Safety in Geneva, Switzerland.
Millions of devices that contain electronics, and use or produce electricity, rely on IEC International Standards and Conformity Assessment Systems to perform, fit and work safely together.
Founded in 1906, the IEC (International Electrotechnical Commission) is the world’s leading organization for the preparation and publication of International Standards for all electrical, electronic and related technologies. These are known collectively as “electrotechnology”.
IEC provides a platform to companies, industries and governments for meeting, discussing and developing the International Standards they require.
All IEC International Standards are fully consensus-based and represent the needs of key stakeholders of every nation participating in IEC work. Every member country, no matter how large or small, has one vote and a say in what goes into an IEC International Standard.
The International Electrotechnical Commission (IEC) is the world’s leading organization that prepares and publishes International Standards for all electrical, electronic and related technologies.
Close to 20 000 experts from industry, commerce, government, test and research labs, academia and consumer groups participate in IEC Standardization work.
The IEC is one of three global sister organizations (IEC, ISO, ITU) that develop International Standards for the world.
When appropriate, IEC cooperates with ISO (International Organization for Standardization) or ITU (International Telecommunication Union) to ensure that International Standards fit together seamlessly and complement each other. Joint committees ensure that International Standards combine all relevant knowledge of experts working in related areas.
The document provides an overview of reliability engineering concepts including definitions of reliability, mean time to failure, hazard rate, and the Weibull distribution model. It discusses the importance of reliability for products and businesses. Examples are provided on how to calculate reliability metrics like reliability and failure rate from failure data using the exponential and Weibull distribution models. The versatility of the Weibull model in modeling early failure, constant, and wear-out failure regions is also highlighted.
The aircraft manufacturing industry grew rapidly during World War II to become the largest industry in America by 1944. However, it declined in the following years due to issues like metal fatigue from flaws in materials and engine performance, as well as outdated production methods. Manufacturers used aluminum and steel but were limited by the technologies of the time.
This document discusses how family history can impact life insurance premiums. It reviews existing literature on relationships between family members' lifespans, such as husbands and wives or parents and children. Genealogical data is used to analyze dependencies between generations, like grandchildren and grandparents. Quantities important for life insurance are calculated based on family information, showing how premiums may differ depending on how many family members are still alive. The goal is to better understand how family history can influence longevity and mortality risk factors used in life insurance underwriting.
Family History and Life Insurance (UConn actuarial seminar)Arthur Charpentier
This document discusses how family history can impact life insurance premiums. It reviews existing literature on relationships between family members' lifespans. The document analyzes a genealogical dataset to study dependencies between husbands and wives, parents and children, and grandparents and grandchildren. It finds modest but robust correlations between related individuals' lifespans. This dependency is quantified for various life insurance metrics like annuities and whole life insurance, showing family history can impact premiums.
Talk at the modcov19 CNRS workshop, en France, to present our article COVID-19 pandemic control: balancing detection policy and lockdown intervention under ICU sustainability
The document discusses research on the relationship between family history and life insurance. It summarizes existing literature showing modest but robust connections between the lifespans of family members like spouses, parents and children, and grandparents and grandchildren. The document then presents analyses using a genealogical dataset, finding correlations between related individuals' lifespans. It explores how these family dependencies could impact life insurance premiums and quantities like annuities, widow's pensions, and life expectancies.
This document discusses the use of machine learning techniques in actuarial science and insurance. It begins with an overview of predictive modeling applications in insurance such as fraud detection, premium computation, and claims reserving. It then covers traditional econometric techniques like Poisson and gamma regression models and how machine learning is emerging as an alternative. The document emphasizes evaluating model goodness of fit and uncertainty, and addresses issues like price discrimination and fairness.
This document summarizes a paper on reinforcement learning in economics and finance. It introduces reinforcement learning concepts like agents, environments, actions, rewards, and states. It then discusses applications of reinforcement learning frameworks in economic problems like inventory management, consumption and income dynamics, and experiments. Finally, it notes connections between reinforcement learning and other fields like operations research, stochastic games, and finance.
This document models the COVID-19 pandemic using a compartmental SIDUHR+/- model that divides the population into susceptible (S), infected asymptomatic (I-), infected symptomatic (I+), recovered asymptomatic (R-), recovered symptomatic (R+), hospitalized (H), ICU (U), and dead (D) categories. Optimal lockdown policies are determined by minimizing costs related to deaths, economic impact, testing needs, and immunity while ensuring ICU sustainability. Increasing ICU capacity allows less stringent lockdown policies while achieving similar outcomes. Faster detection of asymptomatic cases through increased testing also enables more flexible lockdown policies.
The document summarizes research on using genealogical data to model dependencies in life spans between family members and quantify the impact on insurance premiums. It presents analysis of husband-wife, parent-child, and grandparent-grandchild relationships, showing dependencies exist. Mortality rates, life expectancies, and insurance quantities like annuities are estimated conditionally based on family history information.
The document discusses natural language processing techniques including word embeddings, text classification using naive Bayes classifiers, and probabilistic language models. It provides examples of part-of-speech tagging and analyzing sentiment. Key concepts covered include the bag-of-words assumption, n-gram models, and maximum likelihood estimation. Various papers on related topics are cited throughout.
This document discusses network representation and analysis. It defines networks as consisting of nodes (vertices) and edges, and describes different ways to represent networks mathematically using adjacency matrices, incidence matrices, and Laplacian matrices. It also discusses visualizing networks using multidimensional scaling and plotting them in R. Special types of networks like complete graphs and random graphs are briefly introduced.
The document discusses various techniques for classifying pictures using neural networks, including convolutional neural networks. It describes how convolutional neural networks can be used to classify images by breaking them into overlapping tiles, applying small neural networks to each tile, and pooling the results. The document also discusses using recurrent neural networks to classify videos by treating them as higher-dimensional tensors.
The document discusses using unusual data sources in insurance. It provides examples of using pictures, text, social media data, telematics, and satellite imagery in insurance. It also discusses challenges in analyzing complex and high-dimensional data from these sources and introduces machine learning tools like PCA, generalized linear models, and evaluating models using loss, risk, and cross-validation.
1) Support vector machines (SVMs) aim to find the optimal separating hyperplane that maximizes the margin between two classes of data points.
2) SVMs can be extended to non-linearly separable data using kernels to project the data into a higher dimensional feature space. Common kernels include polynomial and Gaussian radial basis function kernels.
3) The dual formulation of SVMs involves solving a quadratic programming problem to determine the support vectors, which lie closest to the separating hyperplane. These support vectors are then used to define the optimal hyperplane.
Falcon stands out as a top-tier P2P Invoice Discounting platform in India, bridging esteemed blue-chip companies and eager investors. Our goal is to transform the investment landscape in India by establishing a comprehensive destination for borrowers and investors with diverse profiles and needs, all while minimizing risk. What sets Falcon apart is the elimination of intermediaries such as commercial banks and depository institutions, allowing investors to enjoy higher yields.
"Does Foreign Direct Investment Negatively Affect Preservation of Culture in the Global South? Case Studies in Thailand and Cambodia."
Do elements of globalization, such as Foreign Direct Investment (FDI), negatively affect the ability of countries in the Global South to preserve their culture? This research aims to answer this question by employing a cross-sectional comparative case study analysis utilizing methods of difference. Thailand and Cambodia are compared as they are in the same region and have a similar culture. The metric of difference between Thailand and Cambodia is their ability to preserve their culture. This ability is operationalized by their respective attitudes towards FDI; Thailand imposes stringent regulations and limitations on FDI while Cambodia does not hesitate to accept most FDI and imposes fewer limitations. The evidence from this study suggests that FDI from globally influential countries with high gross domestic products (GDPs) (e.g. China, U.S.) challenges the ability of countries with lower GDPs (e.g. Cambodia) to protect their culture. Furthermore, the ability, or lack thereof, of the receiving countries to protect their culture is amplified by the existence and implementation of restrictive FDI policies imposed by their governments.
My study abroad in Bali, Indonesia, inspired this research topic as I noticed how globalization is changing the culture of its people. I learned their language and way of life which helped me understand the beauty and importance of cultural preservation. I believe we could all benefit from learning new perspectives as they could help us ideate solutions to contemporary issues and empathize with others.
TEST BANK Principles of cost accounting 17th edition edward j vanderbeck mari...Donc Test
TEST BANK Principles of cost accounting 17th edition edward j vanderbeck maria r mitchell.docx
TEST BANK Principles of cost accounting 17th edition edward j vanderbeck maria r mitchell.docx
TEST BANK Principles of cost accounting 17th edition edward j vanderbeck maria r mitchell.docx
STREETONOMICS: Exploring the Uncharted Territories of Informal Markets throug...sameer shah
Delve into the world of STREETONOMICS, where a team of 7 enthusiasts embarks on a journey to understand unorganized markets. By engaging with a coffee street vendor and crafting questionnaires, this project uncovers valuable insights into consumer behavior and market dynamics in informal settings."
New Visa Rules for Tourists and Students in Thailand | Amit Kakkar Easy VisaAmit Kakkar
Discover essential details about Thailand's recent visa policy changes, tailored for tourists and students. Amit Kakkar Easy Visa provides a comprehensive overview of new requirements, application processes, and tips to ensure a smooth transition for all travelers.
Economic Risk Factor Update: June 2024 [SlideShare]Commonwealth
May’s reports showed signs of continued economic growth, said Sam Millette, director, fixed income, in his latest Economic Risk Factor Update.
For more market updates, subscribe to The Independent Market Observer at https://blog.commonwealth.com/independent-market-observer.
OJP data from firms like Vicinity Jobs have emerged as a complement to traditional sources of labour demand data, such as the Job Vacancy and Wages Survey (JVWS). Ibrahim Abuallail, PhD Candidate, University of Ottawa, presented research relating to bias in OJPs and a proposed approach to effectively adjust OJP data to complement existing official data (such as from the JVWS) and improve the measurement of labour demand.
Discover the Future of Dogecoin with Our Comprehensive Guidance36 Crypto
Learn in-depth about Dogecoin's trajectory and stay informed with 36crypto's essential and up-to-date information about the crypto space.
Our presentation delves into Dogecoin's potential future, exploring whether it's destined to skyrocket to the moon or face a downward spiral. In addition, it highlights invaluable insights. Don't miss out on this opportunity to enhance your crypto understanding!
https://36crypto.com/the-future-of-dogecoin-how-high-can-this-cryptocurrency-reach/
Every business, big or small, deals with outgoing payments. Whether it’s to suppliers for inventory, to employees for salaries, or to vendors for services rendered, keeping track of these expenses is crucial. This is where payment vouchers come in – the unsung heroes of the accounting world.
Tdasx: In-Depth Analysis of Cryptocurrency Giveaway Scams and Security Strate...
Slides ensae-2016-10
1. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 10
Actuariat de l’Assurance Non-Vie # 10
A. Charpentier (Université de Rennes 1)
ENSAE ParisTech, October / December 2016.
http://freakonometrics.hypotheses.org
@freakonometrics 1
2. Generalities on provisions for outstanding claims (Provisions pour Sinistres à Payer)
References: de Jong & Heller (2008), sections 1.5 and 8.1; Wüthrich & Merz (2015), chapters 1 to 3; and Pigeon (2015).
“Technical provisions are the provisions intended to allow the full settlement of the commitments made towards policyholders and contract beneficiaries. They are inherent to the very technique of insurance, and imposed by regulation.”
“It is hoped that more casualty actuaries will involve themselves in this important area. IBNR reserves deserve more than just a clerical or cursory treatment and we believe, as did Mr. Tarbell, that ‘the problem of incurred but not reported claim reserves is essentially actuarial or statistical’. Perhaps in today’s environment the quotation would be even more relevant if it stated that the problem ‘...is more actuarial than statistical’.” Bornhuetter & Ferguson (1972)
3. The life of a claim
• date t1: occurrence of the claim
• period [t1, t2]: IBNR, Incurred But Not Reported
• date t2: the claim is reported to the insurer
• period [t2, t3]: IBNP, Incurred But Not Paid
• period [t2, t6]: IBNS, Incurred But Not Settled
• dates t3, t4, t5: payments
• date t6: closure of the claim
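The timeline above can be sketched as a small status function: given the key dates of a claim's life, return its state at a date t. A minimal illustration only; the function name and the simplified labels (treating all of [t2, t6) as "IBNS", which includes the IBNP phase) are mine, not from the slides.

```python
# Claim status at date t, following the timeline above:
# t1 = occurrence, t2 = reporting, t6 = closure.
def claim_status(t, t1, t2, t6):
    """Return the (simplified) status of a claim at date t."""
    if t < t1:
        return "not occurred"
    if t < t2:
        return "IBNR"     # incurred but not reported
    if t < t6:
        return "IBNS"     # incurred but not settled (includes the IBNP phase)
    return "settled"

# A claim occurring at t1 = 2.0, reported at t2 = 2.5, closed at t6 = 6.0:
print(claim_status(2.2, 2.0, 2.5, 6.0))  # IBNR
print(claim_status(3.0, 2.0, 2.5, 6.0))  # IBNS
```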
4. The life of a claim
At occurrence, an estimated amount is set by the claims handlers (the case estimate, "charge dossier/dossier"). Two operations are then possible:
• make a payment
• revise the claim amount
5. The life of a claim: the PSAP problem
The provision for outstanding claims (Provision pour Sinistres à Payer, PSAP) is the difference between the claim amount and the payments already made.
6. Payment triangles: from micro to macro
Micro analysis. Claims i = 1, ..., n, with
• occurrence at date $T_{i,0}$
• reporting at date $T_{i,0} + Q_i$
• payments at dates $T_{i,j} = T_{i,0} + Q_i + \sum_{k=1}^{j} Z_{i,k}$, with amounts $(T_{i,j}, Y_{i,j})$
Amount paid on claim i up to date t: $C_i(t) = \sum_{j : T_{i,j} \leq t} Y_{i,j}$
(Ideal) reserve at date t: $R_i(t) = C_i(\infty) - C_i(t)$
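The micro quantities above can be computed directly. A sketch, assuming a single claim given as a list of (payment date, amount) pairs; the function and variable names are mine.

```python
# Cumulative paid amount C_i(t) and ideal reserve R_i(t) for one claim,
# given its payment dates T_{i,j} and amounts Y_{i,j}.
def paid_to_date(payments, t):
    """C_i(t): sum of the amounts Y_{i,j} paid at dates T_{i,j} <= t."""
    return sum(y for (T, y) in payments if T <= t)

def ideal_reserve(payments, t):
    """R_i(t) = C_i(inf) - C_i(t), assuming `payments` lists ALL payments."""
    return paid_to_date(payments, float("inf")) - paid_to_date(payments, t)

# Toy claim: payments of 100, 50, 30 at dates 1.2, 2.7, 4.1.
claim = [(1.2, 100.0), (2.7, 50.0), (4.1, 30.0)]
print(paid_to_date(claim, 3.0))   # 150.0
print(ideal_reserve(claim, 3.0))  # 30.0
```

The "ideal" reserve is only computable in hindsight, once all payments are known; in practice $C_i(\infty)$ is what the reserving methods of the later slides estimate.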
7. Payment triangles: from micro to macro
Macro analysis. Claims are aggregated by occurrence year:
$S_i = \{k : T_{k,0} \in [i, i+1)\}$
and we look at the payments made after j years:
$Y_{i,j} = \sum_{k \in S_i} \sum_{\ell : T_{k,\ell} \in [j, j+1)} Z_{k,\ell}$
Here $Y_{n-1,0}$ (highlighted cell in the triangle figure).
8.–9. Payment triangles: from micro to macro
(The same macro construction, with the triangle filled in step by step in the figures: first $Y_{n-1,1}$ and $Y_{n,0}$, then $Y_{n-1,2}$, $Y_{n,1}$ and $Y_{n+1,0}$.)
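The macro aggregation can be sketched as building the incremental triangle from individual payments. An illustrative simplification of the indexing above: each payment is a (occurrence date, payment date, amount) triple, the occurrence year is i, and the development year j is the calendar year of payment minus i; the names are mine.

```python
from collections import defaultdict
from math import floor

# Build the incremental payment triangle Y_{i,j}: claims are grouped by
# occurrence year i (T_{k,0} in [i, i+1)), payments by development year j.
def build_triangle(payments):
    """payments: list of (occurrence_date, payment_date, amount) triples."""
    Y = defaultdict(float)
    for t0, t, z in payments:
        i = floor(t0)        # occurrence year
        j = floor(t) - i     # development year of this payment
        Y[(i, j)] += z
    return dict(Y)

data = [
    (2014.3, 2014.8, 100.0),  # paid within the occurrence year (j = 0)
    (2014.3, 2015.2,  40.0),  # paid one year later (j = 1)
    (2015.6, 2015.9,  80.0),
]
print(build_triangle(data))
# {(2014, 0): 100.0, (2014, 1): 40.0, (2015, 0): 80.0}
```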
10. Payment triangles: from micro to macro
• technical provisions can represent 75% of the balance sheet,
• the coverage ratio (provisions / premium income) can exceed 2,
• some lines of business have long development patterns; cumulative amounts paid by year:

                           n      n+1    n+2    n+3    n+4
  household               55%     90%    94%    95%    96%
  motor                   55%     80%    85%    88%    90%
    incl. bodily injury   15%     40%    50%    65%    70%
  general liability       10%     25%    35%    40%    45%
11. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 10
Payment triangles: more on the micro-level data
Let F^n_t denote the information available at date t,
F^n_t = σ{(Ti,0, Ti,j, Xi,j), i = 1, · · · , n, Ti,j ≤ t},
and let C(∞) = Σ_{i=1}^{n} Ci(∞).
Note that Mt = E[C(∞)|Ft] is a martingale, i.e.
E[Mt+h|Ft] = Mt,
while Vt = Var[C(∞)|Ft] is a supermartingale, i.e.
E[Vt+h|Ft] ≤ Vt.
Payment triangles: the usual methods
Source: http://www.actuaries.org.uk
[Survey charts on reserving methods used in practice, not reproduced]
Examples of payment patterns (figures, not reproduced): household multi-risk;
commercial fire; motor (total); motor, property damage; motor, bodily injury;
commercial general liability; medical malpractice liability; construction insurance.
The triangles: incremental payments
Denoted Yi,j, for occurrence year i and development year j,
0 1 2 3 4 5
0 3209 1163 39 17 7 21
1 3367 1292 37 24 10
2 3871 1474 53 22
3 4239 1678 103
4 4929 1865
5 5217
The triangles: cumulated payments
Denoted Ci,j = Yi,0 + Yi,1 + · · · + Yi,j, for occurrence year i and development year j,
0 1 2 3 4 5
0 3209 4372 4411 4428 4435 4456
1 3367 4659 4696 4720 4730
2 3871 5345 5398 5420
3 4239 5917 6020
4 4929 6794
5 5217
The triangles: claim counts
Denoted Ni,j, the (estimated) number of claims occurred in year i and reported within j years,
0 1 2 3 4 5
0 1043.4 1045.5 1047.5 1047.7 1047.7 1047.7
1 1043.0 1027.1 1028.7 1028.9 1028.7
2 965.1 967.9 967.8 970.1
3 977.0 984.7 986.8
4 1099.0 1118.5
5 1076.3
Earned premium
Denoted πi, the premium earned for year i,
year i   0      1      2      3      4      5
πi       4591   4672   4863   5175   5673   6431
Triangles
> rm(list=ls())
> source("http://freakonometrics.free.fr/bases.R")
> ls()
[1] "INCURRED" "NUMBER"   "PAID"     "PREMIUM"
> PAID
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 4372 4411 4428 4435 4456
[2,] 3367 4659 4696 4720 4730   NA
[3,] 3871 5345 5398 5420   NA   NA
[4,] 4239 5917 6020   NA   NA   NA
[5,] 4929 6794   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA
Triangles ?
Actually, there might be two different cases in practice, the first one being when
initial data are missing
0 1 2 3 4 5
0 ◦ ◦ ◦ • • •
1 ◦ ◦ • • •
2 ◦ • • •
3 • • •
4 • •
5 •
In that case it is mainly an indexing issue in the computations.
Triangles ?
The second case in practice is when the final data are missing, i.e. some tail
factor should be included
0 1 2 3 4 5
0 • • • • ◦ ◦
1 • • • ◦ ◦ ◦
2 • • ◦ ◦ ◦ ◦
3 • ◦ ◦ ◦ ◦ ◦
In that case it is necessary to extrapolate (with past information) the final loss
(tail factor).
The Chain Ladder estimate
We assume here that
Ci,j+1 = λj · Ci,j for all i, j = 0, 1, · · · , n.
A natural estimator for λj based on past history is
λ̂j = Σ_{i=0}^{n−j−1} Ci,j+1 / Σ_{i=0}^{n−j−1} Ci,j, for all j = 0, 1, · · · , n − 1.
Hence, it becomes possible to estimate future payments using
Ĉi,j = λ̂n−i · λ̂n−i+1 · · · λ̂j−1 · Ci,n−i.
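This recursion is easy to check numerically. A minimal Python sketch (the slides themselves use R), with the cumulative triangle PAID hard-coded from the table above:

```python
# Cumulative payments C[i][j] from the PAID triangle above (None = unobserved).
PAID = [
    [3209, 4372, 4411, 4428, 4435, 4456],
    [3367, 4659, 4696, 4720, 4730, None],
    [3871, 5345, 5398, 5420, None, None],
    [4239, 5917, 6020, None, None, None],
    [4929, 6794, None, None, None, None],
    [5217, None, None, None, None, None],
]
n = len(PAID)

def chain_ladder_factors(C):
    """Ratio of column sums, over the rows observed in both columns."""
    lams = []
    for j in range(n - 1):
        rows = [i for i in range(n) if C[i][j + 1] is not None]
        lams.append(sum(C[i][j + 1] for i in rows) / sum(C[i][j] for i in rows))
    return lams

def complete(C, lams):
    """Fill the lower-right triangle with C[i][j+1] = lam_j * C[i][j]."""
    full = [row[:] for row in C]
    for i in range(n):
        for j in range(n - 1):
            if full[i][j + 1] is None:
                full[i][j + 1] = lams[j] * full[i][j]
    return full

lams = chain_ladder_factors(PAID)
full = complete(PAID, lams)
# Reserve per occurrence year: ultimate minus the latest observed diagonal.
reserves = [full[i][n - 1] - PAID[i][n - 1 - i] for i in range(n)]
print([round(l, 5) for l in lams])  # [1.38093, 1.01143, 1.00434, 1.00186, 1.00474]
print(round(sum(reserves), 3))      # total reserve, approximately 2427
```

With these factors, the completed rows reproduce the projected triangle shown later in the deck (ultimates 4752.4, 5455.8, 6086.1, 6947.1, 7366.7).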
Mack.S.E      79.30
CV(IBNR):      0.03
Three ways to look at triangles
There are basically three kinds of approaches to model development:
• developments as percentages of the total incurred, i.e. consider
ϕ = (ϕ0, ϕ1, · · · , ϕn), with ϕ0 + ϕ1 + · · · + ϕn = 1, such that
E(Yi,j) = ϕj E(Ci,n), where j = 0, 1, · · · , n.
• developments as rates of the total incurred, i.e. consider γ = (γ0, γ1, · · · , γn),
such that
E(Ci,j) = γj E(Ci,n), where j = 0, 1, · · · , n.
• developments as factors of the previous estimation, i.e. consider
λ = (λ0, λ1, · · · , λn), such that
E(Ci,j+1) = λj E(Ci,j), where j = 0, 1, · · · , n.
From a mathematical point of view, it is strictly equivalent to study any one of
those. Hence,
γj = ϕ0 + ϕ1 + · · · + ϕj = (1/λj) · (1/λj+1) · · · (1/λn−1),
λj = γj+1 / γj = (ϕ0 + ϕ1 + · · · + ϕj + ϕj+1) / (ϕ0 + ϕ1 + · · · + ϕj),
and
ϕj = γ0 = (1/λ0) · (1/λ1) · · · (1/λn−1), if j = 0,
ϕj = γj − γj−1 = (1/λj) · · · (1/λn−1) − (1/λj−1) · · · (1/λn−1), if j ≥ 1.
On the previous triangle,
       0          1          2          3          4          n
λj     1.38093    1.01143    1.00434    1.00186    1.00474    1.0000
γj     70.819%    97.796%    98.914%    99.344%    99.529%    100.000%
ϕj     70.819%    26.977%    1.118%     0.430%     0.185%     0.000%
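These conversions are easy to check numerically on the table above. A small Python sketch (the λj values are recomputed from the triangle, with λn = 1 appended so that γn = 1):

```python
# Chain-ladder factors lam_0 .. lam_4 (recomputed from the triangle), plus lam_n = 1.
lam = [1.3809331, 1.0114325, 1.0043433, 1.0018583, 1.0047351, 1.0]
n = len(lam)

# gamma_j = (1/lam_j)(1/lam_{j+1}) ... (1/lam_{n-1}), with gamma_{n-1} = 1/lam_{n-1}.
gamma = [1.0] * n
for j in range(n - 2, -1, -1):
    gamma[j] = gamma[j + 1] / lam[j]

# phi_0 = gamma_0 and phi_j = gamma_j - gamma_{j-1} for j >= 1.
phi = [gamma[0]] + [gamma[j] - gamma[j - 1] for j in range(1, n)]

print([round(100 * g, 3) for g in gamma])  # [70.819, 97.796, 98.914, 99.344, 99.529, 100.0]
print(round(100 * phi[1], 3))              # 26.977
```

By construction the ϕj's telescope to γn = 1, and λj = γj+1/γj, matching the table.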
d-triangles
It is possible to define the d-triangle of the empirical individual factors, λi,j = Ci,j+1/Ci,j:
0 1 2 3 4 5
0 1.362 1.009 1.004 1.002 1.005
1 1.384 1.008 1.005 1.002
2 1.381 1.010 1.001
3 1.396 1.017
4 1.378
5
The Chain-Ladder estimate
The Chain-Ladder estimate is probably the most popular technique to estimate
claim reserves. Let Ft denote the information available at time t, or more formally
the filtration generated by {Ci,j, i + j ≤ t}, or equivalently by {Xi,j, i + j ≤ t}.
Assume that payments are independent across occurrence years, i.e.
Ci1,· and Ci2,· are independent for any i1 ≠ i2. [H1]
Further, assume that (Ci,j)j≥0 is Markov, and more precisely, that there exist λj’s
and σ²j’s such that
E(Ci,j+1|Fi+j) = E(Ci,j+1|Ci,j) = λj · Ci,j [H2]
Var(Ci,j+1|Fi+j) = Var(Ci,j+1|Ci,j) = σ²j · Ci,j [H3]
Under those assumptions (see Mack (1993)), one gets
E(Ci,j+k|Fi+j) = E(Ci,j+k|Ci,j) = λj · λj+1 · · · λj+k−1 · Ci,j
Testing assumptions
Assumption H2 can be interpreted as a linear regression model, i.e.
Yi = β0 + Xi · β1 + εi, i = 1, · · · , n, where ε is some error term with
E(ε) = 0, and where β0 = 0, Yi = Ci,j+1 for some j, Xi = Ci,j, and β1 = λj.
Weighted least squares can be considered, i.e.
min Σ_{i=1}^{n−j} ωi (Yi − β0 − β1Xi)²,
where the ωi’s are proportional to Var(Yi)−1. This leads to
min Σ_{i=1}^{n−j} (1/Ci,j) (Ci,j+1 − λjCi,j)².
As in any linear regression model, it is possible to test assumptions H2 and H3;
given j, the following graphs can be considered:
• plot the Ci,j+1’s versus the Ci,j’s. Points should lie on the straight line with slope λ̂j.
• plot the (standardized) residuals ε̂i,j = (Ci,j+1 − λ̂jCi,j) / √Ci,j versus the Ci,j’s.
Testing assumptions
H1 is the accident-year independence assumption. More precisely, we assume there
is no calendar effect.
Define the diagonal Bk = {Ck,0, Ck−1,1, Ck−2,2, · · · , C2,k−2, C1,k−1, C0,k}. If there
is a calendar effect, it should affect the adjacent sets of individual factors,
Ak = { Ck,1/Ck,0 , Ck−1,2/Ck−1,1 , Ck−2,3/Ck−2,2 , · · · , C1,k/C1,k−1 , C0,k+1/C0,k } = ”δk+1/δk”,
and
Ak−1 = { Ck−1,1/Ck−1,0 , Ck−2,2/Ck−2,1 , Ck−3,3/Ck−3,2 , · · · , C1,k−1/C1,k−2 , C0,k/C0,k−1 } = ”δk/δk−1”.
For each k, let N+k denote the number of factors exceeding the median, and N−k
the number of factors below the median. If the years are independent, N+k and
N−k should be “close”, i.e. Nk = min(N+k, N−k) should be “close” to
(N+k + N−k)/2.
Testing assumptions
Since N−k and N+k are two binomial variables B(p = 1/2, n = N−k + N+k), then
E(Nk) = nk/2 − C(nk−1, mk) · nk / 2^nk,
where nk = N+k + N−k, mk = ⌊(nk − 1)/2⌋, and C(nk−1, mk) denotes the binomial
coefficient, and
Var(Nk) = nk(nk − 1)/4 − C(nk−1, mk) · nk(nk − 1) / 2^nk + E(Nk) − E(Nk)².
Under some normality assumption on Nk, a 95% confidence interval can be
derived, i.e. E(Nk) ± 1.96 √Var(Nk).
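The two moments above are easy to compute exactly. A small Python sketch, using the binomial-coefficient reading of the formulas, cross-checked against a brute-force enumeration of N+ ~ B(nk, 1/2):

```python
from math import comb

def moments_Nk(nk):
    """E(Nk) and Var(Nk) for Nk = min(N+, N-), N+ ~ B(nk, 1/2), as above."""
    mk = (nk - 1) // 2
    e = nk / 2 - comb(nk - 1, mk) * nk / 2 ** nk
    v = (nk * (nk - 1) / 4
         - comb(nk - 1, mk) * nk * (nk - 1) / 2 ** nk
         + e - e ** 2)
    return e, v

def moments_bruteforce(nk):
    """Same moments, by direct enumeration of the binomial distribution."""
    probs = [comb(nk, k) / 2 ** nk for k in range(nk + 1)]
    e = sum(p * min(k, nk - k) for k, p in zip(range(nk + 1), probs))
    e2 = sum(p * min(k, nk - k) ** 2 for k, p in zip(range(nk + 1), probs))
    return e, e2 - e ** 2

print(moments_Nk(5))  # (1.5625, 0.37109375)
```

For a diagonal with nk = 5 factors, an Nk far below E(Nk) − 1.96√Var(Nk) would flag a calendar effect.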
From Chain-Ladder to Grossing-Up
The idea of the Chain-Ladder technique was to estimate the λj’s, so that we can
derive estimates for Ci,n, since
Ĉi,n = Ci,n−i · Π_{k=n−i}^{n−1} λ̂k.
Based on the Chain-Ladder link ratios λ̂, it is possible to define grossing-up
coefficients
γ̂j = Π_{k=j}^{n−1} (1/λ̂k), with γ̂n = 1,
and thus the total loss incurred for accident year i is then
Ĉi,n = Ci,n−i · γ̂n/γ̂n−i.
Variant of the Chain-Ladder Method (1)
Historically, the natural idea was to consider a (standard) average of the
individual link ratios, and several techniques have been introduced to study the
individual link ratios λi,j.
A first idea is to consider a simple linear model, λi,j = aj i + bj. Using OLS
techniques, it is possible to estimate those coefficients. Future link ratios are then
projected using the fitted values, λ̂i,j = âj i + b̂j.
Variant of the Chain-Ladder Method (2)
A second idea is to assume that λ̂j is a weighted sum of the λi,j’s,
λ̂j = Σ_{i=0}^{n−j−1} ωi,j λi,j / Σ_{i=0}^{n−j−1} ωi,j.
If ωi,j = Ci,j we obtain the chain ladder estimate. An alternative is to take
ωi,j = i + j + 1 (in order to give more weight to recent years).
Variant of the Chain-Ladder Method (3)
Here, we assume that the cumulated run-off triangle has an exponential trend, i.e.
Ci,j = αj exp(i · βj).
In order to estimate the αj’s and βj’s, consider a linear model on log Ci,j,
log Ci,j = aj + βj · i + εi,j, where aj = log(αj).
Once the βj’s have been estimated, set γ̂j = exp(β̂j), and define
Γi,j = γ̂j^{n−i−j} · Ci,j.
The extended link ratio family of estimators
For convenience, link ratios are factors that give the relation between the
cumulative payments of one development year (say j) and the next one (j + 1).
They are simply the ratios yi/xi, where the xi’s are the cumulative payments of
year j (i.e. xi = Ci,j) and the yi’s are the cumulative payments of year j + 1 (i.e.
yi = Ci,j+1).
For example, the Chain Ladder estimate is obtained as
λ̂j = Σ_{i=0}^{n−j} yi / Σ_{k=0}^{n−j} xk = Σ_{i=0}^{n−j} [ xi / Σ_{k=0}^{n−j} xk ] · (yi/xi).
But several other link ratio techniques can be considered, e.g.
λ̂j = [1/(n − j + 1)] Σ_{i=0}^{n−j} yi/xi, i.e. the simple arithmetic mean,
λ̂j = [ Π_{i=0}^{n−j} yi/xi ]^{1/(n−j+1)}, i.e. the geometric mean,
λ̂j = Σ_{i=0}^{n−j} [ x²i / Σ_{k=0}^{n−j} x²k ] · (yi/xi), i.e. the weighted average “by volume squared”.
Hence, these techniques can be related to weighted least squares, i.e.
yi = βxi + εi, where εi ∼ N(0, σ² xi^δ), for some δ ≥ 0.
E.g. if δ = 2, we obtain the arithmetic mean; if δ = 1, we obtain the Chain
Ladder estimate; and if δ = 0, the weighted average “by volume squared”.
The interest of this regression approach is that standard errors for predictions
can be derived, under standard (and testable) assumptions. Hence
• the standardized residuals (σ̂ xi^{δ/2})−1 ε̂i are N(0, 1), i.e. QQ plot
• E(yi|xi) = βxi, i.e. graph of xi versus yi.
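The correspondence between δ and the averages above can be verified on the first development period of the triangle. A minimal Python sketch (x and y are hard-coded from the cumulative triangle):

```python
# First development period: x_i = C_{i,0}, y_i = C_{i,1}, for i = 0..4.
x = [3209.0, 3367.0, 3871.0, 4239.0, 4929.0]
y = [4372.0, 4659.0, 5345.0, 5917.0, 6794.0]

def wls_factor(x, y, delta):
    """WLS slope through the origin when Var(eps_i) is proportional to x_i^delta."""
    num = sum(xi * yi / xi ** delta for xi, yi in zip(x, y))
    den = sum(xi ** 2 / xi ** delta for xi in x)
    return num / den

chain_ladder = sum(y) / sum(x)
simple_mean = sum(yi / xi for xi, yi in zip(x, y)) / len(x)

print(round(wls_factor(x, y, 1), 5))  # 1.38093, the chain-ladder factor
assert abs(wls_factor(x, y, 1) - chain_ladder) < 1e-9  # delta = 1
assert abs(wls_factor(x, y, 2) - simple_mean) < 1e-9   # delta = 2
```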
Properties of the Chain-Ladder estimate
Further,
λ̂j = Σ_{i=0}^{n−j−1} Ci,j+1 / Σ_{i=0}^{n−j−1} Ci,j
is an unbiased estimator for λj, given Fj, and λ̂j and λ̂j+h are uncorrelated,
given Fj. Hence, an unbiased estimator for E(Ci,j|Fn) is
Ĉi,j = λ̂n−i · λ̂n−i+1 · · · λ̂j−2 · λ̂j−1 · Ci,n−i.
Recall that λ̂j is the estimator with minimal variance among all linear estimators
obtained from the λi,j = Ci,j+1/Ci,j’s. Finally, recall that
σ̂²j = [1/(n − j − 1)] Σ_{i=0}^{n−j−1} Ci,j · (Ci,j+1/Ci,j − λ̂j)²
is an unbiased estimator of σ²j, given Fj (see Mack (1993) or Denuit & Charpentier
(2005)).
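The σ̂²j's are straightforward to compute from the triangle. A Python sketch (triangle hard-coded; note the estimator is unavailable for the last development year, where only one individual factor is observed):

```python
# Cumulative triangle (None = unobserved), as in the slides.
PAID = [
    [3209, 4372, 4411, 4428, 4435, 4456],
    [3367, 4659, 4696, 4720, 4730, None],
    [3871, 5345, 5398, 5420, None, None],
    [4239, 5917, 6020, None, None, None],
    [4929, 6794, None, None, None, None],
    [5217, None, None, None, None, None],
]
n = len(PAID)
lam = [sum(r[j + 1] for r in PAID if r[j + 1] is not None) /
       sum(r[j] for r in PAID if r[j + 1] is not None)
       for j in range(n - 1)]

# Mack's unbiased variance estimator; the last column (j = n-2) is left out,
# since it has a single observed factor and the divisor would vanish.
sigma2 = []
for j in range(n - 2):
    rows = [r for r in PAID if r[j + 1] is not None]
    s = sum(r[j] * (r[j + 1] / r[j] - lam[j]) ** 2 for r in rows)
    sigma2.append(s / (len(rows) - 1))
print([round(s, 4) for s in sigma2])  # [0.5254, 0.1026, 0.0021, 0.0007]
```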
Prediction error of the Chain-Ladder estimate
We stress here that estimating reserves is a prediction process: based on past
observations, we predict future amounts. Recall that the prediction error can be
decomposed as follows,
E[(Y − Ŷ)²]  (prediction variance)
  = E[((Y − EY) + (EY − Ŷ))²]
  ≈ E[(Y − EY)²]  (process variance)  +  E[(EY − Ŷ)²]  (estimation variance).
• the process variance reflects randomness of the random variable
• the estimation variance reflects uncertainty of statistical estimation
Process variance of reserves per occurrence year
The amount of reserves for accident year i is simply
R̂i = (λ̂n−i · λ̂n−i+1 · · · λ̂n−2 · λ̂n−1 − 1) · Ci,n−i.
Note that E(R̂i|Fn) = Ri. Since
Var(Ri|Fn) = Var(Ci,n|Fn) = Var(Ci,n|Ci,n−i) = Σ_{k=i+1}^{n} [ Π_{l=k+1}^{n} λ²l ] σ²k E[Ci,k|Ci,n−i],
a natural estimator for this variance is then
V̂ar(Ri|Fn) = Σ_{k=i+1}^{n} [ Π_{l=k+1}^{n} λ̂²l ] σ̂²k Ĉi,k = Ĉ²i,n Σ_{k=i+1}^{n} σ̂²k / (λ̂²k Ĉi,k).
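The equality of the two expressions above can be checked numerically. A Python sketch for the most recent occurrence year (i = 5), stopping at development year 4 so that no tail extrapolation of σ² is needed; lam and sigma2 are the (rounded) values obtained in the previous sketches, and the indices below follow the 0-based convention of the triangle:

```python
from math import prod

# Chain-ladder factors and Mack variance estimates (from the previous sketches).
lam = [1.3809331, 1.0114325, 1.0043433, 1.0018583]
sigma2 = [0.525420, 0.102632, 0.002104, 0.000661]

# Projected cumulative payments C_{5,k}, k = 0..4, for the last occurrence year.
C = [5217.0]
for l in lam:
    C.append(C[-1] * l)
J = len(C) - 1

# Form 1: explicit sum of products of squared factors.
v1 = sum(prod(lam[l] ** 2 for l in range(k + 1, J)) * sigma2[k] * C[k]
         for k in range(J))
# Form 2: compact form with the squared projected ultimate in front.
v2 = C[J] ** 2 * sum(sigma2[k] / (lam[k] ** 2 * C[k]) for k in range(J))
assert abs(v1 - v2) < 1e-6 * v1  # the two expressions coincide
print(round(v1, 2))
```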
Note that it is possible to get not only the variance of the ultimate cumulated
payments, but also the variance of any increment. Hence
Var(Yi,j|Fn) = Var(Yi,j|Ci,n−i)
  = E[Var(Yi,j|Ci,j−1)|Ci,n−i] + Var[E(Yi,j|Ci,j−1)|Ci,n−i]
  = E[σ²j−1 Ci,j−1|Ci,n−i] + Var[(λj−1 − 1)Ci,j−1|Ci,n−i],
and a natural estimator for this variance is then
V̂ar(Yi,j|Fn) = V̂ar(Ci,j|Fn) + (1 − 2λ̂j−1) V̂ar(Ci,j−1|Fn)
where, from the expressions given above,
V̂ar(Ci,j|Fn) = Ĉ²i,j Σ_{k=i+1}^{j−1} σ̂²k / (λ̂²k Ĉi,k).
Parameter variance when estimating reserves per occurrence year
So far, we have obtained an estimate of the process error of the technical risks
(increments or cumulated payments). But since the parameters λj and σ²j are
estimated from past information, there is an additional potential error, also
called parameter error (or estimation error). Hence, we have to quantify
E[(R̂i − Ri)²]. In order to quantify that error, Murphy (1994) assumed the
following underlying model,
Ci,j = λj−1 · Ci,j−1 + ηi,j,
with independent variables ηi,j. From the structure of the conditional variance,
Var(Ci,j+1|Fi+j) = Var(Ci,j+1|Ci,j) = σ²j · Ci,j,
it is natural to rewrite the equation above as
Ci,j = λj−1 Ci,j−1 + σj−1 √Ci,j−1 · εi,j,
with independent and centered variables εi,j with unit variance. Then
Ê[(R̂i − Ri)² | Fn] = R̂²i [ Σ_{k=0}^{n−i−1} σ̂²i+k / (λ̂²i+k C·,i+k) + σ̂²n−1 / ((λ̂n−1 − 1)² C·,n−1) ],
where C·,j denotes the sum of column j over the observed occurrence years.
Based on that estimator, it is possible to derive the following estimator for the
conditional mean squared error of the reserve prediction for occurrence year i,
CMSEi = V̂ar(Ri|Fn) + Ê[(R̂i − Ri)² | Fn].
Variance of global reserves (for all occurrence years)
The estimated total amount of reserves is R̂ = R̂1 + · · · + R̂n.
In order to derive the conditional mean squared error of the reserve prediction,
define the covariance term, for i < j, as
CMSEi,j = R̂iR̂j [ Σ_{k=i}^{n} σ̂²i+k / (λ̂²i+k C·,k) + σ̂²j / ((λ̂j−1 − 1) λ̂j−1 C·,j) ];
then the conditional mean squared error of the overall reserves is
CMSE = Σ_{i=1}^{n} CMSEi + 2 Σ_{j>i} CMSEi,j.
A short word on Munich Chain Ladder
Munich chain ladder is an extension of Mack’s technique based on both paid (P)
and incurred (I) losses.
Here the chain-ladder link ratios λ̂j are adjusted depending on whether the
current (P/I) ratio is above or below its average. The correlation between the
residuals of the P versus I/P and the I versus P/I chain-ladder regressions is used
to estimate the correction factor.
The standard Chain Ladder technique is then used on the two (adjusted) triangles.
A short word on Munich Chain Ladder
The (standard) payment triangle, P
0 1 2 3 4 5
0 3209 4372 4411 4428 4435 4456
1 3367 4659 4696 4720 4730 4752.4
2 3871 5345 5398 5420 5430.1 5455.8
3 4239 5917 6020 6046.15 6057.4 6086.1
4 4929 6794 6871.7 6901.5 6914.3 6947.1
5 5217 7204.3 7286.7 7318.3 7331.9 7366.7
Totals
            Paid     Incurred   P/I Ratio
Latest:     32,637   35,247     0.93
Ultimate:   35,271   35,259     1.00
It is possible to get a model mixing the two approaches together...
Bornhuetter Ferguson
One of the difficulties with the chain ladder method is that reserve forecasts can
be quite unstable. The Bornhuetter & Ferguson (1972) method provides a
procedure for stabilizing such estimates.
Recall that in the standard chain ladder model,
Ĉi,n = Fn · Ci,n−i, where Fn = Π_{k=n−i}^{n−1} λ̂k.
If R̂i denotes the estimated outstanding reserves, then
R̂i = Ĉi,n − Ci,n−i = Ĉi,n · (Fn − 1)/Fn.
Bornhuetter Ferguson
For a Bayesian interpretation of the Bornhuetter-Ferguson model, England &
Verrall (2002) considered the case where the incremental payments Yi,j are
independent overdispersed Poisson variables. Here
E(Yi,j) = ai bj and Var(Yi,j) = ϕ · ai bj,
where we assume that b1 + · · · + bn = 1. The parameter ai is assumed to be a
drawing of a random variable Ai ∼ Gamma(αi, βi), with E(Ai) = αi/βi, so that
E(Ci,n) = αi/βi = C̄i,
which is simply a prior expectation of the final loss amount.
Bornhuetter Ferguson
The posterior expectation of Xi,j+1 is then
E(Xi,j+1|Fi+j) = [Zi,j+1 Ci,j + (1 − Zi,j+1) C̄i/Fj] · (λj − 1),
where Zi,j+1 = F−1j / (βϕ + F−1j), and Fj = λj+1 · · · λn.
Hence, the Bornhuetter-Ferguson technique can be interpreted as a Bayesian
method, and as a credibility estimator (since a Bayesian model with conjugate
priors leads to credibility).
Bornhuetter Ferguson
The underlying assumptions are here:
• accident years are independent;
• there exist parameters µ = (µ0, · · · , µn) and a pattern β = (β0, β1, · · · , βn)
with βn = 1 such that
E(Ci,0) = β0 µi and E(Ci,j+k|Fi+j) = Ci,j + [βj+k − βj] · µi.
Hence, one gets that E(Ci,j) = βj µi.
The sequence (βj) is the claims development pattern. The Bornhuetter-Ferguson
estimator for E(Ci,n|Ci,1, · · · , Ci,j) is
Ĉi,n = Ci,j + [1 − βj] · µ̂i,
where µ̂i is an estimator for E(Ci,n).
If we want to relate this model to the classical Chain Ladder one, then
βj = Π_{k=j+1}^{n} (1/λk).
Consider the classical triangle, and assume that the estimator µ̂i is a plan value
(obtained from some business plan): for instance, consider a 105% loss ratio per
accident year.
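Putting the pieces together, a Python sketch of this Bornhuetter-Ferguson computation (the pattern β̂j is the γj row of the earlier table, and µ̂i applies the 105% plan loss ratio to the earned premiums πi):

```python
# Chain-ladder development pattern (the gamma_j's computed earlier).
beta = [0.70819, 0.97796, 0.98914, 0.99344, 0.99529, 1.0]
# Earned premiums pi_i, and plan ultimates at a 105% loss ratio.
premium = [4591, 4672, 4863, 5175, 5673, 6431]
mu = [1.05 * p for p in premium]

n = len(premium)
# BF reserve for year i: the still-undeveloped share times the plan ultimate.
R = [(1 - beta[n - 1 - i]) * mu[i] for i in range(n)]
print([round(r, 1) for r in R])
print(round(sum(R), 1))
```

Because the plan ultimates replace the chain-ladder projections, a bad latest diagonal no longer propagates multiplicatively into the reserve.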
Boni-Mali
As pointed out earlier, the (conditional) mean squared error of prediction (MSE) is
mset(X̂) = E[(X̂ − X)² | Ft]
         = Var(X|Ft)  (process variance)  +  E[(E(X|Ft) − X̂)²]  (parameter estimation error),
i.e. X̂ is both a predictor for X and an estimator for E(X|Ft).
But this is only a long-term view, since we focus on the uncertainty over the
whole runoff period. It is not a one-year solvency view, where we focus on the
changes over the next accounting year.
Boni-Mali
Between time t = n and time t = n + 1,
λ̂j^(n) = Σ_{i=0}^{n−j−1} Ci,j+1 / Σ_{i=0}^{n−j−1} Ci,j  and  λ̂j^(n+1) = Σ_{i=0}^{n−j} Ci,j+1 / Σ_{i=0}^{n−j} Ci,j,
and the ultimate loss predictions are then
Ĉi^(n) = Ci,n−i · Π_{j=n−i}^{n} λ̂j^(n)  and  Ĉi^(n+1) = Ci,n−i+1 · Π_{j=n−i+1}^{n} λ̂j^(n+1).
Boni-Mali and the one-year uncertainty
[Figures: the triangle re-estimated after one more year of information, not reproduced]
Boni-Mali
In order to study the one-year claims development, we have to focus on
R̂i^(n)  and  Yi,n−i+1 + R̂i^(n+1).
The boni-mali for accident year i, from year n to year n + 1, is then
BM_i^(n,n+1) = R̂i^(n) − [Yi,n−i+1 + R̂i^(n+1)] = Ĉi^(n) − Ĉi^(n+1).
Thus, the conditional one-year runoff uncertainty is
mse(BM_i^(n,n+1)) = E[ (Ĉi^(n) − Ĉi^(n+1))² | Fn ].
Boni-Mali
Hence,
mse(BM_i^(n,n+1)) = [Ĉi^(n)]² ( [σ̂²n−i/(λ̂^(n)n−i)²] / Ci,n−i
  + [σ̂²n−i/(λ̂^(n)n−i)²] / Σ_{k=0}^{i−1} Ck,n−i
  + Σ_{j=n−i+1}^{n−1} [ Cn−j,j / Σ_{k=0}^{n−j} Ck,j ] · [σ̂²j/(λ̂^(n)j)²] / Σ_{k=0}^{n−j−1} Ck,j ).
Further, it is possible to derive the MSEP for aggregated accident years (see Merz
& Wüthrich (2008)).
Ultimate Loss and Tail Factor
The idea, introduced in Mack (1999), is to compute a tail factor
λ∞ = Π_{k≥n} λk,
and then to compute
Ci,∞ = Ci,n × λ∞.
Assume here that the λk’s decay exponentially to 1, i.e. that the log(λ̂k − 1)’s
decay linearly:
> library(ChainLadder)
> Lambda <- MackChainLadder(PAID)$f[1:(ncol(PAID)-1)]
> logL <- log(Lambda-1)
> tps <- 1:(ncol(PAID)-1)
> modele <- lm(logL~tps)
> logP <- predict(modele, newdata=data.frame(tps=seq(6,1000)))
> (facteur <- prod(exp(logP)+1))
[1] 1.000707
Ultimate Loss and Tail Factor
> DIAG <- diag(PAID[,6:1])
> PRODUIT <- c(1, rev(Lambda))
> sum((cumprod(PRODUIT)-1)*DIAG)
[1] 2426.985
> sum((cumprod(PRODUIT)*facteur-1)*DIAG)
[1] 2451.764
The ultimate loss is here 0.07% larger, and the reserves are 1% larger.
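The same tail-factor computation can be reproduced without the ChainLadder package. A Python cross-check of the log-linear fit (the five development factors are hard-coded, as recomputed from the triangle):

```python
from math import log, exp

# Chain-ladder factors lambda_1 .. lambda_5 from the PAID triangle.
lam = [1.3809331, 1.0114325, 1.0043433, 1.0018583, 1.0047351]

# Fit log(lambda_k - 1) = a + b*k by ordinary least squares, k = 1..5.
t = list(range(1, len(lam) + 1))
y = [log(l - 1) for l in lam]
tbar, ybar = sum(t) / len(t), sum(y) / len(y)
b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
     / sum((ti - tbar) ** 2 for ti in t))
a = ybar - b * tbar

# Tail factor: product of the extrapolated factors for k = 6..1000.
tail = 1.0
for k in range(6, 1001):
    tail *= 1 + exp(a + b * k)
print(round(tail, 6))  # 1.000707, as in the R output
```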
From Chain Ladder to London Chain and London Pivot
The so-called London Chain method was introduced by Benjamin & Eagles (1986).
We assume here that the dynamics of (Ci,j)j=1,..,n are given by an AR(1)-type
model with a constant term,
Ci,k+1 = λk · Ci,k + αk for all i, k = 1, .., n.
From a practical point of view, the standard Chain Ladder method, based on a
model of the form Ci,k+1 = λkCi,k, should only be applied when the points
(Ci,k, Ci,k+1) are roughly aligned (for fixed k) on a straight line through the
origin. The London Chain method also assumes that the points are aligned on a
straight line, but no longer requires this line to go through 0.
Example: we obtain the best-fitting line through the cloud of points constrained
to pass through 0, and the (unconstrained) best-fitting line through the cloud of
points.
From Chain Ladder to London Chain and London Pivot
In this model, there are then 2n parameters to identify: λk and αk for k = 1, ..., n.
The most natural method is to estimate these parameters by least squares, i.e.
for each k we solve
(λ̂k, α̂k) = argmin Σ_{i=1}^{n−k} (Ci,k+1 − αk − λkCi,k)²,
which finally gives
λ̂k = [ (1/(n−k)) Σ_{i=1}^{n−k} Ci,k Ci,k+1 − C̄k^(k) C̄k+1^(k) ] / [ (1/(n−k)) Σ_{i=1}^{n−k} C²i,k − (C̄k^(k))² ],
where C̄k^(k) = (1/(n−k)) Σ_{i=1}^{n−k} Ci,k and C̄k+1^(k) = (1/(n−k)) Σ_{i=1}^{n−k} Ci,k+1,
and where the constant is given by α̂k = C̄k+1^(k) − λ̂k C̄k^(k).
From Chain Ladder to London Chain and London Pivot
For the particular triangle we are studying, we obtain
k     0         1         2        3         4
λk    1.404     1.405     1.0036   1.0103    1.0047
αk    −90.311   −147.27   3.742    −38.493   0
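Each (λk, αk) pair is just an ordinary least-squares fit on one development column. A Python sketch for the first column (the 1.404 / −90.311 column of the table):

```python
# First development column: x_i = C_{i,0}, y_i = C_{i,1}.
x = [3209.0, 3367.0, 3871.0, 4239.0, 4929.0]
y = [4372.0, 4659.0, 5345.0, 5917.0, 6794.0]

# OLS fit of y = alpha + lam * x.
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
lam = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
       / sum((xi - xbar) ** 2 for xi in x))
alpha = ybar - lam * xbar
print(round(lam, 3), round(alpha, 2))  # 1.404 -90.31
```

Note that the slope 1.404 differs from the chain-ladder factor 1.381 precisely because the fitted line is no longer forced through the origin.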
From Chain Ladder to London Chain and London Pivot
The completed (cumulated) triangle is then
0 1 2 3 4 5
0 3209 4372 4411 4428 4435 4456
1 3367 4659 4696 4720 4730 4752
2 3871 5345 5398 5420 5437 5463
3 4239 5917 6020 6045 6069 6098
4 4929 6794 6922 6950 6983 7016
5 5217 7234 7380 7410 7447 7483
Once the triangle has been completed, we obtain the amounts of reserves, with
respectively 22, 43, 78, 222 and 2266 per accident year, i.e. a total of 2631 (to
be compared with 2427, obtained with the Chain Ladder technique).
From Chain Ladder to London Chain and London Pivot
The so-called London Pivot method was introduced by Straub, in Nonlife
Insurance Mathematics (1989). We assume here that Ci,k+1 and Ci,k are linked
through a relation of the form
Ci,k+1 + α = λk · (Ci,k + α)
(in practice, the points (Ci,k, Ci,k+1) must be roughly aligned, for fixed k, on a
straight line through the so-called pivot point (−α, −α)). In this model, (n + 1)
parameters have to be estimated, and ordinary least squares yields the estimators
iteratively.
Introduction to factorial models: Taylor (1977)
This approach was studied in a paper entitled Separation of inflation and other
effects from the distribution of non-life insurance claim delays.
We assume that the incremental payments are functions of two factors, one
related to the development year j, and one to the calendar year i + j − 1. Hence,
assume that
Yi,j = rj µi+j−1 for all i, j:
r1µ1     r2µ2    · · ·   rn−1µn−1   rnµn
r1µ2     r2µ3    · · ·   rn−1µn
...
r1µn−1   r2µn
r1µn
Hence, incremental payments are functions of development factors, rj, and a
calendar factor, µi+j−1, that might be related to some inflation index.
Introduction to factorial models: Taylor (1977)
In order to identify the factors r1, r2, .., rn and µ1, µ2, ..., µn, i.e. 2n coefficients,
an additional constraint is necessary, e.g. on the rj’s, r1 + r2 + · · · + rn = 1 (this
will be called the arithmetic separation method). Hence, the sum on the latest
diagonal is
dn = Y1,n + Y2,n−1 + ... + Yn,1 = µn (r1 + r2 + · · · + rn) = µn.
On the first super-diagonal,
dn−1 = Y1,n−1 + Y2,n−2 + ... + Yn−1,1 = µn−1 (r1 + r2 + · · · + rn−1) = µn−1 (1 − rn),
and using the nth column, we get γn = Y1,n = rnµn, so that
rn = γn/µn and µn−1 = dn−1/(1 − rn).
More generally, it is possible to iterate this idea: on the ith super-diagonal,
dn−i = Y1,n−i + Y2,n−i−1 + ... + Yn−i,1 = µn−i (r1 + r2 + · · · + rn−i)
     = µn−i (1 − [rn + rn−1 + ... + rn−i+1]).
Introduction to factorial models: Taylor (1977)
and finally, based on the (n − i + 1)th column,
γn−i+1 = Y1,n−i+1 + Y2,n−i+1 + ... + Yi,n−i+1 = rn−i+1 (µn−i+1 + · · · + µn−1 + µn),
so that
rn−i+1 = γn−i+1 / (µn + µn−1 + ... + µn−i+1) and µn−i = dn−i / (1 − [rn + rn−1 + ... + rn−i+1]).
k        1       2       3      4      5      6
µk       4391    4606    5240   5791   6710   7238
rk (%)   73.08   25.25   0.93   0.32   0.12   0.29
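The peeling recursion above can be coded directly. A Python sketch on the incremental triangle of the slides (0-indexed internally; the printed values are the 1-indexed µk and r1 of the table):

```python
# Incremental payments Y[i][j] (None = unobserved), from the triangle of the slides.
Y = [
    [3209, 1163, 39, 17, 7, 21],
    [3367, 1292, 37, 24, 10, None],
    [3871, 1474, 53, 22, None, None],
    [4239, 1678, 103, None, None, None],
    [4929, 1865, None, None, None, None],
    [5217, None, None, None, None, None],
]
n = len(Y)

# d[k]: sum over the k-th diagonal (calendar year); g[j]: sum of observed column j.
d = [sum(Y[i][k - i] for i in range(k + 1)) for k in range(n)]
g = [sum(row[j] for row in Y if row[j] is not None) for j in range(n)]

mu = [0.0] * n
r = [0.0] * n
mu[n - 1] = d[n - 1]      # latest diagonal: the r's sum to one
rem, musum = 1.0, mu[n - 1]
for j in range(n - 1, 0, -1):
    r[j] = g[j] / musum           # column j identifies the next r
    rem -= r[j]
    mu[j - 1] = d[j - 1] / rem    # previous diagonal identifies the next mu
    musum += mu[j - 1]
r[0] = g[0] / musum

print([round(m) for m in mu])  # [4391, 4606, 5240, 5791, 6710, 7238]
print(round(100 * r[0], 1))    # 73.1
```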
Introduction to factorial models: Taylor (1977)
The challenge here is to forecast future values of the µk’s. Either a linear model
or an exponential model can be considered.
[Figure: observed µk with linear and exponential extrapolations, not reproduced]
Lemaire (1982) and autoregressive models
Instead of a simple Markov process, it is possible to assume that the Ci,j’s can be
modeled with an autoregressive model in the two directions, rows and columns,
Ci,j = αCi−1,j + βCi,j−1 + γ.
Zehnwirth (1977)
Here, we consider the following model for the Ci,j’s,
Ci,j = exp(αi + γi · j) · (1 + j)^{βi},
which can be seen as an extended Gamma-type model. αi is a scale parameter,
while βi and γi are shape parameters. Note that
log Ci,j = αi + βi log(1 + j) + γi · j.
For convenience, we usually assume that βi = β and γi = γ.
Note that if log Ci,j is assumed to be Gaussian, then Ci,j is lognormal. But then,
naive estimators of the Ci,j’s will be overestimated.
Zehnwirth (1977)
Assume that log Ci,j ∼ N(Xi,jβ, σ²); then, if the parameters are obtained using
maximum likelihood techniques,
E(Ĉi,j) = E[exp(Xi,jβ̂ + σ̂²/2)] = Ci,j · exp(−[(n−1)/n] σ²/2) · (1 − σ²/n)^{−(n−1)/2} > Ci,j.
Further, the homoscedasticity assumption might not be relevant. Thus Zehnwirth
suggested
σ²i,j = Var(log Ci,j) = σ² (1 + j)^h.
Regression and reserving
De Vylder (1978) proposed a least squares factor method, i.e. we need to find
α = (α0, · · · , αn) and β = (β0, · · · , βn) such that
(α̂, β̂) = argmin Σ_{i,j=0}^{n} (Xi,j − αi × βj)²,
or, equivalently, assume that
Xi,j ∼ N(αi × βj, σ²), independent.
A more general model proposed by De Vylder is the following,
(α̂, β̂, γ̂) = argmin Σ_{i,j=0}^{n} (Xi,j − αi × βj × γi+j−1)².
In order to have an identifiable model, De Vylder suggested assuming γk = γ^k
(so that this coefficient is related to some inflation index).
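De Vylder's criterion can be minimized by alternating least squares: for fixed β each αi has a closed form, and conversely. A minimal Python sketch on the incremental triangle, fitting only the observed cells (the choice of algorithm and the normalization Σj βj = 1 are assumptions made here for illustration; the slides do not prescribe a particular solver):

```python
# Incremental triangle X[i][j] (None = unobserved).
X = [
    [3209, 1163, 39, 17, 7, 21],
    [3367, 1292, 37, 24, 10, None],
    [3871, 1474, 53, 22, None, None],
    [4239, 1678, 103, None, None, None],
    [4929, 1865, None, None, None, None],
    [5217, None, None, None, None, None],
]
n = len(X)
obs = [(i, j) for i in range(n) for j in range(n) if X[i][j] is not None]

def sse(a, b):
    return sum((X[i][j] - a[i] * b[j]) ** 2 for i, j in obs)

# Alternating least squares for X[i][j] ~ alpha_i * beta_j.
a = [float(sum(v for v in row if v is not None)) for row in X]  # start: row sums
b = [1.0 / n] * n
errors = [sse(a, b)]
for _ in range(50):
    for i in range(n):
        cols = [j for j in range(n) if X[i][j] is not None]
        a[i] = (sum(X[i][j] * b[j] for j in cols)
                / sum(b[j] ** 2 for j in cols))
    for j in range(n):
        rows = [i for i in range(n) if X[i][j] is not None]
        b[j] = (sum(X[i][j] * a[i] for i in rows)
                / sum(a[i] ** 2 for i in rows))
    errors.append(sse(a, b))

# Each half-step solves an exact least-squares subproblem, so the fit never worsens.
assert all(e2 <= e1 * (1 + 1e-9) + 1e-6 for e1, e2 in zip(errors, errors[1:]))
# Normalize so the beta_j sum to one (a development-pattern scale).
s = sum(b)
a = [ai * s for ai in a]
b = [bj / s for bj in b]
print([round(bj, 3) for bj in b])
```

After normalization, the βj's play the role of a development pattern and the αi's of per-year volumes.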