A Sequential Approach to Loss Forecasting
In a traditional loss forecast, a projected loss rate is estimated using historical loss and exposure base data, adjusted for loss development, benefit level changes, and inflation as appropriate.
Each policy period is adjusted separately and the resulting loss rate for that period is used as a sample point. A decision rule
must then be applied to the sample points in arriving at a projected loss rate.
Several decision rule candidates are often considered including the mean, median, weighted means, trimmed means, etc.
These are heuristical in nature and a selection is made based on subjective judgment.
Another approach is to use the decision rule that produces the estimate with the lowest variance.
Consider
Z=wX + (1-w)Y (1)
where X and Y are random variables and w is a weight selected so that Z has the minimum variance.
If X and Y are independent and w is a constant, then
Var(Z) = w²Var(X) + (1-w)²Var(Y) (2)
If we take the derivative with respect to w, set it equal to 0, and solve for w,
w = Var(Y)/[Var(X) + Var(Y)] (3)
This is the value of w that produces the value of Z with the minimum variance.
If X and Y are dependent, then (3) becomes
w = [Var(Y) - Cov(X,Y)]/[Var(X) + Var(Y) - 2Cov(X,Y)] (3a)
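The minimum-variance weight in (3) and (3a) is simple to compute. A minimal sketch in Python (the function names are illustrative, not from the original):

```python
def min_var_weight(var_x, var_y, cov_xy=0.0):
    """Weight w minimizing Var(wX + (1-w)Y); cov_xy = 0 reduces (3a) to (3)."""
    return (var_y - cov_xy) / (var_x + var_y - 2.0 * cov_xy)

def combined_variance(var_x, var_y, w, cov_xy=0.0):
    """Var(Z) = w^2 Var(X) + (1-w)^2 Var(Y) + 2w(1-w) Cov(X,Y)."""
    return w**2 * var_x + (1.0 - w)**2 * var_y + 2.0 * w * (1.0 - w) * cov_xy

# With Var(X) = 4 and Var(Y) = 1, the optimal weight on X is 1/5,
# and any other w yields a larger combined variance.
w = min_var_weight(4.0, 1.0)           # 0.2
best = combined_variance(4.0, 1.0, w)  # 0.8
```

Note that as Var(X) grows relative to Var(Y), w shrinks toward zero, so the noisier estimate automatically receives less weight.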
We can use this information to develop a sequential approach to derive the projected loss rate that results in a minimum
variance estimate.
The loss rate of an individual policy period is
T*D*ΣX/E
where T is the Trend factor, D is the development factor, ΣX is the paid or incurred amount, and E is the exposure base,
inflation adjusted if necessary.
So
Var(Loss Rate) = (T*D/E)²*N*Var(X) (4)
where N is the claim count.
This is a simplification which assumes the trend factor, development factor, exposure base, and claim count are constants. If claim counts are Poisson distributed (mean and variance N), and trend, development, exposures, and claim size are mutually independent, this is probably not too bad.
Consider each complete policy period in our historical sample. For each policy period, if Xi is the value (paid or incurred) of the i-th claim in that policy period, then we can calculate the sample variance of claim values from the loss run dumped to an Excel file.
Given trend factor T, development factor D, and exposure E, we can compute an estimate of the variance in the trended and
developed loss rate for each historical policy period in our sample.
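As a sketch of equation (4): given a column of claim values pulled from the loss run, the per-period variance can be estimated as below (the claim amounts, trend, development, and exposure figures here are hypothetical placeholders, not the example's data):

```python
import statistics

def loss_rate_variance(claims, trend, dev, exposure):
    """Equation (4): Var(loss rate) ≈ (T*D/E)^2 * N * Var(X)."""
    n = len(claims)
    var_x = statistics.variance(claims)  # sample variance of individual claim size
    k = trend * dev / exposure           # the constant T*D/E
    return k**2 * n * var_x

claims = [1200.0, 5400.0, 800.0, 23000.0, 3100.0]  # hypothetical loss run
v = loss_rate_variance(claims, trend=1.28, dev=1.45, exposure=4.9e8)
```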
We could assume that the trended and developed loss rate in the most recent year is the one most likely to repeat in the year we are forecasting, but this would ignore all the information about loss rates from the prior policy periods and would base the forecast on one sample point.
If Z represents the weighting of the trended and developed loss rate for the most recent complete policy period, X, with the trended and developed loss rate from the prior policy period, Y, we can apply (3) to come up with our estimate of w, using (4) to calculate the variances of the loss rates X and Y (assuming independence).
We can calculate the variance of the new weighted loss rate using (2) and then repeat weighting this with the trended and
developed loss rate for the policy period 2 years prior to the most current complete policy period and repeat this procedure
until we have exhausted all historical trended and developed loss rates.
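The steps above can be sketched as a short loop. Fed the incurred loss rates and variances from the example table, this reproduces the 15.46 projected loss rate (the function name is illustrative):

```python
def sequential_estimate(rates, variances):
    """Fold each earlier period into the running estimate with the
    minimum-variance weight (3), updating the combined variance via (2);
    assumes independence. Inputs are ordered most recent period first."""
    est, var = rates[0], variances[0]
    for r, v in zip(rates[1:], variances[1:]):
        w = v / (var + v)                    # weight on the running estimate, per (3)
        est = w * est + (1.0 - w) * r
        var = w**2 * var + (1.0 - w)**2 * v  # equation (2)
    return est, var

# Incurred trended/developed loss rates and variances, 2014 back to 2008:
rates = [14.18, 13.57, 8.31, 17.16, 28.69, 17.87, 7.82]
variances = [22.79, 13.07, 20.61, 4.92, 25.64, 35.29, 20.02]
est, var = sequential_estimate(rates, variances)  # est ≈ 15.46, var ≈ 2.04
```

The intermediate estimates and variances match the Wtd. Est. and Var New columns of the incurred table, and the final variance is lower than that of any single policy period.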
This method produces the minimum variance projected loss rate without having to resort to a subjective judgment as to what
decision rule to apply to the sample loss rates.
Further, we can apply this method to incurred and to paid losses separately, and then weight the resulting paid-based and incurred-based projected loss rates using (3) (or (3a) if the correlation of paid loss rates and incurred loss rates is non-zero).
Example:

Incurred
Year  PPR    LDF*Trend  Payroll   Counts  Var Incurred  Ksquare   Var PPR  Weight  Var New  Wtd. Est.
2008   7.82  1.450      4.45E+08    75    2.51E+08      1.06E-09   20.02    0.10     2.04    15.46
2009  17.87  1.273      4.55E+08    90    5.00E+08      7.84E-10   35.29    0.06     2.28    16.33
2010  28.69  1.218      4.51E+08    93    3.79E+08      7.28E-10   25.64    0.09     2.43    16.22
2011  17.16  1.230      4.58E+08    75    9.11E+07      7.20E-10    4.92    0.55     2.69    14.92
2012   8.31  1.339      4.83E+08    65    4.13E+08      7.68E-10   20.61    0.29     5.92    12.22
2013  13.57  1.282      4.99E+08    63    3.15E+08      6.59E-10   13.07    0.64     8.30    13.79
2014  14.18  1.443      4.94E+08    55    4.85E+08      8.55E-10   22.79    1.00    22.79    14.18

Proj. Payroll 474,414,459   Forecast 733,499
w 0.31
Cov adj. w 0.08

Paid
Year  PPR    LDF*Trend  Payroll   Counts  Var Paid   Ksquare   Var PPR  Weight  Var New  Wtd. Est.
2008   8.11  1.504      4.45E+08    75    2.51E+08   1.14E-09   21.54    0.04     0.92    15.05
2009  18.72  1.333      4.55E+08    90    4.83E+08   8.60E-10   37.41    0.03     0.97    15.36
2010  30.41  1.308      4.51E+08    93    3.79E+08   8.40E-10   29.60    0.03     0.99    15.27
2011  18.53  1.328      4.58E+08    75    9.11E+07   8.40E-10    5.74    0.18     1.03    14.75
2012   9.29  1.497      4.83E+08    65    3.28E+08   9.60E-10   20.49    0.06     1.25    13.92
2013  15.12  1.575      4.99E+08    63    2.14E+08   9.96E-10   13.43    0.10     1.33    14.22
2014  14.12  1.924      4.94E+08    55    1.77E+07   1.52E-09    1.48    1.00     1.48    14.12

Proj. Payroll 474,414,459   Forecast 713,903
Wtd. P,I forecast 720,008
Cov adjusted 715,502
Cov (I,P) 0.82
The top section of the example is a forecast using the suggested method with incurred development and the bottom section
is a forecast using paid loss development.
15.46 was the loss rate that resulted from applying the method to incurred losses and 15.05 was the loss rate that resulted
from applying the method to paid losses. A straight average of trended and developed loss rates produces a loss rate of
15.37 based on incurred losses or 16.33 based on paid losses.
The incurred method result would have gotten a 31% weight if we assumed independence. Adjusting for the covariance
between the incurred and paid loss rates resulted in the incurred method getting an 8% weight and the paid method getting a
92% weight resulting in a forecast of $715,502, a loss rate of 15.08.
This is essentially a non-parametric method: we assume independent, identically distributed random variables, but not any particular distribution for the paid or incurred claims.
Gamma Distribution
According to a problem in Wilks, Mathematical Statistics [p. 249, 8.27, a result proved by Laha (1954)], if the ratio of a linear combination of i.i.d. random variables (with coefficients not all zero) to the sum of those variables is independent of the sum, then the pdf of the random variables is a Gamma.
This essentially says that if the development factor is independent of the sum of the prior period paids or incurreds, as the case may be, then those claims are drawn from a Gamma distribution. There is a question as to whether this holds before the valuation at which all claims have been reported (i.e., no pure IBNR), but we will ignore this for this discussion.
It is interesting that the Log Normal is sometimes used, particularly when it provides a better fit to the sample claims data
but given the result cited in Wilks, this may not be theoretically justified even when it yields a better fit.
The Gamma has nicer properties than the lognormal as respects additivity of variables.
If the claim amounts are i.i.d. draws from a Gamma, we can estimate the parameters of the Gamma for each policy period from the sample data. With shape α and scale β (the parameterization used in the worked example below), the density is

f(x; α, β) = x^(α-1) e^(-x/β) / (Γ(α) β^α)

with

E(X) = αβ
Var(X) = αβ²

So β = Var(X)/E(X) (5)
and α = E(X)²/Var(X) (6)
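A minimal sketch of the moment estimates (5) and (6), using the shape/scale convention of the worked example (where the fitted mean is αβ), checked against the 2008 incurred row of the example table:

```python
def fit_gamma(mean, var):
    """Method-of-moments Gamma fit: returns (shape alpha, scale beta)."""
    alpha = mean**2 / var  # equation (6)
    beta = var / mean      # equation (5), scale form
    return alpha, beta

# 2008 incurred: PPR 7.82, Var PPR 20.02  ->  alpha ≈ 3.05, beta ≈ 2.56
alpha, beta = fit_gamma(7.82, 20.02)
```

The example table shows α = 3.06, presumably computed from unrounded inputs; the fitted mean αβ and variance αβ² recover the inputs exactly.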
We can calculate α and β for each policy period from the historical trended and developed claim amounts, computing E(X) and Var(X) for each policy period, where X is the aggregate claim amount, since the sum of Gammas is a Gamma. If the Xi are independent with Xi ~ Γ(ki, β), then ΣXi is Γ(Σki, β); if they are i.i.d. Γ(α, β), then ΣXi is Γ(Nα, β), where N is the number of claims. Therefore, the aggregate claims are Γ(Nα, β), where α and β are estimated using (6) and (5) and N is the claim count.
We can now start with the earliest year and take the Gamma distribution fitted to the sample data for that policy period as a prior distribution, Γ(α1, β1). The Gamma fitted to the sample data for the next historical year serves as a likelihood, Γ(α2, β2). The Bayesian posterior is then Γ(α1+α2-1, β1β2/(β1+β2)), proportional to the product of the prior and the likelihood. This becomes the new prior, and the Gamma fitted to the claims data for the next year is the new likelihood. We continue until all the policy period data has been used.
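The year-over-year update can be sketched directly from the posterior formula. Feeding in the incurred (α, β) fits from the example table reproduces the final parameters to within rounding:

```python
def gamma_posterior(prior, likelihood):
    """Combine two Gamma(shape, scale) fits: the product of the two densities
    is proportional to a Gamma with shape a1 + a2 - 1 and scale b1*b2/(b1+b2)."""
    (a1, b1), (a2, b2) = prior, likelihood
    return a1 + a2 - 1.0, (b1 * b2) / (b1 + b2)

# Incurred per-year fits (alpha, beta), 2008 through 2014, from the example:
fits = [(3.06, 2.56), (9.05, 1.97), (32.09, 0.89), (59.85, 0.29),
        (3.35, 2.48), (14.10, 0.96), (8.82, 1.61)]

post = fits[0]
for f in fits[1:]:
    post = gamma_posterior(post, f)

a, b = post             # ≈ (124.32, 0.133); the table shows 124.31 / 0.13
forecast_rate = a * b   # posterior mean loss rate (≈ 16.5 from these rounded
                        # inputs, vs 16.43 in the table)
```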
The resulting parameters of the Gamma distribution are used to calculate our forecast which is the expected value of that
Gamma Distribution.
Example:
Incurred
Year PPR Var PPR α β α' β' PPR Fit Var PPR Fit
2008 7.82 20.02 3.06 2.56 3.06 2.56 7.82 20.02
2009 17.87 35.29 9.05 1.97 11.11 1.11 12.38 13.80
2010 28.69 25.64 32.09 0.89 42.19 0.50 20.93 10.38
2011 17.16 4.92 59.85 0.29 101.04 0.18 18.36 3.34
2012 8.31 20.61 3.35 2.48 103.39 0.17 17.51 2.96
2013 13.57 13.07 14.10 0.96 116.49 0.14 16.77 2.42
2014 14.18 22.79 8.82 1.61 124.31 0.13 16.43 2.17
Exposure 474,414,459
Units 10,000
Weight 31%
Adj. Weight 52%
Forecast $779,358
Paid
Year PPR Var PPR α β α' β' PPR Fit Var PPR Fit
2008 8.11 21.54 3.06 2.65 3.06 2.65 8.11 21.54
2009 18.72 37.41 9.37 2.00 11.43 1.14 13.03 14.85
2010 30.41 29.60 31.25 0.97 41.68 0.52 21.88 11.49
2011 18.53 5.74 59.85 0.31 100.53 0.19 19.58 3.81
2012 9.29 20.49 4.21 2.21 103.74 0.18 18.56 3.32
2013 15.12 13.43 17.02 0.89 119.76 0.15 17.84 2.66
2014 14.12 1.48 135.05 0.10 253.80 0.06 15.59 0.96
Exposure 474,414,459
Units 10,000
Covariance 19.75
Forecast $739,799
Wtd. Forecast $ 760,238
Loss Rate 16.02
Variance 10.67
Combined parameters: α' 377.11, β' 0.04
Loss Rate 15.82
Variance 0.66
Bayes Estimate $ 750,364
Assuming the Gamma for the loss distribution and applying the sequential method yields a higher forecast than the non-parametric sequential approach. Weighting the incurred and paid forecasts together yielded a slightly higher projection than combining the parameters and computing the expected value from them.