UNIVERSITY OF GHANA
DEPARTMENT OF STATISTICS
RATEMAKING AND RESERVING IN CASUALTY AND PROPERTY
INSURANCE: A CREDIBILITY APPROACH
BY
SHAPAH GBOR SHADRACH
A DISSERTATION SUBMITTED TO THE DEPARTMENT OF
STATISTICS, UNIVERSITY OF GHANA, IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE AWARD OF THE DEGREE OF
MASTER OF SCIENCE IN ACTUARIAL SCIENCE.
JULY 2015
DECLARATION
I hereby declare that this submission is my own work towards the MSc Actuarial Science
program and that, to the best of my knowledge, it contains no material previously published
by another person, nor material which has been accepted for the award of any other degree
of any university, except where due acknowledgement has been made in the text.
SHAPAH GBOR SHADRACH _________________ ________________
(10507649) Signature Date
Certified by:
Dr. F. O. Mettle _____________________ __________________
Supervisor Signature Date
Mr. E. B. N. Quaye _____________________ __________________
Supervisor Signature Date
Abstract
A major problem facing the modern insurance industry is how to compute risk premiums
adequate to cover the claim payments that may occur. This is due to the randomness of the
risks associated with insurance contracts, and partly to modifications in the policies and the
increasing demand for insurance products.
The purpose of this work is to develop improved estimates of insurance premiums using the
credibility framework. Credibility theory is a non-parametric quantitative method for
forecasting future insurance losses by combining parameter estimates from a subset of risks
and from the whole group. It makes use of observed results and results from a larger data set
from a similar industry, with an appropriate weight placed on each, to estimate future
expectations. It relies on statistical methods and actuarial judgment to fulfil this ultimate task
of forecasting.
DEDICATION
I dedicate this research work to Mariama ABDULRAMANI, who open-heartedly supported me
and contributed immensely in diverse ways to help me climb the academic ladder.
ACKNOWLEDGEMENT
I wish to express my profound gratitude to the almighty God for his love and guidance
throughout my education. Big thanks go to my supervisor Dr. F. O. Mettle, head of the
Department of Statistics, for his tireless effort and contribution toward the completion of this
research work. My development editor, Mr. E. B. N. Quaye of the Department of Statistics, kept
me focused and managed the entire project to completion. He is one of those few lecturers who
bring out the best in you. Thank you for all of your hard work and encouragement. A special
note of gratitude goes to my late uncle Torgbi Agumedra III, chief of Adaklu Torda, and my late
mother Helen Dorfe, whose vision and ambition this piece of work is the product of; uncle and
mom, “AKPE NA MI”. I would like to end by extending a heartfelt thank you to my wife Frieda
SHAPAH for selflessly supporting me morally and financially, putting your needs on hold
throughout this whole period.
Thank you all with all of my heart.
Table of Contents
DECLARATION
Abstract
DEDICATION
ACKNOWLEDGEMENT
1 CHAPTER ONE
1.1 Introduction
1.2 Background
1.3 Problem statement
1.4 Objective of the study
1.5 Research questions
1.6 Significance of study
1.7 Scope of the Study
1.8 Outline of the Study
2 CHAPTER TWO
2.1 Literature review
2.2 Limited Fluctuation Credibility
2.3 Bayesian and the Bühlmann Approach
3 CHAPTER THREE
3.1 Methodology
3.1.1 Presentation of the standard credibility formula
3.1.2 Interpretation of assumptions
3.2 EMPIRICAL BAYES METHOD
3.2.1 General presentation of the problem
3.2.2 The unbiased estimators of EPV and VHM
4 CHAPTER FOUR
4.1 DATA
4.2 PRESENTATION AND ANALYSIS OF DATA
4.2.1 Set up of the computation
4.3 DETERMINATION OF PARAMETERS
4.4 Chi-square Goodness of Fit Test
4.4.1 Test Statistics
4.4.2 The critical region
4.4.3 Decision
4.5 Testing the model in terms of Cedi equivalence
4.5.1 Test statistics
4.5.2 Interpretation of results
5 DETERMINATION OF CLAIMS FREQUENCY
5.1.1 Compute the total exposures
5.1.2 Total exposures of all risk groups
5.1.3 The exposure-weighted means of the claim frequency
5.1.4 Computing EPV
5.1.5 Bühlmann-Straub predicted claims frequency for each risk group
5.1.6 Total claim frequency predicted based on the historical exposure
6 CHAPTER FIVE
6.1 CONCLUSION AND RECOMMENDATIONS
6.1.1 Findings
7 RECOMMENDATIONS
7.1 Assumptions
7.2 Limitations
APPENDICES
1 CHAPTER ONE
1.1 Introduction
In the face of economic risk, people seek security, which they consider the next basic goal after
food, shelter and clothing. They therefore enter into an agreement, called an insurance contract,
that promises to pay a fixed amount upon the occurrence of the stipulated random event that is
the subject matter of the contract, and for which the policyholder has paid a premium in
advance.
The insurer pools the expected losses and the potential for individual variability and charges a
premium that will be sufficient to cover all projected claim payments. Each policyholder is in
effect charged a premium that reflects any special traits of the policy and the past experience
of the individual, Anderson & Brown (2005).
The general framework of insurance is based on the weak law of large numbers, which says that
for all $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left[\,\left|\frac{1}{n}\sum_{i=1}^{n} Y_i - \mu\right| \ge \varepsilon\,\right] = 0 \qquad (1)$$
under the assumption that the individual risks $Y_i$ are uncorrelated and identically distributed
random variables on the probability space $(\Omega, \mathcal{F}, P)$, with finite mean $\mu = E[Y_i]$.
Intuitively, this means that the total claim amount becomes more predictable with increasing
portfolio size $n$, and we can therefore calculate the insurance premium more accurately for
large portfolios (Wüthrich, 2014). This justifies why credence is attached to the amount of
experience data available at any point in time under the framework of credibility theory. The
weak law of large numbers is therefore considered a theoretical cornerstone of insurance.
By the Chebyshev inequality, which provides the rate of convergence, and the central limit
theorem, which provides the asymptotic distribution: if the claim random variables $Y_i$ have
finite variance $\sigma^2$, the weak law of large numbers can be strengthened to the following
convergence in distribution,
$$\frac{\sum_{i=1}^{n} Y_i - n\mu}{\sqrt{n}\,\sigma} \Rightarrow N(0,1) \quad \text{as } n \to \infty, \qquad (2)$$
That is, in the limit we obtain a standard normal distribution, Wüthrich (2014). As the
portfolio size $n$ increases, the denominator grows more slowly than the sum, making the total
claim amount more predictable: the confidence bound narrows with increasing portfolio size
(Wüthrich, 2014). The most interesting aspect of insurance is therefore the number of policies
in a portfolio; volume takes care of the risk of randomness.
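To make this narrowing concrete, here is a minimal simulation sketch (our own illustration, not part of the thesis analysis); the exponential claim distribution and the mean claim size of 1000 are assumed purely for demonstration.

```python
# Sketch: the sample mean (1/n) * sum(Y_i) concentrates around mu as the
# portfolio grows. Exponential claims with assumed mean 1000 are illustrative.
import random

random.seed(1)
mu = 1000.0
for n in (10, 100, 1000, 10000):
    # 200 simulated portfolios of n i.i.d. claims each
    means = [sum(random.expovariate(1 / mu) for _ in range(n)) / n
             for _ in range(200)]
    print(f"n = {n:6d}: sample means spread over {max(means) - min(means):9.2f}")
```

The printed spread shrinks roughly like $1/\sqrt{n}$, which is the narrowing confidence bound described above.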
Intuitively, this is why we will shortly see that the credibility factor $z$, which measures the
importance placed on the individual experience, approaches 1 as the portfolio size $n$ increases;
that is, more emphasis is placed on the experience data. As the company enters a new line of
business and the volume goes down, the credibility factor $z$ approaches zero, shifting more
emphasis to the collateral data.
Generally, only a few of the policyholders suffer losses, and the losses are paid out of the
premiums collected from the pool of policyholders. Thus, the entire pool contributes to pay the
unfortunate few. In effect each policyholder exchanges an unknown loss for the payment of a
known premium. The insurer decides on the type of losses to cover under each insurance
contract. The insurance policy may define specific perils that are covered, or it may cover all
perils with certain named exclusions, for example, loss as a result of war or loss of life due to
suicide, Anderson & Brown (2005). The number of losses that occur within a specified period is
a random variable known as the loss frequency, while the amount paid by the insurer for such
losses is the claim severity.
This research seeks to predict claim frequency, claim severity, aggregate loss and pure premium
based on the framework of credibility theory. That framework combines the experience data of
an insurance firm with data collected from a similar industry, with an appropriate weight placed
on each, to update the future expected losses. In other words, it places weight on individual risk
experience and on class risk experience, making the premium rate a weighted average. The
weight shows how much importance is placed on the risk or on the collateral data, depending on
the volume of data available at the company level or the volume of the collateral data in use.
Thus $z$ reflects the amount of data available. The risk group is covered over a period of time,
say one year, upon payment of premium. The premium is based partially on a rate specified in
the manual, called the manual rate, and partially on the specific risk characteristics of the group,
Dean (2005). Based on the recent claim experience of the risk group, the premium for the next
period is revised, and the revised prediction determines the insurance premium of the risk group
for the next period.
1.2 Background
Ratemaking is the determination of what rates or premiums to charge for an insurance product.
This has been a major challenge in managing insurance companies. It involves calculating a
premium adequate to cover losses and expenses plus a margin for unanticipated claim payments
(Anderson & Brown, 2005).
In Ghana the insurance industry has been one of the growing industries over the past decades,
partly through the banking sector under bancassurance partnership agreements (National
Insurance Commission, 2011). This is obvious as almost all the banks offer some sort of
insurance package alongside their traditional banking activities. In a bid to make insurance
products attractive, a lot of modifications have been put in place, which makes the computation
of premiums a challenging task demanding more hands of actuaries to accomplish.
The insurance industry is a complicated entity that needs experts to manage it: providing
reliable risk models that may be able to predict catastrophic risk, holding adequate financial
reserves that could meet any future losses, calculating the appropriate risk for any insured, and
developing new products to suit the needs of the people and their culture. It is the duty of the
actuary to advise on how the products are managed, how much deductible or what policy limit
should be imposed on a policy, and whether there should be a coinsurance factor or a need for
reinsurance. These are major variables that keep an insurance firm solvent and profitable.
Insurance is the transfer of a risk that may or may not occur in the case of property and casualty
insurance but is a sure event in most life insurance policies; there should therefore be adequate
reserves to pay any unforeseen contingencies that may be the subject matter of an insurance
contract. This paper seeks to provide one of the alternative ways that the insurance industry
uses to compute the claims frequency and the claims reserves necessary for any future losses.
1.3 Problem statement
There are many problems associated with the practice of pricing risk in insurance markets. The
most obvious is the availability of data, and the restrictions on data stem from several reasons,
including the following:
• Poor documentation of losses limits the capacity of the experience data available (Biener, 2011).
• Delays in processing insurance policies often distort the data.
• Release of data for analysis is accompanied by a lot of bureaucratic processes.
These factors mostly affect insurers in that they are required to add a high risk loading for
uncertainty in the estimation of expected losses, Biener (2011). As a result, the pure premium is
higher in micro and emerging insurance markets than in regular insurance markets, making
insurance more expensive and thus less attractive to the low-income population, Biener (2011).
The ability to compute insurance premiums that are adequately sufficient, reasonable and fair is
a major challenge to insurance companies. Though an insurance contract, in the view of the
insured, is the transfer of an unknown risk for a known premium, he or she is only willing to pay
a certain amount for the risk; beyond this amount he or she avoids the contract. There is
therefore the need to know how much gross premium to charge to reflect expected utility, but
just enough to cover commission, expenses and any anticipated profit, Anderson & Brown (2005).
It is in view of this that this project seeks to provide one of the means of computing the rates for
insurance products.
1.4 Objective of the study
One of the fundamental challenges confronting the insurance industry in providing insurance
products is pricing risk, Biener (2011). This paper therefore seeks:
• to investigate one of the quantitative techniques that would enable the computation of a risk
premium that is fair to policyholders yet adequate for insurance firms;
• to provide a basic understanding of one of the conventional techniques in use in insurance
markets today to compute the aggregate loss.
1.5 Research questions
The research work seeks to address the following fundamental questions that frequently face
the insurance industry:
• What fair and equitable premium should be charged to each policyholder?
• How do we predict total claim frequency based on company experience?
• How much loading should be added to make up for expenses and profitability?
These questions are answered through the knowledge of probability, statistics, mathematics and
finance. There are different approaches to computing premiums, and this paper considers
credibility theory as one of the quantitative tools for addressing these problems.
1.6 Significance of study
After the completion of this piece of work it should be clear to insurance firms how to use their
available experience data, together with data collected from a similar industry, to predict the
frequency of their claims for the next period. Management of insurance firms would know how
to compute adequately the premium to charge each policyholder and how much to reserve to
meet any future losses. The state will benefit as fairness in the premiums charged to
policyholders would enable expansion in the insurance industry: more people would buy
insurance products, making more funds available to businesses.
1.7 Scope of the Study
This study is based on experience data from an insurance firm whose name we have decided to
keep anonymous. The study lays emphasis on determining the risk premium through the
framework of the Bühlmann-Straub credibility approach, based on the historical experience data
of the company over a period of eight years. The firm had been in existence for about ten years,
but poor documentation distorted most of the earlier data, forcing the use of only the most
recent eight years of data. The data set is composed of twelve different risk groups in the
portfolio of the firm over the period under review. The research uses five years of data to
formulate a model to predict the next three years, and the results are compared with the
observed data for the same period. A chi-square test is then done to check whether the model
fits the data set. An additional Cedi-equivalence test is carried out to check how much the firm
would have gained or lost if the model had been used over those three years.
1.8 Outline of the Study
This research work is organized as follows. Chapter one gives a brief description of the research
work, while chapter two lays emphasis on the findings of different authors whose ideas relate to
the topic under study. Chapter three focuses on the methodology, reviewing the mathematical
and statistical tools that are employed and important to the analysis of the data gathered.
Chapter four focuses on data analysis and a summary of the results. Chapter five, the final
chapter, provides findings, conclusions and recommendations, and is followed closely by
references and appendices.
2 CHAPTER TWO
2.1 Literature review
Credibility theory is a set of quantitative tools that many actuaries use to estimate claim
frequencies and premiums based on the experience data of a risk or group of risks. It is a branch
of insurance mathematics that explores model-based principles for constructing formulas and
covers, more broadly, linear estimation and prediction in latent variable models (Norberg, 2006).
Mowbray (1914) used limited fluctuation credibility theory in his work on workers'
compensation insurance. Whitney (1918) showed that a credibility estimate could be a weighted
average of two known quantities, namely an individual and a class estimate of the individual
risk premium. Whitney defined $U$ as the expected claims expense per unit of risk exposed for
any individual risk that forms part of a portfolio of similar risks, and proposed that the premium
rate should be the weighted average of the individual risk experience and the class risk
experience:
$$\bar{U} = Z\hat{U} + (1 - Z)\mu \qquad (3)$$
where $\hat{U}$ is the observed mean claim amount per unit of risk exposed for the individual
and $\mu$ is the overall mean in the portfolio. The weight $Z$ is the credibility factor, which
measures how much importance or credence is given to the individual experience. Whitney
described the risk premium as a random variable which is a function $U(\Theta)$ of a random
element that represents the unobserved characteristics of the individual risk. We treat $\theta$
as the realization of a random variable $\Theta$ whose distribution we call the prior
distribution. The randomness of $\Theta$ reflects that the individual risks forming the portfolio
are similar but not necessarily identical, and the distribution of $\Theta$ thus describes the
variation of the individual risk characteristics in the entire portfolio. Credibility theory has two
major areas, namely limited fluctuation credibility and greatest accuracy credibility.
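As a toy numeric illustration of equation (3) (the figures here are invented for illustration, not drawn from the thesis data), the credibility premium moves from the class mean toward the individual mean as $Z$ grows:

```python
# Whitney's weighted average (3), with invented figures for illustration only.
U_hat = 450.0   # individual observed mean claim per unit of exposure
mu = 600.0      # overall portfolio mean
for Z in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"Z = {Z:4.2f}: credibility premium U_bar = {Z * U_hat + (1 - Z) * mu:6.1f}")
```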
2.2 Limited Fluctuation Credibility
Mowbray (1914) used limited fluctuation credibility theory while working on workers'
compensation insurance. In his work, Mowbray suggested how to determine the total amount of
individual risk exposure needed for $\hat{U}$ to be a fully reliable estimate of $U$. Mowbray
worked with annual claim amounts $X_1, X_2, X_3, \ldots, X_N$ that are assumed to be i.i.d.
(independent and identically distributed) draws from a probability distribution with density
$f(x\,|\,\theta)$, mean $U(\theta)$ and variance $v^2(\theta)$. The parameter $\theta$ was
viewed as non-random. For the estimator
$$\hat{U} = \frac{1}{N}\sum_{j=1}^{N} X_j,$$
Mowbray wanted to know how many observations are needed so that, for some given $k$ and
$\alpha$,
$$P\left[\,|\hat{U} - U(\theta)| \le kU(\theta)\,\right] \ge 1 - \alpha,$$
Norberg (2006), where $k$ is the precision parameter within which the estimator should be of
the mean. Using the normal approximation, $\hat{U} \sim N\!\left(U(\theta),\ \tfrac{v(\theta)}{\sqrt{N}}\right)$,
this gives $kU(\theta) \ge z_{1-\alpha/2}\,\tfrac{v(\theta)}{\sqrt{N}}$. With the estimate
$\hat{v}^2 = \tfrac{1}{N-1}\sum_{j=1}^{N}(X_j - \hat{U})^2$, and using both estimators
$\hat{U}$ and $\hat{v}^2$, we obtain
$$n \ge \frac{z_{1-\alpha/2}^2\,\hat{v}^2}{k^2\,\hat{U}^2} \qquad (4)$$
as the condition for full credibility of $\hat{U}$, that is, for setting $Z = 1$ in equation (3)
above. The question then is how to choose $Z$ if $n$ does not satisfy equation (4). Longley-Cook
(1962) provided a modern mathematical derivation of the limited fluctuation approach to
credibility theory.
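As a hedged worked example of condition (4), with the common textbook tolerances $k = 0.05$ and $\alpha = 0.05$ and invented sample moments (none of these figures come from the thesis data):

```python
# Full-credibility standard (4): n >= z^2 * v^2 / (k^2 * U^2).
z_alpha = 1.96                 # z_{1 - alpha/2} for alpha = 0.05
k = 0.05                       # require U_hat within 5% of U(theta)
U_hat, v_hat = 1000.0, 500.0   # invented sample mean and standard deviation
n_full = (z_alpha ** 2 * v_hat ** 2) / (k ** 2 * U_hat ** 2)
print(f"full credibility requires n >= {n_full:.0f}")   # about 384 observations
```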
One major advantage of the limited fluctuation approach is its simplicity of use. However, a
number of researchers raised questions about it. Bühlmann did not agree with the mathematical
reasoning behind the limited fluctuation approach, as reported in Herzog (2008), and commented
on statistical grounds that prior data are ignored in the approach, Norberg (2006). Bühlmann
argued that the derivation was performed using a confidence interval, and asked why a
confidence interval, which by definition includes the true value with a probability of less than 1,
should give full credibility. There were also concerns about the credibility factor $(1 - Z)$
given to the prior data $\mu$: accuracy is trusted to be placed on $\mu$, yet all weight is given
to the observed data $\hat{U}$ when there is assumed to be enough information for full
credibility. In their lecture notes, Ohlsson & Johansson (2006b) introduce credibility theory
through simple multiplicative models. A recent work by Englund et al. (2009) makes use of a
multivariate generalization of the recursive credibility estimates of Sundt (1981) and models the
risk parameters as an autoregressive process. Jewell (1974) showed that if the likelihood is of
the exponential family and the prior is conjugate, the Bayesian premium coincides with the
credibility premium.
2.3 Bayesian and the Bühlmann Approach
Whitney (1918), as reported in Herzog (2008), stated that the credibility factor $Z$ needed to be
of the form
$$Z = \frac{n}{n + k},$$
where $n$ is the exposure period, or the number of policy years, and $k$ is a constant of
interest to be determined. Whitney suggested that $k$ could best be determined through the
inverse probability of Bayes' theorem. Moreover, the Bühlmann approach to credibility
estimation turns out to give the best linear approximation to the corresponding Bayes estimates.
It relates the prior data directly to the mean loss from the additional data in a linear model: the
pure premium is derived as a linear combination of the prior mean loss and the mean loss from
the additional, or collateral, data, with an appropriate weight attached to each of the data set
means. More detailed work on credibility theory is shown in the works of Sundt (1986),
Halliwell (1996), Grieg (1999), Bühlmann (2005), and Gisler (2005). The empirical Bayes
credibility approach is considered in this piece of work; it is based on the presumption that the
sample mean comes from a distribution chosen from a set of distributions. However, many
credibility situations are not characterized by the process of first choosing a distribution
randomly and then sampling from that distribution.
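A small sketch of Jewell's observation under one concrete conjugate pair (the Poisson-gamma choice and all figures below are our own illustration): the Bayesian premium and the credibility premium coincide exactly.

```python
# Poisson claim counts with a conjugate Gamma(alpha, beta) prior on the rate:
# the posterior mean equals the credibility premium with Z = n / (n + beta).
alpha, beta = 3.0, 2.0        # assumed prior; the prior mean is alpha / beta
claims = [2, 0, 3, 1, 2]      # assumed observed annual claim counts
n, total = len(claims), sum(claims)

bayes_premium = (alpha + total) / (beta + n)   # posterior mean of the rate
Z = n / (n + beta)                             # Buhlmann credibility factor
cred_premium = Z * (total / n) + (1 - Z) * (alpha / beta)
print(bayes_premium, cred_premium)             # both equal 11/7 = 1.5714...
```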
3 CHAPTER THREE
3.1 Methodology
In the general framework of credibility theory, actuaries use a credibility factor $z$ that lies in
the interval $[0, 1]$. The factor $z$ is the weight placed on the experience data of the firm (the
observation) and $1 - z$ is the weight placed on the other information (the collateral data).
3.1.1 Presentation of the standard credibility formula
Estimate = Z * Experience data + (1 - Z) * Other information,
with $0 \le z \le 1$; $z$ varies with the size of the experience data and approaches 1 as the
volume of the experience data becomes very large and is unlikely to change with time.
Let $X_1, X_2, X_3, \ldots, X_N$ be the claims data for a particular policy of an insurance firm;
we can estimate the pure premium as $\bar{X}$, where $X_i$ denotes the $i$-th claim and $N$ is
the claims frequency. Let
$$S = \sum_{i=1}^{N} X_i$$
be the aggregate loss, where $X_1, X_2, X_3, \ldots, X_N$ model the individual claim sizes and
$N$ counts all claims in one fixed period. The $X_i$ and $N$ are random variables that together
describe the total claim amount, and hence $S$ is also a random variable.
At this point we should note the underlying assumptions that govern the distributions of the
$X_i$ and $N$:
1. $N$ is a discrete random variable taking only non-negative integer values.
2. $X_1, X_2, \ldots \sim G$ i.i.d., with $G(0) = 0$.
3. $N$ and $(X_1, X_2, \ldots)$ are independent.
3.1.2 Interpretation of assumptions
The first assumption says that $N$ takes only non-negative integer values and that the event
$\{N = 0\}$ means that no claims occur, which gives a total claim of $S = 0$.
The second assumption says that the individual claims $X_i$ do not affect each other; that is, if
we face a large claim $X_1$, this gives no information about the remaining claims $X_i$,
$i \ge 2$, and the claims are homogeneous in the sense that all have the same marginal
distribution $G$, with
$$P[X_i \le 0] = G(0) = 0,$$
that is, all individual claim sizes $X_i$ are strictly positive.
The final assumption says that the individual claim sizes $X_i$ are not affected by the number
of claims $N$; that is, observing many claims carries no information about whether these claims
are of smaller or larger size (Wüthrich, 2014).
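The following is a minimal simulation sketch of the aggregate loss $S$ under assumptions 1 to 3; the Poisson frequency and lognormal severity below are illustrative modelling choices of ours, not distributions fitted anywhere in this thesis.

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's method for one Poisson(lam) draw (assumption 1: N >= 0, integer)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def aggregate_loss():
    n = poisson(20)   # claim count N, drawn independently of the claim sizes
    # assumption 2: i.i.d. strictly positive claim sizes (lognormal here)
    return sum(random.lognormvariate(7.0, 1.0) for _ in range(n))

samples = [aggregate_loss() for _ in range(1000)]
print("simulated mean of S:", round(sum(samples) / len(samples), 2))
```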
Given the above assumptions, although $N$ and the $X_i$ are independent, they are conditioned
on the risk, and we denote this risk parameter by $\Theta$, with $\theta$ being the realization
for a particular insured with risk parameter $\Theta$. An observation of either $N$ or $X_i$ is
then modeled by the conditional distribution $f_{X|\Theta}(x\,|\,\theta)$, given that
$\Theta = \theta$.
Under the above framework, the update for the next period $X_{N+1}$ is given as
$$U_{N+1} = Z\bar{X} + (1 - Z)\mu,$$
where $Z$ is the credibility factor assigned to the experience data, $\mu$ is the unconditional
mean, and
$$Z = \frac{N}{N + k},$$
with $k$ the credibility parameter of the model.
Recall that the $X_i$ are random variables considered within a set of portfolios; therefore, to
make any meaningful prediction for the next period we have to look out for the possible sources
of variation. These are the variation between risk groups and the variation within risk groups
due to the randomness of the $X_i$'s.
By the tower property we can define the unconditional mean as
$$E[X] = E\big[E[X \mid \Theta]\big],$$
where $\sigma(\Theta)$ is a sub-$\sigma$-algebra of $\mathcal{F}$ on the probability space
$(\Omega, \mathcal{F}, P)$ and $X$ is an $\mathcal{F}$-measurable, integrable random variable.
The unconditional variance is defined as
$$\operatorname{Var}(X) = E\big[\operatorname{Var}(X \mid \Theta)\big] + \operatorname{Var}\big(E[X \mid \Theta]\big).$$
Note again that the total variability is due to the variability in the risk parameter $\Theta$ and
the variability in $X$ conditioned on $\Theta$. Based on the risk parameter $\Theta$,
$E[X \mid \Theta]$ is known as the hypothetical mean and $\operatorname{Var}(X \mid \Theta)$
is known as the process variance, the variance of any given risk group, Dean (2005).
It is clear that the unconditional variance is the sum of the expected process variance and the
variance of the hypothetical means. This brings us to a very important ratio,
$$k = \frac{EPV}{VHM},$$
and this $k$ appears in the computation of $Z = \frac{N}{N+k}$ above. Intuitively, in any
portfolio of homogeneous risks the variability in the hypothetical means will be extremely
small, making $k$ as large as possible and $Z$ approach zero:
$$\lim_{k \to \infty} \frac{N}{N + k} = 0.$$
Conversely, if the portfolio is formed of heterogeneous risk groups of policies, the variability in
the hypothetical means becomes very large, making $k$ as small as possible and $Z$ approach
one. This conforms to the fact that $E[X] = \mu_X$ becomes the pure risk premium in a
portfolio of homogeneous risk groups.
What if the portfolio consists of heterogeneous risk groups and/or the data are collected over
different periods with different exposures? This brings us to the Bühlmann-Straub credibility
theory, also known as empirical Bayes credibility, Dean (2005).
3.2 EMPIRICAL BAYES METHOD
Under this approach the assumption is that the risk groups that form the portfolio are not
identically distributed.
3.2.1 General presentation of the problem.
Let 𝑋𝑖𝑗 denotes the loss per unit of exposure and 𝑚 𝑖𝑗 denotes the amount of exposure. The 𝑖
denotes the 𝑖 𝑡ℎ
risk group, while, 𝑖 = 1, … , 𝑟, where 𝑟 > 1.The 𝑗/𝑖 denotes the 𝑗 𝑡ℎ
loss
observation in the 𝑖 𝑡ℎ
group, while 𝑗 = 1, … , 𝑛𝑖 where 𝑛𝑖 > 1 for 𝑖 = 1, …, 𝑟.The 𝑗 indexes an
individual within the risk group or a period of the risk group. Thus, for the 𝑖 𝑡ℎ
risk group, we
16
have loss observations of 𝑛𝑖 individuals or periods. We assume the losses, 𝑋𝑖𝑗 are independently
distributed. The risk parameter of the 𝑖 𝑡ℎ
group is denoted by 𝜃𝑖 which is a realization of the
random variable, Θ𝑖. We again assume that Θ𝑖 are independently and identically distributed as the
parameter, Θ.
We assume 𝑋1, 𝑋2, 𝑋3, . . . , 𝑋 𝑁 are independent and condition on Θ with common mean and
variance as;
𝐸(𝑋𝑖𝑗/Θ = 𝜃𝑖) = 𝜇 𝑥(𝜃𝑖) , for I = 1 . . . r, and j = 1 . . . 𝑛𝑖
𝑉𝑎𝑟(𝑋𝑖𝑗/Θ = 𝜃𝑖) =
𝜎 𝑥
2
(θ 𝑖)
𝑚𝑖𝑗
, for I = 1 . . . r, and j = 1 . . . 𝑛𝑖
𝑚 𝑖𝑗 is measuring the exposure, that is it could be the number of months that the policy had been
in force or the number of individuals in the group or the amount of premium income for the past
j years, Anderson & Brown (2005).
We write expectation of the process variance as
𝐸𝑃𝑉 = 𝐸[ 𝜎 𝑋
2(Θ𝑖)] = 𝐸[ 𝜎 𝑋
2(Θ)]
𝑉𝐻𝑀 = 𝑉𝑎𝑟[ 𝜇 𝑋(Θ𝑖)] = 𝑉𝑎𝑟[ 𝜇 𝑋(Θ)]
3.2.2 The unbiased estimators of EPV and VHM
Based on our definition of the problem above,
$$\widehat{EPV} = \frac{\sum_{i=1}^{r} \sum_{j=1}^{n_i} m_{ij}\,(X_{ij} - \bar{X}_i)^2}{\sum_{i=1}^{r} (n_i - 1)}.$$
The unbiased estimator for the variance of the hypothetical means of the risk groups is
$$\widehat{VHM} = \frac{\left[\sum_{i=1}^{r} m_i\,(\bar{X}_i - \bar{X})^2\right] - (r - 1)\,\widehat{EPV}}{m - \frac{1}{m}\sum_{i=1}^{r} m_i^2},$$
where $m_i$ is the total exposure for the $i$-th risk group, defined as
$$m_i = \sum_{j=1}^{n_i} m_{ij}, \quad \text{for } i = 1, \ldots, r,$$
while the total exposure over all risk groups is defined as
$$m = \sum_{i=1}^{r} m_i.$$
The exposure-weighted mean of the $i$-th risk group is
$$\bar{X}_i = \frac{1}{m_i}\sum_{j=1}^{n_i} m_{ij} X_{ij}, \quad \text{for } i = 1, \ldots, r,$$
and the overall weighted mean is
$$\bar{X} = \frac{1}{m}\sum_{i=1}^{r} m_i \bar{X}_i.$$
We can now revisit our earlier definition of $Z$, $Z = \frac{N}{N+k}$. Under the description of
the problem,
$$Z_i = \frac{m_i}{m_i + k} \quad \text{for each risk group, where } k = \frac{EPV}{VHM}.$$
The update for the next period therefore will be $Z_i \bar{X}_i + (1 - Z_i)\bar{X}$, Anderson
& Brown (2005).
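These estimators translate directly into code. The following is a minimal sketch of the Bühlmann-Straub calculation of this section (the function name and the two-group toy data are our own illustration; the thesis's actual figures are worked through in chapter 4):

```python
def buhlmann_straub(X, M):
    """X[i][j]: loss per unit exposure; M[i][j]: exposure of group i, period j."""
    r = len(X)
    m_i = [sum(row) for row in M]                       # total exposure per group
    m = sum(m_i)                                        # overall exposure
    xbar_i = [sum(mij * xij for mij, xij in zip(M[i], X[i])) / m_i[i]
              for i in range(r)]                        # exposure-weighted means
    xbar = sum(mi * xb for mi, xb in zip(m_i, xbar_i)) / m

    epv = (sum(M[i][j] * (X[i][j] - xbar_i[i]) ** 2
               for i in range(r) for j in range(len(X[i])))
           / sum(len(X[i]) - 1 for i in range(r)))      # expected process variance
    vhm = ((sum(mi * (xb - xbar) ** 2 for mi, xb in zip(m_i, xbar_i))
            - (r - 1) * epv)
           / (m - sum(mi ** 2 for mi in m_i) / m))      # variance of hyp. means
    k = epv / vhm
    Z = [mi / (mi + k) for mi in m_i]                   # credibility factors
    updates = [z * xb + (1 - z) * xbar for z, xb in zip(Z, xbar_i)]
    return k, Z, updates

# toy two-group example (invented numbers, for shape only)
k, Z, U = buhlmann_straub(X=[[2.0, 3.0], [5.0, 4.0]], M=[[10.0, 12.0], [8.0, 9.0]])
```

In practice one would also guard against a non-positive VHM estimate, in which case the credibility factor is conventionally set to zero.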
4 CHAPTER FOUR
4.1 DATA
The data were collected by a small insurance company in Ghana over a period of eight years
and have been grouped according to their respective risk groups for analysis. Because of the
short period of data available, we use five years of data to formulate the model; the model is
then used to forecast three years, and the forecasts are compared with the remaining three
years of observed values. A chi-square test is then done to check the validity of the model,
confirmed by testing the model in terms of Cedi equivalence.
4.2 PRESENTATION AND ANALYSIS OF DATA
Under the Bühlmann-Straub model each risk group can have a different number of exposures,
which enables us to monitor each risk group over different time periods, changing the
calculations of EPV and VHM (Anderson & Brown, 2005). The data are presented in the format
shown in Table 4.1: amounts of claims and their corresponding exposures for each period over
each risk group. $X_{ij}$ and $m_{ij}$ are the claim and the exposure respectively for the
$i$-th risk group at the $j$-th period. Various parameters such as the sample means and the
sample variances are computed as presented in Table 4.2.
$\bar{X}_i$ is the unbiased estimator of each risk group mean and $\bar{v}_i$ is the unbiased
estimator of each risk group process variance. We now calculate the unbiased estimator of the
expected value of the process variance (EPV) of the group as
$$\widehat{EPV} = \frac{\sum_{i=1}^{r} \sum_{j=1}^{n_i} m_{ij}\,(X_{ij} - \bar{X}_i)^2}{\sum_{i=1}^{r} (n_i - 1)}.$$
Table 4.1: Claims $X_{ij}$ and exposures $m_{ij}$ over the risk groups; row $i$ covers the
periods $j = 1, \ldots, N_i$:
Group 1: $X_{11}, X_{12}, \ldots, X_{1N_1}$ with exposures $m_{11}, m_{12}, \ldots, m_{1N_1}$
Group 2: $X_{21}, X_{22}, \ldots, X_{2N_2}$ with exposures $m_{21}, m_{22}, \ldots, m_{2N_2}$
...
Group r: $X_{r1}, X_{r2}, \ldots, X_{rN_r}$ with exposures $m_{r1}, m_{r2}, \ldots, m_{rN_r}$
We now have the platform to present estimators of the risk means and variances for each group.
The mean is assumed in the Bühlmann-Straub model to be constant for each risk group over
time.
Table 4.2: Sample means and sample process variances by risk group:
Group 1: $\bar{X}_1 = \frac{1}{N}\sum_{j=1}^{N} x_{1j}$, $\quad \bar{v}_1 = \frac{1}{N-1}\sum_{j=1}^{N}(X_{1j} - \bar{X}_1)^2$
Group 2: $\bar{X}_2 = \frac{1}{N}\sum_{j=1}^{N} x_{2j}$, $\quad \bar{v}_2 = \frac{1}{N-1}\sum_{j=1}^{N}(X_{2j} - \bar{X}_2)^2$
...
Group r: $\bar{X}_r = \frac{1}{N}\sum_{j=1}^{N} x_{rj}$, $\quad \bar{v}_r = \frac{1}{N-1}\sum_{j=1}^{N}(X_{rj} - \bar{X}_r)^2$
Table 4.3: Total exposures, exposure-weighted means and variance estimators by risk group,
with per-group quantities
$$m_i = \sum_{j=1}^{N_i} m_{ij}, \qquad \bar{X}_i = \frac{1}{m_i}\sum_{j=1}^{N_i} m_{ij} X_{ij}, \qquad \hat{V}_i = \frac{1}{N_i - 1}\sum_{j=1}^{N_i} m_{ij}\,(X_{ij} - \bar{X}_i)^2,$$
and totals
$$m = \sum_{i=1}^{r} m_i, \qquad \bar{X} = \frac{1}{m}\sum_{i=1}^{r} m_i \bar{X}_i, \qquad \widehat{EPV} = \frac{1}{\sum_{i=1}^{r}(N_i - 1)}\sum_{i=1}^{r}(N_i - 1)\,\hat{V}_i.$$
4.2.1 Set up of the computation
$m_{ij}$ = exposure of the $i$-th risk group at the $j$-th period
$m_i$ = total exposure for the $i$-th risk group
$m$ = total exposure of all risk groups
$\bar{X}_i$ = exposure-weighted mean of the $i$-th risk group
$\bar{X}$ = the overall weighted mean
$EPV$ = the expected value of the process variance
$VHM$ = the variance of the hypothetical means
$k = EPV / VHM$ = credibility parameter of the model
$Z_i = \dfrac{m_i}{m_i + k}$
4.3 DETERMINATION OF PARAMETERS
$m_i = m_{i1} + m_{i2} + m_{i3} + m_{i4} + m_{i5}$:
m1 = 34 + 88 + 97 + 117 + 151 = 487
m2 = 27 + 76 + 132 + 116 + 140 = 491
m3 = 58 + 102 + 154 + 163 + 148 = 625
m4 = 64 + 93 + 131 + 140 + 194 = 622
m5 = 93 + 89 + 134 + 167 + 211 = 694
...
m12 = 78 + 146 + 149 + 138 + 186 = 697
$m = \sum_{i=1}^{r} m_i$, which gives $m = 487 + 491 + 625 + \ldots + 678 + 697 = 7776$.
$$\bar{X}_i = \frac{1}{m_i}\sum_{j=1}^{n_i} m_{ij} X_{ij}, \quad \text{for } i = 1, \ldots, r$$
$\bar{X}_1 = \frac{1}{487}(34 \times 72528 + 88 \times 153294 + \ldots + 151 \times 561244) = 345213$
$\bar{X}_2 = \frac{1}{491}(27 \times 37652 + 76 \times 129662 + \ldots + 140 \times 342263) = 403870$
...
$\bar{X}_{12} = \frac{1}{697}(78 \times 176208 + 146 \times 335822 + \ldots + 186 \times 915079) = 608585$
$$\bar{X} = \frac{1}{m}\sum_{i=1}^{r} m_i \bar{X}_i = \frac{1}{7776}(487 \times 345213 + 491 \times 403870 + \ldots + 697 \times 608585) = 795927$$
$$\widehat{EPV} = \frac{\sum_{i=1}^{r}\sum_{j=1}^{n_i} m_{ij}\,(X_{ij} - \bar{X}_i)^2}{\sum_{i=1}^{r}(n_i - 1)}$$
$\sum_j m_{1j}(X_{1j} - \bar{X}_1)^2 = 34(72528 - 345213)^2 + \ldots + 151(561244 - 345213)^2 = 1.52116 \times 10^{13}$
$\sum_j m_{2j}(X_{2j} - \bar{X}_2)^2 = 27(37652 - 403870)^2 + \ldots + 140(342263 - 403870)^2 = 1.61116 \times 10^{13}$
...
$\sum_j m_{12,j}(X_{12,j} - \bar{X}_{12})^2 = 78(176208 - 608585)^2 + \ldots + 186(915079 - 608585)^2 = 4.41216 \times 10^{15}$
Numerator of EPV: $\sum_{i=1}^{r}\sum_{j=1}^{n_i} m_{ij}(X_{ij} - \bar{X}_i)^2 = 1.20509 \times 10^{16}$
Denominator of EPV: $\sum_{i=1}^{r}(n_i - 1) = 48$
Therefore $EPV = 1.20509 \times 10^{16} / 48 = 2.510604167 \times 10^{14}$.
$$\widehat{VHM} = \frac{\left[\sum_{i=1}^{r} m_i(\bar{X}_i - \bar{X})^2\right] - (r - 1)\,\widehat{EPV}}{m - \frac{1}{m}\sum_{i=1}^{r} m_i^2}$$
$\sum_{i=1}^{r} m_i(\bar{X}_i - \bar{X})^2 = 487(345213 - 795927)^2 + \ldots + 697(608585 - 795927)^2 = 2.92338 \times 10^{15}$
Numerator of VHM $= 2.92338 \times 10^{15} - 11 \times 2.510604167 \times 10^{14} = 1.617154163 \times 10^{14}$
Denominator of VHM $= 7776 - \frac{1}{7776}(487^2 + 491^2 + \ldots + 697^2) = 7117.112912$
$VHM = 1.617154163 \times 10^{14} / 7117.112912 = 22724099798$
$$k = \frac{EPV}{VHM} = \frac{2.510604167 \times 10^{14}}{22724099798} = 11048.155$$
$$Z_i = \frac{m_i}{m_i + k}$$
Z1 = 487/(487 + 11048.155) = 0.04221876516
Z2 = 491/(491 + 11048.155) = 0.04255077603
Z3 = 625/(625 + 11048.155) = 0.05354165176
Z4 = 622/(622 + 11048.155) = 0.05329834951
Z5 = 694/(694 + 11048.155) = 0.059103290667
Z6 = 802/(802 + 11048.155) = 0.06767843965
Z7 = 693/(693 + 11048.155) = 0.05902315403
Z8 = 664/(664 + 11048.155) = 0.05669323878
Z9 = 638/(638 + 11048.155) = 0.05459451804
Z10 = 685/(685 + 11048.155) = 0.05838156915
Z11 = 678/(678 + 11048.155) = 0.05781946427
Z12 = 697/(697 + 11048.155) = 0.05934361871
The update for the next period therefore will be
$$U_i = Z_i \bar{X}_i + (1 - Z_i)\bar{X}$$
U1 = 0.04221876516 × 345213 + (1 − 0.04221876516) × 795927 = 776898.41
We do the same for the rest of the risk groups for the first period, as follows:
U2 = 0.04255077603 × 403870 + (1 − 0.04255077603) × 795927 = 779244.67
U3 = 0.05354165176 × 406299 + (1 − 0.05354165176) × 795927 = 775065.67
U4 = 0.05329834951 × 541635 + (1 − 0.05329834951) × 795927 = 782373.66
U5 = 0.059103290667 × 580375 + (1 − 0.059103290667) × 795927 = 783187.17
U6 = 0.06767843965 × 782893 + (1 − 0.06767843965) × 795927 = 795044.88
U7 = 0.05902315403 × 521916 + (1 − 0.05902315403) × 795927 = 779754.01
U8 = 0.05669323878 × 464957 + (1 − 0.05669323878) × 795927 = 777163.24
U9 = 0.05459451804 × 622617 + (1 − 0.05459451804) × 795927 = 786465.22
U10 = 0.05838156915 × 2576265 + (1 − 0.05838156915) × 795927 = 899865.92
U11 = 0.05781946427 × 1393275 + (1 − 0.05781946427) × 795927 = 830465.34
U12 = 0.05934361871 × 608585 + (1 − 0.05934361871) × 795927 = 784809.45
These are the results of the update of all risk groups in the first period (rounded):
U1 = 776898, U2 = 779245, U3 = 775066, U4 = 782374, U5 = 783187, U6 = 795045,
U7 = 779754, U8 = 777163, U9 = 786465, U10 = 899866, U11 = 830465, U12 = 784809
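As a quick arithmetic check of the first update (a sketch of ours, using only the figures stated above):

```python
Z1, xbar_1, xbar = 0.04221876516, 345213.0, 795927.0
U1 = Z1 * xbar_1 + (1 - Z1) * xbar
print(round(U1, 2))   # 776898.41, matching the value above
```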
These results are added to the previous observations and the procedure is repeated to find new
updates for the next two periods; the results are compared with the observed values for those
periods in Table 4.4 below.
4.4 Chi-square Goodness of Fit Test
The chi-square goodness-of-fit test is applied in order to determine whether there is a
significant difference between the observed and the predicted annual claims.
H0: There is no significant difference between the observed and the predicted annual claims.
H1: There is a significant difference between the observed and the predicted annual claims.
4.4.1 Test Statistics
$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i},$$
where the $O_i$ are the observed claims, the $E_i$ are the expected claims, and $\chi_i^2$ is
the chi-square statistic of the model for risk group $i$, with degrees of freedom
$d = (C - 1)(R - 1)$, $C$ being the number of columns and $R$ the number of rows; therefore
$d = (12 - 1)(5 - 1) = 44$.
$$\chi_1^2 = \frac{(777992 - 776898)^2}{776898} + \frac{(787632 - 784915)^2}{784915} + \frac{(789376 - 791556)^2}{791556} = 16.95$$
Table 4.4
RISK GROUP 1 RISK GROUP 2 RISK GROUP 3
OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED
777992 776898 780154 779245 771189 775066
787632 784915 789553 786218 785033 783858
789376 791556 789173 792033 795063 791154
RISK GROUP 4 RISK GROUP 5 RISK GROUP 6
OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED
781641 782374 779164 783187 1067813 795045
784476 787897 789116 788317 1365187 794920
789916 792627 788193 792767 1198562 795195
RISK GROUP 7 RISK GROUP 8 RISK GROUP 9
OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED
781134 779754 776935 777163 791649 786465
790641 786409 779867 784981 789671 790145
788154 792070 788981 791554 796615 793444
RISK GROUP 10 RISK GROUP 11 RISK GROUP 12
OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED
901194 899866 941781 830465 785553 784809
849134 853097 1007616 814574 790179 789216
820964 816511 1378450 802389 789649 793094
$$\chi_2^2 = \frac{(780154 - 779245)^2}{779245} + \frac{(789553 - 786218)^2}{786218} + \frac{(789173 - 792033)^2}{792033} = 25.53419837$$
$$\chi_3^2 = \frac{(771189 - 775066)^2}{775066} + \frac{(785033 - 783858)^2}{783858} + \frac{(795063 - 791154)^2}{791154} = 40.46858875$$
$$\chi_4^2 = \frac{(781641 - 782374)^2}{782374} + \frac{(784476 - 787897)^2}{787897} + \frac{(789916 - 792627)^2}{792627} = 24.81286973$$
$$\chi_5^2 = \frac{(779164 - 783187)^2}{783187} + \frac{(789116 - 788317)^2}{788317} + \frac{(788193 - 792767)^2}{792767} = 47.86523665$$
$$\chi_6^2 = \frac{(1067813 - 795045)^2}{795045} + \frac{(1365187 - 794920)^2}{794920} + \frac{(1198562 - 795195)^2}{795195} = 489794.8022$$
$$\chi_7^2 = \frac{(781134 - 779754)^2}{779754} + \frac{(790641 - 786409)^2}{786409} + \frac{(788154 - 792070)^2}{792070} = 44.57722693$$
$$\chi_8^2 = \frac{(776935 - 777163)^2}{777163} + \frac{(779867 - 784981)^2}{784981} + \frac{(788981 - 791554)^2}{791554} = 41.74732544$$
$$\chi_9^2 = \frac{(791649 - 786465)^2}{786465} + \frac{(789671 - 790145)^2}{790145} + \frac{(796615 - 793444)^2}{793444} = 47.12769467$$
$$\chi_{10}^2 = \frac{(901194 - 899866)^2}{899866} + \frac{(849134 - 853097)^2}{853097} + \frac{(820964 - 816511)^2}{816511} = 44.65495069$$
$$\chi_{11}^2 = \frac{(941781 - 830465)^2}{830465} + \frac{(1007616 - 814574)^2}{814574} + \frac{(1378450 - 802389)^2}{802389} = 433086.0535$$
$$\chi_{12}^2 = \frac{(785553 - 784809)^2}{784809} + \frac{(790179 - 789216)^2}{789216} + \frac{(789649 - 793094)^2}{793094} = 16.84457374$$
4.4.2 The critical region:
$$\chi^2_{44,\,0.05} = 55.8$$
4.4.3 Decision:
We fail to reject the null hypothesis $H_0$ at the significance level $\alpha = 0.05$ for risk
groups 1, 2, 3, 4, 5, 7, 8, 9, 10 and 12, as the computed chi-square values are less than the
critical value, and we conclude that the model is a good fit for their claims data. However, we
reject the null hypothesis $H_0$ at the significance level $\alpha = 0.05$ for risk groups 6 and
11, as the computed chi-square values are greater than the critical value, and conclude that the
model is not a good fit for their claims data.
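The following is a sketch of the test computation from Table 4.4; only risk groups 1, 6 and 11 are typed in here for brevity. Because of rounding in the intermediate tables, the recomputed statistics for groups 6 and 11 need not match the quoted figures exactly, but they remain far above the critical value.

```python
table_4_4 = {   # risk group: [(observed, expected), ...] over the three years
    1:  [(777992, 776898), (787632, 784915), (789376, 791556)],
    6:  [(1067813, 795045), (1365187, 794920), (1198562, 795195)],
    11: [(941781, 830465), (1007616, 814574), (1378450, 802389)],
}
critical = 55.8   # critical value quoted in section 4.4.2
for group, pairs in table_4_4.items():
    chi2 = sum((o - e) ** 2 / e for o, e in pairs)
    print(f"group {group:2d}: chi-square = {chi2:12.2f}"
          f" -> {'fits' if chi2 < critical else 'rejected'}")
```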
4.5 Testing the model in terms of Cedi equivalence
4.5.1 Test statistics
Pure premium per expected claims: PP/EXP (GHc) = EXPECTED / EXPOSURE
Pure premium per observed claims: PP/OBS (GHc) = OBSERVED / EXPOSURE
Difference: DIFF (GHc) = PP/EXP (GHc) − PP/OBS (GHc)
The model is tested in Cedi value by computing the pure premium based on the forecast values
and on the observed values; the results are compared to check for any shortfall in value over the
period under consideration. This test covers only the ten risk groups that fit the model. The
results are tabulated as follows:
Table 4.5
YEAR ONE
EXPECTED(GHc) OBSERVED(GHc) EXPOSURE PP/EXP.(GHc) PP/OBS.(GHc) DIFF.(GHc)
776898.44 777992 188 4132.44 4138.26 -5.82
779244.70 780154 154 5060.03 5065.94 -5.90
775065.71 771189 162 4784.36 4760.43 23.93
782373.67 781641 193 4053.75 4049.95 3.80
783187.18 779164 250 3132.75 3116.66 16.09
779754.02 781134 236 3304.04 3309.89 -5.85
777163.25 776935 229 3393.73 3392.73 1.00
786465.26 791649 287 2740.30 2758.36 -18.06
899865.93 901194 171 5262.37 5270.14 -7.77
784809.44 785553 264 2972.76 2975.58 -2.82
TOTAL -1.40
YEAR TWO
EXPECTED(GHc) OBSERVED(GHc) EXPOSURE PP/EXP.(GHc) PP/OBS.(GHc) DIFF.(GHc)
784914.85 787632 146 5376.13 5394.74 -18.61
786218.03 789553 150 5241.45 5263.69 -22.23
783858.28 785033 173 4530.97 4537.76 -6.79
787896.63 784476 187 4213.35 4195.06 18.29
788316.55 789116 238 3312.25 3315.61 -3.36
786409.46 790641 203 3873.94 3894.78 -20.85
784980.69 779867 158 4968.23 4935.87 32.37
790145.15 789671 165 4788.76 4785.88 2.87
853096.80 849134 189 4513.74 4492.77 20.97
789216.40 790179 281 2808.60 2812.02 -3.43
TOTAL -0.77
YEAR THREE
EXPECTED(GHc) OBSERVED(GHc) EXPOSURE PP/EXP.(GHc) PP/OBS.(GHc) DIFF.(GHc)
791555.80 789376 173 4575.47 4562.87 12.60
792032.68 789193 126 6285.97 6263.44 22.54
791154.01 795063 134 5904.13 5933.31 -29.17
792626.51 789916 116 6832.99 6809.62 23.37
792766.59 788193 201 3944.11 3921.36 22.75
792069.86 788154 280 2828.82 2814.84 13.99
791554.07 788981 281 2816.92 2807.76 9.16
793444.36 796615 168 4722.88 4741.76 -18.87
816510.92 820964 182 4486.32 4510.79 -24.47
793093.67 789649 291 2725.41 2713.57 11.84
TOTAL 43.73
4.5.2 Interpretation of results
We observe that in year one and year two the insurance firm makes losses of GHc 1.40 and
GHc 0.77 respectively, but makes a gain of GHc 43.73 in year three. These losses are in fact
negligible compared to the total premium collected over the period. This test helps confirm the
validity of the model. The values are the differences between the pure premium based on the
expected values and the pure premium based on the observed values over the same periods.
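The Cedi-equivalence arithmetic is easy to reproduce; the following sketch recomputes the first three rows of the year-one block of Table 4.5:

```python
year_one = [   # (expected GHc, observed GHc, exposure), from Table 4.5
    (776898.44, 777992, 188),
    (779244.70, 780154, 154),
    (775065.71, 771189, 162),
]
for expected, observed, exposure in year_one:
    pp_exp = expected / exposure    # pure premium from the forecast
    pp_obs = observed / exposure    # pure premium from the observations
    print(f"PP/EXP = {pp_exp:7.2f}  PP/OBS = {pp_obs:7.2f}"
          f"  DIFF = {pp_exp - pp_obs:6.2f}")
# prints 4132.44/4138.26/-5.82, 5060.03/5065.94/-5.90, 4784.36/4760.43/23.93
```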
5 DETERMINATION OF CLAIMS FREQUENCY
We now calculate the Bühlmann-Straub credibility predictions of the number of claims per
hundred policyholders for the twelve risk groups for the next period. The observed claim
frequencies and exposures are tabulated below.
Groups 1 to 4 (claims per 100 insured, with exposure in parentheses):
Period 1: 1.8 (0.03) | 1.8 (0.03) | 1.2 (0.06) | 1.3 (0.06)
Period 2: 1.9 (0.09) | 2.5 (0.08) | 2.4 (0.10) | 3.1 (0.09)
Period 3: 1.9 (0.10) | 2.5 (0.13) | 2.7 (0.15) | 2.3 (0.13)
Period 4: 2.5 (0.12) | 1.6 (0.12) | 1.7 (0.16) | 4.2 (0.14)
Period 5: 1.1 (0.15) | 1.7 (0.14) | 3.4 (0.15) | 1.6 (0.19)
Total exposure: 0.49 | 0.49 | 0.63 | 0.62
Groups 5 to 8 (claims per 100 insured, with exposure in parentheses):
Period 1: 2.0 (0.09) | 2.3 (0.08) | 1.6 (0.08) | 0.5 (0.06)
Period 2: 3.3 (0.09) | 1.7 (0.15) | 1.3 (0.12) | 2.0 (0.10)
Period 3: 4.2 (0.13) | 2.3 (0.17) | 2.7 (0.14) | 2.2 (0.14)
Period 4: 1.5 (0.17) | 3.2 (0.22) | 2.8 (0.17) | 1.8 (0.18)
Period 5: 2.3 (0.21) | 4.1 (0.18) | 1.69 (0.20) | 2.3 (0.18)
Total exposure: 0.69 | 0.80 | 0.69 | 0.66
Groups 9 to 12 (claims per 100 insured, with exposure in parentheses):
Period 1: 1.8 (0.09) | 1.4 (0.08) | 0.9 (0.09) | 1.1 (0.08)
Period 2: 2.2 (0.08) | 1.3 (0.15) | 1.2 (0.12) | 2.8 (0.15)
Period 3: 1.7 (0.14) | 2.8 (0.13) | 2.1 (0.12) | 1.2 (0.15)
Period 4: 2.4 (0.18) | 3.7 (0.16) | 1.9 (0.17) | 3.1 (0.14)
Period 5: 1.3 (0.15) | 2.6 (0.17) | 2.2 (0.17) | 2.5 (0.19)
Total exposure: 0.64 | 0.69 | 0.68 | 0.70
5.1.1 Compute the total exposures
m1 = 4.87, m2 = 4.91, m3 = 6.25, m4 = 6.22, m5 = 6.94, m6 = 8.02, m7 = 6.93, m8 = 6.64,
m9 = 6.38, m10 = 6.85, m11 = 6.78, m12 = 6.97
5.1.2 Total exposures of all risk groups
m = 4.87 + 4.91 + 6.25 + 6.22 + 6.94 + 8.02 + 6.93 + 6.64 + 6.38 + 6.85 + 6.78 + 6.97 = 77.76
5.1.3 The exposure-weighted means of the claim frequency are:
X̄1 = (1.8 × 0.34 + 1.9 × 0.88 + 1.9 × 0.97 + 2.5 × 1.17 + 1.1 × 1.51) / 4.87 = 1.789
X̄2 = (1.8 × 0.27 + 2.5 × 0.76 + 2.5 × 1.32 + 1.6 × 1.16 + 1.7 × 1.40) / 4.91 = 2.0208
X̄3 = (1.2 × 0.58 + 2.4 × 1.02 + 2.7 × 1.54 + 1.7 × 1.63 + 3.4 × 1.48) / 6.25 = 2.4168
X̄4 = (1.1 × 0.64 + 3.1 × 0.93 + 2.3 × 1.31 + 4.2 × 1.40 + 1.6 × 1.94) / 6.22 = 2.5055
X̄5 = (2.0 × 0.93 + 3.3 × 0.89 + 4.2 × 1.43 + 1.5 × 1.67 + 2.3 × 2.11) / 6.94 = 2.6169
X̄6 = (2.3 × 0.84 + 1.7 × 1.54 + 2.3 × 1.65 + 3.2 × 2.20 + 4.1 × 1.80) / 8.02 = 2.8385
X̄7 = (1.6 × 0.80 + 1.3 × 1.15 + 2.6 × 1.36 + 2.7 × 1.67 + 1.7 × 1.95) / 6.93 = 2.0397
X̄8 = (0.5 × 0.61 + 2.0 × 1.01 + 2.2 × 1.40 + 1.8 × 1.79 + 2.3 × 1.83) / 6.64 = 1.9331
X̄9 = (1.8 × 0.90 + 2.2 × 0.83 + 1.7 × 1.39 + 2.4 × 1.78 + 1.3 × 1.48) / 6.38 = 1.8817
X̄10 = (1.4 × 0.80 + 1.3 × 1.47 + 2.8 × 1.26 + 3.7 × 1.62 + 2.6 × 1.70) / 6.85 = 2.4778
X̄11 = (0.9 × 0.94 + 1.2 × 0.83 + 2.1 × 1.24 + 1.9 × 1.69 + 2.2 × 1.73) / 6.78 = 1.6907
X̄12 = (1.1 × 0.78 + 2.8 × 1.46 + 1.2 × 1.49 + 3.1 × 1.39 + 2.5 × 1.86) / 6.97 = 2.2515
5.1.4 Computing EPV
Numerator of EPV = 0.34(1.8 − 1.789)² + 0.88(1.9 − 1.789)² + 0.97(1.9 − 1.789)² +
1.17(2.5 − 1.789)² + 1.51(1.1 − 1.789)² + … + 0.78(1.1 − 2.2515)² + 1.46(2.8 − 2.2515)² +
1.49(1.2 − 2.2515)² + 1.86(2.5 − 2.2515)² = 39.94803206
EPV = 39.94803206 / (4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4) = 0.832250668
Overall mean:
X̄ = (1.789 × 4.87 + 2.0208 × 4.91 + … + 1.6907 × 6.78 + 2.2515 × 6.97) / 77.76 = 2.23368325
VHM = [4.87(1.762 − 2.23368)² + 4.91(2.0133 − 2.23368)² + … + 6.97(2.24637 − 2.23368)²
− 11 × 0.8322] / [77.76 − (1/77.76)(4.87² + 4.91² + … + 6.78² + 6.97²)] = 0.000704845
k = 0.83225 / 0.000704845 = 1180.757628
Z1 = 4.87/(4.87 + 1180.757628) = 0.004107529
Z2 = 4.91/(4.91 + 1180.757628) = 0.004141127
Z3 = 6.25/(6.25 + 1180.757628) = 0.005265341
Z4 = 6.22/(6.22 + 1180.757628) = 0.005240200
Z5 = 6.94/(6.94 + 1180.757628) = 0.005843238
Z6 = 8.02/(8.02 + 1180.757628) = 0.006746426
Z7 = 6.93/(6.93 + 1180.757628) = 0.005834868
Z8 = 6.64/(6.64 + 1180.757628) = 0.005592061
Z9 = 6.38/(6.38 + 1180.757628) = 0.005374272
Z10 = 6.85/(6.85 + 1180.757628) = 0.005767898
Z11 = 6.78/(6.78 + 1180.757628) = 0.005709293
Z12 = 6.97/(6.97 + 1180.757628) = 0.005868349
5.1.5 Bühlmann-Straub predicted claims frequency for each risk group
Cf1 = 0.004107529 × 1.789 + (1 − 0.004107529) × 2.23368325 = 2.231744543
Cf2 = 0.004141127 × 2.0208 + (1 − 0.004141127) × 2.23368325 = 2.232770562
Cf3 = 0.005265341 × 2.4168 + (1 − 0.005265341) × 2.23368325 = 2.234672633
...
Cf11 = 0.005709293 × 1.6907 + (1 − 0.005709293) × 2.23368325 = 2.230967558
Cf12 = 0.005868349 × 2.2515 + (1 − 0.005868349) × 2.23368325 = 2.233757715
5.1.6 Total claim frequency predicted based on the historical exposure is
C = 4.87 × 2.23174 + 4.91 × 2.23277 + ⋯ + 6.78 × 2.23097 + 6.97 × 2.23376 = 173.7014931
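The chapter-5 update is the same calculation applied to frequency data; the following sketch uses the parameters derived above (only three of the twelve groups are typed in, and the outputs agree with sections 5.1.5 and 5.1.6 up to rounding of the intermediate figures):

```python
k, xbar = 1180.757628, 2.23368325     # credibility parameter and overall mean
groups = {1: (4.87, 1.789), 2: (4.91, 2.0208), 12: (6.97, 2.2515)}  # m_i, Xbar_i
for g, (m_i, xbar_i) in groups.items():
    Z = m_i / (m_i + k)
    cf = Z * xbar_i + (1 - Z) * xbar  # predicted claims per 100 insured
    print(f"group {g:2d}: Z = {Z:.6f}, frequency = {cf:.6f},"
          f" expected claims = {m_i * cf:.2f}")
```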
6 CHAPTER FIVE
6.1 CONCLUSION AND RECOMMENDATIONS
6.1.1 Findings
The two test statistics show that the model fits the given data set except for risk groups 6 and
11, which have some obvious outliers. These two risk groups could be modeled using other
methods that take care of extreme and irregular claims.
The elements considered in insurance premium calculation are the pure risk premium, risk
margin, profit margin, sales commission to sales agents, administrative expenses, financial gain
on investments, and state tax, Wüthrich (2014). We computed the pure premium for each
policyholder in the risk groups under consideration, and the results are shown in Table 4.5
above. The pure premium due from each policyholder for the first year is as follows: risk group
1, GHc 4132.44; risk group 2, GHc 5060.03; risk group 3, GHc 4784.36; risk group 4, GHc
4053.75; risk group 5, GHc 3132.75; risk group 7, GHc 3304.04; risk group 8, GHc 3393.73;
risk group 9, GHc 2740.30; risk group 10, GHc 5262.37; risk group 12, GHc 2972.76. These
values are the pure premiums that should be charged to each policyholder to cover losses and
loss-related expenses; loading is the part of the premium necessary to cover sales expenses and
the profit margin.
On the other hand, the claim frequencies of the twelve risk groups for the next period, based on
the historical exposure, are as follows: risk group 1 should expect 10.87 claims, and risk groups
2 to 12 should expect 10.96, 13.97, 13.90, 15.52, 17.95, 15.47, 14.82, 14.24, 15.31, 15.13 and
15.57 claims respectively. Therefore the total expected claim frequency for all twelve risk
groups for the next period is 173.7 claims.
7 RECOMMENDATIONS
Actuarial pricing methodology generally consists of a collection of forecasting methods,
economic models, and trend analyses (Stein, 1995). The resulting factors, ratios, and averages
are used to generate rates that help promote the various financial, operational, and strategic
needs necessary for the insurance enterprise to remain solvent and competitive in business. To
achieve these goals the actuary has the responsibility of choosing the best forecasting method to
formulate the model that will provide for these needs.
The model employed above exhibits a good fit for most of the risk groups under consideration,
but it is the duty of the actuary to use informed judgment to determine other variables that could
easily change over time. The model needs constant review to meet the dynamism of the
insurance industry as a whole. The financial performance of an insurance product can be
vulnerable to a range of complex socio-economic, legal and operational forces.
However, most of these traditional ratemaking methods do not seem to fully capture these
dynamics. Moreover, much of the available data is such that simple and static methods cannot
model it for any meaningful predictions. The actuary needs to develop more robust models to
take care of extreme and irregular claims that do not fit these models and methods and
therefore do not give accurate predictions of the loss reserves. This could be done through the
extension of the conventional normal error distribution to the generalized-t distribution, which
encompasses several long-tailed distributions such as the Student-t and the exponential power
distributions. These distributions can be expressed as scale mixtures of uniform distributions,
which aids model implementation and the detection of outliers through the mixing parameters.
The concepts of dynamic ratemaking should be employed at all times to include product-related
variables, and should include product management ideas through evaluation of, and hypotheses
about, the range of product and environmental forces. This goes beyond the usual historical
averages for making projections and instead includes the study of the underlying distributions to
identify any meaningful patterns or unanticipated correlations, as well as continuous analysis of
the company's operational systems and expenses, Stein (1995).
7.1 Assumptions
The relationship of product cost does not vary under any combination of rating variables within
the time frame under consideration. This assumption oversimplifies by ignoring the interplay of
exposure issues and the complexity of valuing the claims profiles of the insured.
7.2 Limitations
• As a static method, it fails to model all the costs and systems associated with the sale of
insurance products. It focuses solely on historical data, which cannot reflect the exact image of
the current experience of the firm, since the range of exposures and risk environments to be
insured, the competitive insurance market, and the company's operations are all dynamic.
• No correlations are presupposed among the ratemaking variables, and this allows the use of
data whose level of detail permits only a one-dimensional rating plan analysis, enabling the use
of a constant rating factor throughout the ratemaking calculations (Stein, 1995).
APPENDICES
The following is the set of historical data used for the analysis
RISK GROUP ONE RISK GROUP TWO RISK GROUP THREE
CLAIMS NUMBERS CLAIMS NUMBERS CLAIMS NUMBERS
725283 34 37652 27 106228 58
153294 88 129662 76 251823 102
203667 97 561848 132 479907 154
407347 117 563351 116 444459 163
561244 151 342263 140 511740 148
777992 188 780143 154 771189 162
787632 146 789553 150 785033 173
789376 173 789193 126 795063 134
RISK GROUP FOUR RISK GROUP FIVE RISK GROUP SIX
CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE
213232 64 185515 93 137114 84
231136 93 316191 89 418161 153
472143 131 668315 134 6447337 165
619666 140 455434 167 1013470 220
789436 194 908885 211 1236720 180
781641 193 779164 250 1067813 200
784476 187 789116 238 1365187 263
789916 116 788193 201 1198562 221
RISK GROUP SEVEN RISK GROUP EIGHT RISK GROUP NINE
CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE
169106 80 230578 61 460661 90
211137 115 369545 101 140268 83
283162 136 379300 140 454233 139
350533 167 567780 179 979572 178
906305 236 560697 183 720446 148
781134 203 776935 229 791649 287
790641 203 779867 158 789671 165
788154 280 788981 281 796615 168
RISK GROUP TEN RISK GROUP ELEVEN RISK GROUP TWELVE
CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE CLAIMS (GHc) EXPOSURE
294765 80 194248 94 176208 78
399408 147 194069 118 335822 146
347689 126 4918118 124 637742 149
543466 162 564203 169 696964 138
9121159 170 1146151 173 915079 186
901194 171 941781 172 785553 264
849134 189 1007616 157 790179 281
820964 182 1378450 227 789649 291
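
For readers who wish to rerun the analysis, the following is a minimal Python sketch, not part
of the original thesis and with all variable names illustrative, of the Buhlmann-Straub
computation described in Chapter Four, applied to the first five years of each risk group as
transcribed from the tables above. Because the Chapter Four computations round intermediate
figures, the parameters produced here will agree with those reported in the text only
approximately.

import numpy as np

# First five years of claims (GHc) and exposures for risk groups one to
# twelve, transcribed from the appendix tables above.
claims = np.array([
    [725283, 153294, 203667, 407347, 561244],
    [ 37652, 129662, 561848, 563351, 342263],
    [106228, 251823, 479907, 444459, 511740],
    [213232, 231136, 472143, 619666, 789436],
    [185515, 316191, 668315, 455434, 908885],
    [137114, 418161, 6447337, 1013470, 1236720],
    [169106, 211137, 283162, 350533, 906305],
    [230578, 369545, 379300, 567780, 560697],
    [460661, 140268, 454233, 979572, 720446],
    [294765, 399408, 347689, 543466, 9121159],
    [194248, 194069, 4918118, 564203, 1146151],
    [176208, 335822, 637742, 696964, 915079],
], dtype=float)
exposures = np.array([
    [34,  88,  97, 117, 151],
    [27,  76, 132, 116, 140],
    [58, 102, 154, 163, 148],
    [64,  93, 131, 140, 194],
    [93,  89, 134, 167, 211],
    [84, 153, 165, 220, 180],
    [80, 115, 136, 167, 236],
    [61, 101, 140, 179, 183],
    [90,  83, 139, 178, 148],
    [80, 147, 126, 162, 170],
    [94, 118, 124, 169, 173],
    [78, 146, 149, 138, 186],
], dtype=float)

r, n = claims.shape
m_i = exposures.sum(axis=1)               # total exposure per risk group
m = m_i.sum()                             # total exposure over all groups

xbar_i = (exposures * claims).sum(axis=1) / m_i   # exposure-weighted group means
xbar = (m_i * xbar_i).sum() / m                   # overall weighted mean

# Expected process variance and variance of the hypothetical means
epv = (exposures * (claims - xbar_i[:, None]) ** 2).sum() / (r * (n - 1))
vhm = (((m_i * (xbar_i - xbar) ** 2).sum() - (r - 1) * epv)
       / (m - (m_i ** 2).sum() / m))

if vhm <= 0:             # the VHM estimator can go negative in small samples;
    z = np.zeros(r)      # convention is then to give the groups no credibility
else:
    k = epv / vhm        # credibility parameter
    z = m_i / (m_i + k)  # credibility factor per group

update = z * xbar_i + (1 - z) * xbar      # Buhlmann-Straub next-period estimate
for i in range(r):
    print(f"group {i + 1:2d}: Z = {z[i]:.4f}, update = {update[i]:,.0f}")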

More Related Content

What's hot

WVU - IMC 636 American Red Cross Proposal
WVU - IMC 636 American Red Cross ProposalWVU - IMC 636 American Red Cross Proposal
WVU - IMC 636 American Red Cross ProposalJ Elise Soto
 
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...Martin Otundo
 
People first places and streets, chloe
People first places and streets, chloePeople first places and streets, chloe
People first places and streets, chloeChloé Ava Rodrigues
 
TWUtranscripts
TWUtranscriptsTWUtranscripts
TWUtranscriptsKen Morgan
 
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...PJ Tremblay
 
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEW
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEWCAPACITY BUILDING IN THE PRINTING INDUSRTY NEW
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEWRichard Odei-Nkansah
 
FMH Crisis Management Plan
FMH Crisis Management PlanFMH Crisis Management Plan
FMH Crisis Management PlanChelsea Braun
 

What's hot (8)

WVU - IMC 636 American Red Cross Proposal
WVU - IMC 636 American Red Cross ProposalWVU - IMC 636 American Red Cross Proposal
WVU - IMC 636 American Red Cross Proposal
 
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...
Martin otundo research paperDETERMINANTS OF IMPLEMENTATION OF CASH TRANSFER P...
 
People first places and streets, chloe
People first places and streets, chloePeople first places and streets, chloe
People first places and streets, chloe
 
TWUtranscripts
TWUtranscriptsTWUtranscripts
TWUtranscripts
 
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...
Marine Corps Moneyball - Operationalizing Personnel Analytics to Manage the F...
 
MAX
MAXMAX
MAX
 
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEW
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEWCAPACITY BUILDING IN THE PRINTING INDUSRTY NEW
CAPACITY BUILDING IN THE PRINTING INDUSRTY NEW
 
FMH Crisis Management Plan
FMH Crisis Management PlanFMH Crisis Management Plan
FMH Crisis Management Plan
 

Similar to FINAL FINAL PROJECT

A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015
A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015
A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015Majune Kraido Socrates
 
Abraham approved final research internal control and performance of non gover...
Abraham approved final research internal control and performance of non gover...Abraham approved final research internal control and performance of non gover...
Abraham approved final research internal control and performance of non gover...Abraham Ayom
 
Internal control and performance of non governmental organization, case study...
Internal control and performance of non governmental organization, case study...Internal control and performance of non governmental organization, case study...
Internal control and performance of non governmental organization, case study...Abraham Ayom
 
Research report on internal control and performance of non governmental organ...
Research report on internal control and performance of non governmental organ...Research report on internal control and performance of non governmental organ...
Research report on internal control and performance of non governmental organ...Abraham Ayom
 
Effect of Communication on Employee Performance
Effect of Communication on Employee PerformanceEffect of Communication on Employee Performance
Effect of Communication on Employee PerformancePalmas Tsyokplo
 
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...lamluanvan.net Viết thuê luận văn
 
Final Revised PhD Thesis Jan 2014
Final Revised  PhD Thesis Jan 2014Final Revised  PhD Thesis Jan 2014
Final Revised PhD Thesis Jan 2014Stella Adagiri
 
Abshir nur mohamed for print
Abshir  nur mohamed for print Abshir  nur mohamed for print
Abshir nur mohamed for print Avv Inshar
 
The happiness of Vietnamese - micro-analysis of happiness determinants in the...
The happiness of Vietnamese - micro-analysis of happiness determinants in the...The happiness of Vietnamese - micro-analysis of happiness determinants in the...
The happiness of Vietnamese - micro-analysis of happiness determinants in the...HanaTiti
 
Assessment of knowledge management practices the case of ethiopian defense fo...
Assessment of knowledge management practices the case of ethiopian defense fo...Assessment of knowledge management practices the case of ethiopian defense fo...
Assessment of knowledge management practices the case of ethiopian defense fo...AbaynehLishan1
 
A Study of Functioning of Two Non –Governmental Organizations (...
A   Study   of   Functioning   of   Two   Non –Governmental  Organizations  (...A   Study   of   Functioning   of   Two   Non –Governmental  Organizations  (...
A Study of Functioning of Two Non –Governmental Organizations (...AKSHAT MAHENDRA
 
sưu tầm: DEVELOPING HUMAN RESOURCES IN QUANG NINH PROVINCE IN THE CONTEXT O...
sưu tầm: DEVELOPING HUMAN RESOURCES IN  QUANG NINH PROVINCE IN THE CONTEXT  O...sưu tầm: DEVELOPING HUMAN RESOURCES IN  QUANG NINH PROVINCE IN THE CONTEXT  O...
sưu tầm: DEVELOPING HUMAN RESOURCES IN QUANG NINH PROVINCE IN THE CONTEXT O...lamluanvan.net Viết thuê luận văn
 
Internship Report - Corporate Services Department (URA)
Internship Report - Corporate Services Department (URA)Internship Report - Corporate Services Department (URA)
Internship Report - Corporate Services Department (URA)Oyo Wilfred Robert
 
Internship_Report_Information_Technology.pdf
Internship_Report_Information_Technology.pdfInternship_Report_Information_Technology.pdf
Internship_Report_Information_Technology.pdfSachin674524
 

Similar to FINAL FINAL PROJECT (20)

A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015
A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015
A DISCRETE TIME ANALYSIS OF EXPORT DURATION IN KENYA 5.11.2015
 
Abraham approved final research internal control and performance of non gover...
Abraham approved final research internal control and performance of non gover...Abraham approved final research internal control and performance of non gover...
Abraham approved final research internal control and performance of non gover...
 
Internal control and performance of non governmental organization, case study...
Internal control and performance of non governmental organization, case study...Internal control and performance of non governmental organization, case study...
Internal control and performance of non governmental organization, case study...
 
Research report on internal control and performance of non governmental organ...
Research report on internal control and performance of non governmental organ...Research report on internal control and performance of non governmental organ...
Research report on internal control and performance of non governmental organ...
 
Research
ResearchResearch
Research
 
Effect of Communication on Employee Performance
Effect of Communication on Employee PerformanceEffect of Communication on Employee Performance
Effect of Communication on Employee Performance
 
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...
ATTRACTING FOREIGN DIRECT INVESTMENT IN THE FIELD OF EDUCATIONAL TECHNOLOGY I...
 
Final Revised PhD Thesis Jan 2014
Final Revised  PhD Thesis Jan 2014Final Revised  PhD Thesis Jan 2014
Final Revised PhD Thesis Jan 2014
 
Abshir nur mohamed for print
Abshir  nur mohamed for print Abshir  nur mohamed for print
Abshir nur mohamed for print
 
R102857C SIMBARASHE
R102857C SIMBARASHER102857C SIMBARASHE
R102857C SIMBARASHE
 
final dissertation
final dissertationfinal dissertation
final dissertation
 
The happiness of Vietnamese - micro-analysis of happiness determinants in the...
The happiness of Vietnamese - micro-analysis of happiness determinants in the...The happiness of Vietnamese - micro-analysis of happiness determinants in the...
The happiness of Vietnamese - micro-analysis of happiness determinants in the...
 
Assessment of knowledge management practices the case of ethiopian defense fo...
Assessment of knowledge management practices the case of ethiopian defense fo...Assessment of knowledge management practices the case of ethiopian defense fo...
Assessment of knowledge management practices the case of ethiopian defense fo...
 
A Study of Functioning of Two Non –Governmental Organizations (...
A   Study   of   Functioning   of   Two   Non –Governmental  Organizations  (...A   Study   of   Functioning   of   Two   Non –Governmental  Organizations  (...
A Study of Functioning of Two Non –Governmental Organizations (...
 
hersiende FINALE VERHANDELING
hersiende FINALE VERHANDELINGhersiende FINALE VERHANDELING
hersiende FINALE VERHANDELING
 
sưu tầm: DEVELOPING HUMAN RESOURCES IN QUANG NINH PROVINCE IN THE CONTEXT O...
sưu tầm: DEVELOPING HUMAN RESOURCES IN  QUANG NINH PROVINCE IN THE CONTEXT  O...sưu tầm: DEVELOPING HUMAN RESOURCES IN  QUANG NINH PROVINCE IN THE CONTEXT  O...
sưu tầm: DEVELOPING HUMAN RESOURCES IN QUANG NINH PROVINCE IN THE CONTEXT O...
 
full report Nia
full report Niafull report Nia
full report Nia
 
Internship Report - Corporate Services Department (URA)
Internship Report - Corporate Services Department (URA)Internship Report - Corporate Services Department (URA)
Internship Report - Corporate Services Department (URA)
 
Internship_Report_Information_Technology.pdf
Internship_Report_Information_Technology.pdfInternship_Report_Information_Technology.pdf
Internship_Report_Information_Technology.pdf
 
diana report.doc
diana report.docdiana report.doc
diana report.doc
 

FINAL FINAL PROJECT

  • 1. UNIVERSITY OF GHANA DEPARTMENT OF STATISTICS RATEMAKING AND RESERVING IN CASUALTY AND PROPERTY INSURANCE, CREDIBILITY APPROACH. BY SHAPAH GBOR SHADRACH A DISERTATION SUBMITTED TO THE DEPARTMENT OF STATISTICS, UNIVERSITY OF GHANA, IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE AWARD OF THE DEGREE OF MASTER OF SCIENCE IN ACTUARIAL SCIENCE. JULY 2015
  • 2. i DECLARATION I hereby declare that this submission is my own work towards the MSc. Actuarial Science program, and that, to the best of my knowledge, it contains no material(s) previously published by another person(s) nor material(s) which has been accepted for the award of any other degree of any university, except where due acknowledgement has been made in the text. SHAPAH GBOR SHADRACH _________________ ________________ (10507649) Signature Date Certified by: Dr. F. O. Mettle _____________________ __________________ Supervisor Signature Date Mr.E.B.N Quaye _____________________ __________________ Supervisor Signature Date
  • 3. ii Abstract The major problems that the modern insurance industry is facing is how to compute the risk premium adequately to cover the claim payment that may occur. This is due to the randomness of the risk associate with insurance contracts and/or partly due to the modifications in the policies and the increasing demand for the insurance policies. The purpose of this work is to develop an improved estimates for insurance premiums using credibility framework. Credibility theory is non-parametric quantitative method to forecast for future insurance coverage through the combination of parameters from a subset and the whole group to update for the future losses. It makes use of observed results and results from a larger data set from a similar industry with appropriate weight placed on them to estimate future expectations. It is based on statistical methods and actuarial judgments to fulfil this ultimate task of forecasting.
  • 4. iii DEDICATION I dedicate this research work to Mariama ABDULRAMANI, one who open heartedly supported and contributed immensely in diverse ways to help me climb the academic ladder.
  • 5. iv ACKNOWLEDGEMENT I wish to express my profound gratitude to the almighty God for his love and guidance throughout my education. Big thanks go to my supervisor Dr. F.O Mettle, head of department, Department of Statistics for his tired less effort and contribution toward the completion of this research work. My development editor, Mr.E.B.N Quaye of the Department of Statistics kept me focused, and managed the entire project to completion. His is one of those few lecturers that brings out the best in you. Thank you for all of your hard work and encouragement. A special note of gratitude goes to my late uncle Torgbi Agumedra ΙΙΙ, chief of Adaklu Torda and my late mother Helen Dorfe whose vision and ambitions is the product of this piece of work, uncle and mom “AKPE NA MI”. I would like to end by extending a heartfelt thank you to my wife Frieda SHAPAH for your selfless life to support me morally and financially, putting your needs on hold throughout this whole period. Thank you all with all of my heart.
  • 6. v Table of Contents DECLARATION ..............................................................................................................................i Abstract ........................................................................................................................................ii DEDICATION................................................................................................................................iii ACKNOWLEDGEMENT ..............................................................................................................iv 1 CHAPTER ONE ..................................................................................................................... 1 1.1 Introduction...................................................................................................................... 1 1.2 Background ...................................................................................................................... 3 1.3 Problem statement............................................................................................................ 4 1.4 Objective of the study ...................................................................................................... 5 1.5 Research questions ........................................................................................................... 6 1.6 Significance of study........................................................................................................ 6 1.7 Scope of the Study............................................................................................................ 6 1.8 Outline of the Study ......................................................................................................... 7 2 CHAPTER TWO .................................................................................................................... 8 2.1 Literature review .............................................................................................................. 8 2.2 Limited Fluctuation Credibility........................................................................................ 9 2.3 Bayesian and the Bulhmann Approach .......................................................................... 10 3 CHAPTER THREE .............................................................................................................. 12
  • 7. vi 3.1 Methodology .................................................................................................................. 12 3.1.1 3.1.1 Presentation of standard formula of credibility; ............................................ 12 3.1.2 Interpretation of assumptions.................................................................................. 13 3.2 EMPIRICAL BAYES METHOD .................................................................................. 15 3.2.1 General presentation of the problem. ...................................................................... 15 3.2.2 The unbiased estimators of 𝑬𝑷𝑽 and 𝑽𝑯𝑴.......................................................... 16 4 CHAPTER FOUR................................................................................................................. 18 4.1 DATA............................................................................................................................. 18 4.2 PRESENTATION AND ANALYSIS OF DATA ......................................................... 18 4.2.1 Set up of the computation ....................................................................................... 20 4.3 DETERMINATION OF PARAMETERS ..................................................................... 21 4.4 Chi-square Goodness of Fit Test.................................................................................... 28 4.4.1 Test Statistics .......................................................................................................... 28 4.4.2 The critical region:.................................................................................................. 31 4.4.3 Decision: ................................................................................................................. 31 4.5 Testing the model in terms of Cedi equivalence ............................................................ 31 4.5.1 Test statistics........................................................................................................... 31 4.5.2 Interpretation of results;.......................................................................................... 34 5 DETERMINATION OF CLAIMS FREQUENCY .............................................................. 35 5.1.1 Compute the total exposures................................................................................... 36
  • 8. vii 5.1.2 Total exposures of all risk groups........................................................................... 36 5.1.3 The exposure-weighted means of the claim frequency are:.................................... 37 5.1.4 Computing EPV ...................................................................................................... 38 5.1.5 Buhlmann-straub predicted claims frequency for each risk group ......................... 41 5.1.6 Total claim frequency predicted based on the historical exposure is ..................... 42 6 CHAPTER FIVE .................................................................................................................. 44 6.1 CONCLUSION AND RECOMMENDATIONS........................................................... 44 6.1.1 Findings................................................................................................................... 44 7 RECOMMENDATIONS...................................................................................................... 45 7.1 Assumptions................................................................................................................... 46 7.2 Limitations ..................................................................................................................... 46 APPENDICES .............................................................................................................................. 47
  • 9. 1 1 CHAPTER ONE 1.1 Introduction In the face of economic risk people seek security which they consider the next basic goal after food, shelter and clothing, so they get into agreement called insurance contract that promises to pay a fixed amount to the individuals upon the occurrence of the stipulated random event that is the subject matter of the insurance contract and that to which the policyholder had paid in advance a premium. The insurer pools the expected losses and the potential for individual variability and charges premium that will be sufficient to cover all projected claim payments. Each policyholder is charged a premium in effect and it reflects any special traits of the policy and the past experience of the individual, Anderson & Brown (2005). The general framework of insurance is based on the weak law of large numbers, which says that for all 𝜀 > 0 lim 𝑛 → ∞ 𝑃 [| 1 𝑛 ∑ 𝑌𝑖 − 𝜇𝑛 𝑖=1 | ≥ 𝜀 ] = 0 (1) Under the assumption that the individual risks are uncorrelated and identically distributed random variables on the probability space {Ω,ℱ, Ρ}, with finite mean, 𝜇 = Ε[𝑌𝑖]. Intuitively, this means that the total claim amount becomes more predictable with increasing portfolio size n, and we can therefore calculate the insurance premium more accurately for large portfolio sizes n (Wutherich, 2014). This justify why credence is attached to the amount of experience data available at any point in time under frame work of credibility theory. Therefore the weak law of large numbers is considered to be a theoretical cornerstone of insurance.
  • 10. 2 By the Chebychev inequality which provides the rate of convergence and the central limit theorem which provides the asymptotic distribution, if the claims random variables 𝑌𝑖 have finite variance 𝜎2 the weak law of large numbers could be rewritten as this and that we have the following in convergence in distribution; ∑ 𝑌𝑖 −𝑛 𝜇𝑛 𝑖=1 √ 𝑛𝜎 ⇒ 𝑁(0,1) as 𝑛 ⟶ ∞, (2) That is within the limit we then have standard normal distribution, Wutherich (2014). As the portfolio size n increases the denominator increases at a slower rate making the total claim amount more predictable as the confidence bound narrows with increasing portfolio size. (Wutherich, 2014). The most interesting aspect of insurance therefore is the size of the policies in a portfolio. The volume takes care of the risk of randomness. Intuitively that is why we will shortly see that the credibility factor z which measures the importance placed on the individual experience approaches 1 as the portfolio size n increases, that is more emphases are placed on the experience data and as the company enters new line of business and the volume goes down the credibility factor z approaches zero, shifting more emphases to the collateral data. Generally, only few of such policyholders suffer losses and the losses are paid out of the premiums collected from the pool of policyholders. Thus, the entire pool contributes to pay the unfortunate few. In effect each policyholder exchanges an unknown loss for the payment of a known premium. The insurer decides on the type of losses to cover under each insurance contract. The insurance policy may define specific perils that are covered, or it may cover all perils with certain named exclusions, for example, loss as a result of war or loss of life due to suicide, Anderson & Brown (2005). The number of losses that occur within a specified period is
  • 11. 3 random variable and is known as the loss frequency while the amount paid by the insurer for such losses is the claim severity. This research seeks to predict claim frequency, claim severity, aggregate loss and pure premium, based on the framework of credibility theory. The framework of credibility theory is based on experience data of an insurance firm and data collected from similar industry with appropriate weight placed on each to update for the future expected losses. In other words it places weight on individual risk experience and class risk experience, making the premium rate a weighted average. This shows how much importance is placed on the risk or on the collateral data due to the volume of data available on the company level or the volume of the collateral data in use. Thus z reflects the amount of data available. The risk group is covered over a period of time, say one year upon payment of premium. The premium is partially based on a rate specified in the manual, called Manual rate and partially on the specific risk characteristics of the group, Dean (2005). Based on recent claim experience of the risk group the premium for the next period will be revised and the revised prediction determines the insurance premium for the next period for the risk group. 1.2 Background Ratemaking is the determination of what rate or premiums to charge for insurance product. This has been a major challenge in managing insurance industries. This involves the calculations of the adequate premium to cover losses and expenses and a margin for unanticipated claim payments (Anderson & Brown, 2005). In Ghana the insurance industry is one of the growing industries through the banking sector for the past decades under the bancassurnce partnership agreement (National Isurance Commission, 2011). This is obvious as almost all the banks have some sort of insurance packages available
  • 12. 4 aside the traditional banking activities. In view of making the insurance products attractive lot of modifications have been put in place that makes the computation of premium a challenging task and demanding more hands of actuaries to accomplish. Insurance industry is a complicated entity that needs experts to manage it by providing reliable risk models that may be able to predict catastrophic risk, adequate financial reserves that could meet any future losses, calculating appropriate risk for any insured, developing new products to suit the needs of the people and their culture. It is the duty of the actuary to advice on how the products are managed, how much deductible or policy limit should be imposed on a policy and if there should be co-insurance factor or there is the need for re-insurance. These are major variables that keeps insurance firm solvent and profitable. Insurance is risk transferring that may or may not occur in the case of property and casualty insurance but is a sure event in most of the life insurance policies therefore there should be adequate reserve to pay any unforeseen contingencies that may be the subject matter of an insurance contract. This paper seeks to provide one of the alternative ways that the insurance industry uses to compute the claims frequency and the claims reserves necessary for any future losses. 1.3 Problem statement There are a lot of problems that associate with the practice of pricing risk in insurance markets. The most obvious problem is the availability of data and the reason for data restriction basically stems from several reasons including the following:  Poor documentation of the losses limits the capacity of the experience data available (Biener, November 2011).  Delay in processing the insurance policies often times distort data.
  • 13. 5  Release of data for analyses was accompanied by lot of bureaucratic processes. These factors mostly affect the insurers that there are required to add high risk-loading for uncertainty in the estimation of expected losses Biener; (2011). As a result the pure premium is higher in micro and emerging insurance market as compared to regular insurance markets, making insurance more expensive and thus less attractive to the low income population, Biener, (2011). The ability to compute insurance premiums that are adequately sufficient, reasonable and fair is a major challenge to the insurance companies. Though insurance contract in the view of the insured is transferring of unknown risk for a known premium, he/she is only willing to pay a certain amount for the risk beyond this amount he/she avoids the contract. Therefore there is the need to know how much gross premium to charge to reflect expected utility but just enough to cover commission and expenses and any anticipated profit Anderson & Brown, (2005). In view of this that this project seeks to provide one of the means to compute the rates for insurance products. 1.4 Objective of the study One of the fundamental challenges confronting the insurance industry in providing the insurance products is pricing risk, Biener, (2011). Therefore this paper will seek to address the following problems: Seeks to investigate one of the quantitative techniques that would enable computation of risk premium that will be fair but adequate for insurance firms. Seeks to provide a basic understanding of one of the conventional techniques which is in used in insurance markets today to compute the aggregate loss.
  • 14. 6 1.5 Research questions The research work will seek to address the following fundamental questions that frequently face the insurance industries:  How much fair and equitable premium should be charged to each policyholder?  How do we predict total claim frequency based on company experience?  How much loading should be added to make up for expenses and profitability? These questions are answered through the knowledge of probability, statistics, mathematics and finance. There are different approaches as to how to compute premiums and this paper considered credibility theory as one of the quantitative tool to address these problems. 1.6 Significance of study After the completion of this piece of work it would be clear to the insurance firms how to use their available experience data and other data collected from a similar industry to predict for the next period the frequency of their claims. Management of insurance firms would know how to compute adequately the premium to charge each policyholder and how much to reserve to meet any future losses. The state will benefit as there will be fairness in premiums charge to the policyholders that would enable expansions in the insurance industries as more people will buy insurance products making more funds available to businesses. 1.7 Scope of the Study This study was structured using experience data from an insurance firm whose name we decide to remain unanimous. The study lay emphasis on determining risk premium through the framework of Bulhmann- Straub credibility approach based on the historical experience data of the company over a period of eight years. The firm had been in existence for about ten years now but due to poor documentations which distort most of the earlier data set forces the use of only
  • 15. 7 the resent five years data. The data set is composed of twelve different risk groups in the portfolio of the firm over the period under review. The research focus on five years data to formulate a model to predict for the next three years and the results are compared with the observed data for the same period. A chi square test is then done to check if the model fits the data set. An additional Cedi equivalent test is carried out to check how much will the firm’s gain of loss if the model was used over those three years. 1.8 Outline of the Study This research work is organized as follows; Chapter one gives brief descriptive of the research work, while Chapter two lays emphases on the findings of different authors whose ideas have been defined in relation to the topic under study. Chapter three focuses on the methodology review in the light of Mathematical Statistics tools that are employed and important to the analyses of the various data gathered. Chapter four focuses on data analysis and summary of the results. Chapter five, which is the final chapter provides findings, conclusions, Recommendations and it is followed closely by references and appendices.
  • 16. 8 2 CHAPTER TWO 2.1 Literature review Credibility theory is a set of quantitative tool that many actuaries used to estimate claims frequencies and premium based on the experience data on risk or group of risks. It is a branch of insurance mathematics that explore model based principles to construct formulas that covers more broadly linear estimation and prediction in latent variables (Norberg, 2006). Mowbray (1914) used the limited fluctuation credibility theory in his work under workers compensation insurance, Whitney (1918) show that credibility estimate could be a weighted average of two known quantities, which is a weighted average of an individual and a class estimate of the individual risk premium. Whitney defined 𝑈 as expected claims expenses per unit of risk exposed for any individual risk that form part of the portfolio of similar risks. Whitney proposed that premium rate should be the weighted average of the individual risk experience and class risk experience. 𝑈̅ = 𝑍 ∗ 𝑈̂ + (1 − 𝑍) ∗ 𝜇 (3) Where the observed mean claim amount per unit of risk exposed for any individual is 𝑈̂ and 𝜇 is the overall mean in the portfolio. The weight Z is the credibility factor that measures how importance or credence is given to the individual experience. Whitney described the risk premium as a random variable which is a function 𝑈(Θ) of a random element that represents the unobserved characteristics of the individual risk. We treat 𝜃 as the realization of a random variable Θ the distribution of which we call prior distribution. The randomness of Θ clearly shows that the individual risks that form the portfolio similar but not necessarily identical and thus the distribution of Θ describes the variation of the individual risk characteristics in the entire
  • 17. 9 portfolio. Credibility theory has two major areas namely Limited Fluctuation Credibility and The Greatest Accuracy Credibility. 2.2 Limited Fluctuation Credibility Mowbray (1914) used the limited fluctuations credibility theory to quantify the problem of while working on the workers compensation in insurance. In his work, Mowbray suggested how to determine the total amount of individual risk exposure that is needed for 𝑈̂ to be fully reliable for the estimate of 𝑈. Mowbray worked with annual claim amounts 𝑋1, 𝑋2, 𝑋3, . . . , 𝑋 𝑁, that are assumed to be i.i.d. (independent and identically distributed) selections from a probability distribution with density f (x|θ), mean 𝑈(θ), and variance 𝑣2 (𝜃). The parameter θ was viewed as non-random. For any given, 𝑈̂ = 1 𝑁 ∑ 𝑋𝑗 𝑁 𝑖=1 , Mowbray wanted to kwon how much observations are needed for some given 𝑘 and 𝛼, the Ρ[|𝑈̂ − 𝑈(θ)| ≤ 𝑘𝑈(θ)] ≥ 1 − 𝛼, Norberg (2006). Where 𝑘 is the precision parameter within which the observation, 𝑋 be of the mean, 𝜇. Using the normal approximation, 𝑈̂ ~ 𝑁(𝑈(θ), 𝑣( 𝜃) √ 𝑁 ). This gives, 𝑘𝑈(θ) ≥ 𝑍1−𝛼/2( 𝑣( 𝜃) √ 𝑁 ). The estimate, 𝑣̂2 = 1 𝑁−1 ∑ (𝑋𝑖 𝑁 𝑖=1 − 𝑈̂)^2. Using both estimators,𝑈̂ 𝑎𝑛𝑑 𝑣̂2 , we obtained; 𝑛 ≥ 𝑍1 −𝛼/2 𝑣̂2 𝑘2 𝑈̂2 (4) as a condition for full credibility for 𝑈̂ that is Z is set to 1 in equation (3) above. Now, the question how to choose Z if n does not satisfy equation (4) above. Longley-Cook (1962) provided a modern mathematical derivation of limited fluctuation approach to credibility theory. One major advantage of this limited approach is the simplicity of its use. However, a number of
  • 18. 10 researchers raised questions about the method presented by Longly-Cook. Bulhmann did not agree on the mathematical reasoning behind the limited fluctuation approach as found in Herzog (2008) and he commented based on statistics pointing out that prior data is ignored in the approach, Norberg (2006). Bulhmann argued that if the derivation was performed by using confidence interval, and that why should a confidence interval that by definition includes the true value with a probability of less than 1, gives full credibility? There were concern for the credibility factor, (1-Z) given to the prior data 𝜇 while there is the trust of accuracy placed on 𝜇, such that all weight is given to observed data 𝑈̂ when they assume enough information for full credibility. In their lecture note, Ohlsson & Johansson (2006b) introduce credibility theory through simple multiplicative models. A recent work by Englund et al. (2009) makes use of multivariate generalization of the recursive credibility estimates of Sundt (1981) and modeled the risk parameters as autoregressive process. Jawell (1974) indicates that if the likelihood function is of exponential family and the prior is conjugate the Bayesian premium coincides with the credibility premium. 2.3 Bayesian and the Bulhmann Approach Whitney (1918) as reported in Herzog (2008) stated that the credibility factor, Z, needed to be of the form, n Z n k   Where 𝑛 is exposure period or the number of policy years and 𝑘 is a constant of interest to be determined. Whitney suggest that to determine 𝑘 could be best done through the inverse probability of the Bayes Theorem. Moreover Bulhmann approach to credibility estimates tend out to be the best approximate to the corresponding Bayes estimates. This is directly related to
  • 19. 11 the prior data and the mean loss from the additional data in a linear model. Therefore the pure premium was derived as a linear combination of the prior mean loss and the mean loss from the additional or the collateral data with appropriate weight attached to each of the data set mean values. More detailed work on credibility theory is show in the works of Sundt (1986), Halliwell (1996), Grieg (1999), Bulhmann (2005), and Gisler (2005). Empirical Bayes credibility approach is considered in this piece of work and is based upon a presumption that the sample mean is from a distribution chosen from a set of distributions. However, many credibility situations are not characterized by the process of first choosing a distribution randomly and then sampling from that distribution.
  • 20. 12 3 CHAPTER THREE 3.1 Methodology In the general frame-work of credibility theory actuaries use credibility factor “z” which lies between the interval [0,1]. The ‘z’ is the weight placed on the experience data of the firm (observation) and ‘1 – z’ is the weight placed on the other information (collateral data). 3.1.1 3.1.1 Presentation of standard formula of credibility; Estimate = Z * Experience data + (1 - Z) * other information, 0 ≤ 𝑧 ≤ 1, and Z varies with the size of experience data and could be 1 as volume of the experience data become very large and likely not to change with time. Let 𝑋1, 𝑋2, 𝑋3, … , 𝑋 𝑁, be the claims data for particular policy of an insurance firm and we can estimate pure premium as 𝑋̅ while 𝑋𝑖 denotes the 𝑖 𝑡ℎ claim with 𝑁 being the claims frequency. Let 𝑆 = ∑ 𝑋𝑖 𝑁 𝑖=1 , be the aggregate loss and 𝑋1, 𝑋2, 𝑋3,… , 𝑋 𝑁 models the individual claim sizes while 𝑁 counts all claims in one fixed period. 𝑋𝑖 and the 𝑁 are random variables that describe the total claims amount and hence 𝑆 is also a random variable. At this point we should note the underlying assumptions that governs the distributions of 𝑋𝑖 and 𝑁; 𝑆 = ∑ 𝑋𝑖 𝑁 𝑖=1 , 1. 𝑁 is a discrete random variable taking only positive values.
  • 21. 13 2. 𝑋1, 𝑋2. . . ~ 𝐺, with 𝐺(0) = 0. (i.i.d) 3. 𝑁 and (𝑋1, 𝑋2. . .) are independent. 3.1.2 Interpretation of assumptions The first assumption says that 𝑁 takes only non-negative integer values and that the event { 𝑁 = 0} means that no claims occur which provides a total claim of S = 0. The second assumption says that the individual claims 𝑋𝑖 do not affect each other, that is if we face a large claim,𝑋1 this does not give any information for the remaining claims 𝑋𝑖 𝑖 ≥ 2 and that the claims have homogeneity in the sense that all have the same marginal distribution 𝐺, with 𝑃[ 𝑋𝑖 ≤ 0] = 𝐺(0) = 0 That is all the individual claims sizes 𝑋𝑖 are strictly positive. The final assumption is saying that the individual claim sizes 𝑋𝑖are not affected by the number of claims 𝑁 that is if we observe many claims this does not contain any information whether these claims are of smaller or large size (Wutherich, 2014). Considering the above assumptions though the 𝑁 and 𝑋𝑖 are independent, they are conditioned on risk and we denotes this risk parameter as 𝛩, with 𝜃 being the observation obtained of either 𝑁 or 𝑋𝑖 of a particular insured with risk parameter 𝛩. The realization 𝜃 is then modeled by the conditional distribution𝑓𝑥/ 𝛩 (𝑋 / 𝛩 ), given that = 𝜃. Under the above framework, the update for the next period 𝑋 𝑁+1 is given as, 𝑈 𝑁+1 = 𝑍 ∗ 𝑋̅ + (1 − 𝑍) ∗ 𝜇.
  • 22. 14 Where 𝑍 is the credibility factor assigned to the experience data and 𝜇 being the unconditional mean and where, 𝑍 = 𝑁 𝑁+𝑘 Where k =credibility parameter of the model. Recall that 𝑋𝑖 are random variables that are considered in a set of portfolios, therefore to make any meaningful prediction for the next period we have to look out for the possible sources of variations. The sources of variations are those that we could experience between risk groups and those that are within the risk groups due to the randomness of the𝑋𝑖′𝑠. By the tower property we can define the unconditional mean as, 𝐸[ 𝑋] = 𝐸[𝐸[𝑋/ 𝛩] ], for 𝑋/𝛩 where 𝛩 is sub 𝜎 − 𝑎𝑙𝑔𝑒𝑏𝑟𝑎 a sub set of ℱon the probability space {Ω,ℱ, Ρ} and 𝑋~ ℱ, an integrable random variable. The unconditional variance is defined as; 𝑉𝑎𝑟( 𝑋) = 𝐸[ 𝑉𝑎𝑟(𝑋/ 𝛩)] ] + 𝑉𝑎𝑟(𝐸[𝑋/𝛩]) Note again that the total variability is due to the variability in the risk parameter 𝛩 and the variability in the in 𝑋 conditioned on the 𝛩. Based on the risk parameter 𝛩, 𝐸[𝑋/ 𝛩] is known as the hypothetical mean and 𝑉𝑎𝑟(𝑋/ 𝛩) is known as the process variance and is the variance of any given risk group, Dean (2005). It is clear that the unconditional variance is the sum of the expectation of the process variance and the variance of the hypothetical mean. This bring us to very important ratio;
  • 23. 15 𝑘 = 𝐸𝑃𝑉 𝑉𝐻𝑀 , and this 𝑘 appears in the computation of 𝑍 = 𝑁 𝑁+𝑘 above. Intuitively, any portfolio of homogeneous risk the variability in the hypothetical mean will be extremely small making 𝑘 as large as possible leading to 𝑍 approaching zero. lim 𝑘 →∞ 𝑁 𝑁+𝑘 = 0 Conversely as the portfolio is formed of heterogeneous risk group of policies the variability in the hypothetical mean become very large making 𝑘 as small as possible and 𝑍 approaching one. This conform to the fact that the 𝐸[ 𝑋] = 𝜇 𝑥, becomes the pure risk premium in portfolio of homogeneous risk groups. What if the portfolio is of heterogeneous risk groups and/or data are collected over different periods with different exposures? This bring us to the Bulhmann – Straub credibility theory and also known as Empirical Bayes credibility, Dean (2005). 3.2 EMPIRICAL BAYES METHOD Under this approach the assumption is that the risk groups that form the portfolio are not identically distributed. 3.2.1 General presentation of the problem. Let 𝑋𝑖𝑗 denotes the loss per unit of exposure and 𝑚 𝑖𝑗 denotes the amount of exposure. The 𝑖 denotes the 𝑖 𝑡ℎ risk group, while, 𝑖 = 1, … , 𝑟, where 𝑟 > 1.The 𝑗/𝑖 denotes the 𝑗 𝑡ℎ loss observation in the 𝑖 𝑡ℎ group, while 𝑗 = 1, … , 𝑛𝑖 where 𝑛𝑖 > 1 for 𝑖 = 1, …, 𝑟.The 𝑗 indexes an individual within the risk group or a period of the risk group. Thus, for the 𝑖 𝑡ℎ risk group, we
  • 24. 16 have loss observations of 𝑛𝑖 individuals or periods. We assume the losses, 𝑋𝑖𝑗 are independently distributed. The risk parameter of the 𝑖 𝑡ℎ group is denoted by 𝜃𝑖 which is a realization of the random variable, Θ𝑖. We again assume that Θ𝑖 are independently and identically distributed as the parameter, Θ. We assume 𝑋1, 𝑋2, 𝑋3, . . . , 𝑋 𝑁 are independent and condition on Θ with common mean and variance as; 𝐸(𝑋𝑖𝑗/Θ = 𝜃𝑖) = 𝜇 𝑥(𝜃𝑖) , for I = 1 . . . r, and j = 1 . . . 𝑛𝑖 𝑉𝑎𝑟(𝑋𝑖𝑗/Θ = 𝜃𝑖) = 𝜎 𝑥 2 (θ 𝑖) 𝑚𝑖𝑗 , for I = 1 . . . r, and j = 1 . . . 𝑛𝑖 𝑚 𝑖𝑗 is measuring the exposure, that is it could be the number of months that the policy had been in force or the number of individuals in the group or the amount of premium income for the past j years, Anderson & Brown (2005). We write expectation of the process variance as 𝐸𝑃𝑉 = 𝐸[ 𝜎 𝑋 2(Θ𝑖)] = 𝐸[ 𝜎 𝑋 2(Θ)] 𝑉𝐻𝑀 = 𝑉𝑎𝑟[ 𝜇 𝑋(Θ𝑖)] = 𝑉𝑎𝑟[ 𝜇 𝑋(Θ)] 3.2.2 The unbiased estimators of 𝑬𝑷𝑽 and 𝑽𝑯𝑴 Based on our definition of the problem above: 𝐸𝑃𝑉 = ∑ ∑ 𝑚 𝑖𝑗(𝑋𝑖𝑗 − 𝑋̅𝑖) 2𝑛𝑖 𝑗=1 𝑟 𝑖=1 ∑ ( 𝑛𝑖 − 1)𝑟 𝑖=1 The unbiased estimator for the variance of the hypothetical means of each risk group is;
  • 25. 17 𝑉𝐻𝑀 = [∑ 𝑚 𝑖( 𝑋̅𝑖 − 𝑋̅)2𝑟 𝑖=1 ] − ( 𝑟 − 1) 𝐸𝑃𝑉 𝑚 − 1 𝑚 ∑ 𝑚 𝑖 2𝑟 𝑖=1 , Where 𝑚 𝑖 is the total exposure for the 𝑖 𝑡ℎ risk group and is defined as; 𝑚 𝑖 = ∑ 𝑚 𝑖𝑗 𝑛𝑖 𝑗=1 , 𝑓𝑜𝑟 𝑖 = 1, …, 𝑟 While the total exposure over all risk groups define as; 𝑚 = ∑ 𝑚 𝑖 𝑟 𝑖=1 Exposure-weighted mean of the 𝑖 𝑡ℎ 𝑟𝑖𝑠𝑘 𝑔𝑟𝑜𝑢𝑝 𝑋̅𝑖 = 1 𝑚 𝑖 ∑ 𝑚 𝑖𝑗 𝑋𝑖𝑗 𝑛𝑖 𝑗=1 𝑓𝑜𝑟 𝑖 = 1, …, 𝑟 The overall weighted mean. 𝑋̅ = 1 𝑚 ∑ 𝑚 𝑖 𝑋̅𝑖 𝑟 𝑖=1 We can now revisit our earlier definition of 𝑍, 𝑍 = 𝑁 𝑁 + 𝑘 Therefore under the description of the problem; 𝑍𝑖 = 𝑚𝑖 𝑚𝑖 +𝑘 , for each risk group and where 𝑘 = 𝐸𝑃𝑉 𝑉𝐻𝑀 . The update for the next period therefore will be, 𝑍𝑖 𝑋̅𝑖 + (1 − 𝑍𝑖) 𝑋̅, Anderson & Brown (2005)
  • 26. 18 4 CHAPTER FOUR 4.1 DATA The data is an insurance data collected by small insurance company over period of eight years in Ghana. The data have been grouped according to their respective risk groups for analysis. Due to the short period of data available we use five years data to formulate the model then the model is used to forecast for three years and compared with the three years remaining observed values. Chi square test is then done to check the validity of the model and confirmed by testing the model in terms of Cedi equivalent. 4.2 PRESENTATION AND ANALYSIS OF DATA Under the Bulhmann-Straub model each model could have different number of exposures that enable us to monitor each risk group over different time periods changing the calculations of EPV and VHM (Anderson & Brown, 2005). Data is presented in this format as shown in table 4.1, amount of claims and their corresponding exposures at any given period over each risk group. The 𝑋𝑖𝑗 and the 𝑚 𝑖𝑗 are the claim and the exposure respectively for the 𝑖 𝑡ℎ 𝑟𝑖𝑠𝑘 𝑔𝑟𝑜𝑢𝑝 𝑎𝑡 𝑡ℎ𝑒 𝑗 𝑡ℎ 𝑝𝑒𝑟𝑖𝑜𝑑. Various parameters such as sample means and the sample variances are computed as presented on table 4.2 𝑋𝑖 ̅is the unbiased estimators for each risk group mean 𝜇 and 𝑣𝑖̅are the unbiased estimators for each risk group process variance 𝑣.we now calculate the unbiased estimator for expected value of the process variance (EPV) of the group as 𝐸𝑃𝑉 = ∑ ∑ 𝑚 𝑖𝑗(𝑋𝑖𝑗 − 𝑋̅𝑖) 2𝑛𝑖 𝑗=1 𝑟 𝑖=1 ∑ ( 𝑛𝑖 − 1)𝑟 𝑖=1
  • 27. 19 Table 4.1 Table4.2: Expected Values of the Process Variance Risk Groups Claims Sample Mean Sample Process Variance Periods 1 2 L N 1 𝑋11 𝑚11 𝑋12 𝑚12 L L 𝑋1𝑁 𝑚1𝑁 𝑋1 ̅̅̅ = 1 𝑁 ∑ 𝑥1𝑗 𝑁 𝑗=1 𝑣1̅̅̅ = 1 𝑁 − 1 ∑ 𝑋1𝑗 − 𝑋1 ̅̅̅ 𝑁 𝑗=1 2 𝑋21 𝑚21 𝑋22 𝑚22 L L 𝑋2𝑁 𝑚2𝑁 𝑋2 ̅̅̅ = 1 𝑁 ∑ 𝑥2𝑗 𝑁 𝑗=1 𝑣2̅̅̅ = 1 𝑁 − 1 ∑ 𝑋2𝑗 − 𝑋2 ̅̅̅ 𝑁 𝑗=1 : M M M M M M R 𝑋 𝑟1 𝑚 𝑟1 𝑋 𝑟2 𝑚 𝑟2 L L 𝑋 𝑟𝑁 𝑚 𝑟𝑁 𝑋 𝑟 ̅̅̅ = 1 𝑁 ∑ 𝑥 𝑟𝑗 𝑁 𝑗=1 𝑣𝑟̅ = 1 𝑁 − 1 ∑ 𝑋 𝑟𝑗 − 𝑋 𝑟 ̅̅̅ 𝑁 𝑗=1 We now have the platform to present estimators for risk means and variances for each group. The mean is assumed in the Bulhmann-Straub model to be constant for each risk group over time. Periods Claims and exposures over the risk groups 1 𝑋11 𝑋12 …… ……. ……. 𝑋1𝑁1 𝑚11 𝑚12 …….. …….. …….. 𝑚1𝑁1 2 𝑋21 𝑋22 …….. ……. ……. 𝑋2𝑁2 𝑚21 𝑚22 …….. ……. …….. 𝑚2𝑁2 M M M M M M M r 𝑋 𝑟1 𝑋 𝑟2 …… ……. ……... 𝑋 𝑟𝑁𝑟 𝑚 𝑟1 𝑚 𝑟1 ……. ……. ……… 𝑚 𝑟𝑁𝑟
  • 28. 20 Table 4.3 Claims Period Total Exposure Sample Mean Sample Estimator for Variance 1 𝑁1 𝑚1 = ∑ 𝑚1𝑗 𝑁 𝑗=1 𝑋1 ̅̅̅ = 1 𝑚1 ∑ 𝑚1𝑗 𝑋1𝑗 𝑁 𝑗=1 𝑉1 ̂ = 1 𝑁1 − 1 ∑ 𝑚1𝑗(𝑋1𝑗 − 𝑋1 ̅̅̅ 𝑁1 𝑗=1 )^2 2 𝑁1 𝑚2 = ∑ 𝑚2𝑗 𝑁 𝑗=1 𝑋2 ̅̅̅ = 1 𝑚2 ∑ 𝑚2𝑗 𝑋2𝑗 𝑁 𝑗=1 𝑉2 ̂ = 1 𝑁2 − 1 ∑ 𝑚2𝑗(𝑋2𝑗 − 𝑋1 ̅̅̅ 𝑁2 𝑗=1 )^2 . . . .R 𝑁𝑟 𝑚 𝑟 = ∑ 𝑚 𝑟𝑗 𝑁 𝑗=1 𝑋𝑖 ̅ = 1 𝑚 𝑟 ∑ 𝑚 𝑟𝑗 𝑋 𝑟𝑗 𝑁 𝑗 =1 𝑉𝑟 ̂ = 1 𝑁𝑟 − 1 ∑ 𝑚 𝑟𝑗(𝑋 𝑟𝑗 − 𝑋1 ̅̅̅ 𝑁𝑟 𝑗=1 )^2 Total 𝑚 = ∑ 𝑚 𝑖 𝑟 𝑖=1 𝑋̅ = 1 𝑚 ∑ 𝑚 𝑖 𝑋𝑖 ̅ 𝑟 𝑖=1 𝐸𝑃𝑉̂ = 1 ∑ 𝑁𝑖 − 1𝑟 𝑖=1 ∑(𝑁𝑖 𝑟 𝑖=1 − 1) ∗ 𝑉𝑖 ̂ 4.2.1 Set up of the computation 𝑚 𝑖𝑗 = 𝑒𝑥𝑝𝑜𝑠𝑢𝑟𝑒 𝑜𝑓 𝑖 𝑡ℎ 𝑟𝑖𝑠𝑘 𝑔𝑟𝑜𝑢𝑝 𝑎𝑡 𝑡ℎ𝑒 𝑗 𝑡ℎ 𝑝𝑒𝑟𝑖𝑜𝑑 𝑚 𝑖 = 𝑡ℎ𝑒 total exposure for the 𝑖𝑡ℎ risk group 𝑚 = 𝑡𝑜𝑡𝑎𝑙 𝑒𝑥𝑝𝑜𝑠𝑢𝑟𝑒 𝑜𝑓 𝑎𝑙𝑙 𝑟𝑖𝑠𝑘 𝑔𝑟𝑜𝑢𝑝𝑠 𝑋̅𝑖 = 𝑒𝑥𝑝𝑜𝑠𝑢𝑟𝑒 𝑤𝑒𝑖𝑔ℎ𝑡𝑒𝑑 𝑚𝑒𝑎𝑛 𝑜𝑓 𝑡ℎ𝑒 𝑖 𝑡ℎ 𝑟𝑖𝑠𝑘 𝑔𝑟𝑜𝑢𝑝 𝑋̅ = 𝑡ℎ𝑒 𝑜𝑣𝑒𝑟𝑎𝑙𝑙 𝑤𝑒𝑖𝑔ℎ𝑡𝑒𝑑 𝑚𝑒𝑎𝑛 𝐸𝑃𝑉 = 𝑡ℎ𝑒 𝑚𝑒𝑎𝑛 𝑜𝑓 𝑝𝑟𝑜𝑐𝑒𝑠𝑠 𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒
  • 29. 21 𝑉𝐻𝑀 = 𝑡ℎ𝑒 𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒 𝑜𝑓 ℎ𝑦𝑝𝑜𝑡ℎ𝑒𝑡𝑖𝑐𝑎𝑙 𝑚𝑒𝑎𝑛 𝑘 = 𝐸𝑃𝑉 𝑉𝐻𝑀 k =credibility parameter of the model, 𝑍 = 𝑚 𝑖 𝑚 𝑖+𝑘 4.3 DETERMINATION OF PARAMETERS mi = mij+ mij + mij + mij+ ... +mij m1 = 34 + 88 + 97 + 117 + 151 = 487 m2 = 27 + 76 + 132 + 116 + 140 = 491 m3 = 58 + 102 + 154 + 163 + 148 = 625 m4 = 64 + 93 + 131 + 140 + 194 = 622 m5 = 93 + 89 + 134 + 167 + 211 = 694 . . . . . . m12 = 78 + 146 + 149 + 138 + 168 = 697 m = ∑ mi r i=1 It implies that, 𝑚 = 487 + 491 + 625+. . .+678 + 697 = 777
  • 30. 22 𝑋̅𝑖 = 1 𝑚 𝑖 ∑ 𝑚 𝑖𝑗 𝑋𝑖𝑗 𝑛𝑖 𝑗=1 𝑓𝑜𝑟 𝑖 = 1, …, 𝑟 𝑋̅1 = 1 487 (34 ∗ 72528 + 88 ∗ 153294+ . . .+151 ∗ 56244) = 345213 𝑋̅2 = 1 491 (27 ∗ 37652 + 76 ∗ 196662+ . . .+140 ∗ 342263) = 403870 . . . . . . 𝑋̅12 = 1 697 (78 ∗ 176208 + 146 ∗ 335822 + . . . +186 ∗ 915079) = 605858 𝑋̅ = 1 𝑚 ∑ 𝑚 𝑖 𝑋̅𝑖 𝑟 𝑖=1 𝑋̅ = 1 7776 (487 ∗ 345213 + 491 ∗ 403870+ . . .+697 ∗ 608585) = 795927 𝐸𝑃𝑉 = ∑ ∑ 𝑚 𝑖𝑗(𝑋𝑖𝑗 − 𝑋̅𝑖) 2𝑛𝑖 𝑗=1 𝑟 𝑖=1 ∑ ( 𝑛𝑖 − 1)𝑟 𝑖=1 ∑ 𝑚 𝑖1 𝑛 𝑗 ( 𝑋𝑖1 − 𝑋1 ̅̅̅) = 34 ∗ (72528− 345213)2 +.. . +151 ∗ (561244− 345213)^2 =1.52116 *1013 ∑ 𝑚 𝑖2 𝑛 𝑗 ( 𝑋𝑖2 − 𝑋2 ̅̅̅) = 27 ∗ (37652− 403870)2 +. . .+140 ∗ (342263 − 403870)^2
  • 31. 23 =1.61116*1013 ∑ 𝑚 𝑖12 𝑛 𝑗 ( 𝑋𝑖12 − 𝑋12 ̅̅̅̅̅) = 78 ∗ (176208 − 608585)+.. . +291(793094 − 608585)2 =4.41216*1015 Numerator of EPV: ∑ ∑ 𝑚 𝑖𝑗(𝑋𝑖𝑗 − 𝑋̅𝑖) 2𝑛𝑖 𝑗=1 𝑟 𝑖=1 = 1.20509*1016 Denominator of EPV: ∑ ( 𝑛𝑖 − 1)𝑟 𝑖=1 = 48 Therefore EPV = 1.20509*1016 /48 = 2.510604167*1014 𝑉𝐻𝑀 = [∑ 𝑚 𝑖( 𝑋̅𝑖 − 𝑋̅)2𝑟 𝑖=1 ] − ( 𝑟 − 1) 𝐸𝑃𝑉 𝑚 − 1 𝑚 ∑ 𝑚 𝑖 2𝑟 𝑖=1 ∑ 𝑚 𝑖 𝑟 𝑖=1 ( 𝑋𝑖 ̅ − 𝑋̅)2 = 489 ∗ (345213− 795927)2 +. .. +697 ∗ (608585 − 795927)^2 = 2.92338*1015 Numerator of VHM = 2.92338*1015 - 11* 2.510604167*1014 =1.617154163*1014 Denominator of VHM = 7776 − 1 7776 (4872 + 4912 +. . .+6972 )
  • 32. 24 =7117.112912 VHM = 1.617154163 ∗ 1014 /7117.112912 = 22724099798 k = EPV VHM 𝑘 = 2.510604167∗10^14 22724099798 = 11048.155 𝑍𝑖 = 𝑚𝑖 𝑚𝑖+𝑘 𝑍1 = 487 487 + 11048.155 = 0.04221876516 𝑍2 = 491 491 + 11048.155 = 0.04255077603 𝑍3 = 625 625 + 11048.155 = 0.05354165176 𝑍4 = 622 622 + 11048.155 = 0.05329834951
  • 33. 25 𝑍5 = 694 694 + 11048.155 = 0.059103290667 𝑍6 = 802 802 + 11048.155 = 0.06767843965 𝑍7 = 693 693 + 11048.155 = 0.05902315403 𝑍8 = 664 664 + 11048.155 = 0.05669323878 𝑍9 = 638 638 + 11048.155 = 0.05459451804 𝑍10 = 685 685 + 11048.155 = 0.05838156915 𝑍11 = 678 678 + 11048.155 = 0.05781946427
  • 34. 26 𝑍12 = 697 697 + 11048.155 = 0.05934361871 The update for the next period therefore will be 𝑈𝑖 = 𝑍𝑖 𝑋̅𝑖 + (1 − 𝑍𝑖) 𝑋̅ U1 = 345213 ∗ 0.04221876516+ (1 − 0.04221876516)∗ 795927 U1 = 776898.41 We do the same for the rest of the risk groups for the first period as follow: U2 = 403870 ∗ 0.04255077603 + (1 − 0.04255077603)∗ 795927 U2 = 779244.67 U3 = 406299 ∗ 0.05354165176 + (1 − 0.05354165176)∗ 795927 U3 = 775065.67 𝑈4 = 541635 ∗ 0.05329834951+ (1 − 0.05329834951)∗ 795927 𝑈4 = 782373.66 𝑈5 = 580375 ∗ 0.059103290667+ (1 − 0.059103290667)∗ 795927 𝑈5 = 783187.17 𝑈6 = 782893 ∗ 0.06767843965+ (1 − 0.06767843965)∗ 795927 𝑈6 = 795044.88
  • 35. 27 𝑈7 = 521916 ∗ 0.05902315403+ (1 − 0.05902315403)∗ 795927 𝑈7 = 779754.01 𝑈8 = 464957 ∗ 0.05669323878+ (1 − 0.05669323878)∗ 795927 𝑈8 = 777163.24 𝑈9 = 622617 ∗ 0.05459451804+ (1 − 0.05459451804)∗ 795927 𝑈9 = 786465.22 𝑈10 = 2576265 ∗ 0.05838156915+ (1 − 0.05838156915)∗ 795927 𝑈10 = 899865.92 𝑈11 = 1393275 ∗ 0.05781946427+ (1 − 0.05781946427)∗ 795927 𝑈11 = 830465.34 𝑈12 = 608585 ∗ 0.05934361871 + (1 − 0.05934361871)∗ 795927 𝑈12 = 784809.45 These are the results of the update of all risk groups in the first period; 𝑈1 = 776898, 𝑈2 =779245, 𝑈3 =775066, 𝑈4 = 782374, 𝑈5 = 873187, 𝑈6 = 795045 𝑈7 = 779754, 𝑈8 = 777163, 𝑈9 = 786465, 𝑈10 = 899866, 𝑈11 = 830465, 𝑈12 = 784809 These results are added to the previews observations and the procedure repeated to find new updates for the next two periods and the results compared with the observed values of such periods as follow in table 4.4 below.
  • 36. 28 4.4 Chi-square Goodness of Fit Test The chi square goodness of fit is applied in order to determine whether there is significant difference in the annual claims. 0H  There is significant difference in the annual claims. 1H  There is no significance between the annual claims. 4.4.1 Test Statistics 𝑋𝑖 2 = ∑ 𝑂𝑖 − 𝐸𝑖 𝐸𝑖 𝑛 𝑖=1 Where 𝑂𝑖 𝑎𝑟𝑒 𝑡ℎ𝑒 𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑 𝑐𝑙𝑎𝑖𝑚𝑠 𝑎𝑛𝑑 𝐸𝑖 𝑎𝑟𝑒 𝑡ℎ𝑒 𝑒𝑥𝑝𝑒𝑐𝑡𝑒𝑑 𝑐𝑙𝑎𝑖𝑚𝑠 and 𝑋𝑖 2 𝑖𝑠 𝑡ℎ𝑒 𝑐ℎ𝑖 𝑠𝑞𝑢𝑎𝑟𝑒 𝑡𝑒𝑠𝑡 𝑜𝑓 𝑡ℎ𝑒 𝑚𝑜𝑑𝑒𝑙 𝑤𝑖𝑡ℎ 𝑑𝑒𝑔𝑟𝑒𝑒 𝑜𝑓 𝑓𝑟𝑒𝑒𝑑𝑜𝑚 𝑑 = ( 𝐶 − 1) ∗ (𝑅 − 1) C (number of columns) and R (number of rows), therefore d = (12-1)*(5-1) = 44 𝑋1 2 = (777992 − 776898)2 776898 + (787632 − 784915)2 784915 + (789376 − 791556)2 791556 𝑋1 2 = 16.95
  • 37. 29 Table 4.4 RISK GROUP 1 RISK GROUP 2 RISK GROUP 3 OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED 777992 776898 780154 779245 771189 775066 787632 784915 789553 786218 785033 783858 789376 791556 789173 792033 795063 791154 RISK GROUP 4 RISK GROUP 5 RISK GROUP 6 OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED 781641 782374 779164 783187 1067813 795045 784476 787897 789116 788317 1365187 794920 789916 792627 788193 792767 1198562 795195 RISK GROUP 7 RISK GROUP 8 RISK GROUP 9 OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED 781134 779754 776935 777163 791649 786465 790641 786409 779867 784981 789671 790145 788154 792070 788981 791554 796615 793444 RISK GROUP 10 RISK GROUP 11 RISK GROUP 12 OBSERVED EXPECTED OBSERVED EXPECTED OBSERVED EXPECTED 901194 899866 941781 830465 785553 784809 849134 853097 1007616 814574 790179 789216 820964 816511 1378450 802389 789649 793094 𝑋2 2 = (780154 − 7792245)^2 779245 + (789553− 786218)^2 786218 + (789173 − 792033)^2 792033
  • 38. 30 𝑋2 2 = 25.53419837 𝑋3 2 = (771189− 775066)^2 775066 + (785033 − 783858)^2 783858 + (795063− 791154)^2 791154 𝑋3 2 = 40.46858875 𝑋4 2 = (781641− 782374)^2 782374 + (784476 − 787897)^2 787897 + (789916− 792627)^2 792627 𝑋4 2 = 24.81286973 𝑋5 2 = (779164− 783187)^2 783187 + (789116 − 788317)^2 788317 + (788193− 792767)^2 792767 𝑋5 2 = 47.86523665 𝑋6 2 = (1067813 − 795045)^2 795045 + (1365187− 794920)^2 794920 + (1198562− 795195)^2 795195 𝑋6 6 = 489794.8022 𝑋7 2 = (781134− 779754)^2 779754 + (790641 − 786409)^2 786409 + (788154− 792070)^2 792070 𝑋7 2 = 44.57722693 𝑋8 2 = (776935− 777163)^2 777163 + (779867 − 784981)^2 784981 + (788981− 791554)^2 791554 𝑋8 2 = 41.74732544 𝑋9 2 = (791649− 786465)^2 786465 + (789671 − 790145)^2 790145 + (796615− 793444)^2 793444 𝑋9 2 = 47.12769467
  • 39. 31 𝑋10 2 = (901194 − 899866)^2 899866 + (849134− 853097)^2 853097 + (820964 − 816511)^2 816511 𝑋10 2 = 44.65495069 𝑋11 2 = (941781 − 830465)^2 830465 + (1007616− 814574)^2 814574 + (1378450− 802389)^2 802389 𝑋11 2 = 433086.0535 𝑋12 2 = (785553 − 784809)^2 784809 + (790179− 789216)^2 789216 + (789649 − 793094)^2 793094 𝑋12 2 = 16.84457374 4.4.2 The critical region: X44 ,0.05 2 = 55.8 4.4.3 Decision: We fail to reject the null hypothesis 𝐻0on the significant level of α = 0.05 for risk groups 1, 2, 3, 4, 5, 7, 8, 9, 10, and 12 as the computed chi square values are less than the critical region. We then conclude that the model is good fit for their claims data. However we reject the null hypothesis 𝐻0 on the significant level of α = 0.05 for risk groups 6 and 11 as the computed chi square values are greater than the critical region and conclude that the model is not a good fit for their claims data. 4.5 Testing the model in terms of Cedi equivalence 4.5.1 Test statistics Pure Premium per Expected, PP/EXP. (GHc) = 𝐸𝑋𝑃𝐸𝐶𝑇𝐸𝐷 𝐸𝑋𝑃𝑂𝑆𝑈𝑅𝐸
4.5 Testing the Model in Terms of Cedi Equivalence

4.5.1 Test statistics

Pure premium per expected claims: PP/EXP (GHc) = EXPECTED / EXPOSURE
Pure premium per observed claims: PP/OBS (GHc) = OBSERVED / EXPOSURE
Difference: DIFF (GHc) = PP/EXP (GHc) − PP/OBS (GHc)

The model is tested in cedi terms by computing the pure premium from the forecast values and from the observed values and comparing the two, to check for any shortfall in value over the period under consideration. The test covers only the ten risk groups that fit the model. The results are tabulated as follows.

Table 4.5

YEAR ONE
EXPECTED (GHc)   OBSERVED (GHc)   EXPOSURE   PP/EXP (GHc)   PP/OBS (GHc)   DIFF (GHc)
776898.44        777992           188        4132.44        4138.26         -5.82
779244.70        780154           154        5060.03        5065.94         -5.90
775065.71        771189           162        4784.36        4760.43         23.93
782373.67        781641           193        4053.75        4049.95          3.80
783187.18        779164           250        3132.75        3116.66         16.09
779754.02        781134           236        3304.04        3309.89         -5.85
777163.25        776935           229        3393.73        3392.73          1.00
786465.26        791649           287        2740.30        2758.36        -18.06
899865.93        901194           171        5262.37        5270.14         -7.77
784809.44        785553           264        2972.76        2975.58         -2.82
TOTAL                                                                       -1.40
YEAR TWO
EXPECTED (GHc)   OBSERVED (GHc)   EXPOSURE   PP/EXP (GHc)   PP/OBS (GHc)   DIFF (GHc)
784914.85        787632           146        5376.13        5394.74        -18.61
786218.03        789553           150        5241.45        5263.69        -22.23
783858.28        785033           173        4530.97        4537.76         -6.79
787896.63        784476           187        4213.35        4195.06         18.29
788316.55        789116           238        3312.25        3315.61         -3.36
786409.46        790641           203        3873.94        3894.78        -20.85
784980.69        779867           158        4968.23        4935.87         32.37
790145.15        789671           165        4788.76        4785.88          2.87
853096.80        849134           189        4513.74        4492.77         20.97
789216.40        790179           281        2808.60        2812.02         -3.43
TOTAL                                                                       -0.77
YEAR THREE
EXPECTED (GHc)   OBSERVED (GHc)   EXPOSURE   PP/EXP (GHc)   PP/OBS (GHc)   DIFF (GHc)
791555.80        789376           173        4575.47        4562.87         12.60
792032.68        789193           126        6285.97        6263.44         22.54
791154.01        795063           134        5904.13        5933.31        -29.17
792626.51        789916           116        6832.99        6809.62         23.37
792766.59        788193           201        3944.11        3921.36         22.75
792069.86        788154           280        2828.82        2814.84         13.99
791554.07        788981           281        2816.92        2807.76          9.16
793444.36        796615           168        4722.88        4741.76        -18.87
816510.92        820964           182        4486.32        4510.79        -24.47
793093.67        789649           291        2725.41        2713.57         11.84
TOTAL                                                                       43.73

4.5.2 Interpretation of results

In year one and year two the insurance firm makes losses of GHc 1.40 and GHc 0.77 respectively, but makes a gain of GHc 43.73 in year three. These losses are negligible compared with the total premium collected over the period, which helps confirm the validity of the model. The values quoted are the differences between the pure premium based on expected values and the pure premium based on observed values over the same periods.
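The cedi-value test itself is a straightforward computation; the sketch below reproduces it in Python using the year-one rows of Table 4.5 (list contents as above, names ours):

```python
# Pure-premium comparison: PP = claims / exposure, computed from both the
# forecast (expected) and the observed claims; DIFF = PP/EXP - PP/OBS.
# Year-one rows of Table 4.5 for the ten fitted risk groups.

rows = [  # (expected, observed, exposure)
    (776898.44, 777992, 188), (779244.70, 780154, 154),
    (775065.71, 771189, 162), (782373.67, 781641, 193),
    (783187.18, 779164, 250), (779754.02, 781134, 236),
    (777163.25, 776935, 229), (786465.26, 791649, 287),
    (899865.93, 901194, 171), (784809.44, 785553, 264),
]

total_diff = sum(expd / n - obs / n for expd, obs, n in rows)
print(f"Year-one total difference: GHc {total_diff:.2f}")  # about -1.40
```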
5 DETERMINATION OF CLAIMS FREQUENCY

We now compute the Buhlmann-Straub credibility predictions of the number of claims per hundred policyholders for the twelve risk groups for the next period. For each risk group the first column below gives the claims per 100 insured and the second the number insured (in hundreds); the final row of each block is the column total of the insured figures.

RISK GROUP 1        RISK GROUP 2        RISK GROUP 3        RISK GROUP 4
Claims   Insured    Claims   Insured    Claims   Insured    Claims   Insured
1.8      0.03       1.8      0.03       1.2      0.06       1.3      0.06
1.9      0.09       2.5      0.08       2.4      0.10       3.1      0.09
1.9      0.10       2.5      0.13       2.7      0.15       2.3      0.13
2.5      0.12       1.6      0.12       1.7      0.16       4.2      0.14
1.1      0.15       1.7      0.14       3.4      0.15       1.6      0.19
         0.49                0.49                0.63                0.62
RISK GROUP 5        RISK GROUP 6        RISK GROUP 7        RISK GROUP 8
Claims   Insured    Claims   Insured    Claims   Insured    Claims   Insured
2.0      0.09       2.3      0.08       1.6      0.08       0.5      0.06
3.3      0.09       1.7      0.15       1.3      0.12       2.0      0.10
4.2      0.13       2.3      0.17       2.7      0.14       2.2      0.14
1.5      0.17       3.2      0.22       2.8      0.17       1.8      0.18
2.3      0.21       4.1      0.18       1.69     0.20       2.3      0.18
         0.69                0.80                0.69                0.66

RISK GROUP 9        RISK GROUP 10       RISK GROUP 11       RISK GROUP 12
Claims   Insured    Claims   Insured    Claims   Insured    Claims   Insured
1.8      0.09       1.4      0.08       0.9      0.09       1.1      0.08
2.2      0.08       1.3      0.15       1.2      0.12       2.8      0.15
1.7      0.14       2.8      0.13       2.1      0.12       1.2      0.15
2.4      0.18       3.7      0.16       1.9      0.17       3.1      0.14
1.3      0.15       2.6      0.17       2.2      0.17       2.5      0.19
         0.64                0.69                0.68                0.70

5.1.1 The total exposures for the individual risk groups are:

m1 = 4.87, m2 = 4.91, m3 = 6.25, m4 = 6.22, m5 = 6.94, m6 = 8.02, m7 = 6.93, m8 = 6.64, m9 = 6.38, m10 = 6.85, m11 = 6.78, m12 = 6.97

5.1.2 The total exposure across all risk groups is:

m = 4.87 + 4.91 + 6.25 + 6.22 + 6.94 + 8.02 + 6.93 + 6.64 + 6.38 + 6.85 + 6.78 + 6.97 = 77.76
5.1.3 The exposure-weighted means of the claim frequencies are:

X̄1 = (1.8 × 0.34 + 1.9 × 0.88 + 1.9 × 0.97 + 2.5 × 1.17 + 1.1 × 1.51) / 4.87 = 1.789
X̄2 = (1.8 × 0.27 + 2.5 × 0.76 + 2.5 × 1.32 + 1.6 × 1.16 + 1.7 × 1.40) / 4.91 = 2.0208
X̄3 = (1.2 × 0.58 + 2.4 × 1.02 + 2.7 × 1.54 + 1.7 × 1.63 + 3.4 × 1.48) / 6.25 = 2.4168
X̄4 = (1.1 × 0.64 + 3.1 × 0.93 + 2.3 × 1.31 + 4.2 × 1.40 + 1.6 × 1.94) / 6.22 = 2.5055
X̄5 = (2.0 × 0.93 + 3.3 × 0.89 + 4.2 × 1.43 + 1.5 × 1.67 + 2.3 × 2.11) / 6.94 = 2.6169
X̄6 = (2.3 × 0.84 + 1.7 × 1.54 + 2.3 × 1.65 + 3.2 × 2.20 + 4.1 × 1.80) / 8.02 = 2.8385
X̄7 = (1.6 × 0.80 + 1.3 × 1.15 + 2.6 × 1.36 + 2.7 × 1.67 + 1.7 × 1.95) / 6.93 = 2.0397
X̄8 = (0.5 × 0.61 + 2.0 × 1.01 + 2.2 × 1.40 + 1.8 × 1.79 + 2.3 × 1.83) / 6.64 = 1.9331
X̄9 = (1.8 × 0.90 + 2.2 × 0.83 + 1.7 × 1.39 + 2.4 × 1.78 + 1.3 × 1.48) / 6.38 = 1.8817
X̄10 = (1.4 × 0.80 + 1.3 × 1.47 + 2.8 × 1.26 + 3.7 × 1.62 + 2.6 × 1.70) / 6.85 = 2.4778
X̄11 = (0.9 × 0.94 + 1.2 × 0.83 + 2.1 × 1.24 + 1.9 × 1.69 + 2.2 × 1.73) / 6.78 = 1.6907
X̄12 = (1.1 × 0.78 + 2.8 × 1.46 + 1.2 × 1.49 + 3.1 × 1.39 + 2.5 × 1.86) / 6.97 = 2.2515

5.1.4 Computing the EPV

Numerator of EPV = 0.34(1.8 − 1.789)² + 0.88(1.9 − 1.789)² + 0.97(1.9 − 1.789)² + 1.17(2.5 − 1.789)² + 1.51(1.1 − 1.789)² + … + 0.78(1.1 − 2.2515)² + 1.46(2.8 − 2.2515)² + 1.49(1.2 − 2.2515)² + 1.39(3.1 − 2.2515)² + 1.86(2.5 − 2.2515)² = 39.94803206

EPV = 39.94803206 / (4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4 + 4) = 39.94803206 / 48 = 0.832250668
The overall mean is:

X̄ = (1.789 × 4.87 + 2.0208 × 4.91 + … + 1.6907 × 6.78 + 2.2515 × 6.97) / 77.76 = 2.23368325

VHM = [4.87(1.789 − 2.23368)² + 4.91(2.0208 − 2.23368)² + … + 6.97(2.2515 − 2.23368)² − 11 × 0.8322] / [77.76 − (4.87² + 4.91² + … + 6.78² + 6.97²) / 77.76] = 0.000704845

k = EPV / VHM = 0.832250668 / 0.000704845 = 1180.757628

Z1 = 4.87 / (4.87 + 1180.757628) = 0.004107529

Z2 = 4.91 / (4.91 + 1180.757628) = 0.004141127
Z3 = 6.25 / (6.25 + 1180.757628) = 0.005265341

Z4 = 6.22 / (6.22 + 1180.757628) = 0.005240200

Z5 = 6.94 / (6.94 + 1180.757628) = 0.005843238

Z6 = 8.02 / (8.02 + 1180.757628) = 0.006746426

Z7 = 6.93 / (6.93 + 1180.757628) = 0.005834868

Z8 = 6.64 / (6.64 + 1180.757628) = 0.005592061
Z9 = 6.38 / (6.38 + 1180.757628) = 0.005374272

Z10 = 6.85 / (6.85 + 1180.757628) = 0.005767898

Z11 = 6.78 / (6.78 + 1180.757628) = 0.005709293

Z12 = 6.97 / (6.97 + 1180.757628) = 0.005868349
5.1.5 Buhlmann-Straub predicted claim frequency for each risk group

Cf1 = 1.789 × 0.004107529 + (1 − 0.004107529) × 2.23368325 = 2.231744543
Cf2 = 2.0208 × 0.004141127 + (1 − 0.004141127) × 2.23368325 = 2.232770562
Cf3 = 2.4168 × 0.005265341 + (1 − 0.005265341) × 2.23368325 = 2.234672633
...
Cf11 = 1.6907 × 0.005709293 + (1 − 0.005709293) × 2.23368325 = 2.230967558
Cf12 = 2.2515 × 0.005868349 + (1 − 0.005868349) × 2.23368325 = 2.233757715

5.1.6 The total claim frequency predicted from the historical exposures is:

C = 4.87 × 2.23174 + 4.91 × 2.23277 + … + 6.78 × 2.23097 + 6.97 × 2.23376 = 173.7014931
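The whole Buhlmann-Straub chain above (exposure-weighted means, EPV, VHM, credibility factors and predictions) can be collected into one routine. The following is a sketch under the assumption that the claim frequencies and exposures are held as parallel per-group lists; only two groups are filled in as placeholders, so its output is illustrative, and the figures above are reproduced only when all twelve groups are supplied:

```python
# Buhlmann-Straub estimation: structural parameters and predicted claim
# frequencies. freq[i][t] and expo[i][t] hold the claim frequency and the
# exposure of risk group i in period t (12 groups x 5 periods in the study).

freq = [[1.8, 1.9, 1.9, 2.5, 1.1],           # risk group 1
        [1.8, 2.5, 2.5, 1.6, 1.7]]           # risk group 2 (extend to all 12)
expo = [[0.34, 0.88, 0.97, 1.17, 1.51],
        [0.27, 0.76, 1.32, 1.16, 1.40]]

r = len(freq)                                 # number of risk groups
m_i = [sum(es) for es in expo]                # per-group exposures
m = sum(m_i)                                  # total exposure

# Exposure-weighted group means and the overall mean.
xbar_i = [sum(f * e for f, e in zip(fs, es)) / mi
          for fs, es, mi in zip(freq, expo, m_i)]
xbar = sum(mi * xi for mi, xi in zip(m_i, xbar_i)) / m

# Expected process variance: exposure-weighted within-group squared
# deviations over the pooled degrees of freedom (n_i - 1 per group).
epv = (sum(e * (f - xi) ** 2
           for fs, es, xi in zip(freq, expo, xbar_i)
           for f, e in zip(fs, es))
       / sum(len(fs) - 1 for fs in freq))

# Variance of hypothetical means (Buhlmann-Straub estimator).
vhm = ((sum(mi * (xi - xbar) ** 2 for mi, xi in zip(m_i, xbar_i))
        - (r - 1) * epv)
       / (m - sum(mi ** 2 for mi in m_i) / m))

k = epv / vhm
z = [mi / (mi + k) for mi in m_i]             # credibility factors
cf = [zi * xi + (1 - zi) * xbar for zi, xi in zip(z, xbar_i)]
print([round(c, 6) for c in cf])              # predicted claim frequencies
```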
6 CHAPTER FIVE

6.1 CONCLUSION AND RECOMMENDATIONS

6.1.1 Findings

The two test statistics indicate that the model fits the given data set except for risk groups 6 and 11, which contain some obvious outliers. These two risk groups could be modelled with methods that accommodate extreme and irregular claims.

The elements considered in insurance premium calculation are the pure risk premium, the risk margin, the profit margin, sales commission to sales agents, administrative expenses, financial gains on investments, and state tax (Wutherich, 2014). We computed the pure premium for each policyholder in the twelve risk groups under consideration; the results are shown in Table 4.5 above. The first-year pure premium per policyholder is GHc 4132.44 for risk group 1, GHc 5060.03 for risk group 2, GHc 4784.36 for risk group 3, GHc 4053.75 for risk group 4, GHc 3132.75 for risk group 5, GHc 3304.04 for risk group 7, GHc 3393.73 for risk group 8, GHc 2740.30 for risk group 9, GHc 5262.37 for risk group 10, and GHc 2972.76 for risk group 12. These are the pure premiums that must be charged to each policyholder to cover losses and loss-related expenses; the loading is the part of the premium needed to cover sales expenses and the profit margin.

The predicted claim frequencies for the twelve risk groups for the next period, based on the historical exposures, are 10.87 claims for risk group 1 and 10.96, 13.97, 13.90, 15.52, 17.95, 15.47, 14.82, 14.24, 15.31, 15.13 and 15.57 claims for risk groups 2 to 12 respectively. The total expected claim frequency for all twelve risk groups for the next period is therefore 173.7 claims.
7 RECOMMENDATIONS

Actuarial pricing methodology generally consists of a collection of forecasting methods, economic models, and trend analyses (Stein, 1995). The resulting factors, ratios, and averages are used to generate rates that support the financial, operational, and strategic needs the insurance enterprise must meet to remain solvent and competitive. To achieve these goals the actuary has the responsibility of choosing the best forecasting method when formulating the model. The model employed above exhibits a good fit for most of the risk groups under consideration, but it remains the duty of the actuary to use informed judgment to identify other variables that may change over time. The model needs constant review to keep pace with the dynamism of the insurance industry as a whole.

The financial performance of an insurance product can be vulnerable to a variety of complex socio-economic, legal, and operational forces, yet most traditional ratemaking methods do not fully capture these dynamics. Moreover, much of the available data is such that simple, static methods cannot model it to produce meaningful predictions. The actuary therefore needs to develop more robust models that accommodate the extreme and irregular claims which do not fit these models and methods and which consequently distort predictions of the loss reserves. This could be done by extending the conventional normal error distribution to the generalized-t distribution, which encompasses several long-tailed distributions such as the Student-t and the exponential power distributions. These distributions can be expressed as scale mixtures of uniform distributions, which aids model implementation and the detection of outliers through the mixing parameters.
Dynamic ratemaking concepts should be employed at all times to include product-related variables, and should incorporate product-management ideas through evaluation of, and hypotheses about, the range of product and environmental forces. This goes beyond the usual historical averages for making projections: it includes studying the underlying distributions to identify meaningful patterns or unanticipated correlations, as well as continuous analysis of the company's operational systems and expenses (Stein, 1995).

7.1 Assumptions

The relationship of product cost is assumed not to vary under any combination of rating variables within the time frame under consideration. This assumption is an oversimplification: it ignores the interplay of exposure issues and the complexity of valuing the claims profiles of the insured.

7.2 Limitations

• As a static method, the model fails to capture all the costs and systems associated with the sale of insurance products. It focuses solely on historical data, which may not reflect the current experience of the firm, since the range of exposures and risk environments insured, the competitive insurance market, and the company's operations are all dynamic.
• No correlations are presupposed among the ratemaking variables. This permits the use of data whose level of detail allows only a one-dimensional rating-plan analysis, with a constant rating factor applied throughout the ratemaking calculations (Stein, 1995).
APPENDICES

The following is the set of historical data used for the analysis. For each risk group, CLAIMS is the aggregate claim amount and NUMBERS the corresponding number of policies.

RISK GROUP ONE            RISK GROUP TWO            RISK GROUP THREE
CLAIMS     NUMBERS        CLAIMS     NUMBERS        CLAIMS     NUMBERS
725283     34             37652      27             106228     58
153294     88             129662     76             251823     102
203667     97             561848     132            479907     154
407347     117            563351     116            444459     163
561244     151            342263     140            511740     148
777992     188            780143     154            771189     162
787632     146            789553     150            785033     173
789376     173            789193     126            795063     134

RISK GROUP FOUR           RISK GROUP FIVE           RISK GROUP SIX
CLAIMS     NUMBERS        CLAIMS     NUMBERS        CLAIMS     NUMBERS
213232     64             185515     93             137114     84
231136     93             316191     89             418161     153
472143     131            668315     134            6447337    165
619666     140            455434     167            1013470    220
789436     194            908885     211            1236720    180
781641     193            779164     250            1067813    200
784476     187            789116     238            1365187    263
789916     116            788193     201            1198562    221
RISK GROUP SEVEN          RISK GROUP EIGHT          RISK GROUP NINE
CLAIMS     NUMBERS        CLAIMS     NUMBERS        CLAIMS     NUMBERS
169106     80             230578     61             460661     90
211137     115            369545     101            140268     83
283162     136            379300     140            454233     139
350533     167            567780     179            979572     178
906305     236            560697     183            720446     148
781134     203            776935     229            791649     287
790641     203            779867     158            789671     165
788154     280            788981     281            796615     168

RISK GROUP TEN            RISK GROUP ELEVEN         RISK GROUP TWELVE
CLAIMS     NUMBERS        CLAIMS     NUMBERS        CLAIMS     NUMBERS
294765     80             194248     94             176208     78
399408     147            194069     118            335822     146
347689     126            4918118    124            637742     149
543466     162            564203     169            696964     138
9121159    170            1146151    173            915079     186
901194     171            941781     172            785553     264
849134     189            1007616    157            790179     281
820964     182            1378450    227            789649     291