Introduction

These study notes are based on four Exam 9 syllabus readings (by Cummins, Butsic, Goldfarb, and Bodoff), corresponding to learning objectives C6 to C9 on the syllabus. The papers address the questions of how to assess risk in the insurance industry and how to use risk measures to allocate the cost of capital.

easypackages::packages("dplyr", "ggplot2", "mvtnorm", "Hmisc")

Various parties have an interest in the solvency of an insurer, including policyholders, regulators, and the insurer’s owners.

An insurer’s solvency protection is provided by its capital, which is the excess of assets over liabilities (also called surplus or equity). In addition to determining the amount of capital needed to provide the desired level of protection, there is a need to allocate the capital among the company’s lines of business. (Technically, it is the cost of capital that is being allocated, because in the event of extreme losses, any single line of business has access to the entire capital of the company.) Reasons for doing so include measuring the risk-adjusted return of each line of business and supporting pricing and underwriting decisions.

Two different types of capital are considered when assessing insurer solvency:

The key difference between the two is that if there are risk margins included in premiums or conservatism in the loss reserves, they will reduce the amount of risk capital required. Otherwise, the two definitions are the same.

The general framework assumes that the company has \(N\) lines of business, with a fraction \(x_i\) of its total equity capital allocated to each line of business, where \[ \sum_{1 \leq i \leq N} x_i \leq 1 \] If the total capital of the firm is \(C\), then line \(i\) receives an allocation of \(C_i = x_i C\). Therefore, \[ \sum_{1\leq i \leq N} C_i \leq C \] Note that this framework allows for the fact that some of the firm’s capital may remain unallocated. The costs of capital arise from various “friction costs” that reduce the returns for insurers on the investment of their capital. (Theoretically, in the absence of these costs, the insurer would earn the equilibrium rate of return on their invested assets, and capital would be effectively costless.)

  1. Agency / informational costs: managers may not act in the best interest of owners in terms of maximizing the value of the firm. There are also costs associated with information asymmetry in the insurance industry (adverse selection, moral hazard).

  2. Taxation: investment taxes generally lead to a double-taxation scenario, since the insurer’s investment income is taxed, and is later taxed again as dividends / capital gains to investors. As a result, investors could get higher investment returns by investing directly in the market.

  3. Regulation: solvency regulation effectively gives the regulator an “option” on the insurer’s assets, since they have the legal right to take control of the company if its net assets drop below a regulatory threshold. Investment restrictions may lead insurers to hold inefficient portfolios.

The ideas in these readings will be illustrated using the following random data about three correlated lines of business, simulated based on parameters from the Cummins paper.

set.seed(123456)
corr.matrix = matrix(c(1, 0.5, 0.75, 0.5, 1, 0.5, 0.75, 0.5, 1), nrow = 3, ncol = 3)
mu = log(1000) - c(0.375, 0.5, 0.625)^2 / 2
sd = c(0.375, 0.5, 0.625)
random.normal = rmvnorm(1000000, sigma = corr.matrix)
simulated.data = data.frame(line_A_liability = exp(random.normal[,1] * sd[1] + mu[1]), line_B_liability = exp(random.normal[,2] * sd[2] + mu[2]), line_C_liability = exp(random.normal[, 3] * sd[3] + mu[3]))
head(simulated.data)
##   line_A_liability line_B_liability line_C_liability
## 1        1142.5282         816.8318         796.4969
## 2        1311.0265        2847.3639        1834.2847
## 3        2118.6085        3817.7325        3089.7292
## 4         631.0439         463.0765         346.3320
## 5        1178.8774        1722.3228        1720.6342
## 6        1024.3926         695.6029        1264.6935

Risk Measures

Value at Risk

Value at Risk is the amount of loss over a specified time period that will be exceeded only with a specified, small, probability. For a cumulative distribution \(F(x)\), \[ \mathrm{VaR}_q = F^{-1}(q) \] When working with discrete simulation output, the readings select the simulated value immediately below the top \((1 - q)\) of the simulations.

A closely related concept is the exceedence probability. Let \(X_i\) be the random loss variable for a line, and \(C_i\) be the capital allocated to the line. The exceedence probability is \[ \epsilon_i = \mathrm{Pr}(X_i > E[X_i] + C_i) \] The approach to setting capital using Value at Risk is to set a common exceedence probability across all lines, and select the capital accordingly. It is common to express this in terms of ratios of loss to expected loss, giving \[ \epsilon = \mathrm{Pr}\left(\frac{X_i}{E[X_i]} > 1 + \frac{C_i}{E[X_i]}\right) \] In other words, \[ \frac{C_i}{E[X_i]} = \mathrm{VaR}_{1-\epsilon}(X_i / E[X_i]) - 1 \] An exceedence probability curve plots \(1 + C_i / E[X_i]\) on the \(x\)-axis, and the corresponding exceedence probability on the \(y\)-axis.

VaR = function(q, distribution) {
  return(quantile(distribution, probs = q, type = 3))
}
exceedence.curve.data = data.frame(epsilon = 1:50/100)
line.A.mean = mean(simulated.data$line_A_liability)
line.B.mean = mean(simulated.data$line_B_liability)
line.C.mean = mean(simulated.data$line_C_liability)
exceedence.curve.data$line_A_requirement = apply(exceedence.curve.data, 1, function(y){VaR(1 - y['epsilon'], simulated.data$line_A_liability) / line.A.mean})
exceedence.curve.data$line_B_requirement = apply(exceedence.curve.data, 1, function(y){VaR(1 - y['epsilon'], simulated.data$line_B_liability) / line.B.mean})
exceedence.curve.data$line_C_requirement = apply(exceedence.curve.data, 1, function(y){VaR(1 - y['epsilon'], simulated.data$line_C_liability) / line.C.mean})
ggplot(data = exceedence.curve.data, aes(x = line_A_requirement, y = epsilon)) + geom_line(colour = "red") + geom_line(aes(x = line_B_requirement), colour = "blue") + geom_line(aes(x = line_C_requirement), colour = "green") + scale_x_continuous(limits = c(1, 3.25), breaks = seq(1, 3.25, 0.15)) + scale_y_continuous(limits = c(0, 0.5), breaks = seq(0, 0.5, 0.05)) + labs(title = "Exceedence Probability Curves for Simulated Data", x = "1 + C / E[X]", y = "epsilon")
## Warning: Removed 8 rows containing missing values (geom_path).
## Warning: Removed 10 rows containing missing values (geom_path).
## Warning: Removed 14 rows containing missing values (geom_path).

The curve can be interpreted as providing a multiplier to expected losses that produces a given exceedence probability. This can also be calculated directly. Assume we want a common exceedence probability of 5% across all lines. Then:

line.A.VaR.multiplier = VaR(0.95, simulated.data$line_A_liability) / line.A.mean - 1
print(paste0("The captial requirement for Line A is ", line.A.VaR.multiplier, " times expected losses."))
## [1] "The captial requirement for Line A is 0.7286938686675 times expected losses."
line.B.VaR.multiplier = VaR(0.95, simulated.data$line_B_liability) / line.B.mean - 1
print(paste0("The captial requirement for Line B is ", line.B.VaR.multiplier, " times expected losses."))
## [1] "The captial requirement for Line B is 1.00564198153894 times expected losses."
line.C.VaR.multiplier = VaR(0.95, simulated.data$line_C_liability) / line.C.mean - 1
print(paste0("The captial requirement for Line C is ", line.C.VaR.multiplier, " times expected losses."))
## [1] "The captial requirement for Line C is 1.29533776652314 times expected losses."

Disadvantages of the method include:

  • The firm may not have enough total capital to allocate to all of the lines of business; in this case it may need to raise the exceedence probability.

  • This approach does not consider diversification across lines.

  • The approach gives no information about the amount of losses that exceed the probability. In other words, it only considers the probability of ruin, not the depth of ruin.

Bodoff illustrates the drawback that VaR focuses on only a single point of the distribution by considering two independent catastrophes:

wind.prob = 0.2
earthquake.prob = 0.05
wind.loss = 99
earthquake.loss = 100

The possible outcomes are:

bodoff.var.example = merge(data.frame(wind_occurs = c(1, 0)), data.frame(earthquake_occurs = c(1, 0)))
bodoff.var.example = bodoff.var.example %>% mutate(probability = wind.prob^wind_occurs * (1 - wind.prob)^(1 - wind_occurs) * earthquake.prob^earthquake_occurs * (1 - earthquake.prob)^(1 - earthquake_occurs), loss = wind.loss * wind_occurs + earthquake.loss * earthquake_occurs)
bodoff.var.example
##   wind_occurs earthquake_occurs probability loss
## 1           1                 1        0.01  199
## 2           0                 1        0.04  100
## 3           1                 0        0.19   99
## 4           0                 0        0.76    0

A Value at Risk requirement at 99% would result in a capital requirement of 100 in aggregate. Note that:

  • Using the method described by Cummins would result in an allocation of 100 to the Earthquake line and 99 to the Wind line, far exceeding the total capital.

  • Allocating capital only to events that generate the 99th percentile loss would result in an allocation of 100 to Earthquake and none to Wind, despite the significant probability of a 99 loss on the Wind line.

An “alternate coVaR” approach proceeds as follows:

  1. Use all events greater than or equal to the VaR threshold to allocate.

  2. Allocate VaR to each event in proportion to its conditional exceedence probability.

  3. Allocate capital from each event to the lines that contribute to it in proportion to their contribution.

Applying this method to Bodoff’s example, we obtain:

loss.threshold = 100
bodoff.coVaR.example = bodoff.var.example %>% filter(loss >= loss.threshold)
exceedence.prob = sum(bodoff.coVaR.example$probability)
bodoff.coVaR.example = bodoff.coVaR.example %>% mutate(conditional_exc_prob = probability / exceedence.prob, capital_allocated_to_event = loss.threshold*conditional_exc_prob, wind_allocation = capital_allocated_to_event * wind_occurs * wind.loss / loss, eq_allocation = capital_allocated_to_event * earthquake_occurs * earthquake.loss / loss)
bodoff.coVaR.example
##   wind_occurs earthquake_occurs probability loss conditional_exc_prob
## 1           1                 1        0.01  199                  0.2
## 2           0                 1        0.04  100                  0.8
##   capital_allocated_to_event wind_allocation eq_allocation
## 1                         20        9.949749      10.05025
## 2                         80        0.000000      80.00000
print(paste0("The allocation to wind is ", sum(bodoff.coVaR.example$wind_allocation), " and the allocation to earthquake is ", sum(bodoff.coVaR.example$eq_allocation)))
## [1] "The allocation to wind is 9.94974874371859 and the allocation to earthquake is 90.0502512562814"

Proportional Allocation

Proportional allocation is a method for allocating capital to sources of risk in which the total capital is allocated in proportion to some risk measure calculated on the individual lines. The approach is:

  1. Calculate VaR (or other risk measure) for each risk source.

  2. Sum the VaR for the individual risk sources, and convert the numbers from Step 1 into percentages of this total.

  3. Calculate VaR for the aggregate distribution.

  4. Apply the percentages from Step 2 to the aggregate VaR to obtain the capital allocation to individual lines.

The risk measures used in steps 1 and 3 need not be the same – for example, we could determine aggregate VaR at one percentile and allocate using a different percentile, or use TVaR to determine the allocation percentages but VaR to determine the aggregate capital requirement.

This approach can be illustrated using the simulated data above. The aggregate capital requirement is determined using the 99th percentile VaR, and the allocation by line is determined using the 99.5th percentile:

line.A.var.99.5 = VaR(q = 0.995, distribution = simulated.data$line_A_liability)
line.B.var.99.5 = VaR(q = 0.995, distribution = simulated.data$line_B_liability)
line.C.var.99.5 = VaR(q = 0.995, distribution = simulated.data$line_C_liability)
prop.alloc.total = line.A.var.99.5 + line.B.var.99.5 + line.C.var.99.5
total.var.99 = VaR(q = 0.99, distribution = simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability)
prop.alloc.example = data.frame(line = c("A", "B", "C", "total"), individual_allocation = c(line.A.var.99.5, line.B.var.99.5, line.C.var.99.5, prop.alloc.total))
prop.alloc.example = prop.alloc.example %>% mutate(percentage_allocation = individual_allocation / prop.alloc.total, proportional_allocation = percentage_allocation * total.var.99)
prop.alloc.example
##    line individual_allocation percentage_allocation
## 1     A              2445.921             0.2502780
## 2     B              3198.284             0.3272631
## 3     C              4128.614             0.4224589
## 4 total              9772.820             1.0000000
##   proportional_allocation
## 1                1883.098
## 2                2462.336
## 3                3178.591
## 4                7524.026

Incremental Allocation

In an incremental allocation approach, each risk source is assigned a capital in proportion to the difference between the aggregate VaR of the firm with and without that risk source. The algorithm is:

  1. Calculate \(\mathrm{VaR}_q\) for the firm in aggregate.

  2. For each risk source, calculate \(\mathrm{VaR}_q\) for the firm without the risk source. Subtract this from the result from Step 1.

  3. Sum the results from Step 2, and use this to convert each of the values in Step 2 to percentages.

  4. Apply the percentages from Step 3 to the value from Step 1 to determine the allocation to each risk.

The method can be applied to the simulated data as follows:

total.var.99 = VaR(q = 0.99, distribution = simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability)
line.A.var.99 = VaR(q = 0.99, distribution = simulated.data$line_B_liability + simulated.data$line_C_liability)
line.B.var.99 = VaR(q = 0.99, distribution = simulated.data$line_A_liability + simulated.data$line_C_liability)
line.C.var.99 = VaR(q = 0.99, distribution = simulated.data$line_A_liability + simulated.data$line_B_liability)
incremental.var.example = data.frame(line = c("A", "B", "C"), all_other_var = c(line.A.var.99, line.B.var.99, line.C.var.99))
incremental.var.example = incremental.var.example %>% mutate(incremental_var = total.var.99 - all_other_var)
total.incremental.var = sum(incremental.var.example$incremental_var)
incremental.var.example = incremental.var.example %>% mutate(percentage_allocation = incremental_var / total.incremental.var, capital_allocated = total.var.99 * percentage_allocation)
incremental.var.example
##   line all_other_var incremental_var percentage_allocation
## 1    A      5599.026        1924.999             0.2773631
## 2    B      5473.280        2050.746             0.2954812
## 3    C      4559.412        2964.614             0.4271557
##   capital_allocated
## 1          2086.887
## 2          2223.208
## 3          3213.930
print(paste0("The total incremental VaR is ", total.incremental.var, " and the total capital allocated is ", total.var.99))
## [1] "The total incremental VaR is 6940.35829909041 and the total capital allocated is 7524.02551304879"

Conditional Tail Expectation

Conditional tail expectation, or Tail Value at Risk, is the average value of outcomes that exceed the \(q\)th percentile: \[ \mathrm{TVaR}_q(X) = E[X | X \geq F^{-1}(q)] \]
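
For the simulated data, TVaR can also be estimated directly by averaging the simulated outcomes at or above the chosen percentile. A minimal sketch at the 99th percentile for line A, using the VaR helper defined earlier:

# Empirical TVaR: average of the simulated values at or above the 99th percentile
tail.threshold = VaR(0.99, simulated.data$line_A_liability)
line.A.tvar.direct = mean(simulated.data$line_A_liability[simulated.data$line_A_liability >= tail.threshold])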

Advantages of using CTE include:

  • It considers more than a single point of the distribution.

Disadvantages of using CTE include:

  • It does not consider “bad losses” that are close to, but do not exceed, the threshold.

  • It does not provide information about the probability of default.

The first disadvantage can be illustrated by continuing Bodoff’s Wind / Earthquake example using a TVaR at the 95th percentile:

bodoff.tvar.example = bodoff.var.example %>% mutate(satisfies_condition = loss >= 100, wind_allocation = (wind_occurs * wind.loss * satisfies_condition * probability) / 0.05, earthquake_allocation = (earthquake_occurs * earthquake.loss * probability) / 0.05)
bodoff.tvar.example
##   wind_occurs earthquake_occurs probability loss satisfies_condition
## 1           1                 1        0.01  199                TRUE
## 2           0                 1        0.04  100                TRUE
## 3           1                 0        0.19   99               FALSE
## 4           0                 0        0.76    0               FALSE
##   wind_allocation earthquake_allocation
## 1            19.8                    20
## 2             0.0                    80
## 3             0.0                     0
## 4             0.0                     0
print(paste0("The wind allocation is ", sum(bodoff.tvar.example$wind_allocation), " and the earthquake allocation is ", sum(bodoff.tvar.example$earthquake_allocation)))
## [1] "The wind allocation is 19.8 and the earthquake allocation is 100"

Earthquake gets a significantly higher allocation, even though the wind event is nearly as damaging and occurs with a much greater probability.

Co-CTE

The co-measure approach from the paper Riskiness Leverage Models by Rodney Kreps can be used to allocate a TVaR-based capital requirement among lines of business. Each line of business is assigned an amount of capital equal to its average loss in the scenarios where the total firm result exceeds the percentile threshold.

This function can be used to calculate co-TVaR:

co.tvar = function(q, line, aggregate) {
  x.q = quantile(aggregate, probs = q, type = 3) #Type 3 takes an actual simulated value rather than interpolating between them.
  leverage = (aggregate > x.q) / (1-q)
  line.mean = mean(line)
  co.tvar = mean(leverage * (line - line.mean)) + mean(line)
  return(co.tvar)
}
aggregate = simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability

The allocation based on co-TVaR at the 99th percentile is:

co.TVaR.allocation = data.frame(line = c("A", "B", "C", "Total"), capital_allocated = c(co.tvar(0.99, simulated.data$line_A_liability, aggregate), co.tvar(0.99, simulated.data$line_B_liability, aggregate), co.tvar(0.99, simulated.data$line_C_liability, aggregate), co.tvar(0.99, aggregate, aggregate)))
co.TVaR.allocation
##    line capital_allocated
## 1     A          2211.152
## 2     B          2568.436
## 3     C          4101.815
## 4 Total          8881.403

Proportional Allocation

Both TVaR and co-TVaR can be used as a means of allocating an aggregate capital requirement based on VaR by determining the proportions of the allocation, using the algorithm described earlier. Here is a proportional allocation based on the TVaR of each line:

line.A.tvar.99 = co.tvar(0.99, simulated.data$line_A_liability, simulated.data$line_A_liability)
line.B.tvar.99 = co.tvar(0.99, simulated.data$line_B_liability, simulated.data$line_B_liability)
line.C.tvar.99 = co.tvar(0.99, simulated.data$line_C_liability, simulated.data$line_C_liability)
total.tvar.by.line = line.A.tvar.99 + line.B.tvar.99 + line.C.tvar.99
prop.alloc.example.tvar = data.frame(line = c("A", "B", "C", "Total"), TVaR = c(line.A.tvar.99, line.B.tvar.99, line.C.tvar.99, total.tvar.by.line))
prop.alloc.example.tvar = prop.alloc.example.tvar %>% mutate(percentage_allocation = TVaR / total.tvar.by.line, capital_allocated = total.var.99 * percentage_allocation)
prop.alloc.example.tvar
##    line      TVaR percentage_allocation capital_allocated
## 1     A  2549.238             0.2452808          1845.499
## 2     B  3387.383             0.3259249          2452.267
## 3     C  4456.519             0.4287943          3226.259
## 4 Total 10393.141             1.0000000          7524.026

Alternately, co-TVaR can be used to determine the allocation weights:

prop.alloc.example.cotvar = co.TVaR.allocation %>% select(line, co_TVaR = capital_allocated)
all.line.tvar.99 = co.tvar(0.99, aggregate, aggregate)
prop.alloc.example.cotvar = prop.alloc.example.cotvar %>% mutate(percentage_allocation = co_TVaR / all.line.tvar.99, capital_allocated = percentage_allocation * total.var.99)
prop.alloc.example.cotvar
##    line  co_TVaR percentage_allocation capital_allocated
## 1     A 2211.152             0.2489642          1873.213
## 2     B 2568.436             0.2891926          2175.893
## 3     C 4101.815             0.4618431          3474.920
## 4 Total 8881.403             1.0000000          7524.026

The data were generated so that there is more correlation between lines A and C than between the other pairs – the lower allocation for line B indicates that co-TVaR is better at picking up the diversification potential of this line than the TVaR allocation method. In this example, the percentage allocation based on VaR is similar to the one based on TVaR, likely due to the similarity of the distributions used in the simulation.

Expected Policyholder Deficit / Insolvency Put

The Expected Policyholder Deficit (EPD) is the expected value of the amount by which liabilities exceed assets. Using the simulated data, assume that the company charges a premium equal to expected losses plus a 10% profit loading:

premium = 1.10 * (line.A.mean + line.B.mean + line.C.mean)
epd.simulation = simulated.data %>% mutate(net_loss = line_A_liability + line_B_liability + line_C_liability - premium, shortfall = (net_loss >= 0))
head(epd.simulation)
##   line_A_liability line_B_liability line_C_liability   net_loss shortfall
## 1        1142.5282         816.8318         796.4969  -544.0439     FALSE
## 2        1311.0265        2847.3639        1834.2847  2692.7744      TRUE
## 3        2118.6085        3817.7325        3089.7292  5726.1695      TRUE
## 4         631.0439         463.0765         346.3320 -1859.4483     FALSE
## 5        1178.8774        1722.3228        1720.6342  1321.9336      TRUE
## 6        1024.3926         695.6029        1264.6935  -315.2117     FALSE
epd = mean(epd.simulation$net_loss * epd.simulation$shortfall)
print(paste0("The expected policyholder deficit is ", epd))
## [1] "The expected policyholder deficit is 396.28038462612"

In calculating this expectation, we include the non-shortfall scenarios as zeros (i.e. they contribute to the denominator of the average). This contrasts with the TVaR approach, in which the average is taken only over losses that exceed the threshold.

The EPD ratio is the ratio of EPD to expected liabilities:

epd.ratio = epd / (line.A.mean + line.B.mean + line.C.mean)
print(paste0("The EPD ratio is ", epd.ratio))
## [1] "The EPD ratio is 0.132097435121705"

The philosophy of the EPD ratio approach is to determine the amount of capital needed in order to bring the EPD ratio to a suitably low level.

The EPD can be studied by viewing its present value as a financial option, using the following notation:

  • \(L_i\) is the value of the liabilities at time \(i\)

  • \(A_i\) is the value of the assets at time \(i\)

  • \(C_i = A_i - L_i\) is the value of capital at time \(i\)

  • The company is setting its capital to achieve a fixed EPD ratio \(d\).

The nature of the option varies depending on whether assets or liabilities are considered to be fixed.

  • When assets are fixed, the insurer effectively owns a call option on the liabilities with a strike price equal to the assets. When the liabilities exceed the assets, the insurer becomes insolvent and effectively receives the value of its liabilities in exchange for its total assets. The value of this option is the present value of \(\max(0, L_1 - A_1)\).

  • When liabilities are fixed, the insurer effectively owns a put option on the assets with a strike price equal to the liabilities. If the value of the assets falls below the liabilities, the insurer effectively sells all its assets and gains the value of the liabilities. The value of this option is the present value of \(\max(0, L_1 - A_1)\).

  • When both liabilities and assets are random, the insurer effectively owns a put option on its capital with a strike price of zero. If capital ever becomes negative, the insurer becomes insolvent and replaces its negative capital with a capital of zero. The value of this option is the present value of \(\max(0, -C_1)\).

An alternate interpretation of these option values is that they are the fair premium for a guaranty fund.
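
To make the option interpretation concrete, the value of the insolvency put in the fixed-liability case can be estimated by simulation under a risk-neutral lognormal model for the assets. The risk-free rate, asset volatility, and capital level below are illustrative assumptions rather than values from the readings:

# Sketch: insolvency put value with fixed liabilities and lognormally distributed
# assets (all parameter values are illustrative assumptions)
risk.free.rate = 0.02
fixed.liability = 1000
initial.assets = 1.1 * fixed.liability
asset.volatility = 0.15
terminal.assets = initial.assets * exp((risk.free.rate - asset.volatility^2 / 2) + asset.volatility * rnorm(1000000))
insolvency.put.value = exp(-risk.free.rate) * mean(pmax(0, fixed.liability - terminal.assets))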

Multiperiod Time Horizons

The option formulation assumes that the EPD is defined relative to a single fixed time horizon. The rationale is that even in a multi-period case, it suffices to use a single time period equal to the time between valuations, on the grounds that the insurer can adjust its capital based on the results of the valuation. Butsic illustrates this using a binary stochastic model.

To illustrate using an example, assume that an insurer has a loss reserve with an expected value of $1000. With equal probability, the reserve may either increase or decrease by 20%. Assume that the insurer sets capital equal to 10% of the loss reserve:

liability.0 = 1000
assets.0 = 1.1 * liability.0
time.period.1 = data.frame(first_step = c(1.2, 0.8))
time.period.2 = merge(time.period.1, data.frame(second_step = c(1.2, 0.8)))
time.period.3 = merge(time.period.2, data.frame(third_step = c(1.2, 0.8)))

Assess the EPD in time period 1:

time.period.1 = time.period.1 %>% mutate(probability = 1 / nrow(time.period.1), liability = liability.0 * first_step, policyholder_deficit = pmax(0, liability - assets.0))
time.period.1
##   first_step probability liability policyholder_deficit
## 1        1.2         0.5      1200                  100
## 2        0.8         0.5       800                    0
epd.ratio.1 = sum(time.period.1$policyholder_deficit * time.period.1$probability) / liability.0 
print(paste0("The first period EPD ratio is ", epd.ratio.1))
## [1] "The first period EPD ratio is 0.05"

The insurer’s capital policy corresponds to a 5% EPD ratio. Examine the second period:

time.period.2 = time.period.2 %>% mutate(probability = 1 / nrow(time.period.2), liability = liability.0 * first_step * second_step, policyholder_deficit = pmax(0, liability - assets.0))
time.period.2
##   first_step second_step probability liability policyholder_deficit
## 1        1.2         1.2        0.25      1440                  340
## 2        0.8         1.2        0.25       960                    0
## 3        1.2         0.8        0.25       960                    0
## 4        0.8         0.8        0.25       640                    0
epd.ratio.2 = sum(time.period.2$policyholder_deficit * time.period.2$probability) / liability.0 
print(paste0("The second period EPD ratio is ", epd.ratio.2))
## [1] "The second period EPD ratio is 0.085"

Continue to the third time period:

time.period.3 = time.period.3 %>% mutate(probability = 1 / nrow(time.period.3), liability = liability.0 * first_step * second_step * third_step, policyholder_deficit = pmax(0, liability - assets.0))
time.period.3
##   first_step second_step third_step probability liability
## 1        1.2         1.2        1.2       0.125      1728
## 2        0.8         1.2        1.2       0.125      1152
## 3        1.2         0.8        1.2       0.125      1152
## 4        0.8         0.8        1.2       0.125       768
## 5        1.2         1.2        0.8       0.125      1152
## 6        0.8         1.2        0.8       0.125       768
## 7        1.2         0.8        0.8       0.125       768
## 8        0.8         0.8        0.8       0.125       512
##   policyholder_deficit
## 1                  628
## 2                   52
## 3                   52
## 4                    0
## 5                   52
## 6                    0
## 7                    0
## 8                    0
epd.ratio.3 = sum(time.period.3$policyholder_deficit * time.period.3$probability) / liability.0 
print(paste0("The third period EPD ratio is ", epd.ratio.3))
## [1] "The third period EPD ratio is 0.098"

Re-visit the second period. In the event that the insurer remains solvent, the liability must have decreased during the first step. Assume that the insurer would release assets according to its 10% capital policy:

liability.1 = 800
assets.1 = 1.1 * liability.1
time.period.2b = time.period.2 %>% filter(first_step == 0.8) %>% mutate(probability = 1 / nrow(time.period.2), liability = liability.0 * first_step * second_step, policyholder_deficit = pmax(0, liability - assets.1))
time.period.2b
##   first_step second_step probability liability policyholder_deficit
## 1        0.8         1.2        0.25       960                   80
## 2        0.8         0.8        0.25       640                    0
epd.ratio.2b = sum(time.period.2b$policyholder_deficit * time.period.2b$probability) / liability.1 
print(paste0("The second period EPD ratio, assuming release of capital, is ", epd.ratio.2b))
## [1] "The second period EPD ratio, assuming release of capital, is 0.025"

The capital adjustment procedure can be generalized, assuming that the insurer writes additional business, by introducing the following notation:

  • \(P\) is premium net of expenses, written at the beginning of the period. This is assumed to be a percentage \(p\) of the initial capital, \(P = pC_0\).

  • \(L_P\) is the loss resulting from the added premium, and \(b = L_P/P\) is the loss ratio

  • \(r\) is the random rate of return on the assets

  • \(g\) is the random change in the value of the liabilities (with an expected value equal to the risk-free discount rate)

  • The insurer sets its initial capital based on a percentage, \(c\), of liabilities; in other words, \(A_0 = (1 + c) L_0\). At the end of the first year, \(c_1 = C_1 / L_0\) is the ratio of capital to original liabilities.

Then \[ A_1 = (A_0 + P)(1 + r) \] and \[ L_1 = L_0(1+g) + L_P \] Therefore, \[ c_1 = (A_1 - L_1) / L_0 = (1 + c + pc)(1+r) - (1+g) - bpc \] To determine the capital requirement, treat \(c\) as an unknown constant. The probability distributions of \(r\), \(g\), and the loss ratio \(b\) then determine the expected policyholder deficit as a function of \(c\), and we solve for the value of \(c\) that gives the desired EPD ratio. (Some more concrete examples appear below.)
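
A minimal sketch of how this solve might be carried out by simulation follows; the normal distributions assumed for \(r\) and \(g\) and the values chosen for \(p\) and \(b\) are illustrative only:

# Sketch: solve for the capital ratio c that achieves a target EPD ratio, using
# c_1 = (1 + c + pc)(1 + r) - (1 + g) - bpc. All assumptions below are illustrative.
r = rnorm(100000, mean = 0.05, sd = 0.10)  # random return on assets
g = rnorm(100000, mean = 0.03, sd = 0.08)  # random change in liability value
p = 0.5                                    # net premium as a fraction of initial capital
b = 0.95                                   # loss ratio on the new premium
epd.ratio.for.c = function(capital.ratio) {
  c.1 = (1 + capital.ratio + p * capital.ratio) * (1 + r) - (1 + g) - b * p * capital.ratio
  mean(pmax(0, -c.1))  # EPD per dollar of initial liability
}
# Find the capital ratio that produces a 1% EPD ratio
target.c = uniroot(function(x) epd.ratio.for.c(x) - 0.01, interval = c(0, 2))$root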

The ability to re-allocate capital, which allows the analysis to be limited to a single time period, is an essential aspect of this method, because the assets and liabilities have different durations, and the longer the time to realization, the greater the risk. A common time period allows risk to be fairly compared between them. The approach is based on changes in the expected market value at the end of the period, even if the risk has not been fully realized.

Advantages of this approach include:

  • It is more informative than VaR because it is based on the expected value of amount lost by policyholders.

  • It is consistent with financial theory of pricing risky debt contracts

Disadvantages of the approach include:

  • Calculating EPD by line does not reflect the fact that the firm goes insolvent in whole, not by line. All lines of business have access to the firm’s entire capital.

  • It does not account for the benefits of diversification across lines of business.

Asset Amounts Certain

In the case of a discrete loss distribution (such as that produced by a simulation), and fixed assets, the expected policyholder deficit is \[ D_L = \sum_{x > A} p(x)(x - A) \] where \(p(x)\) is the probability density function for losses. The EPD ratio is then calculated as \(D_L / L\).

Butsic illustrates the idea with the following example, which compares two insurers with fixed assets of $13,000 but different loss distributions:

fixed.asset.epd.example = data.frame(assets = rep(13000,3), probability = c(0.2, 0.6, 0.2), insurer_A_loss = c(6900, 10000, 13100), insurer_B_loss = c(2000, 10000, 18000))
fixed.asset.epd.example = fixed.asset.epd.example %>% mutate(policyholder_deficit_A = pmax(0, insurer_A_loss - assets), policyholder_deficit_B = pmax(0, insurer_B_loss - assets))
fixed.asset.epd.example
##   assets probability insurer_A_loss insurer_B_loss policyholder_deficit_A
## 1  13000         0.2           6900           2000                      0
## 2  13000         0.6          10000          10000                      0
## 3  13000         0.2          13100          18000                    100
##   policyholder_deficit_B
## 1                      0
## 2                      0
## 3                   5000

The EPD and EPD ratio can be calculated as follows:

insurer.A.expected.loss = sum(fixed.asset.epd.example$probability * fixed.asset.epd.example$insurer_A_loss)
insurer.A.EPD = sum(fixed.asset.epd.example$probability * fixed.asset.epd.example$policyholder_deficit_A)
insurer.A.EPD.ratio = insurer.A.EPD / insurer.A.expected.loss
print(paste0("Insurer A has an EPD of ", insurer.A.EPD, " and expected losses of ", insurer.A.expected.loss, " for an EPD ratio of ", insurer.A.EPD.ratio))
## [1] "Insurer A has an EPD of 20 and expected losses of 10000 for an EPD ratio of 0.002"
insurer.B.expected.loss = sum(fixed.asset.epd.example$probability * fixed.asset.epd.example$insurer_B_loss)
insurer.B.EPD = sum(fixed.asset.epd.example$probability * fixed.asset.epd.example$policyholder_deficit_B)
insurer.B.EPD.ratio = insurer.B.EPD / insurer.B.expected.loss
print(paste0("Insurer B has an EPD of ", insurer.B.EPD, " and expected losses of ", insurer.B.expected.loss, " for an EPD ratio of ", insurer.B.EPD.ratio))
## [1] "Insurer B has an EPD of 1000 and expected losses of 10000 for an EPD ratio of 0.1"

Cummins illustrates how a capital requirement based on EPD can be set by producing a graph with the asset-to-liability ratio on the \(x\)-axis, and the EPD ratio on the \(y\)-axis. The following function can be used to calculate the EPD ratio for a given loss distribution and asset-to-liability ratio.

EPD.ratio = function(loss.distribution, asset.to.liability.ratio) {
  liability = mean(loss.distribution)
  assets = liability * asset.to.liability.ratio
  PD = pmax(0, loss.distribution - assets)
  EPD = mean(PD)
  return(EPD / liability)
}

Apply the function over a range of asset-to-liability ratios:

EPD.ratio.curves = data.frame(asset_to_liability = 20:64 / 20)
EPD.ratio.curves$line_A_EPD_ratio = apply(EPD.ratio.curves, 1, function(y){EPD.ratio(loss.distribution = simulated.data$line_A_liability, asset.to.liability.ratio = y['asset_to_liability'])})
EPD.ratio.curves$line_B_EPD_ratio = apply(EPD.ratio.curves, 1, function(y){EPD.ratio(loss.distribution = simulated.data$line_B_liability, asset.to.liability.ratio = y['asset_to_liability'])})
EPD.ratio.curves$line_C_EPD_ratio = apply(EPD.ratio.curves, 1, function(y){EPD.ratio(loss.distribution = simulated.data$line_C_liability, asset.to.liability.ratio = y['asset_to_liability'])})
ggplot(data = EPD.ratio.curves, aes(x = asset_to_liability, y = line_A_EPD_ratio)) + geom_line(colour = "red") + geom_line(aes(y = line_B_EPD_ratio), colour = "blue") + geom_line(aes(y = line_C_EPD_ratio), colour = "green") + geom_hline(yintercept = 0.05)

The capital can be set by selecting the asset-to-liability ratio corresponding to the target EPD ratio.

Assume that losses are normally distributed with coefficient of variation \(k\), and let \(c = C/L\) be the ratio of capital to expected loss. Let \(\phi\) be the density function for the standard normal distribution, and let \(\Phi\) be its cumulative distribution function. Then the EPD ratio when assets are riskless is \[ d_L = k\phi(-c/k) - c\Phi(-c/k) \]

Unpacking this formula, the process for determining the EPD ratio is:

  1. Determine \(c\), the ratio of capital to expected loss.

  2. Determine \(k\), the coefficient of variation of losses. Then \(c/k\) is the ratio of capital to standard deviation of losses.

  3. Calculate \(\phi(-c/k) = \phi(c/k)\), and multiply the result by \(k\)

  4. Calculate \(\Phi(-c/k) = 1 - \Phi(c/k)\), multiply the result by \(c\), and subtract from the result from Step 3.

The following function can be used to calculate the EPD ratio over a range of parameters:

normal.liability.EPD.ratio = function(assets, expected.losses, sd.losses) {
  c = (assets - expected.losses) / expected.losses
  k = sd.losses / expected.losses
  first.term = k * dnorm(-c/k)
  second.term = c * pnorm(-c/k)
  return(first.term - second.term)
}

Figure 1 from Butsic’s paper can be replicated and extended by applying the above function over a range of input parameters, assuming expected losses of 1. Rather than showing individual curves, a raster plot is used to show the intermediate EPD ratios as well. Filtering on EPD ratios below 0.1 allows the visualization to focus on the areas of practical interest.

butsic.fig.1 = merge(data.frame(sd_loss = 1:60/100), data.frame(assets = 50:300/100))
butsic.fig.1$epd_ratio = apply(butsic.fig.1, 1, function(y) {normal.liability.EPD.ratio(assets = y['assets'], expected.losses = 1, sd.losses = y['sd_loss'])})
butsic.fig.1 = butsic.fig.1 %>% mutate(capital = assets - 1)
ggplot(data = butsic.fig.1 %>% filter(epd_ratio <= 0.1), aes(x = sd_loss, y = capital, fill = epd_ratio)) + geom_raster() 

Liability Amounts Certain

When assets are random but the liability is fixed, the EPD is calculated in a similar manner except that we only consider cases in which the assets are less than the liabilities. If the assets have a discrete probability distribution \(q(y)\), then the EPD is \[ D_A = \sum_{y < L} q(y)(L - y) \] Butsic illustrates this formula using the following example:

fixed.liability.epd.example = data.frame(assets = c(12000, 6000, 3000), probability = c(0.1, 0.8, 0.1), loss = c(5000, 5000, 5000))
fixed.liability.epd.example = fixed.liability.epd.example %>% mutate(policyholder_deficit = pmax(0, loss - assets))
fixed.liability.epd.example
##   assets probability loss policyholder_deficit
## 1  12000         0.1 5000                    0
## 2   6000         0.8 5000                    0
## 3   3000         0.1 5000                 2000
insurer.expected.assets = sum(fixed.liability.epd.example$probability * fixed.liability.epd.example$assets)
insurer.expected.loss = sum(fixed.liability.epd.example$probability * fixed.liability.epd.example$loss)
insurer.EPD = sum(fixed.liability.epd.example$probability * fixed.liability.epd.example$policyholder_deficit)
insurer.EPD.ratio = insurer.EPD / insurer.expected.loss
print(paste0("The insurer has an EPD of ", insurer.EPD, " and expected losses of ", insurer.expected.loss, " for an EPD ratio of ", insurer.EPD.ratio, ", based on a capital-to-asset ratio of ", (insurer.expected.assets - insurer.expected.loss)/insurer.expected.assets))
## [1] "The insurer has an EPD of 200 and expected losses of 5000 for an EPD ratio of 0.04, based on a capital-to-asset ratio of 0.206349206349206"

If the assets are normally distributed and liabilities are fixed, then the EPD ratio is \[ d_A = \frac{1}{1 - c_A}\left(k_A \phi(-c_A / k_A) - c_A\Phi(-c_A / k_A)\right) \] where \(c_A\) and \(k_A\) are the capital-to-asset ratio and coefficient of variation of assets, respectively. Note that this has the same form as the liability EPD, with the exception that the result is divided by \((1 - c_A)\).

normal.asset.EPD.ratio = function(expected.assets, losses, sd.assets) {
  c = (expected.assets - losses) / expected.assets
  k = sd.assets / expected.assets
  first.term = k * dnorm(-c/k)
  second.term = c * pnorm(-c/k)
  return((first.term - second.term)/(1 - c))
}
normal.asset.EPD.ratio(expected.assets = 15, losses = 12, sd.assets = 5)
## [1] 0.07028031

The impact of different parameters on the EPD ratio can be visualized as follows, assuming losses are equal to 1:

asset.fig = merge(data.frame(sd_assets = 1:60/100), data.frame(expected_assets = 50:300/100))
asset.fig$epd_ratio = apply(asset.fig, 1, function(y) {normal.asset.EPD.ratio(expected.assets = y['expected_assets'], losses = 1, sd.assets = y['sd_assets'])})
asset.fig = asset.fig %>% mutate(capital = expected_assets - 1)
ggplot(data = asset.fig %>% filter(epd_ratio <= 0.1), aes(x = sd_assets, y = capital, fill = epd_ratio)) + geom_raster() 

Merton-Perold Method

The Merton-Perold method is used to allocate capital to lines of business when a target EPD ratio is used as the basis for setting firm-wide capital. The approach for determining the capital allocated to a specified line is:

  1. Calculate the capital needed to achieve a target EPD ratio for a company that includes all the lines except the line of interest.

  2. Calculate the capital needed to achieve a target EPD ratio for the entire company.

  3. Subtract the result from Step 1 from the result from Step 2 to get the marginal capital allocated to the line.

It is often instructive to compare this result to the standalone capital, which is the capital that would be required if the company only wrote the single line of business. The standalone capital will generally be higher, because the Merton-Perold method reflects the impact of diversification.

The method can be illustrated on simulated data using the following function to facilitate the calculation of asset-to-liability ratios:

A.to.L.ratio = function(target.EPD, loss.distribution) {
  EPD.ratio.curves = data.frame(asset_to_liability = 20:64 / 20)
  EPD.ratio.curves$EPD_ratio = apply(EPD.ratio.curves, 1, function(y){EPD.ratio(loss.distribution = loss.distribution, asset.to.liability.ratio = y['asset_to_liability'])})
  # Round to two digits to allow for easier matching to the target EPD ratio
  EPD.ratio.curves = EPD.ratio.curves %>% mutate(EPD_ratio = round(EPD_ratio, 2)) %>% filter(EPD_ratio == target.EPD)
  # If there are multiple records that rounded to the target ratio, return their average
  A.to.L = mean(EPD.ratio.curves$asset_to_liability)
  return(A.to.L)
}
stand.alone.A = (A.to.L.ratio(0.05, simulated.data$line_A_liability) - 1) * line.A.mean
mp.A = (A.to.L.ratio(0.05, simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability) - 1) *(line.A.mean + line.B.mean + line.C.mean) - (A.to.L.ratio(0.05, simulated.data$line_B_liability + simulated.data$line_C_liability) - 1) * (line.B.mean + line.C.mean)
print(paste0("The stand-alone capital for Line A is ", stand.alone.A, " and the Merton-Perold capital is ", mp.A))
## [1] "The stand-alone capital for Line A is 350.219284258089 and the Merton-Perold capital is 150.438697707873"
stand.alone.B = (A.to.L.ratio(0.05, simulated.data$line_B_liability) - 1) * line.B.mean
mp.B = (A.to.L.ratio(0.05, simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability) - 1) *(line.A.mean + line.B.mean + line.C.mean) - (A.to.L.ratio(0.05, simulated.data$line_A_liability + simulated.data$line_C_liability) - 1) * (line.A.mean + line.C.mean)
print(paste0("The stand-alone capital for Line B is ", stand.alone.B, " and the Merton-Perold capital is ", mp.B))
## [1] "The stand-alone capital for Line B is 674.356015716554 and the Merton-Perold capital is 249.414998508202"
stand.alone.C = (A.to.L.ratio(0.05, simulated.data$line_C_liability) - 1) * line.C.mean
mp.C = (A.to.L.ratio(0.05, simulated.data$line_A_liability + simulated.data$line_B_liability + simulated.data$line_C_liability) - 1) *(line.A.mean + line.B.mean + line.C.mean) - (A.to.L.ratio(0.05, simulated.data$line_A_liability + simulated.data$line_B_liability) - 1) * (line.A.mean + line.B.mean)
print(paste0("The stand-alone capital for Line C is ", stand.alone.C, " and the Merton-Perold capital is ", mp.C))
## [1] "The stand-alone capital for Line C is 1100.26101055369 and the Merton-Perold capital is 750.077700618444"

Advantages include:

  • Reflecting diversification results in higher RAROC estimates for the lines, which may lead to better decisions, e.g. avoiding the rejection of projects that would add to the firm’s value.

Disadvantages include:

  • In the terminology used in Donald Mango’s paper on catastrophe risk loading, this is a marginal method at renewal, i.e. we compare the total surplus requirement to the surplus requirement with one line missing. A disadvantage of the Merton-Perold approach is that it is not renewal additive; that is, the sum of the marginal amounts does not equal the total capital requirement.

Myers-Read Method

The Myers-Read method is another marginal allocation method that differs from Merton-Perold in that rather than removing an entire line of business from the firm, it determines how capital changes for a small change in the liabilities of that line, by taking derivatives of the price of the insolvency put option.

Define the following notation:

  • \(L_i\) are the liabilities on line \(i\), with \(L = \sum_{1\leq i \leq n} L_i\)

  • \(P(L_1,\ldots,L_n)\) is the value of the insolvency put, and \(p = P / L\)

  • \(V\) is the asset portfolio

  • \(\sigma_{a,b}\) is the covariance between \(a\) and \(b\), which may be one of the lines, total liabilities, or assets depending on context. With no subscripts, \(\sigma\) is the firm’s overall volatility parameter.

  • \(s\) is the surplus-to-liability ratio for the firm

The formula given by Myers and Read for the surplus required per dollar of liability for line \(i\) is: \[ s_i = s - \frac{\frac{\partial p}{\partial \sigma}}{\frac{\partial p}{\partial s}} \frac{(\sigma_{iL} - \sigma_L^2) - (\sigma_{iV} - \sigma_{LV})}{\sigma} \]

The formula is based on an option on the asset-to-liability ratio with a strike price of 1, which has a current value of \(1+s\). For lognormal distributions, the asset-to-liability ratio \(V/L\) has variance \(\sigma^2 = \sigma_V^2 + \sigma_L^2\). If the variances and correlations of the individual lines are given, then \[ \sigma_L^2 = \sum_{i} w_i^2 \sigma_i^2 + \sum_{i\neq j} w_iw_j \sigma_i\sigma_j\rho_{i,j} \] where \(w_i\) is the proportion of the company’s value that line \(i\) represents in the company’s portfolio.
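
For example, \(\sigma_L\) for the three simulated lines can be assembled from line weights, volatilities, and the correlation matrix. Reusing the simulation inputs from above as the Myers-Read inputs is an illustrative shortcut:

# Sketch: liability portfolio volatility from line weights, volatilities, and
# correlations (the volatilities and correlation matrix are the simulation inputs above)
line.vols = c(0.375, 0.5, 0.625)
line.weights = c(line.A.mean, line.B.mean, line.C.mean) / (line.A.mean + line.B.mean + line.C.mean)
liability.cov = corr.matrix * (line.vols %o% line.vols)  # element-wise rho_ij * sigma_i * sigma_j
sigma.L = sqrt(as.numeric(t(line.weights) %*% liability.cov %*% line.weights))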

Advantages of this approach include:

  • The formula has intuitive appeal: liabilities with high correlation with the whole loss portfolio get higher allocations. Liabilities with high correlations with assets get lower allocations, consistent with the fact that assets will act as a hedge for these liabilities.

  • It avoids the problem of unallocated capital that arises in the Merton-Perold approach.

  • It is consistent with “everyday” underwriting decisions about whether to write a new policy or not.

Disadvantages include:

  • It was not explicitly developed for determining risk-adjusted capital requirements.

  • It relies on valuation of the default put option, and as a result requires greater quantitative resources to calculate relative to other methods.

  • It assumes that the size of a business unit can be changed without affecting the shape of the loss distribution; i.e. it assumes homogeneity of risks. This assumption may only be realistic in situations involving changes to quota share percentages.

Cummins suggests that one way to reconcile the differences between the Merton-Perold and Myers-Read methods is to regard each as suited to a different use case: Merton-Perold for decisions about adding or removing an entire line, and Myers-Read for incremental pricing / underwriting decisions.

Risk-based Capital

Broadly speaking, a Risk-Based Capital (RBC) system involves identifying sources of risk, determining a capital requirement for each risk, and then aggregating the capital required for each risk to determine a firm-wide capital requirement. The capital requirements are typically determined by multiplying some exposure base for the risk by a fixed factor. Desirable properties of a risk-based capital system include:

  • The solvency standard should be the same for all parties involved (e.g. personal versus commercial insureds)

  • Risk-based capital should be objectively determined; two insurers with the same risk measures should have identical risk-based capital. This means that RBC can be expressed as a mathematical formula based on financial data from insurers.

  • The method must distinguish between items that differ materially in their riskiness, e.g. stocks and bonds should be treated differently. Each distinct class is a risk element, and it must be a balance sheet quantity.

Sources of risk typically considered by RBC systems include:

  1. Market Risk / Investment Risk: risk associated with loss in value of current investments such as bonds and stocks; interest rate risk; foreign exchange rate risk. Risk is typically assessed over a 10-day time horizon on the grounds that this is sufficient time to divest from a risk position, and long term projections are difficult to make. In practice, a one-year time horizon is used to match the time horizon used to evaluate liability risk.

  2. Loss Reserve Risk: risk of adverse reserve development, i.e. the probability that actual loss payments will exceed reserves.

  3. Written Premium: pertains to underwriting risk for the current policy period. It is the risk that the loss ratio on new business will be higher than expected.

  4. Credit Risk: risk that agents, reinsurers, or derivative security counterparties will default on obligations. Includes premium receivable risk on retrospectively rated policies. For a reinsurer, even a downgrade in credit rating without a default presents risk because it can trigger a “death spiral” for the reinsurer. Reinsurer credit risk is likely correlated with insurance risk (e.g. catastrophe risk in particular).

  5. Off-balance Sheet risk: pertains to unexpected payments for contracts that do not show up on the balance sheet, e.g. loan guarantees to subsidiaries.

Property Catastrophe Risk is a component of written premium risk, but may also be considered separately. Together with loss reserve risk and written premium risk, these form underwriting risk.

Quantifying Sources of Risk

Reserve Risk

The Mack method is one approach to determining reserve risk. (The approach is examined in detail on Exam 7; these notes just address how to convert the output into an assessment of reserve risk.) The general approach is:

  1. Using the output from the Mack model, calculate the reserve in each year by multiplying the paid loss by (LDF - 1). Sum these values to get the total reserve.

  2. Take the square root of the sum of squares of the standard errors to get the standard error for the aggregate reserve.

  3. Assuming a lognormal distribution, use the method of moments to determine the parameters of the distribution.

  4. Use the lognormal distribution with the given parameters to determine the desired risk measure.

Recall that given the expected value \(M\) and variance \(V\) of the lognormal distribution, the underlying parameters \(\mu\) and \(\sigma\) can be calculated as \[ \sigma^2 = \log\left(1 + \frac{V}{M^2}\right) \] and \[ \mu = \log(M) - \frac{\sigma^2}{2} \]

Assume that the following paid amounts, LDFs, and standard errors for the reserve are given:

goldfarb.data = read.csv("./Data/goldfarb.csv")
goldfarb.data
##      AY paid_loss    LDF standard_error
## 1  1994   3901463  1.018              0
## 2  1995   5339085  1.036          76874
## 3  1996   4909315  1.115         123856
## 4  1997   4588268  1.175         135916
## 5  1998   3873311  1.277         266040
## 6  1999   3691712  1.409         418295
## 7  2000   3483130  1.654         568213
## 8  2001   2864498  2.411         890842
## 9  2002   1363294  4.212         988473
## 10 2003    344014 14.703        1387316

The aggregate reserve and standard error can be calculated as follows:

goldfarb.data = goldfarb.data %>% mutate(reserve = (LDF - 1) * paid_loss)
reserve.total = sum(goldfarb.data$reserve)
std.error.total = sqrt(sum(goldfarb.data$standard_error^2))
cv.total = std.error.total / reserve.total
goldfarb.data
##      AY paid_loss    LDF standard_error    reserve
## 1  1994   3901463  1.018              0   70226.33
## 2  1995   5339085  1.036          76874  192207.06
## 3  1996   4909315  1.115         123856  564571.22
## 4  1997   4588268  1.175         135916  802946.90
## 5  1998   3873311  1.277         266040 1072907.15
## 6  1999   3691712  1.409         418295 1509910.21
## 7  2000   3483130  1.654         568213 2277967.02
## 8  2001   2864498  2.411         890842 4041806.68
## 9  2002   1363294  4.212         988473 4378900.33
## 10 2003    344014 14.703        1387316 4714023.84
print(paste0("The total reserve is ", reserve.total, " with a coefficient of variation of ", cv.total))
## [1] "The total reserve is 19625466.742 with a coefficient of variation of 0.10570585382138"

Using the method of moments, we can determine parameters as follows:

sigma = sqrt(log(1 + cv.total^2))
mu = log(reserve.total) - sigma^2 / 2
print(paste0("The lognormal parameters are mu = ", mu, " and sigma = ", sigma))
## [1] "The lognormal parameters are mu = 16.786782723083 and sigma = 0.105412345565226"

If we set capital at the 99th percentile VaR, we can calculate the required assets as follows:

lognormal.var = exp(qnorm(0.99, mean = mu, sd = sigma))
print(paste0("The surplus required for reserve risk is ", lognormal.var - reserve.total))
## [1] "The surplus required for reserve risk is 5315157.0369023"

Note that this method assesses risk over the entire lifetime of the reserve. In practice, it is more common to assess the risk that the best estimate of the reserve will change in a year, since this puts it on a common time horizon with other balance sheet elements.

Underwriting Risk

The risk assumption horizon is the time period over which the risk associated with new business is assessed; it contrasts with the risk exposure horizon, which is the time period over which the risks are expected to impact the firm.

One approach is to use a loss ratio model to determine a multiplier that can be applied to written premium to determine a capital requirement for new business. Given a loss ratio (discounted to the end of the year) and its standard deviation, this can be done by assuming a lognormal distribution and determining parameters using the method of moments. Goldfarb gives the following example:

discounted.LR = 0.916
LR.cv = 0.2113
LR.sigma = sqrt(log(1 + LR.cv^2))
LR.mu = log(discounted.LR) - LR.sigma^2 / 2
print(paste0("The lognormal distribution parameters are mu = ", LR.mu, " and sigma = ", LR.sigma))
## [1] "The lognormal distribution parameters are mu = -0.10957875921638 and sigma = 0.208996865566798"

Assume that capital requirements are set based on 99th percentile VaR:

LR.var = exp(qnorm(0.99, LR.mu, LR.sigma))
print(paste0("The surplus required for new business risk is equal to ", LR.var - discounted.LR, " times written premium."))
## [1] "The surplus required for new business risk is equal to 0.54135136507388 times written premium."

An alternative approach is to model aggregate loss using separate frequency and severity models. Options include:

  • Analytical solutions: some combinations of frequency and severity distributions in the collective risk model admit closed-form aggregate loss distributions

  • Numerical methods

  • Approximate distributions based on the mean and variance of the collective risk model.

In the classical collective risk model, assuming a claim count distribution \(N\) and severity distribution \(S\), for the aggregate loss distribution \(A\) we have: \[ E[A] = E[N] E[S] \] \[ \mathrm{Var}(A) = E[\mathrm{Var}(A|N)] + \mathrm{Var}(E[A|N]) = E[N] \mathrm{Var}(S) + E[S]^2 \mathrm{Var}(N) \]
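
These moment formulas can be checked by simulation; the Poisson frequency and lognormal severity used below are illustrative assumptions:

# Sketch: check the collective risk model moments by simulation
# (Poisson frequency and lognormal severity are illustrative assumptions)
claim.frequency = 10
sev.mu = 8
sev.sigma = 0.6
claim.counts = rpois(100000, claim.frequency)
aggregate.losses = sapply(claim.counts, function(n) sum(rlnorm(n, sev.mu, sev.sigma)))
sev.mean = exp(sev.mu + sev.sigma^2 / 2)
sev.var = (exp(sev.sigma^2) - 1) * exp(2 * sev.mu + sev.sigma^2)
c(simulated = mean(aggregate.losses), theoretical = claim.frequency * sev.mean)
c(simulated = var(aggregate.losses), theoretical = claim.frequency * sev.var + sev.mean^2 * claim.frequency)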

A more general approach introduces a contagion parameter to reflect shocks to frequency and severity that affect all lines. (For example, inflation affects severity across lines and therefore introduces correlations between them.) The approach assumes a multiplicative frequency shock \(B\) with mean 1 and variance \(b\), and a multiplicative severity shock \(C\) with mean 1 and variance \(c\). In this case, \[ E[A] = E[N]E[S] \] and \[ \mathrm{Var}(A) = E[N] E[S]^2 (1+b) + \mathrm{Var}(N)E[S]^2(b + c + bc) \]
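
One way such shocks might be simulated, building on the frequency and severity assumptions in the previous sketch, is shown below; the gamma-distributed shocks and Poisson base frequency are illustrative choices rather than the readings’ specification:

# Sketch: multiplicative frequency and severity shocks applied to the collective
# risk model above (gamma shocks and Poisson base frequency are assumptions)
freq.shock.var = 0.05   # b: variance of the frequency shock
sev.shock.var = 0.02    # c: variance of the severity shock
freq.shock = rgamma(100000, shape = 1 / freq.shock.var, scale = freq.shock.var)  # mean 1, variance b
sev.shock = rgamma(100000, shape = 1 / sev.shock.var, scale = sev.shock.var)     # mean 1, variance c
shocked.counts = rpois(100000, claim.frequency * freq.shock)
shocked.aggregate = sapply(seq_len(100000), function(i) sev.shock[i] * sum(rlnorm(shocked.counts[i], sev.mu, sev.sigma)))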

RBC Algorithm

The general RBC approach is:

  1. Each of the “RBC proportions” is multiplied by a specified income or balance sheet item to determine an “RBC charge” for the risk. Higher-risk items have higher multipliers, e.g. the multiplier for stocks is larger than that for bonds.

  2. The RBC charges are summed by category, giving the values \(R_1\) through \(R_5\).

  3. The total risk capital required is \(R_T = R_0 + [R_1^2 + R_2^2 + R_3^2 + R_4^2 + R_5^2]^{1/2}\), where \(R_0\) is the risk-based capital for stocks in the firm’s subsidiaries. The square-root combination of \(R_1\) through \(R_5\) is called the covariance adjustment.

The square root adjustment at the end assumes that the five risk categories are uncorrelated, and reflects the benefits of diversification of risk.
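
As a simple numerical illustration of the covariance adjustment (the charge amounts below are hypothetical):

# Sketch: combining hypothetical RBC charges using the covariance adjustment
R0 = 100
rbc.charges = c(R1 = 300, R2 = 150, R3 = 120, R4 = 500, R5 = 400)
total.rbc = R0 + sqrt(sum(rbc.charges^2))
straight.sum = R0 + sum(rbc.charges)  # for comparison: no diversification credit

The square-root combination produces a requirement well below the straight sum of the charges, reflecting the assumed independence of the risk categories.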

Butsic provides an example to illustrate the impact of diversification on the capital requirement. Consider a single-line insurer with fixed assets and two possible loss outcomes, one large and one small:

asset.amount = 6900
small.loss.amount = 2000
small.loss.probability = 0.6
large.loss.amount = 7000
large.loss.probability = 0.4
butsic.diversification.1 = data.frame(probability = c(small.loss.probability, large.loss.probability), loss_amount = c(small.loss.amount, large.loss.amount), asset_amount = c(asset.amount, asset.amount))
butsic.diversification.1 = butsic.diversification.1 %>% mutate(deficit = pmax(0, loss_amount - asset_amount))
butsic.diversification.1
##   probability loss_amount asset_amount deficit
## 1         0.6        2000         6900       0
## 2         0.4        7000         6900     100

The capital-to-loss ratio and EPD ratio can be calculated as follows:

expected.loss = sum(butsic.diversification.1$probability * butsic.diversification.1$loss_amount)
capital.to.loss = (asset.amount - expected.loss) / expected.loss
EPD = sum(butsic.diversification.1$probability * butsic.diversification.1$deficit)
EPD.ratio = EPD / expected.loss
print(paste0("Based on a capital-to-loss ratio of ", capital.to.loss, " the EPD ratio is ", EPD.ratio))
## [1] "Based on a capital-to-loss ratio of 0.725 the EPD ratio is 0.01"

Next, assume the company has two identical but uncorrelated lines:

butsic.diversification.2 = merge(data.frame(line_1_large = c(0,1)), data.frame(line_2_large = c(0,1)))
butsic.diversification.2 = butsic.diversification.2 %>% mutate(probability = large.loss.probability^(line_1_large + line_2_large) * (1 - large.loss.probability)^(2 - line_1_large - line_2_large), loss_amount = (line_1_large + line_2_large) * large.loss.amount + (2 - line_1_large - line_2_large) * small.loss.amount, asset_amount = 2 * asset.amount, deficit = pmax(0, loss_amount - asset_amount))
butsic.diversification.2
##   line_1_large line_2_large probability loss_amount asset_amount deficit
## 1            0            0        0.36        4000        13800       0
## 2            1            0        0.24        9000        13800       0
## 3            0            1        0.24        9000        13800       0
## 4            1            1        0.16       14000        13800     200

Now the capital-to-loss and EPD ratio are:

expected.loss = sum(butsic.diversification.2$probability * butsic.diversification.2$loss_amount)
capital.to.loss = (2 * asset.amount - expected.loss) / expected.loss
EPD = sum(butsic.diversification.2$probability * butsic.diversification.2$deficit)
EPD.ratio = EPD / expected.loss
print(paste0("Based on a capital-to-loss ratio of ", capital.to.loss, " the EPD ratio is ", EPD.ratio))
## [1] "Based on a capital-to-loss ratio of 0.725 the EPD ratio is 0.004"

At the same capital-to-loss ratio, the company is able to achieve a much lower EPD ratio.

The asset amount can be lowered to achieve the same EPD ratio:

asset.amount = 6750
butsic.diversification.2 = merge(data.frame(line_1_large = c(0,1)), data.frame(line_2_large = c(0,1)))
butsic.diversification.2 = butsic.diversification.2 %>% mutate(probability = large.loss.probability^(line_1_large + line_2_large) * (1 - large.loss.probability)^(2 - line_1_large - line_2_large), loss_amount = (line_1_large + line_2_large) * large.loss.amount + (2 - line_1_large - line_2_large) * small.loss.amount, asset_amount = 2 * asset.amount, deficit = pmax(0, loss_amount - asset_amount))
butsic.diversification.2
##   line_1_large line_2_large probability loss_amount asset_amount deficit
## 1            0            0        0.36        4000        13500       0
## 2            1            0        0.24        9000        13500       0
## 3            0            1        0.24        9000        13500       0
## 4            1            1        0.16       14000        13500     500
expected.loss = sum(butsic.diversification.2$probability * butsic.diversification.2$loss_amount)
capital.to.loss = (2 * asset.amount - expected.loss) / expected.loss
EPD = sum(butsic.diversification.2$probability * butsic.diversification.2$deficit)
EPD.ratio = EPD / expected.loss
print(paste0("Based on a capital-to-loss ratio of ", capital.to.loss, " the EPD ratio is ", EPD.ratio))
## [1] "Based on a capital-to-loss ratio of 0.6875 the EPD ratio is 0.01"

In this case, a lower capital charge is required than would otherwise be indicated by simply adding the capital charges for the lines.

The theoretical basis for the square root rule is that if losses are normally distributed, then sums of the random variables corresponding to liabilities and assets will also be normally distributed, with the variance of the sum given by the usual covariance rule. Note that in some cases the covariance contribution is negated because items appear on opposite sides of the balance sheet. For example, if \[ C = A - L \] and both \(A\) and \(L\) are random, then \[ \sigma_C^2 = \sigma_A^2 + \sigma_L^2 - 2\rho_{A, L} \sigma_A\sigma_L \] The link to the capital charge is made through the observation that the capital-to-loss ratio \(c\) is approximately proportional to the coefficient of variation \(k\) of the item: \[ c \approx ak \] Therefore, \[ C \approx akL = a\sigma \] so the capital requirement is approximately proportional to the item's standard deviation. Applying the covariance rule to the capital requirements gives the general formula \[ C^2 = \sum_{1\leq i \leq n} C_i^2 + \sum_{i \neq j} \rho_{i,j} C_iC_j \] where the sign of \(\rho\) is assumed to be reversed for items on opposite sides of the balance sheet. Butsic illustrates these concepts by assuming that we have the following list of balance sheet items, capital ratios, and correlation matrix:

RBC.example = data.frame(element = c("Stocks", "Bonds", "Affiliates", "Loss Reserve", "Property UPR"), amount = c(200, 1000, 100, 800, 100), capital_ratio = c(0.2, 0.05, 0.2, 0.4, 0.20))
RBC.example = RBC.example %>% mutate(RBC = capital_ratio * amount)
RBC.example
##        element amount capital_ratio RBC
## 1       Stocks    200          0.20  40
## 2        Bonds   1000          0.05  50
## 3   Affiliates    100          0.20  20
## 4 Loss Reserve    800          0.40 320
## 5 Property UPR    100          0.20  20

The correlation matrix, with signs switched for items on opposite sides of the balance sheet, is:

RBC.corr = matrix(c(1, 0.2, 1.0, 0, 0, 0.2, 1, 0.2, - 0.3, 0, 1.0, 0.2, 1, 1, 0, 0, -0.3, 1, 1, 0, 0, 0, 0, 0, 1.0), nrow = 5, ncol = 5)
RBC.corr
##      [,1] [,2] [,3] [,4] [,5]
## [1,]  1.0  0.2  1.0  0.0    0
## [2,]  0.2  1.0  0.2 -0.3    0
## [3,]  1.0  0.2  1.0  1.0    0
## [4,]  0.0 -0.3  1.0  1.0    0
## [5,]  0.0  0.0  0.0  0.0    1

The total RBC capital requirement can be calculated as follows:

x = RBC.example$RBC
total.capital = sqrt(colSums(x * (RBC.corr %*% x)))
print(paste0("The total capital requirement is ", total.capital, ", compared to ", sum(RBC.example$RBC), " when the square root rule is not considered."))
## [1] "The total capital requirement is 336.600653594137, compared to 450 when the square root rule is not considered."

A key challenge in using this method is that it is difficult to numerically determine correlations between items, and they often need to be selected judgmentally.

Examples of positively correlated risk elements include:

  • Bonds and loss reserves

  • Loss reserve and LAE reserve

  • Stock and Bonds

Examples of uncorrelated risk elements include:

  • Stock and unearned premium reserve

  • Liability loss reserve and property unearned premium reserve

  • Cash and real estate

Examples of negatively correlated risk elements include:

  • Reinsurance recoverables and loss reserves

  • Loss reserves and income tax liability

  • Loss reserves and stock in property / liability companies

  • Stock and put options

Disadvantages of the approach include:

  • Some of the charges are based on worst-case scenarios and are not tailored to the firm’s businesses. In particular, correlations between the firm’s lines of business are not considered.

  • The square root rule is only exact when distributions are normal, and risk measures are proportional to standard deviation.

Goldfarb provides some alternative options for aggregating risk across categories:

  • Closed form solutions: it may be possible to derive analytical solutions for the aggregate risk distribution, but only in very simplified cases.

  • Approximation methods: assume all distributions are normal or lognormal, and derive model parameters from the moments or percentiles of the distribution.

  • Simulation: provides an intuitive interpretation, but raises run-time and stability concerns. May make it difficult to thoroughly test the model and its assumptions. A key consideration is that correlations need to be reflected in the simulation, e.g. by using copulas.

Percentile Layer of Capital

Assume we have a capital requirement \(R(\alpha)\) that is a function of a percentile \(\alpha\), for example, \(R(\alpha) = 2 \times \mathrm{VaR}_\alpha\). The percentile layer of capital between \(\alpha\) and \(\alpha + j\) is \[ R(\alpha + j) - R(\alpha) \] In the special case where \(R(\alpha) = \mathrm{VaR}_{\alpha}\), for a loss distribution \(F(x)\), the percentile layer of capital is \[ F^{-1}(\alpha + j) - F^{-1}(\alpha) \]

The capital allocation philosophy is based on the idea that the layers can be allocated to events in the following manner:

  1. Capital in a layer is only allocated to events that penetrate the layer

  2. Capital in the layer is allocated to each such event in proportion to its conditional exceedence probability, i.e. the ratio of its probability of occurrence to the total probability of all events that penetrate the layer.

  3. All layers up to the selected VaR threshold should be allocated to events.

  4. Within each event, allocate its capital to the lines of business in proportion to their contribution to the event.

  5. If premium is given, convert the allocated capital to a surplus requirement by discounting allocated capital to time 0, and subtracting premium net of expenses (“contributed capital”).

Variations on this approach include:

  • An alternate, equivalent approach is to isolate a single loss event and, for every layer below the lesser of that loss amount and the VaR threshold, calculate the amount from step 2, then sum the results.

  • To produce an allocation based on TVaR rather than VaR, add an additional step in which the layer of capital between VaR and TVaR is allocated to events which exceed the VaR threshold, in proportion to each event’s average loss in excess of the threshold.

To illustrate the ideas using Bodoff’s second example, assume we have 100 simulations of an insurer that offers both wind and earthquake insurance, with each simulation having equal probability:

pcl.data = data.frame(simulation = 100:1, cause = c("Wind and EQ", rep("EQ", 4), rep("Wind", 19), rep("None", 76)), loss = c(150, rep(100, 4), rep(50, 19), rep(0, 76)))
head(pcl.data)
##   simulation       cause loss
## 1        100 Wind and EQ  150
## 2         99          EQ  100
## 3         98          EQ  100
## 4         97          EQ  100
## 5         96          EQ  100
## 6         95        Wind   50

The probability of each event is:

pr.wind.eq = sum(pcl.data$cause == "Wind and EQ") / nrow(pcl.data)
pr.eq.only = sum(pcl.data$cause == "EQ") / nrow(pcl.data)
pr.wind.only = sum(pcl.data$cause == "Wind") / nrow(pcl.data)
pr.none = sum(pcl.data$cause == "None") / nrow(pcl.data)

We can plot the simulation in a Lee diagram (in which the cumulative probability is on the \(x\)-axis and the loss amount is on the \(y\)-axis) as follows:

ggplot(data = pcl.data, aes(x = simulation, y = loss, fill = cause, color = cause)) + geom_bar(stat = "identity")

Suppose we are setting capital at the 99th percentile of the loss distribution, which in this case is $100M. First, allocate the layer 50M xs 50M. Zooming in on the graph and plotting this layer:

ggplot(data = pcl.data, aes(x = simulation, y = loss, fill = cause, color = cause)) + geom_bar(stat = "identity") + geom_hline(yintercept = 50) + geom_hline(yintercept = 100) + xlim(75, 101)
## Warning: Removed 74 rows containing missing values (position_stack).

In this case, only two events (earthquake only, and wind plus earthquake) penetrate the layer.

print(paste0("The proportion of the layer allocated to earthquake only is ", pr.eq.only / (pr.eq.only + pr.wind.eq)))
## [1] "The proportion of the layer allocated to earthquake only is 0.8"
print(paste0("The proportion of the layer allocated to earthquake and wind is ", pr.wind.eq / (pr.eq.only + pr.wind.eq)))
## [1] "The proportion of the layer allocated to earthquake and wind is 0.2"

Multiply these proportions by the capital in the layer to get the allocation to each event:

wind.eq.alloc.A = pr.wind.eq / (pr.eq.only + pr.wind.eq) * (100 - 50)
eq.only.alloc.A = pr.eq.only / (pr.eq.only + pr.wind.eq) * (100 - 50)
print(paste0("In this layer, ", wind.eq.alloc.A, " is allocated to the wind and earthquake event, " , " and ", eq.only.alloc.A, " is allocated to the earthquake-only event." ))
## [1] "In this layer, 10 is allocated to the wind and earthquake event,  and 40 is allocated to the earthquake-only event."

The next layer is 50M xs 0:

ggplot(data = pcl.data, aes(x = simulation, y = loss, fill = cause, color = cause)) + geom_bar(stat = "identity") + geom_hline(yintercept = 50) + geom_hline(yintercept = 0) + xlim(75, 101)
## Warning: Removed 74 rows containing missing values (position_stack).

In this case, the allocations are:

wind.eq.alloc.B = pr.wind.eq / (pr.eq.only + pr.wind.eq + pr.wind.only) * (50 - 0)
eq.only.alloc.B = pr.eq.only / (pr.eq.only + pr.wind.eq + pr.wind.only) * (50 - 0)
wind.only.alloc.B = pr.wind.only / (pr.eq.only + pr.wind.eq + pr.wind.only) * (50 - 0)
print(paste0("In this layer, ", wind.eq.alloc.B, " is allocated to the wind and earthquake event, " , ", ", eq.only.alloc.B, " is allocated to the earthquake-only event, ", " and ", wind.only.alloc.B, " is allocated to the wind-only event." ))
## [1] "In this layer, 2.08333333333333 is allocated to the wind and earthquake event, , 8.33333333333333 is allocated to the earthquake-only event,  and 39.5833333333333 is allocated to the wind-only event."

The total allocations to each event are:

wind.eq.alloc = wind.eq.alloc.A + wind.eq.alloc.B
eq.only.alloc = eq.only.alloc.A + eq.only.alloc.B
wind.only.alloc = wind.only.alloc.B
print(paste0("The allocations for the events are: ", wind.eq.alloc, " for the wind and earthquake event; ", eq.only.alloc, " for the earthquake-only event; ", wind.only.alloc, " for the wind-only event"))
## [1] "The allocations for the events are: 12.0833333333333 for the wind and earthquake event; 48.3333333333333 for the earthquake-only event; 39.5833333333333 for the wind-only event"

The allocations to the wind and earthquake perils individually can be obtained by splitting the wind and earthquake event, based on the fact that wind contributes 1/3 of this event's loss and earthquake contributes 2/3:

wind.alloc = wind.only.alloc + 1/3 * wind.eq.alloc
eq.alloc = eq.only.alloc + 2/3 * wind.eq.alloc
print(paste0("The wind allocation is ", wind.alloc, " and the earthquake allocation is ", eq.alloc))
## [1] "The wind allocation is 43.6111111111111 and the earthquake allocation is 56.3888888888889"

Given a continuous distribution, the amount of capital allocated to a loss event of size \(x\) is given by \[ AC(x) = \int_0^x \frac{f(x)dy}{1 - F(y)} \] when \(x \leq \mathrm{VaR}_{0.99}\), and \[ AC(x) = \int_0^{\mathrm{VaR}_{0.99}} \frac{f(x)dy}{1 - F(y)} \] if the loss breaches the VaR threshold. Given an event of size \(x\), these formulas are analogous to the discrete process of summing the event's share of each layer, allocated in proportion to its conditional exceedence probability.

For the exponential distribution, with mean \(\mu\), we have \[ F(x) = 1 - e^{-x / \mu} \] and \[ f(x) = \frac{1}{\mu} e^{-x / \mu} \] Therefore, when \(x\) is below the VaR threshold, \[ AC(x) = \int_0^x \frac{1}{\mu}\frac{e^{-x/\mu}}{e^{-y / \mu}} dy = e^{-x/\mu} [e^{x/\mu} - 1] = 1 - e^{-x/\mu} \]
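
As a sanity check (not part of Bodoff’s paper), the closed-form result can be compared to direct numerical integration of the allocation formula for a loss below the VaR threshold; the exponential mean and loss size below are hypothetical.

exp.mean = 1000
loss.size = 1500
# Numerically integrate f(x) / (1 - F(y)) over the layers from 0 to x
AC.numerical = integrate(function(y) dexp(loss.size, rate = 1 / exp.mean) / (1 - pexp(y, rate = 1 / exp.mean)), lower = 0, upper = loss.size)$value
AC.closed.form = 1 - exp(-loss.size / exp.mean)
print(paste0("Numerical integration gives ", AC.numerical, " versus the closed form ", AC.closed.form))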

Returning to the general case, examining the derivative with respect to \(x\) allows us to deduce some properties of the risk load: \[ AC'(x) = f'(x) \int_0^x \frac{dy}{1 - F(y)} + \frac{f(x)}{1 - F(x)} \]

Examining the two terms leads to the following observations:

  1. The term \(f(x)/(1-F(x))\) means that a higher conditional exceedence probability for the event of size \(x\) increases its allocated capital.

  2. In most continuous cases, \(f'(x)\) is negative, so the first term tends to cause allocated capital to decrease.

  3. In the case of a simulation, all events are equally likely, so \(f'(x)=0\) and the change in capital is driven entirely by the conditional exceedence probability.

In the case of the exponential distribution, \[ AC'(x) = \frac{1}{\mu} e^{-x/\mu} \] confirming that in this case allocated capital always increases as \(x\) increases.

Key advantages of the percentile layer allocation method include:

  • Avoids situations where too much capital is allocated to extreme tail events; the method reflects the fact that the capital held also benefits more likely, moderately severe events.

  • It is an improvement over purely probability-based methods that might allocate too little capital to rare but costly events.

  • May be particularly useful when capital is split into “tranches” that have different costs; the method can vary the cost of capital by layer. Layers of reinsurance would be a similar use case.

Calculation of risk load

The percentile layer allocation method can be used to determine a risk load as follows:

  1. Subtract the premium net of expenses, \(P\), from the allocated capital to determine the surplus that is required.

  2. Multiply the required surplus by the required rate of return \(r\) to get the dollar-value cost of capital.

  3. Add the cost of capital to the expected losses to get the new premium.

The above gives the equation \[ P = E[L] + r(AC(x) - P) \] which can be solved for \(P\) to give \[ P = \frac{E[L]}{1+r} + \frac{r}{1+r} AC(x) = E[L] + \frac{r}{1+r}\left(AC(x) - E[L]\right) \] As a mnemonic, the formula is analogous to investing the capital in excess of expected losses at the required rate of return, discounting the resulting income to time 0, and adding the expected losses.
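
As a minimal numerical sketch of this pricing formula (all values below are hypothetical), suppose an event has expected losses of 100, allocated capital of 250, and a required return of 10%:

event.expected.loss = 100
event.allocated.capital = 250
required.return = 0.10
# P = E[L] / (1 + r) + r / (1 + r) * AC(x)
risk.loaded.premium = event.expected.loss / (1 + required.return) + required.return / (1 + required.return) * event.allocated.capital
print(paste0("The risk-loaded premium is ", risk.loaded.premium))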

Premium can be allocated to individual loss events, where the expected value for a loss of size \(x\) is \(xf(x)\). In the continuous case, this becomes \[ P(x) = xf(x) + \frac{r}{1+r}\left(\int_0^x \frac{f(x)}{1 - F(y)}dy - xf(x)\right) \] By factoring out either \(f(x)\) or \(xf(x)\), this can be re-expressed as either a multiplicative adjustment to the probability \(f(x)\) or the loss amount \(x\). The adjustment factor to \(f(x)\) is called the disutility function: \[ x + \frac{r}{1+r}\left(\int_0^x \frac{1}{1 - F(y)} dy - x\right) \] This is interpreted as the total cost of an event, given that the event occurs.

Risk-Adjusted Return on Capital

The risk-adjusted return on capital (RAROC) for line of business \(i\) is \[ \text{RAROC}_i = \frac{N_i}{C_i} \] where \(N_i\) is a measure of the income attributed to the line and \(C_i\) is the capital allocated to it.

There is variation in the exact definition of RAROC, depending on how the numerator and denominator are defined. Income measures include:

Examples of capital measures that could be used in the denominator include the following.

  • Actual Committed Capital: the actual capital provided by shareholders (contributed capital plus retained earnings). Advantage: readily available on the balance sheet. Disadvantage: not risk-adjusted.

  • Market Value of Equity: the number of shares outstanding times the price per share. Advantage: includes franchise value. Disadvantage: not risk-adjusted.

  • Regulatory Required Capital: the amount required by a regulatory formula. Advantage: easy to determine.

  • Rating Agency Required Capital: the capital required to achieve a target credit rating. Disadvantage: ratings incorporate a subjective aspect.

  • Economic Capital: the capital required, in excess of liabilities, to achieve a stated objective with a given probability. Disadvantage: the objective may vary based on perspective (policyholder, management, shareholder).

  • Risk Capital: the amount of capital that must be provided by shareholders to absorb the risk that liabilities will exceed available funds. Advantage: reflects conservatism in reserves / risk margins. Disadvantage: does not reflect the cost of unallocated capital.

Examples of objectives that could be used to define Economic Capital include:

A common way to address the issue of unallocated (“stranded”) capital is to use all of the firm’s actual capital, but to apply a risk-based allocation method (e.g. proportional allocation based on a risk measure).

The calculation of risk capital involves selection of a threshold at which risk is measured. Examples of approaches include:

Economic Value Added

Economic Value Added is a metric used to determine whether a line of business is adding value to a firm. It is defined as net income minus the cost of capital (hurdle rate) times the capital allocated to the business: \[ \text{EVA}_i = N_i - r_i C_i \] When EVA is positive, then writing this line of business is consistent with value maximization, while lines of business with negative EVA are eroding the value of the firm. A closely related value is economic value added on capital (EVAOC), which is essentially the RAROC minus the cost of capital: \[ \text{EVAOC}_i = N_i / C_i - r_i \]
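
A minimal sketch of these two metrics in code, using hypothetical income, allocated capital, and hurdle rate values:

line.net.income = 500000
line.allocated.capital = 2000000
line.hurdle.rate = 0.15
# EVA = N_i - r_i * C_i; EVAOC = N_i / C_i - r_i
EVA = line.net.income - line.hurdle.rate * line.allocated.capital
EVAOC = line.net.income / line.allocated.capital - line.hurdle.rate
print(paste0("EVA is ", EVA, " and EVAOC is ", EVAOC))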

Determining the Hurdle Rate using CAPM

Cummins shows how a CAPM-based hurdle rate can be derived from betas for the liabilities and assets of the company. Assume two lines of business, and use the following notation:

  • \(I\) is net income

  • \(r_A\), \(r_i\) are the returns on assets and line of business \(i\)

  • \(A\) is the assets

  • \(P_i\) and \(L_i\) are the premium and losses on line \(i\)

  • \(\beta_E\), \(\beta_A\), \(\beta_i\) are the betas for equity, assets, and line \(i\) liabilities.

  • \(k_i\) is the reserve-to-surplus ratio for line \(i\)

  • \(s_i\) is the premium-to-surplus ratio for line \(i\)

The approach is based on the observation that \[ I = r_AA + r_1P_1 + r_2P_2 \] Writing \(A = E + L_1 + L_2\) and dividing by \(E\), we obtain the rate of return on equity: \[ r_E = r_A(1 + k_1 + k_2) + r_1 s_1 + r_2 s_2 \] Since \(\beta_x = \mathrm{Cov}(r_x, r_M) / \mathrm{Var}(r_M)\) for the market rate \(r_M\), applying the covariance operator to the above equation and using linearity gives \[ \beta_E = \beta_A(1 + k_1 + k_2) + \beta_1 s_1 + \beta_2 s_2 \] Cummins points out that this provides a theoretical justification for the use of premium-to-surplus ratios as a measure of underwriting leverage.

The rate of return for an individual line is \[ r_i = -k_i r_f + \beta_i(r_M - r_f) \] where the usual \(r_f\) term is replaced with \(-k_i r_f\) to reflect the fact that the line implicitly pays interest on the use of policyholder-supplied funds.
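
A short sketch of this calculation follows; all betas, leverage ratios, and market parameters below are hypothetical illustrations, not values from Cummins.

beta.assets = 0.4
beta.liabilities = c(-0.1, -0.2)      # hypothetical liability betas for lines 1 and 2
reserve.to.surplus = c(1.5, 1.0)      # k_i
premium.to.surplus = c(1.2, 0.8)      # s_i
risk.free.rate = 0.03
market.return = 0.08
# Leverage the asset and liability betas up to an equity beta
beta.equity = beta.assets * (1 + sum(reserve.to.surplus)) + sum(beta.liabilities * premium.to.surplus)
# CAPM hurdle rate on equity
hurdle.rate = risk.free.rate + beta.equity * (market.return - risk.free.rate)
# Required underwriting return by line, reflecting interest on policyholder-supplied funds
line.returns = -reserve.to.surplus * risk.free.rate + beta.liabilities * (market.return - risk.free.rate)
print(paste0("The equity beta is ", beta.equity, " and the CAPM hurdle rate is ", hurdle.rate))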

Applications of RAROC

Assessing insurer performance

RAROC can be used to assess actual performance of a line of business, in a manner that accounts for the relative riskiness of the line, as follows:

  1. Calculate the economic profit of the line at the end of the first year, as:

    • premium

    • minus expenses

    • plus investment income on premium net of expenses

    • minus discounted losses.

  2. Determine the amount of capital allocated to the line at the beginning of the year using the prescribed method

  3. Divide economic profit by allocated capital. If it is greater than the cost of capital, then the line is adding value to the firm.

    • When there is a multi-year capital commitment, assume that capital is released at the same rate as losses are paid. Calculate the allocated capital in each year, then discount to present value using the investment rate. Divide by the current capital allocation to get a multiplicative factor that adjusts the target rate of return.

The approach can be illustrated using the following example due to Goldfarb:

raroc.performance.example = data.frame(line = c("A", "B"), premium = c(6400000, 6400000), expense_ratio = c(0.05, 0.05), investment_return = c(0.05, 0.05), discounted_loss_ratio = c(0.92, 0.86), co_cte_allocation = c(2117082, 4225340), var_allocation = c(2035598, 3384941))
raroc.performance.example = raroc.performance.example %>% mutate(expenses = expense_ratio * premium, investment_income = investment_return * (1 - expense_ratio) * premium, discounted_losses = discounted_loss_ratio * premium, economic_profit = premium - expenses + investment_income - discounted_losses, raroc_cte = economic_profit / co_cte_allocation, raroc_var = economic_profit / var_allocation)
raroc.performance.example
##   line premium expense_ratio investment_return discounted_loss_ratio
## 1    A 6400000          0.05              0.05                  0.92
## 2    B 6400000          0.05              0.05                  0.86
##   co_cte_allocation var_allocation expenses investment_income
## 1           2117082        2035598   320000            304000
## 2           4225340        3384941   320000            304000
##   discounted_losses economic_profit raroc_cte raroc_var
## 1           5888000          496000 0.2342847  0.243663
## 2           5504000          880000 0.2082673  0.259975

Note that despite having a higher loss ratio, line A outperforms line B on a risk-adjusted basis when CTE is used to allocate capital. However, this method is sensitive to the selection of allocation method: the relative performance of the lines is reversed when VaR is used to allocate capital.

Determining Insurance Prices

RAROC can be used to set insurance prices as follows:

  1. Calculate the required economic profit in one of two ways:

    • In simple cases, this is just the product of the allocated risk capital and the target RAROC.

    • When there is a multi-year capital commitment, assume that capital is released at the same rate as losses are paid. Calculate the cost of capital in each year, then discount to the end of the first year using the investment rate to determine required economic profit. (Note: Goldfarb’s multi-year example discounts to the beginning of the first year; however, discounting to the end is consistent with the single-period approach, and it is the one used in the example below.)

  2. Determine the current economic profit, using the same approach as in the performance assessment algorithm.

  3. Determine the shortfall (or excess). Divide by 1 plus the investment rate to determine the additional premium (or premium reduction) required.

The approach can be illustrated using the following example due to Goldfarb. First, calculate the required economic profit:

beginning.risk.capital = 4225340
target.return = 0.15
investment.rate = 0.05
pv.cost.of.capital = data.frame(year = c(1, 2, 3, 4), percent_paid = c(0.50, 0.30, 0.15, 0.05))
pv.cost.of.capital = pv.cost.of.capital %>% mutate(end_of_period_pct = 1 - cumsum(percent_paid), beginning_of_period_pct = lag(end_of_period_pct, default = 1), beginning_capital = beginning_of_period_pct * beginning.risk.capital, end_capital = end_of_period_pct * beginning.risk.capital, cost_of_capital = beginning_capital * target.return, pv_cost_of_capital = cost_of_capital / (1 + investment.rate)^(year - 1))
pv.cost.of.capital
##   year percent_paid end_of_period_pct beginning_of_period_pct
## 1    1         0.50              0.50                    1.00
## 2    2         0.30              0.20                    0.50
## 3    3         0.15              0.05                    0.20
## 4    4         0.05              0.00                    0.05
##   beginning_capital end_capital cost_of_capital pv_cost_of_capital
## 1           4225340     2112670       633801.00          633801.00
## 2           2112670      845068       316900.50          301810.00
## 3            845068      211267       126760.20          114975.24
## 4            211267           0        31690.05           27375.06
required.economic.profit = sum(pv.cost.of.capital$pv_cost_of_capital)
print(paste0("The required economic profit is ", required.economic.profit))
## [1] "The required economic profit is 1077961.29478458"

Next, calculate the current economic profit using the current premium:

expense.ratio = 0.05
discounted.loss.ratio = 0.9160
premium = 6400000
current.economic.profit = premium * (1 - expense.ratio) * (1 + investment.rate) - premium * discounted.loss.ratio
shortfall = required.economic.profit - current.economic.profit
additional.premium = shortfall / (1 + investment.rate)
print(paste0("The current economic profit is ", current.economic.profit, " for a shortfall of ", shortfall, ". The additional premium required is ", additional.premium))
## [1] "The current economic profit is 521600 for a shortfall of 556361.29478458. The additional premium required is 529867.899794838"

As an alternative point of view, we can adjust the target rate based on the present value of capital allocated:

pv.capital = pv.cost.of.capital %>% mutate(pv_capital = beginning_capital / (1 + investment.rate)^(year - 1))
adjusted.target.rate = target.return * sum(pv.capital$pv_capital) / beginning.risk.capital
print(paste0("The adjusted target rate is ", adjusted.target.rate, " and the required economic profit is ", adjusted.target.rate * beginning.risk.capital))
## [1] "The adjusted target rate is 0.255118237771299 and the required economic profit is 1077961.29478458"