Introduction

These study notes are based on the Exam 9 syllabus reading Riskiness Leverage Models by Rodney Kreps. The paper describes a general framework for calculating a risk load based on a riskiness leverage function that represents attitudes toward risk, and for allocating that risk load among lines of business. This paper corresponds to learning objective D6 on the syllabus.

easypackages::packages("dplyr", "ggplot2", "copula", "Hmisc")
options(scipen = 999)

Introduction

The general philosophy behind a risk load is that the mean losses are supported by the reserves, and their variability by the surplus. We assume that the net result for the company is a random variable \(X\) that is the sum of a collection of random variables \(X_k\): \[ X = \sum_{1\leq k \leq n} X_k \] Define the following notation:

  • \(\mu\), the mean of \(X\)

  • \(R\), the risk load

  • \(C\), the total capital required to support \(X\)

These are related by the equation \[ C = \mu + R \] Analogous notation, with a subscript \(k\), is used for the corresponding values for \(X_k\).

The objective is to develop a general framework for determining the total capital requirement, in a way that allows for an allocation of the cost of surplus capital to lines of business, subject to three desirable qualities:

  1. The allocation should be performable at any desired level of definition (region, line of business, etc.)

  2. The risk load allocated to the total should equal the sum of the individual allocations. In other words, \(R = \sum_{1\leq k \leq n} R_k\).

  3. The same formula is used to calculate the risk load for any subgroup.

The ideas are illustrated using simulated data, which are generated below using the same parameters as the spreadsheet that accompanies Kreps’ paper:

set.seed(12345)
line.A.mean = 10000000
line.A.sd = 1000000
line.B.mean = 8000000
line.B.sd = 2000000
inv.mean = 1.04
inv.sd = 0.1
line.correlation = 0.25
number.of.simulations = 1000000
lognormal.mu.A = log(line.A.mean / sqrt(1 + line.A.sd^2 / line.A.mean^2))
lognormal.mu.B = log(line.B.mean / sqrt(1 + line.B.sd^2 / line.B.mean^2))
lognormal.mu.inv = log(inv.mean / sqrt(1 + inv.sd^2 / inv.mean^2))
lognormal.sigma.A = sqrt(log(1 + line.A.sd^2 / line.A.mean^2))
lognormal.sigma.B = sqrt(log(1 + line.B.sd^2 / line.B.mean^2))
lognormal.sigma.inv = sqrt(log(1 + inv.sd^2 / inv.mean^2))
random.numbers = rCopula(number.of.simulations, normalCopula(line.correlation, dim = 2))
simulated.data = data.frame(line_A_random = random.numbers[,1], line_B_random = random.numbers[,2])
simulated.data = simulated.data %>% mutate(line_A_loss = exp(qnorm(line_A_random, mean = lognormal.mu.A, sd = lognormal.sigma.A)), line_B_loss = exp(qnorm(line_B_random, mean = lognormal.mu.B, sd = lognormal.sigma.B)), inv_return = exp(rnorm(number.of.simulations, mean = lognormal.mu.inv, sd = lognormal.sigma.inv))) %>% select(-line_A_random, -line_B_random)
head(simulated.data)
##   line_A_loss line_B_loss inv_return
## 1    10638389     9398857  1.2602453
## 2     9787378     6923820  1.2179731
## 3    10326529     5072715  1.2860345
## 4    10553850     7398059  0.9840555
## 5     9563325     6145753  1.0223438
## 6    10063836    12054235  0.9352681

Other parameters used in the simulation include:

line.A.premium = 10500000
line.B.premium = 8400000
starting.surplus = 9000000

We can calculate the overall results of the company as follows:

simulated.data = simulated.data %>% mutate(line_A_result = line.A.premium - line_A_loss, line_B_result = line.B.premium - line_B_loss, investment_result = starting.surplus * (inv_return - 1), total = line_A_result + line_B_result + investment_result, surplus = starting.surplus + total, return = surplus / starting.surplus - 1) %>% arrange(total)
head(simulated.data)
##   line_A_loss line_B_loss inv_return line_A_result line_B_result
## 1     9532385    29293698  1.0529896     967615.07     -20893698
## 2    12769580    25047738  1.0099660   -2269580.41     -16647738
## 3    11916889    23250846  0.9762604   -1416888.87     -14850846
## 4    10404515    25299247  1.0432970      95485.33     -16899247
## 5    12238780    23429047  1.0420742   -1738779.64     -15029047
## 6    12256464    22553126  1.0236519   -1756464.50     -14153126
##   investment_result     total   surplus    return
## 1         476906.13 -19449177 -10449177 -2.161020
## 2          89693.87 -18827625  -9827625 -2.091958
## 3        -213656.13 -16481391  -7481391 -1.831266
## 4         389673.05 -16414089  -7414089 -1.823788
## 5         378668.14 -16389158  -7389158 -1.821018
## 6         212867.21 -15696723  -6696723 -1.744080
tail(simulated.data)
##         line_A_loss line_B_loss inv_return line_A_result line_B_result
## 999995      7793396     4274330   1.398940       2706604       4125670
## 999996      8566473     3138870   1.359704       1933527       5261130
## 999997      7891737     4104195   1.395143       2608263       4295805
## 999998      7565944     3805118   1.327795       2934056       4594882
## 999999      8865252     4199074   1.525916       1634748       4200926
## 1000000     7512486     3775717   1.351391       2987514       4624283
##         investment_result    total  surplus   return
## 999995            3590461 10422736 19422736 1.158082
## 999996            3237338 10431996 19431996 1.159111
## 999997            3556287 10460355 19460355 1.162262
## 999998            2950157 10479094 19479094 1.164344
## 999999            4733247 10568922 19568922 1.174325
## 1000000           3162516 10774313 19774313 1.197146
describe(simulated.data)
## simulated.data 
## 
##  9  Variables      1000000  Observations
## ---------------------------------------------------------------------------
## line_A_loss 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1  9998543  1124508  8445415  8754995 
##      .25      .50      .75      .90      .95 
##  9300682  9948238 10640910 11307524 11722300 
## 
## lowest :  6210141  6311990  6322258  6332730  6347095
## highest: 15495278 15927208 15983097 16100587 16354214
## ---------------------------------------------------------------------------
## line_B_loss 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1  8000997  2214067  5176536  5659489 
##      .25      .50      .75      .90      .95 
##  6571311  7761035  9166113 10644019 11644342 
## 
## lowest :  2346621  2429942  2524871  2547908  2581539
## highest: 23429047 23780869 25047738 25299247 29293698
## ---------------------------------------------------------------------------
## inv_return 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1     1.04   0.1125   0.8843   0.9156 
##      .25      .50      .75      .90      .95 
##   0.9706   1.0355   1.1044   1.1708   1.2123 
## 
## lowest : 0.6614497 0.6629554 0.6703305 0.6738224 0.6773036
## highest: 1.6049724 1.6065133 1.6099711 1.6114772 1.6254196
## ---------------------------------------------------------------------------
## line_A_result 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1   501457  1124508 -1222300  -807524 
##      .25      .50      .75      .90      .95 
##  -140910   551762  1199318  1745005  2054585 
## 
## lowest : -5854214 -5600587 -5483097 -5427208 -4995278
## highest:  4152905  4167270  4177742  4188010  4289859
## ---------------------------------------------------------------------------
## line_B_result 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1   399003  2214067 -3244342 -2244019 
##      .25      .50      .75      .90      .95 
##  -766113   638965  1828689  2740511  3223464 
## 
## lowest : -20893698 -16899247 -16647738 -15380869 -15029047
## highest:   5818461   5852092   5875129   5970058   6053379
## ---------------------------------------------------------------------------
## investment_result 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1   361168  1012066 -1041325  -759669 
##      .25      .50      .75      .90      .95 
##  -264835   319076   940034  1537238  1910412 
## 
## lowest : -3046953 -3033401 -2967025 -2935598 -2904267
## highest:  5444752  5458620  5489740  5503295  5628776
## ---------------------------------------------------------------------------
## total 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1  1261629  2914401 -3304648 -2144174 
##      .25      .50      .75      .90      .95 
##  -345412  1447080  3076387  4433090  5199923 
## 
## lowest : -19449177 -18827625 -16481391 -16414089 -16389158
## highest:  10431996  10460355  10479094  10568922  10774313
## ---------------------------------------------------------------------------
## surplus 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1 10261629  2914401  5695352  6855826 
##      .25      .50      .75      .90      .95 
##  8654588 10447080 12076387 13433090 14199923 
## 
## lowest : -10449177  -9827625  -7481391  -7414089  -7389158
## highest:  19431996  19460355  19479094  19568922  19774313
## ---------------------------------------------------------------------------
## return 
##        n  missing distinct     Info     Mean      Gmd      .05      .10 
##  1000000        0  1000000        1   0.1402   0.3238 -0.36718 -0.23824 
##      .25      .50      .75      .90      .95 
## -0.03838  0.16079  0.34182  0.49257  0.57777 
## 
## lowest : -2.161020 -2.091958 -1.831266 -1.823788 -1.821018
## highest:  1.159111  1.162262  1.164344  1.174325  1.197146
## ---------------------------------------------------------------------------
probability.of.ruin = sum(simulated.data$surplus < 0) / nrow(simulated.data)
print(paste0("The probability of ruin is ", probability.of.ruin))
## [1] "The probability of ruin is 0.000918"

General Framework

The general framework assumes that we have a joint probability distribution for \(X_1, \ldots, X_n\) given by \(f(x_1,\ldots,x_n)\), and \[ \overline{dF} = f(x_1,x_2,\ldots, x_n) dx_1dx_2\ldots dx_n \]

Let \(x = \sum_{1 \leq k \leq n} x_k\). A riskiness leverage function, denoted by \(L(x)\), is a function that depends only on the sum of the variables; it is intended to reflect a view of the risk level associated with each aggregate outcome and to adjust the probability distribution based on the view that not all dollars are “equally risky.”

In a riskiness leverage model, \[ R_k = \int (x_k - \mu_k) L(x) \overline{dF} \] and as a result, \[ R = \sum_{1 \leq k \leq n} R_k = \int (x - \mu) L(x)\overline{dF} \] Note that these formulas allow some variables to have negative risk loads in cases when they are below their mean – this is a desirable outcome, as hedges such as reinsurance should exhibit this behaviour.

In practice, the joint distribution is simulated, so it is in effect a discrete distribution in which each outcome occurs with equal probability. If there are \(m\) simulations, then \[ R_k = \frac{1}{m} \sum_{1 \leq i \leq m} (x_k^{(i)} - \mu_k) L(x^{(i)}) \] and \[ R = \frac{1}{m} \sum_{1 \leq i \leq m} (x^{(i)} - \mu) L(x^{(i)}) \] where the superscript \((i)\) indicates the outcome of the \(i\)th simulation.
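The discrete formulas can be wrapped in a small helper; this is a minimal sketch (the function name, toy data, and leverage choice are illustrative, not from the paper) showing the additivity property \(R = \sum_k R_k\):

```r
# Risk load for one component, given equal-probability simulated outcomes
# and a riskiness leverage function of the aggregate.
riskiness.leverage.load = function(component, aggregate, L) {
  mean((component - mean(component)) * L(aggregate))
}

set.seed(1)
x1 = rnorm(100000, mean = 100, sd = 10)
x2 = rnorm(100000, mean = 80, sd = 20)
x = x1 + x2

# Additivity: the component risk loads sum to the aggregate risk load
L.var = function(y) y - mean(y)   # unscaled variance-style leverage
R1 = riskiness.leverage.load(x1, x, L.var)
R2 = riskiness.leverage.load(x2, x, L.var)
R  = riskiness.leverage.load(x,  x, L.var)
all.equal(R1 + R2, R)
```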

Properties of the Risk Load

  1. Provided that \(L\) is homogeneous of degree zero (\(L(\lambda x) = L(x)\)) then the risk load will scale with a currency change: \(R(\lambda X) = \lambda R(X)\). This can be achieved by ensuring that \(L\) is a function of a ratio of currencies such as \(x / \mu\), \(x / \sigma\), or \(x / S\) where \(S\) is the total surplus of the company. Note that this can produce a recursive procedure, since the risk load may be used to set the surplus.

  2. The risk load may not be a coherent risk measure, because subadditivity (\(R(X+Y) \leq R(X) + R(Y)\)) may depend on the form of \(L(x)\). A superadditive measure (\(R(X+Y) \geq R(X) + R(Y)\)) may be more appropriate when there are interactions between \(X\) and \(Y\).
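Property 1 can be checked numerically. A minimal sketch, using an illustrative degree-zero leverage \(L(x) = (x/\mu)^2\) (an assumed choice, not one from the paper):

```r
# With a leverage that depends only on the ratio x / mean(x) (homogeneous
# of degree zero), scaling every outcome by lambda scales the risk load
# by lambda.
risk.load = function(y, L) mean((y - mean(y)) * L(y))
L.ratio = function(y) (y / mean(y))^2

set.seed(1)
x = rlnorm(100000, meanlog = 10, sdlog = 0.5)
lambda = 1000   # e.g. a change of currency units
all.equal(risk.load(lambda * x, L.ratio), lambda * risk.load(x, L.ratio))
```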

Types of Risk Attitude

Risk-Neutral

Under a risk-neutral attitude, no consideration is given to the variability of the random variables. This corresponds to a constant riskiness leverage and a risk load of zero, because the contributions in excess of the mean cancel out with the contributions below the mean. Let \(L(x) = \beta\). Then the risk load becomes \[ R_k = \int (x_k - \mu_k) \beta \overline{dF} = \beta\left(\int x_k \overline{dF} - \mu_k\right) = 0 \]
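A quick numerical check of this cancellation, on illustrative simulated data:

```r
# With a constant leverage the deviations from the mean cancel, so the
# risk load is zero up to floating-point rounding.
set.seed(1)
x = rnorm(100000, mean = 100, sd = 10)
beta = 2.5
R = mean((x - mean(x)) * beta)
R
```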

Variance

A variance-based view corresponds to the attitude that the whole distribution is relevant, that there is risk associated with good outcomes as well as bad outcomes, and that risk load increases quadratically. In this case, \[ L(x) = \frac{\beta}{S}(x - \mu) \] for some dimensionless scaling constant \(\beta\).

This gives the aggregate risk load \[ R = \frac{\beta}{S} \int (x - \mu)^2 \overline{dF} = \frac{\beta}{S} \mathrm{Var}(X) \] and setting \(S = R\) gives \[ S = \sqrt{\beta \mathrm{Var}(X)} \] The allocation to \(X_k\) is \[ R_k = \frac{\beta}{S} \int (x_k - \mu_k)(x - \mu) \overline{dF} = \sqrt{\beta} \frac{\mathrm{Cov}(X_k, X)}{\sqrt{\mathrm{Var}(X)}} = R \frac{\mathrm{Cov}(X_k, X)}{\mathrm{Var}(X)} \] Note that this formula is analogous to the CAPM formula.

The method can be illustrated using the simulated data produced earlier, assuming \(\beta = 1\). Initially, calculate without scaling the leverage by surplus, then take the square root to get the scaled version.

X.mean = mean(simulated.data$total)
variance.data = simulated.data %>% mutate(unscaled_leverage = total - X.mean)
R.unscaled = mean(variance.data$unscaled_leverage * (variance.data$total - X.mean))
R.scaled = sqrt(R.unscaled)
print(paste0("The aggregate risk load is ", R.scaled))
## [1] "The aggregate risk load is 2608684.01846601"

Now that we have the total surplus, we can scale the leverage and calculate the risk load for each of the contributing lines:

variance.data = variance.data %>% mutate(leverage = (total - X.mean) / R.scaled)
line.A.mean = mean(simulated.data$line_A_result)
line.A.R = mean(variance.data$leverage * (variance.data$line_A_result - line.A.mean))
print(paste0("The risk load for Line A is ", line.A.R))
## [1] "The risk load for Line A is 572105.482303507"
line.B.mean = mean(simulated.data$line_B_result)
line.B.R = mean(variance.data$leverage * (variance.data$line_B_result - line.B.mean))
print(paste0("The risk load for Line B is ", line.B.R))
## [1] "The risk load for Line B is 1726212.55578483"
inv.mean = mean(simulated.data$investment_result)
inv.R = mean(variance.data$leverage * (variance.data$investment_result - inv.mean))
print(paste0("The risk load for Investments is ", inv.R))
## [1] "The risk load for Investments is 310365.980377667"
print(paste0("The total risk load is ", inv.R + line.A.R + line.B.R))
## [1] "The total risk load is 2608684.01846601"

Tail Value at Risk (TVAR)

Tail Value at Risk corresponds to an attitude that only the high end of the distribution is relevant. The leverage for this risk measure is \[ L(x) = \frac{\theta(x - x_q)}{1 - q} \] where \(q\) is a selected quantile, \(x_q\) is the value of \(x\) such that \(F(x_q) = q\), and \[ \theta(x) = \begin{cases} 0 & \text{ if } x \leq 0 \\ 1 & \text{ if } x > 0 \end{cases} \]

To apply this approach to the simulated data, we first take the negative of the “total”, since this was calculated as net income to the company, rather than the net loss to the company.

q = 0.99
tvar.data = simulated.data %>% mutate(net_loss = - total)
x.q = quantile(tvar.data$net_loss, probs = q, type = 3) #Type 3 takes an actual simulated value rather than interpolating between them.
tvar.data = tvar.data %>% mutate(leverage = (net_loss > x.q) / (1-q))
nl.mean = mean(tvar.data$net_loss)
R.scaled = mean(tvar.data$leverage * (tvar.data$net_loss - nl.mean))
print(paste0("The aggregate risk load is ", R.scaled))
## [1] "The aggregate risk load is 8406266.72378088"

Allocate among the lines, negating the result to reflect that the simulation results are net income, not net loss:

line.A.R = -mean(tvar.data$leverage * (tvar.data$line_A_result - line.A.mean))
print(paste0("The risk load for Line A is ", line.A.R))
## [1] "The risk load for Line A is 1459612.2127933"
line.B.R = -mean(tvar.data$leverage * (tvar.data$line_B_result - line.B.mean))
print(paste0("The risk load for Line B is ", line.B.R))
## [1] "The risk load for Line B is 6417934.96426519"
inv.R = -mean(tvar.data$leverage * (tvar.data$investment_result - inv.mean))
print(paste0("The risk load for Investments is ", inv.R))
## [1] "The risk load for Investments is 528719.546722456"
print(paste0("The total risk load is ", inv.R + line.A.R + line.B.R))
## [1] "The total risk load is 8406266.72378094"

In this method, note that the total capital, not the risk load, is the TVaR, due to the subtraction of \(\mu\) under the integral: \[ C = \mu + \int_{x_q}^{\infty} \frac{x - \mu}{1-q}f(x)dx = \frac{1}{1-q} \int_{x_q}^{\infty}xf(x)dx \] Therefore, \[ R = \text{TVaR}_q(X) - \mu \] Analogously, \[ R_k = \text{co-TVaR}_q(X_k) - \mu_k \] where co-TVaR is the average of the values of \(X_k\) in situations where the total \(X\) is greater than \(x_q\).
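The identity \(R = \text{TVaR}_q(X) - \mu\) can be verified directly on a toy sample (the lognormal losses here are illustrative):

```r
# The leverage-based risk load matches the empirical TVaR minus the mean,
# up to the granularity of the simulated quantile.
set.seed(1)
q = 0.95
x = rlnorm(100000, meanlog = 0, sdlog = 1)   # losses
x.q = quantile(x, probs = q, type = 3)
leverage = (x > x.q) / (1 - q)
R.leverage = mean(leverage * (x - mean(x)))
R.tvar = mean(x[x > x.q]) - mean(x)          # TVaR_q(X) - mu
all.equal(R.leverage, R.tvar, tolerance = 0.001)
```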

Value at Risk (VAR)

A Value at Risk approach takes the view that only a fixed percentile \(x_q = \text{VaR}_q(X)\) is relevant, and the distribution matters only to the extent that it determines \(x_q\). In this case, \[ L(x) = \frac{\delta(x - x_q)}{f(x_q)} \] where \(\delta\) is the Dirac delta function. The total capital required is then \[ C = \mu + \int (x - \mu)\frac{\delta(x - x_q)}{f(x_q)} f(x)dx = \mu + x_q -\mu = \text{VaR}_q(X) \] Therefore, the risk load is \[ R = \text{VaR}_q(X) - \mu \] The allocation by line, when using continuous distributions, will be an integral over the hyperplane defined by \(x_q = \sum_{1\leq k \leq n} x_k\). When using simulated data, this is not a feasible approach, since only one simulation corresponds to the specified percentile. Instead, a small range around this percentile is used, effectively resulting in a difference of two closely-neighbouring TVaR calculations. For example, for a small value \(\epsilon\) we can approximate \(L\) as \[ \tilde{L}(x) = \frac{\theta(x - x_{q-\epsilon}) - \theta(x - x_{q + \epsilon})}{2\epsilon} \]

The approach can be illustrated using the simulated data as follows:

q = 0.99
epsilon = 0.005
var.data = simulated.data %>% mutate(net_loss = - total)
x.q = quantile(var.data$net_loss, probs = q, type = 3) #Type 3 takes an actual simulated value rather than interpolating between them.
x.small = quantile(var.data$net_loss, probs = q - epsilon, type = 3)
x.large = quantile(var.data$net_loss, probs = q + epsilon, type = 3)
var.data = var.data %>% mutate(leverage = ((net_loss > x.small) - (net_loss > x.large)) / (2 * epsilon))
nl.mean = mean(var.data$net_loss)
R.approx = mean(var.data$leverage * (var.data$net_loss - nl.mean))
print(paste0("The approximate aggregate risk load is ", R.approx, " and the exact risk load is ", x.q - nl.mean))
## [1] "The approximate aggregate risk load is 7077026.58935353 and the exact risk load is 7021389.63255526"

Allocate among the lines, negating the result to reflect that the simulation results are net income, not net loss:

line.A.R = -mean(var.data$leverage * (var.data$line_A_result - line.A.mean))
print(paste0("The approximate risk load for Line A is ", line.A.R))
## [1] "The approximate risk load for Line A is 1302919.07667181"
line.B.R = -mean(var.data$leverage * (var.data$line_B_result - line.B.mean))
print(paste0("The approximate risk load for Line B is ", line.B.R))
## [1] "The approximate risk load for Line B is 5260878.2074706"
inv.R = -mean(var.data$leverage * (var.data$investment_result - inv.mean))
print(paste0("The approximate risk load for Investments is ", inv.R))
## [1] "The approximate risk load for Investments is 513229.305211114"
print(paste0("The total approximate risk load is ", inv.R + line.A.R + line.B.R))
## [1] "The total approximate risk load is 7077026.58935353"

As expected, the totals reconcile when the same approximation method is applied to the aggregate.

Semi-Variance (SVAR)

The semi-variance is a hybrid between TVaR and variance, and corresponds to a view in which we only consider outcomes that are worse than the mean to be risky, and penalize them quadratically: \[ L(x) = \frac{\beta}{S}(x - \mu)\theta(x - \mu) \] In this case, setting \(R = S\), \[ S = \sqrt{\beta \int_{\mu}^{\infty} (x - \mu)^2 f(x)dx} \] and \[ R_k = \frac{\beta}{S} \int_{x\geq \mu}(x_k - \mu_k)(x - \mu) \overline{dF} \] where the integral is over the region where the total \(x\) exceeds \(\mu\).

The method can be applied to the simulated data as follows, assuming \(\beta = 1\):

svar.data = simulated.data %>% mutate(net_loss = - total, unscaled_leverage = (net_loss - nl.mean) * (net_loss > nl.mean))
R.unscaled = mean(svar.data$unscaled_leverage * (svar.data$net_loss - nl.mean))
R.scaled = sqrt(R.unscaled)
print(paste0("The aggregate risk load is ", R.scaled))
## [1] "The aggregate risk load is 1950657.88545679"

Now that we have the total surplus, we can scale the leverage and calculate the risk load for each of the contributing lines, negating the results to reflect the fact that the simulation contains net income values rather than net losses:

svar.data = svar.data %>% mutate(leverage = (net_loss - nl.mean) * (net_loss > nl.mean) / R.scaled)
line.A.R = -mean(svar.data$leverage * (svar.data$line_A_result - line.A.mean))
print(paste0("The risk load for Line A is ", line.A.R))
## [1] "The risk load for Line A is 402067.177209136"
line.B.R = -mean(svar.data$leverage * (svar.data$line_B_result - line.B.mean))
print(paste0("The risk load for Line B is ", line.B.R))
## [1] "The risk load for Line B is 1361661.21649659"
inv.R = -mean(svar.data$leverage * (svar.data$investment_result - inv.mean))
print(paste0("The risk load for Investments is ", inv.R))
## [1] "The risk load for Investments is 186929.491751063"
print(paste0("The total risk load is ", inv.R + line.A.R + line.B.R))
## [1] "The total risk load is 1950657.88545679"

Mean Downside Deviation

The mean downside deviation is the average over all outcomes that are worse than the mean; it is essentially TVaR with \(x_q = \mu\). It is also similar to semi-variance, except that outcomes are penalized linearly rather than quadratically. This is viewed as a naive measure that is appropriate for risks such as not achieving plan, but not for considering risk of ruin. For this measure, \[ L = \beta \frac{\theta(x - \mu)}{1 - F(\mu)} \] Calculations using this measure are essentially the same as for TVaR, except that the average is used as the cutoff point rather than a specified quantile.
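A minimal sketch of this calculation on illustrative simulated losses, with \(\beta = 1\) (the data and parameter are assumptions for the example):

```r
# Mean downside deviation risk load: the TVaR recipe with the mean as
# the cutoff point.
set.seed(1)
beta = 1
x = rlnorm(100000, meanlog = 0, sdlog = 0.5)   # losses
p.above = mean(x > mean(x))                    # empirical 1 - F(mu)
leverage = beta * (x > mean(x)) / p.above
R = mean(leverage * (x - mean(x)))
R   # average excess over the mean, given a worse-than-mean outcome
```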

Kreps suggests that a constant of \(\beta = 2\) is appropriate in situations where the distribution is uniform and tightly gathered around the mean. The rationale is that in this case, \(\mu - \Delta \leq x \leq \mu + \Delta\) for some small \(\Delta\), and \(\Delta\) is a natural risk load. Averaging over the worse-than-average cases gives only \(\Delta / 2\), so doubling it with \(\beta = 2\) recovers \(\Delta\).
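This rationale is easy to confirm numerically (the values of \(\mu\) and \(\Delta\) below are arbitrary):

```r
# For outcomes uniform on [mu - Delta, mu + Delta], the average excess
# over the mean among worse-than-average outcomes is Delta / 2.
set.seed(1)
mu = 100; Delta = 5
x = runif(1000000, mu - Delta, mu + Delta)
mean(x[x > mean(x)] - mean(x))   # approximately Delta / 2
```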

Proportional Excess

This is a fairly general risk measure of the form \[ L(x) = \frac{h(x)\theta(x - \mu - \Delta)}{x - \mu} \] If we assume that \(h(\mu) = 0\), we can set \(\Delta = 0\) without worrying about the function being ill-defined at \(\mu\). In this case, \[ R = \int f(x)h(x)\theta(x - \mu)dx \] and \[ R_k = \int \frac{x_k - \mu_k}{x - \mu} h(x) \theta(x - \mu) \overline{dF} \] so the risk load is allocated in proportion to each outcome's contribution to the excess over the mean.
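A sketch of the allocation on toy data, assuming the illustrative choice \(h(x) = x - \mu\) (which satisfies \(h(\mu) = 0\), so \(\Delta = 0\) is allowed):

```r
# Proportional excess allocation; with h(x) = x - mu the leverage reduces
# to the indicator of exceeding the mean.
set.seed(1)
x1 = rlnorm(100000, meanlog = 0, sdlog = 0.4)
x2 = rlnorm(100000, meanlog = 0.2, sdlog = 0.6)
x = x1 + x2
mu = mean(x)
leverage = ((x - mu) * (x > mu)) / (x - mu)   # h(x) * theta(x - mu) / (x - mu)
leverage[is.na(leverage)] = 0                 # guard the removable 0/0 point at x = mu
R1 = mean((x1 - mean(x1)) * leverage)
R2 = mean((x2 - mean(x2)) * leverage)
R  = mean((x  - mu)       * leverage)
all.equal(R1 + R2, R)
```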

Generic Management Risk Load

In general, \(L\) can be an arbitrary function that reflects management’s attitude toward risk. Kreps recommends that a general load should satisfy the following properties:

  1. It should be a downside risk measure.

  2. It should be constant, or close to it, for excess losses that are small relative to capital (e.g. not making plan, but not a disaster).

  3. It should become larger for excess losses that significantly impact capital. This could include “steps” at key points, such as regulatory triggers.

  4. It should become zero for excess losses that significantly exceed capital – in these scenarios, insolvency is unavoidable.

An example of a general risk measure that reflects points 1 through 3 above is \[ L(x) = \beta\left(1 + \frac{\alpha}{S}(x - \mu)\right) \theta(x-\mu) \]
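The risk load under this leverage can be computed the same way as the earlier measures; a minimal sketch with assumed values of \(\beta\), \(\alpha\), and \(S\):

```r
# Management-style leverage applied to illustrative simulated net losses.
set.seed(1)
x = rlnorm(100000, meanlog = 0, sdlog = 0.5)   # net losses
mu = mean(x)
beta = 1; alpha = 2; S = 5                     # assumed parameter values
L.mgmt = function(y) beta * (1 + (alpha / S) * (y - mu)) * (y > mu)
R = mean((x - mu) * L.mgmt(x))
R   # positive: a constant part plus a variance-style penalty above the mean
```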

The perspective of a regulator might be different, as they are more concerned with extreme scenarios:

  1. It should be zero until capital is seriously impacted.

  2. It should never decrease – even if the company becomes insolvent, there is still a risk to guaranty funds.

As an example, a regulator might prefer TVaR with the quantile chosen as a given fraction of surplus.
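A sketch of such a regulator-style measure, with an assumed attachment at half of surplus (all values illustrative):

```r
# TVaR-style leverage that attaches only once losses consume a given
# fraction of surplus.
set.seed(1)
S = 10
x = rlnorm(100000, meanlog = 0, sdlog = 1)   # net losses
x.q = 0.5 * S                                # attachment as a fraction of surplus
p.tail = mean(x > x.q)                       # empirical 1 - F(x.q)
leverage = (x > x.q) / p.tail
R = mean(leverage * (x - mean(x)))
R   # average loss in the attaching scenarios, minus the mean
```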

Simulation

Baseline

To facilitate the investigation of the simulated data, the following function simplifies the calculation of co-TVaR. Note that by setting “line” to be the same as “aggregate”, this function reduces to TVaR.

co.tvar = function(q, line, aggregate) {
  x.q = quantile(aggregate, probs = q, type = 3) #Type 3 takes an actual simulated value rather than interpolating between them.
  leverage = (aggregate > x.q) / (1-q)
  line.mean = mean(line)
  co.tvar = mean(leverage * (line - line.mean)) + mean(line)
  return(co.tvar)
}

Test the function and compare to earlier results:

co.tvar(0.99, -simulated.data$line_A_result, -simulated.data$total) - mean(-simulated.data$line_A_result)
## [1] 1459612
co.tvar(0.99, -simulated.data$total, -simulated.data$total) - mean(-simulated.data$total)
## [1] 8406267

As a baseline, calculate the TVaR at various thresholds and then the percentage of capital allocated to each line. Based on this, surplus can be allocated to each line and its return on surplus calculated. Note that the co-TVaR isn’t used to set the surplus for each line directly; instead, it is used to determine the proportion of the fixed starting surplus to allocate to the line.

baseline.allocation = data.frame(q = c(0.999, 0.998, 0.996, 0.99, 0.98, 0.95, 0.9))
baseline.allocation$aggregate = apply(baseline.allocation, 1, function(y){ co.tvar(q = y['q'], -simulated.data$total, -simulated.data$total)})
baseline.allocation$line_A_allocation = apply(baseline.allocation, 1, function(y){ co.tvar(q = y['q'], -simulated.data$line_A_result, -simulated.data$total)})
baseline.allocation$line_B_allocation = apply(baseline.allocation, 1, function(y){ co.tvar(q = y['q'], -simulated.data$line_B_result, -simulated.data$total)})
baseline.allocation$investment_allocation = apply(baseline.allocation, 1, function(y){ co.tvar(q = y['q'], -simulated.data$investment_result, -simulated.data$total)})
baseline.allocation = baseline.allocation %>% mutate(line_A_pct = line_A_allocation / aggregate, line_B_pct = line_B_allocation / aggregate, investment_pct = investment_allocation / aggregate, line_A_return = line.A.mean / (starting.surplus * line_A_pct), line_B_return = line.B.mean / (starting.surplus * line_B_pct), investment_return = inv.mean / (starting.surplus * investment_pct))
baseline.allocation %>% select(q, aggregate, line_A_pct, line_B_pct, investment_pct, line_A_return, line_B_return, investment_return)
##       q aggregate line_A_pct line_B_pct investment_pct line_A_return
## 1 0.999  10187009  0.1204327  0.8629842     0.01658315     0.4626440
## 2 0.998   9286111  0.1245871  0.8569983     0.01841466     0.4472169
## 3 0.996   8386151  0.1283913  0.8515630     0.02004563     0.4339658
## 4 0.990   7144638  0.1341083  0.8424404     0.02345132     0.4154661
## 5 0.980   6161409  0.1351983  0.8405649     0.02423679     0.4121165
## 6 0.950   4812691  0.1356539  0.8410891     0.02325709     0.4107325
## 7 0.900   3739508  0.1320865  0.8492340     0.01867947     0.4218255
##   line_B_return investment_return
## 1    0.05137257          2.419916
## 2    0.05173140          2.179232
## 3    0.05206158          2.001923
## 4    0.05262535          1.711196
## 5    0.05274276          1.655740
## 6    0.05270990          1.725487
## 7    0.05220436          2.148338

Changing Volume

Based on the above results, the company may be inclined to reduce its writing in Line B and increase it in Line A. Assuming that the newly written risks are independent of the existing risks, the standard deviation of each line scales with the square root of its volume multiplier.
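The square-root scaling follows from variance additivity for independent risks, which a quick simulation confirms (the risk sizes here are arbitrary):

```r
# A portfolio of k independent copies of a risk has k times the variance,
# so its standard deviation scales with sqrt(k).
set.seed(1)
k = 4
losses = matrix(rnorm(100000 * k, mean = 100, sd = 10), ncol = k)
portfolio = rowSums(losses)
sd(portfolio) / sd(losses[, 1])   # approximately sqrt(k) = 2
```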

line.A.multiplier = 1.6
line.B.multiplier = 0.25
line.A.mean = 10000000 * line.A.multiplier
line.A.sd = 1000000 * sqrt(line.A.multiplier)
line.B.mean = 8000000 * line.B.multiplier
line.B.sd = 2000000 * sqrt(line.B.multiplier)
inv.mean = 1.04
inv.sd = 0.1
line.correlation = 0.25
number.of.simulations = 1000000
lognormal.mu.A = log(line.A.mean / sqrt(1 + line.A.sd^2 / line.A.mean^2))
lognormal.mu.B = log(line.B.mean / sqrt(1 + line.B.sd^2 / line.B.mean^2))
lognormal.mu.inv = log(inv.mean / sqrt(1 + inv.sd^2 / inv.mean^2))
lognormal.sigma.A = sqrt(log(1 + line.A.sd^2 / line.A.mean^2))
lognormal.sigma.B = sqrt(log(1 + line.B.sd^2 / line.B.mean^2))
lognormal.sigma.inv = sqrt(log(1 + inv.sd^2 / inv.mean^2))
random.numbers = rCopula(number.of.simulations, normalCopula(line.correlation, dim = 2))
volume.change.data = data.frame(line_A_random = random.numbers[,1], line_B_random = random.numbers[,2])
volume.change.data = volume.change.data %>% mutate(line_A_loss = exp(qnorm(line_A_random, mean = lognormal.mu.A, sd = lognormal.sigma.A)), line_B_loss = exp(qnorm(line_B_random, mean = lognormal.mu.B, sd = lognormal.sigma.B)), inv_return = exp(rnorm(number.of.simulations, mean = lognormal.mu.inv, sd = lognormal.sigma.inv))) %>% select(-line_A_random, -line_B_random)
line.A.premium = 10500000 * line.A.multiplier
line.B.premium = 8400000 * line.B.multiplier
starting.surplus = 9000000
volume.change.data = volume.change.data %>% mutate(line_A_result = line.A.premium - line_A_loss, line_B_result = line.B.premium - line_B_loss, investment_result = starting.surplus * (inv_return - 1), total = line_A_result + line_B_result + investment_result, surplus = starting.surplus + total, return = surplus / starting.surplus - 1) %>% arrange(total)

Assess how the results change with the new volumes:

volume.change.allocation = data.frame(q = c(0.999, 0.998, 0.996, 0.99, 0.98, 0.95, 0.9))
line.A.mean = mean(volume.change.data$line_A_result)
line.B.mean = mean(volume.change.data$line_B_result)
inv.mean = mean(volume.change.data$investment_result)
volume.change.allocation$aggregate = apply(volume.change.allocation, 1, function(y){ co.tvar(q = y['q'], -volume.change.data$total, -volume.change.data$total)})
volume.change.allocation$line_A_allocation = apply(volume.change.allocation, 1, function(y){ co.tvar(q = y['q'], -volume.change.data$line_A_result, -volume.change.data$total)})
volume.change.allocation$line_B_allocation = apply(volume.change.allocation, 1, function(y){ co.tvar(q = y['q'], -volume.change.data$line_B_result, -volume.change.data$total)})
volume.change.allocation$investment_allocation = apply(volume.change.allocation, 1, function(y){ co.tvar(q = y['q'], -volume.change.data$investment_result, -volume.change.data$total)})
volume.change.allocation = volume.change.allocation %>% mutate(line_A_pct = line_A_allocation / aggregate, line_B_pct = line_B_allocation / aggregate, investment_pct = investment_allocation / aggregate, line_A_return = line.A.mean / (starting.surplus * line_A_pct), line_B_return = line.B.mean / (starting.surplus * line_B_pct), investment_return = inv.mean / (starting.surplus * investment_pct))
volume.change.allocation %>% select(q, aggregate, line_A_pct, line_B_pct, investment_pct, line_A_return, line_B_return, investment_return)
##       q aggregate line_A_pct line_B_pct investment_pct line_A_return
## 1 0.999   7727636  0.2371561  0.7310529     0.03179097     0.3746939
## 2 0.998   6931851  0.2570922  0.7054701     0.03743766     0.3456385
## 3 0.996   6142732  0.2818064  0.6710396     0.04715400     0.3153262
## 4 0.990   5116688  0.3079999  0.6352177     0.05678247     0.2885097
## 5 0.980   4351861  0.3274134  0.6055980     0.06698859     0.2714030
## 6 0.950   3322389  0.3488669  0.5735735     0.07755968     0.2547131
## 7 0.900   2514150  0.3575809  0.5593374     0.08308172     0.2485059
##   line_B_return investment_return
## 1    0.01524115         1.2594749
## 2    0.01579385         1.0695094
## 3    0.01660422         0.8491311
## 4    0.01754059         0.7051460
## 5    0.01839849         0.5977127
## 6    0.01942574         0.5162467
## 7    0.01992016         0.4819343

Note that shifting volume has significantly reduced the overall capital needed without impacting the total mean income. Line A gets a much larger share of capital, and the allocation to Line B is reduced. In practice, there may be barriers to changing volume in this manner:

  • The two lines may be indivisible parts of one policy, e.g. a combined property and liability policy.

  • Regulatory requirements may make it difficult to exit a line of business.

  • It takes time and underwriting effort to shift the makeup of the book of business.

Impact of Reinsurance

Returning to the original volumes, Kreps assesses how the results change if the company purchases an excess of loss contract for Line B, with an attachment point of $10M and a limit of $5M, assuming that the reinsurer prices the contract at expected recoveries plus a load of 25% of the recoveries' standard deviation.

attachment = 10000000
limit = 5000000
reinsurance.data = simulated.data %>% mutate(reinsurance_recoverable = pmin(limit, pmax(0, line_B_loss - attachment)))
head(reinsurance.data)
##   line_A_loss line_B_loss inv_return line_A_result line_B_result
## 1     9532385    29293698  1.0529896     967615.07     -20893698
## 2    12769580    25047738  1.0099660   -2269580.41     -16647738
## 3    11916889    23250846  0.9762604   -1416888.87     -14850846
## 4    10404515    25299247  1.0432970      95485.33     -16899247
## 5    12238780    23429047  1.0420742   -1738779.64     -15029047
## 6    12256464    22553126  1.0236519   -1756464.50     -14153126
##   investment_result     total   surplus    return reinsurance_recoverable
## 1         476906.13 -19449177 -10449177 -2.161020                 5000000
## 2          89693.87 -18827625  -9827625 -2.091958                 5000000
## 3        -213656.13 -16481391  -7481391 -1.831266                 5000000
## 4         389673.05 -16414089  -7414089 -1.823788                 5000000
## 5         378668.14 -16389158  -7389158 -1.821018                 5000000
## 6         212867.21 -15696723  -6696723 -1.744080                 5000000

Calculate the reinsurance premium:

reinsurance.premium = mean(reinsurance.data$reinsurance_recoverable) + 0.25 * sd(reinsurance.data$reinsurance_recoverable)
print(paste0("The Reinsurance Premium is ", reinsurance.premium))
## [1] "The Reinsurance Premium is 388308.385351574"

Define a separate column for the net reinsurance result. Note that by treating reinsurance as its own variable, rather than netting it against Line B, we can separate the capital impact of each. Re-calculate the total result accordingly.

reinsurance.data = reinsurance.data %>%
  mutate(reinsurance_result = reinsurance_recoverable - reinsurance.premium,
         total = line_A_result + line_B_result + investment_result + reinsurance_result,
         surplus = starting.surplus + total,
         return = surplus / starting.surplus - 1)
head(reinsurance.data)
##   line_A_loss line_B_loss inv_return line_A_result line_B_result
## 1     9532385    29293698  1.0529896     967615.07     -20893698
## 2    12769580    25047738  1.0099660   -2269580.41     -16647738
## 3    11916889    23250846  0.9762604   -1416888.87     -14850846
## 4    10404515    25299247  1.0432970      95485.33     -16899247
## 5    12238780    23429047  1.0420742   -1738779.64     -15029047
## 6    12256464    22553126  1.0236519   -1756464.50     -14153126
##   investment_result     total  surplus    return reinsurance_recoverable
## 1         476906.13 -14837485 -5837485 -1.648609                 5000000
## 2          89693.87 -14215933 -5215933 -1.579548                 5000000
## 3        -213656.13 -11869699 -2869699 -1.318855                 5000000
## 4         389673.05 -11802397 -2802397 -1.311377                 5000000
## 5         378668.14 -11777467 -2777467 -1.308607                 5000000
## 6         212867.21 -11085031 -2085031 -1.231670                 5000000
##   reinsurance_result
## 1            4611692
## 2            4611692
## 3            4611692
## 4            4611692
## 5            4611692
## 6            4611692

Re-calculate the capital allocation, treating reinsurance as a fourth financial variable:

reinsurance.allocation = data.frame(q = c(0.999, 0.998, 0.996, 0.99, 0.98, 0.95, 0.9))
line.A.mean = mean(reinsurance.data$line_A_result)
line.B.mean = mean(reinsurance.data$line_B_result)
inv.mean = mean(reinsurance.data$investment_result)
rein.mean = mean(reinsurance.data$reinsurance_result)
reinsurance.allocation$aggregate = apply(reinsurance.allocation, 1, function(y){ co.tvar(q = y['q'], -reinsurance.data$total, -reinsurance.data$total)})
reinsurance.allocation$line_A_allocation = apply(reinsurance.allocation, 1, function(y){ co.tvar(q = y['q'], -reinsurance.data$line_A_result, -reinsurance.data$total)})
reinsurance.allocation$line_B_allocation = apply(reinsurance.allocation, 1, function(y){ co.tvar(q = y['q'], -reinsurance.data$line_B_result, -reinsurance.data$total)})
reinsurance.allocation$investment_allocation = apply(reinsurance.allocation, 1, function(y){ co.tvar(q = y['q'], -reinsurance.data$investment_result, -reinsurance.data$total)})
reinsurance.allocation$reinsurance_allocation = apply(reinsurance.allocation, 1, function(y){ co.tvar(q = y['q'], -reinsurance.data$reinsurance_result, -reinsurance.data$total)})
reinsurance.allocation = reinsurance.allocation %>%
  mutate(line_A_pct = line_A_allocation / aggregate,
         line_B_pct = line_B_allocation / aggregate,
         investment_pct = investment_allocation / aggregate,
         reinsurance_pct = reinsurance_allocation / aggregate,
         line_A_return = line.A.mean / (starting.surplus * line_A_pct),
         line_B_return = line.B.mean / (starting.surplus * line_B_pct),
         investment_return = inv.mean / (starting.surplus * investment_pct),
         reinsurance_return = rein.mean / (starting.surplus * reinsurance_pct))
reinsurance.allocation %>% select(q, aggregate, line_A_pct, line_B_pct, investment_pct, reinsurance_allocation, reinsurance_pct, line_A_return, line_B_return, investment_return, reinsurance_return)
##       q aggregate line_A_pct line_B_pct investment_pct
## 1 0.999   6144588  0.3505502  0.9561117      0.1089194
## 2 0.998   5598103  0.3758762  0.8711445      0.1222206
## 3 0.996   5109916  0.3826927  0.7987618      0.1385895
## 4 0.990   4493419  0.3770852  0.7479120      0.1470514
## 5 0.980   4020086  0.3589215  0.7499060      0.1438740
## 6 0.950   3346304  0.3157512  0.7783742      0.1318810
## 7 0.900   2763107  0.2679237  0.8303998      0.1083328
##   reinsurance_allocation reinsurance_pct line_A_return line_B_return
## 1             -2553575.6      -0.4155813     0.1589428    0.04636876
## 2             -2067050.7      -0.3692413     0.1482335    0.05089135
## 3             -1635398.1      -0.3200440     0.1455932    0.05550305
## 4             -1222428.1      -0.2720485     0.1477583    0.05927665
## 5             -1015881.4      -0.2527014     0.1552358    0.05911903
## 6              -756286.2      -0.2260064     0.1764600    0.05695682
## 7              -571013.7      -0.2066564     0.2079601    0.05338840
##   investment_return reinsurance_return
## 1         0.3684360         0.04666523
## 2         0.3283393         0.05252175
## 3         0.2895589         0.06059540
## 4         0.2728966         0.07128579
## 5         0.2789234         0.07674352
## 6         0.3042881         0.08580816
## 7         0.3704309         0.09384273

Note that the negative capital allocations for reinsurance are consistent with its role as a hedge.
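To see why a hedge attracts a negative allocation, consider a minimal base-R sketch of the co-TVaR idea (a simplified stand-in for the co.tvar helper used above, applied to hypothetical toy data, not the simulated book): a component's capital is its average value in the scenarios where the total loss is at or beyond its q-th quantile.

```r
# Simplified co-TVaR: average contribution of a component in the
# tail scenarios of the total (losses expressed as positive values).
co_tvar_sketch <- function(component, total, q) {
  mean(component[total >= quantile(total, q)])
}

set.seed(1)
gross <- rnorm(100000, mean = 100, sd = 25)  # hypothetical gross loss
hedge <- -0.5 * (gross - 100)                # payoff that offsets bad years
total <- gross + hedge

# The hedge pays out in exactly the tail scenarios of the total, so its
# conditional tail mean -- and hence its capital allocation -- is negative,
# while the component allocations still sum to the aggregate co-TVaR.
co_tvar_sketch(hedge, total, 0.98)
```

Because the allocation is a conditional mean, additivity across components holds automatically, which is the second desirable quality listed in the introduction.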

To assess whether reinsurance provides value to the company:

  1. Determine the amount of surplus that can be released based on the company’s rule for setting surplus.

  2. Multiply the released surplus by the cost of capital to determine the benefit of releasing surplus.

  3. Compare the result in step 2 to the cost of reinsurance (net of expected recoveries).

As an example, suppose the company sets surplus at 1.5 times the 98th percentile TVaR. Assume a cost of capital of 5%. The calculations can be done as follows:

baseline.surplus = as.numeric(baseline.allocation %>% filter(q == 0.980) %>% select(aggregate)) * 1.5
reinsurance.surplus = as.numeric(reinsurance.allocation %>% filter(q == 0.980) %>% select(aggregate)) * 1.5
surplus.released = baseline.surplus - reinsurance.surplus
cost.of.capital = 0.05
reinsurance.benefit = surplus.released * cost.of.capital
reinsurance.cost = reinsurance.premium -  mean(reinsurance.data$reinsurance_recoverable)
print(paste0("The benefit of reinsurance is ", reinsurance.benefit, " and the cost of reinsurance is ", reinsurance.cost))
## [1] "The benefit of reinsurance is 160599.211811203 and the cost of reinsurance is 174538.759637965"

In this example, the cost of reinsurance exceeds the benefit of the released surplus, so purchasing the contract is not justified.
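The comparison can also be inverted to find the implied break-even cost of capital. This is a back-of-the-envelope check using the figures printed above, not a calculation from Kreps' paper:

```r
# Figures from the output above
reinsurance.benefit <- 160599.21   # computed at a 5% cost of capital
reinsurance.cost    <- 174538.76
surplus.released    <- reinsurance.benefit / 0.05

# The purchase breaks even when the cost of capital equals
# cost of reinsurance / surplus released -- about 5.4% here, so the
# contract becomes worthwhile only if capital costs more than that.
break.even.cost.of.capital <- reinsurance.cost / surplus.released
break.even.cost.of.capital
```

Since the assumed 5% cost of capital sits just below this break-even rate, the decision is sensitive to the cost-of-capital assumption.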