Introduction

These study notes are based on the Exam 7 syllabus readings A Model for Reserving Workers Compensation High Deductibles by Jerome Siewert, and Claims Development by Layer: The Relationship Between Claims Development Patterns, Trend, and Claim Size Models by Rajesh Sahasrabuddhe. The first paper explains how loss development in excess of a deductible relates to ground-up loss development, while the second paper explores the relationship between development, trend, and limits in more depth.

easypackages::packages("data.table", "DT", "ggplot2")
## Loading required package: data.table
## Warning: package 'data.table' was built under R version 3.4.2
## Loading required package: DT
## Loading required package: ggplot2
## All packages loaded successfully

Loss Ratio Approach

A major challenge with reserving for large deductible policies is that losses above the deductible tend to have a longer reporting pattern than the ground-up losses. A simple approach to reserving, applicable when little data is available or for immature accident years, is to use loss ratio estimates based on the plan parameters. Let
  • \(P\) denote the premium
  • \(E\) denote the expected loss ratio
  • \(\chi\) denote the per-occurrence excess ratio corresponding to the deductible
  • \(\phi\) denote the net insurance charge for the aggregate limit

In this notation, the expected loss above the deductible is \[ PE\chi \] The expected amount in excess of the aggregate limit is \[ PE(1 - \chi)\phi \] Calculating these values for each policy and aggregating provides an estimate of ultimate losses. Doing so will reflect the mix of limits in the book of business. This is demonstrated in the following example:

rating.parameters = fread("./siewert-rating_parameters.csv")
rating.parameters[, expected_losses := premium * ELR]
rating.parameters[, per_occurrence_excess_losses := expected_losses * excess_ratio]
rating.parameters[, aggregate_excess_losses := (expected_losses - per_occurrence_excess_losses) * net_insurance_charge]
datatable(rating.parameters)

Ultimate losses can then be determined as follows:

total_po_losses = rating.parameters[, sum(per_occurrence_excess_losses)]
total_agg_losses = rating.parameters[, sum(aggregate_excess_losses)]
expected_ultimate_losses = total_po_losses + total_agg_losses
print(paste0("Ultimate losses in excess of per-occurrence deductibles are ", total_po_losses))
## [1] "Ultimate losses in excess of per-occurrence deductibles are 137250.675497"
print(paste0("Utiimate losses in excess of aggregate limits are ", total_agg_losses))
## [1] "Utiimate losses in excess of aggregate limits are 19562.64535006"
print(paste0("Ultimate expected loss amount is ", expected_ultimate_losses))
## [1] "Ultimate expected loss amount is 156813.32084706"

A drawback of this approach is that it completely ignores actual emerging experience, so it is not useful after several years of development. Thus, approaches that consider actual emergence are needed.

Losses in Excess of a Per-Occurrence Limit

LDF Consistency Requirements

The recommended approach is implicit development, which proceeds as follows:
  1. Develop ground-up unlimited losses to ultimate
  2. Develop losses below the deductible (indexed to inflation) to ultimate. In this step, the development factors will differ by the size of the deductible.
  3. Determine ultimate excess losses as the difference between step 1 and step 2

Note that this approach can provide an estimate of excess losses at early maturities, even before excess losses have begun to emerge. Moreover, the development factors for losses below the deductible are more stable than those determined directly from excess losses. A direct development approach involves deriving excess loss development factors from the unlimited and limited development factors, and applying them to excess losses; however, excess losses tend to be volatile and the resulting factors highly leveraged.
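
As a minimal numeric sketch of implicit development (all of the reported losses and factors below are hypothetical, not taken from Siewert’s exhibits):

unlimited.reported.48 = 5000000  # hypothetical ground-up unlimited reported losses at 48 months
limited.reported.48 = 4500000    # hypothetical reported losses limited to the deductible
unlimited.LDF.to.ult = 1.25      # hypothetical ground-up development factor to ultimate
limited.LDF.to.ult = 1.10        # hypothetical limited (below-deductible) factor to ultimate
# Step 1: develop ground-up unlimited losses to ultimate
unlimited.ultimate = unlimited.reported.48 * unlimited.LDF.to.ult
# Step 2: develop losses below the deductible to ultimate
limited.ultimate = limited.reported.48 * limited.LDF.to.ult
# Step 3: ultimate excess losses are the difference
implicit.excess.ultimate = unlimited.ultimate - limited.ultimate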

The relationship between the limited LDFs and excess LDFs relies on the severity relativity for the given limit, and this relativity is dependent on the development age. Define the following notation:
  • \(C_t\) denotes the claim count at time \(t\)
  • \(S_t\) denotes unlimited claim severity at time \(t\)
  • \(R_t^L\) denotes the severity relativity at time \(t\) for the limit \(L\). This is the ratio of limited severity to unlimited severity.

When these quantities appear without the subscript \(t\), the ultimate value is denoted.

In this notation, the unlimited LDF at time \(t\) is the ultimate claim amount divided by the claim amount at time \(t\): \[ \mathrm{LDF}_t = \frac{C S}{C_t S_t} \] The limited LDF at time \(t\) is the ratio of the ultimate limited claim amount to the limited claim amount at time \(t\): \[ \mathrm{LDF}_t^L = \frac{CSR^L}{C_tS_tR_t^L} = \mathrm{LDF}_t \Delta R^L \] where \(\Delta R^L = R^L / R^L_t\) is the ratio of the ultimate relativity to the relativity at time \(t\). Because of this, the severity relativities are a key driver of the relationship between unlimited, limited, and excess development factors. They tend to decrease over time (though not always), since unlimited severity grows faster than limited severity.

Similarly, the excess LDF at time \(t\) is the ratio of ultimate excess losses to excess losses at time \(t\): \[ \mathrm{XSLDF}_t^L = \frac{CS(1-R^L)}{C_tS_t(1-R_t^L)} \] This requires that the following consistency relationship be satisfied between these factors: \[ \mathrm{LDF}_t = R_t^L \mathrm{LDF}_t^L + (1 - R_t^L) \mathrm{XSLDF}_t^L \] These relationships also apply to the age-to-age factors. This relationship can also be used to partition the future development into deductible and excess layers, since the percentage unreported is equal to \[ 1 - \frac{1}{\mathrm{LDF}_t} = \frac{\mathrm{LDF}_t - 1}{\mathrm{LDF}_t} = \frac{R_t^L(\mathrm{LDF}_t^L - 1) + (1 - R_t^L) (\mathrm{XSLDF}_t^L-1)}{\mathrm{LDF}_t} \] Thus, the percentage unreported below the deductible is \[ \frac{R_t^L (\mathrm{LDF}_t^L - 1)}{\mathrm{LDF}_t} \] and the percentage unreported above the deductible is \[ \frac{(1 - R_t^L)(\mathrm{XSLDF}_t^L - 1)}{\mathrm{LDF}_t} \]
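
As a quick self-contained check of these relationships (the unlimited LDF and relativities below are illustrative only, not values from Siewert’s paper):

R.t = 0.90              # illustrative severity relativity at time t
R.ult = 0.85            # illustrative ultimate severity relativity
unlimited.LDF.t = 1.40  # illustrative unlimited LDF at time t
# Limited and excess LDFs implied by the relativities
limited.LDF.t = unlimited.LDF.t * R.ult / R.t
excess.LDF.t = unlimited.LDF.t * (1 - R.ult) / (1 - R.t)
# Consistency: the relativity-weighted average of limited and excess LDFs recovers the unlimited LDF
all.equal(R.t * limited.LDF.t + (1 - R.t) * excess.LDF.t, unlimited.LDF.t)
# The percentage unreported splits into deductible and excess layers
deductible.unreported = R.t * (limited.LDF.t - 1) / unlimited.LDF.t
excess.unreported = (1 - R.t) * (excess.LDF.t - 1) / unlimited.LDF.t
all.equal(deductible.unreported + excess.unreported, 1 - 1 / unlimited.LDF.t)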

The ideas described above can be applied to the following unlimited age-to-age factors and limited severities from Siewert’s paper:

unlimited.age.to.age = fread("./siewert-unlimited_age_to_age.csv")
datatable(unlimited.age.to.age)
severity.relativities = fread("./siewert-limited_severity.csv")
datatable(severity.relativities)

The first step is to calculate the change in limited severity relativities between subsequent ages:

severity.relativities[, next_age := age + 12]
severity.relativities[, development_period := paste0(age,"-",next_age)]
severity.relativities = merge(severity.relativities, severity.relativities[, .(accident_year, age, next_relativity = limited_severity)], by.x = c("accident_year", "next_age"), by.y = c("accident_year", "age"))
severity.relativities[, relativity_change := next_relativity / limited_severity]
datatable(severity.relativities)

Join to the table of unlimited age-to-age factors, and use it to determine the limited age-to-age factors:

limited.age.to.age = merge(unlimited.age.to.age, severity.relativities[,.(accident_year, development_period, relativity_change)], by = c("accident_year", "development_period"))
limited.age.to.age[, limited_factor := factor * relativity_change]
datatable(limited.age.to.age)

To get the excess development factors, join the severity relativities for the start of the development period:

limited.age.to.age[, dev_period_start := as.numeric(substr(development_period, 1, 2))]
excess.age.to.age = merge(limited.age.to.age, severity.relativities[,.(accident_year, dev_period_start = age, limited_severity)], by = c("accident_year", "dev_period_start"))
excess.age.to.age[, excess_factor := (factor - limited_severity * limited_factor) / (1 - limited_severity)]
datatable(excess.age.to.age)

An alternate approach is to use the fact that \[ \mathrm{XSLDF}_t^L = \mathrm{LDF}_t \Delta(1 - R^L) \] where \(\Delta(1 - R^L)\) is the corresponding ratio for the complement of the relativity, \[ \Delta(1 - R^L) = \frac{1 - R^L}{1 - R_t^L} \] Using the examples above,

severity.relativities[, complement_change := (1 - next_relativity) / (1 - limited_severity)]
excess.age.to.age.v2 = merge(excess.age.to.age, severity.relativities[,.(accident_year, development_period, complement_change)], by = c("accident_year", "development_period"))
excess.age.to.age.v2[, excess_factor_v2 := factor * complement_change]
datatable(excess.age.to.age.v2[,.(accident_year, development_period, factor, excess_factor, complement_change, excess_factor_v2)])

Distributional Approach

An alternate approach is to view the claim amounts at each development age as the realization of a parametrized distribution, whose parameters vary based on the development age. Siewert uses a Weibull distribution with the following parameters at 48 months and at ultimate:

beta.ult = 180.0
alpha.ult = 0.2326
beta.48 = 305.7
alpha.48 = 0.2625

In order to use these to determine the limited severity relativities at each age, we need to know the mean of the distribution, \[ E[X] = \beta \Gamma(1/\alpha + 1) \] and the limited expected value \[ \mathrm{LEV}(x) = E[X;x] = E[X] \Gamma(1/\alpha + 1; (x/\beta)^{\alpha}) + x\exp(-(x/\beta)^{\alpha}) \] R functions for calculating these quantities are as follows.

weibull.mean = function(alpha, beta) {
  return(beta * gamma(1 / alpha + 1))
}
unlimited.severity.ult = weibull.mean(alpha.ult, beta.ult)
unlimited.severity.48 = weibull.mean(alpha.48, beta.48)
print(paste0("The ultimate expected unlimited severity is ", unlimited.severity.ult))
## [1] "The ultimate expected unlimited severity is 6845.71167800678"
print(paste0("The 48-month expected unlimited severity is ", unlimited.severity.48))
## [1] "The 48-month expected unlimited severity is 5529.48981697578"

Examples for the expected limited loss assume a deductible of $250K.

weibull.LEV = function(alpha, beta, limit) {
  return(weibull.mean(alpha, beta) * pgamma((limit / beta)^alpha, 1/alpha + 1) + limit * exp(-(limit / beta)^alpha))
}
print(paste0("The ultimate expected severity limited by a $250K deductible is ", weibull.LEV(alpha.ult, beta.ult, limit = 250000)))
## [1] "The ultimate expected severity limited by a $250K deductible is 5064.10737365941"
print(paste0("The 48-month expected severity limited by a $250K deductible is ", weibull.LEV(alpha.48, beta.48, limit = 250000)))
## [1] "The 48-month expected severity limited by a $250K deductible is 4721.89840584418"

The approach Siewert uses assumes that claim counts are already at ultimate (so \(C / C_t = 1\)), which is a reasonable assumption for the 48-month valuation. The limited LDFs can be calculated, for a variety of limits, as the ratio of limited severity at ultimate to limited severity at 48 months:

distributional.dev.factors = data.table(limit = c(1000000, 750000, 500000, 250000, 100000, 50000))
distributional.dev.factors$limited_severity_ult = apply(distributional.dev.factors, 1, function(r) weibull.LEV(alpha.ult, beta.ult, limit = r['limit']))
distributional.dev.factors$limited_severity_48 = apply(distributional.dev.factors, 1, function(r) weibull.LEV(alpha.48, beta.48, limit = r['limit']))
distributional.dev.factors[, limited_LDF := limited_severity_ult / limited_severity_48]
datatable(distributional.dev.factors)

Excess LDFs can be calculated in a similar manner:

distributional.dev.factors[, excess_severity_ult := unlimited.severity.ult - limited_severity_ult]
distributional.dev.factors[, excess_severity_48 := unlimited.severity.48 - limited_severity_48]
distributional.dev.factors[, excess_LDF := excess_severity_ult / excess_severity_48]
datatable(distributional.dev.factors)

An alternate approach would be to calculate the severity relativities and use the consistency relationships in the previous section.

unlimited.LDF = unlimited.severity.ult / unlimited.severity.48
distributional.dev.factors[, severity_relativity_ult := limited_severity_ult / unlimited.severity.ult]
distributional.dev.factors[, severity_relativity_48 := limited_severity_48 / unlimited.severity.48]
distributional.dev.factors[, Delta_R := severity_relativity_ult / severity_relativity_48]
distributional.dev.factors[, limited_LDF_v2 := unlimited.LDF * Delta_R]
distributional.dev.factors[, Delta_R_complement := (1 - severity_relativity_ult) / (1 - severity_relativity_48)]
distributional.dev.factors[, excess_LDF_v2 := unlimited.LDF * Delta_R_complement]
datatable(distributional.dev.factors[, .(limit, limited_LDF, limited_LDF_v2, severity_relativity_ult, severity_relativity_48, Delta_R, excess_LDF, excess_LDF_v2, Delta_R_complement)])

Future development can be partitioned into deductible and excess layers as of 48 months as follows:

distributional.dev.factors[, percent_unreported := 1 - 1 / unlimited.LDF]
distributional.dev.factors[, deductible_layer_unreported := severity_relativity_48 *(limited_LDF - 1) / unlimited.LDF]
distributional.dev.factors[, excess_layer_unreported := (1 - severity_relativity_48) * (excess_LDF -1) / unlimited.LDF]
datatable(distributional.dev.factors[, .(limit, percent_unreported, deductible_layer_unreported, excess_layer_unreported)])

Visualizing the differences by layer:

ggplot(data = distributional.dev.factors, aes(x = limit, y = deductible_layer_unreported + excess_layer_unreported)) + geom_bar(stat = "identity", fill = "blue") + geom_bar(aes(y = deductible_layer_unreported), stat = "identity", fill = "red")
## Warning in plyr::split_indices(scale_id, n): '.Random.seed' is not an
## integer vector but of type 'NULL', so ignored

(Note that this is a different chart than the one presented by Siewert – his is a chart of deductible and excess development by age for a fixed limit.)

Incorporating Severity Trend

Sahasrabuddhe generalizes these ideas by incorporating trend factors, both accident year and calendar year, into the analysis. (The notes below attempt to convert his notation to the style used by Siewert to make the connection between the two approaches clearer.) Assumptions underlying this method include:
  • There is a cost level index \(T_{i,j}\) for accident year \(i\) and development interval \(j\), relative to some base cost level. Ultimate values are denoted by \(j = \infty\). These trends are defined relative to ground-up unlimited losses. Let \(S_{i,j,L}\) denote the severity in accident year \(i\), development period \(j\), and limit \(L\).
  • We have a claim size model for each development age, whose parameters can be adjusted for inflation.
Then the formula for the limited LDF generalizes to \[ \mathrm{LDF}_{i,j}^L = \frac{C_{\infty}}{C_j} \frac{S_{i, \infty, L}}{S_{i,j,L}} \] In other words, this is the factor that is applied to the entry at row \(i\) and column \(j\) of the triangle to obtain an estimate of ultimate claims at the given limit. Define severity relativities with respect to the unlimited severity at the cost level of the most recent exposure period \(n\): \[ R_{i,j}^L = S_{i, j, L} / S_{n, j} \] Then the formula for the limited LDF becomes \[ \mathrm{LDF}_{i,j}^L = \frac{C_{\infty}}{C_j}\frac{S_{n, \infty}}{S_{n,j}} \frac{R_{i,\infty}^L}{R_{i,j}^L} = \mathrm{LDF}_{n,j} \frac{R_{i,\infty}^L}{R_{i,j}^L} \] Rather than relating limited LDFs to unlimited LDFs, Sahasrabuddhe relates them to basic limit LDFs at the cost level of exposure period \(n\). The basic limits LDFs can be obtained from the above formula: \[ \mathrm{LDF}_{n,j}^B = \mathrm{LDF}_{n,j} \frac{R_{n,\infty}^B}{R_{n,j}^B} \] Taking the ratio of these two formulas relates the LDF for any limit and cost level to the basic limits LDF at the most recent cost level: \[ \mathrm{LDF}_{i,j}^L = \mathrm{LDF}_{n,j}^B \frac{R_{i,\infty}^L / R_{n,\infty}^B}{R_{i,j}^L / R_{n,j}^B} = \mathrm{LDF}_{n,j}^B \frac{S_{i,\infty,L} / S_{n,\infty,B}}{S_{i,j,L} / S_{n,j,B}} \] The main steps in the approach are:
  1. Determine trend factors \(T_{i,j}\) for each accident year and development period.
  2. Calculate the limited severity, at each cost level, for the limits that are relevant to the problem (e.g. basic limit, limit of the data in the table, limit for the policy of interest).
  3. Use ratios of limited severities to convert the data in the table to basic limits losses at the cost level of the most recent exposure period, i.e. multiply by \(S_{n, j, B} / S_{i, j, L}\).
  4. Apply the chain ladder method to the adjusted table to determine \(\mathrm{LDF}_{n,j}^B\).
  5. Use ratios of limited severities to convert \(\mathrm{LDF}_{n,j}^B\) to \(\mathrm{LDF}_{i,j}^L\), noting that we are typically only interested in \(\mathrm{LDF}_{i,j}^L\) for the most recent diagonal.
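
As a minimal sketch of the final adjustment (step 5) for a single cell, using hypothetical severities rather than values from the paper:

LDF.B.n = 1.50     # hypothetical basic limits LDF at the cost level of exposure period n
S.i.j.L = 180000   # hypothetical limited severity: accident year i, age j, limit L
S.i.ult.L = 260000 # hypothetical limited severity: accident year i, ultimate, limit L
S.n.j.B = 150000   # hypothetical basic limits severity: exposure period n, age j
S.n.ult.B = 200000 # hypothetical basic limits severity: exposure period n, ultimate
# LDF_{i,j}^L = LDF_{n,j}^B * (S_{i,inf,L} / S_{n,inf,B}) / (S_{i,j,L} / S_{n,j,B})
LDF.i.j.L = LDF.B.n * (S.i.ult.L / S.n.ult.B) / (S.i.j.L / S.n.j.B)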

Sahasrabuddhe’s First Example

Sahasrabuddhe’s first example is based on the following data:

example.1.table = fread("./High Deductibles/sahasrabuddhe-example_1.csv")
datatable(example.1.table)

The data are assumed to come from policies with a limit of $1M. The objective is to determine development factors for a limit of $2M, given a basic limit of $500K. Sahasrabuddhe assumes that the following trends are in effect:
  • Accident year trend of 2%, with the exception of a 5% trend between accident years 6 and 7.
  • Calendar year trend of 1%, with the exception of a -5% trend between calendar years 2 and 3.

Calculate the trends, then join them to the table above:

AY.trend = data.frame(exposure_period = 1:10)
AY.trend$annual_trend = ifelse(AY.trend$exposure_period == 1, 1,
                                  ifelse(AY.trend$exposure_period == 7, 1.05, 1.02))
AY.trend$AY_trend = cumprod(AY.trend$annual_trend)
datatable(as.data.table(AY.trend))
CY.trend = data.frame(calendar_period = 1:19)
CY.trend$annual_trend = ifelse(CY.trend$calendar_period == 1, 1,
                                  ifelse(CY.trend$calendar_period == 3, 0.95, 1.01))
CY.trend$CY_trend = cumprod(CY.trend$annual_trend)
datatable(as.data.table(CY.trend))
trend.table = merge(AY.trend[,c("exposure_period", "AY_trend")], CY.trend[, c("calendar_period", "CY_trend")], by = NULL)
trend.table$combined_trend = trend.table$AY_trend * trend.table$CY_trend
datatable(as.data.table(trend.table))

Assuming an exponential distribution for claim severity in each development interval, the following values of \(\theta\) are used; these are stated at the cost level of exposure period 10.

exponential.theta = data.frame(development_interval = 1:10, theta = c(28138, 84242, 133998, 182460, 204649, 228245, 252830, 265063, 275707, 280000))
EP.list = data.frame(exposure_period = 1:10)
severity.table = merge(exponential.theta, EP.list, by = NULL)
severity.table$calendar_period = severity.table$exposure_period + severity.table$development_interval - 1
severity.table = merge(severity.table, trend.table[, c("exposure_period", "calendar_period", "combined_trend")], by = c("exposure_period", "calendar_period"))
severity.table = as.data.table(severity.table)
datatable(severity.table)

Since the parameters are given relative to exposure period 10, but the trend factors are relative to exposure period 1, the trends are normalized based on the exposure period 10 trend factors.

ep.10.trend = as.data.table(trend.table)[exposure_period == 10]
ep.10.trend$development_interval = ep.10.trend$calendar_period - ep.10.trend$exposure_period + 1
severity.table = merge(severity.table, ep.10.trend[,.(development_interval, ep_10_trend = combined_trend)], by = "development_interval")
datatable(severity.table)

For the exponential distribution, the unlimited mean is just \(\theta\), after being adjusted for trend. Limited expected values for this distribution can be calculated using the formula \[ E[X;x] = \theta(1 - \exp(-x/\theta)) \]

severity.table[, unlimited_mean := theta * combined_trend / ep_10_trend]
severity.table[, bl_severity := unlimited_mean * (1 - exp(-500000 / unlimited_mean))]
severity.table[, table_severity := unlimited_mean * (1 - exp(-1000000 / unlimited_mean))]
severity.table[, pl_severity := unlimited_mean * (1 - exp(-2000000 / unlimited_mean))]
datatable(severity.table[, .(development_interval, exposure_period, unlimited_mean, bl_severity, table_severity, pl_severity)])

Use the above data to convert the given data, which is at the $1M limit, to basic limits at the cost level of exposure period 10.

basic.limits.table = merge(example.1.table, severity.table[, .(development_interval, exposure_period, table_severity)], by = c("development_interval", "exposure_period"))
basic.limits.table = merge(basic.limits.table, severity.table[exposure_period == 10, .(development_interval, bl_severity)], by = "development_interval")
basic.limits.table[, bl_cumulative := cumulative_loss * bl_severity / table_severity]
datatable(basic.limits.table)

Apply the chain ladder method to the adjusted table:

basic.limits.CL = basic.limits.table[order(exposure_period, development_interval)]
basic.limits.CL[, bl_cumulative_next := shift(bl_cumulative, type = "lead"), by = "exposure_period"]
basic.limits.CL = basic.limits.CL[!is.na(bl_cumulative_next)]
basic.limits.LDF = basic.limits.CL[, .(age_to_age = sum(bl_cumulative_next) / sum(bl_cumulative)), by = "development_interval"]
basic.limits.LDF = basic.limits.LDF[order(-development_interval)]
basic.limits.LDF[, LDF_B_n := cumprod(age_to_age)]
datatable(basic.limits.LDF[order(development_interval)])

Use the severity table to adjust the cost level of these LDFs:

adjusted.LDFs = merge(severity.table, basic.limits.LDF[, .(development_interval, LDF_B_n)], by = "development_interval")
ult.BL.sev.n = severity.table[development_interval == 10 & exposure_period == 10, .(bl_severity)][[1]]
adjusted.LDFs = merge(adjusted.LDFs, severity.table[exposure_period == 10, .(development_interval, bl_severity_n = bl_severity)], by = "development_interval")
adjusted.LDFs = merge(adjusted.LDFs, severity.table[development_interval == 10, .(exposure_period, bl_severity_ult = bl_severity)], by = "exposure_period")
adjusted.LDFs[, LDF_B := LDF_B_n * bl_severity_n / ult.BL.sev.n * bl_severity_ult / bl_severity]
datatable(adjusted.LDFs[, .(development_interval, exposure_period, calendar_period, bl_severity, LDF_B_n, bl_severity_n, bl_severity_ult, LDF_B)])

The main entries of interest are along the most recent diagonal:

datatable(adjusted.LDFs[calendar_period == 10, .(development_interval, LDF_B)][order(development_interval)])

A similar adjustment can be performed to get the LDFs for the given policy limit:

adjusted.LDFs = merge(adjusted.LDFs, severity.table[development_interval == 10, .(exposure_period, pl_severity_ult = pl_severity)], by = "exposure_period")
adjusted.LDFs[, LDF_P := LDF_B_n * bl_severity_n / ult.BL.sev.n * pl_severity_ult / pl_severity]
datatable(adjusted.LDFs[calendar_period == 10, .(development_interval, LDF_P)][order(development_interval)])

Factors for the layer between the basic limit and policy limit can be calculated using the expected severity for the layer:

adjusted.LDFs[, LDF_B_to_P := LDF_B_n * bl_severity_n / ult.BL.sev.n * (pl_severity_ult - bl_severity_ult) / (pl_severity - bl_severity)]
datatable(adjusted.LDFs[calendar_period == 10, .(development_interval, LDF_B_to_P)][order(development_interval)])

Sahasrabuddhe’s second example is similar, with the only difference being a change in the unlimited severity at each development interval.

Sahasrabuddhe’s Third Example

This example demonstrates a simplification of the above approach that can be applied under the following practical constraints:
  • We are only provided with a development pattern and its associated limit; we do not know the cost level. In this case, it is assumed to be at the cost level of the latest exposure period.
  • We only have a claim size model at ultimate, and not claim size models by age.
The modified formula is \[ \mathrm{LDF}_{i,j}^L = \mathrm{LDF}_{n,j}^B \frac{S_{i,\infty,L} / S_{i,\infty, B}}{R_j(L,B)} \] where \(R_j(L,B)\) is the ratio of losses limited at \(L\) to losses limited at \(B\) in development period \(j\), evaluated on the most recent diagonal. (Note that because the basic limit losses on this diagonal are at cost level \(i\), the basic limit severity in the numerator changes from \(S_{n,\infty,B}\) to \(S_{i,\infty, B}\).) Assumptions underlying this simplification include:
  • Trend rates are low; in particular, they are not high enough to impact calculations of ratios of limited severities
  • Limits are above the working layer

The above assumptions ensure that the differences introduced by the approximation are immaterial.
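
A single-cell sketch of the modified formula, with hypothetical inputs (none of these values come from the paper), looks like this:

LDF.1M.j = 1.30        # hypothetical development factor at the limit of the available pattern ($1M)
S.i.ult.500K = 230000  # hypothetical ultimate severity limited to $500K at cost level i
S.i.ult.1M = 250000    # hypothetical ultimate severity limited to $1M at cost level i
R.j.500K.1M = 0.92     # hypothetical ratio of $500K-limited to $1M-limited losses at age j
# LDF_{i,j}^{500K} = LDF_{n,j}^{1M} * (S_{i,inf,500K} / S_{i,inf,1M}) / R_j(500K, 1M)
LDF.500K.j = LDF.1M.j * (S.i.ult.500K / S.i.ult.1M) / R.j.500K.1M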

The following information is assumed to be provided:

example.3.table = fread("./High Deductibles/sahasrabuddhe-example_3.csv")
theta.ultimate = 280000
datatable(example.3.table)

The objective is to determine the basic limit LDFs from the $1M LDFs. Rather than using the observed ratios of limited claims along the most recent diagonal, Sahasrabuddhe selects a ratio based on a decay factor methodology (details not provided).

example.3.table[, R_j := claims_500K_limit / claims_1M_limit]
example.3.table = merge(example.3.table, as.data.table(trend.table)[calendar_period == 10 + exposure_period - 1, .(exposure_period, combined_trend)], by = "exposure_period")
most.recent.trend = example.3.table[exposure_period == 10, .(combined_trend)][[1]]
example.3.table[, trended_theta := theta.ultimate / most.recent.trend * combined_trend]
example.3.table[, limited_ult_sev_1M := trended_theta * (1 - exp(- 1000000 / trended_theta))]
example.3.table[, limited_ult_sev_500K := trended_theta * (1 - exp(- 500000 / trended_theta))]
example.3.table$selected_R = c(0.917, 0.917, 0.914, 0.912, 0.912, 0.915, 0.923, 0.937, 0.961, 0.999)
example.3.table[, LDF_500K := LDF_1M * limited_ult_sev_500K / limited_ult_sev_1M / selected_R]
datatable(example.3.table)

Aggregate Excess Reserves

Siewert recommends two approaches to determining LDFs for aggregate excess losses:
  1. Use of a collective risk model, e.g. by assuming a Poisson frequency distribution along with a Weibull severity distribution. Use of the Bornhuetter-Ferguson method is recommended due to the volatility of losses in excess of the aggregate limit.
  2. Use of NCCI Table M.

To illustrate the idea behind a collective risk model, first assume that expected unlimited losses are $1M. Therefore, using the Weibull distribution for severity, we would make the following frequency assumption:

lambda = 1000000 / unlimited.severity.ult
print(paste0("The assumed claim frequency is ", lambda))
## [1] "The assumed claim frequency is 146.076850302168"

This function will randomly generate a table of losses given the Weibull parameters, frequency, and per-occurrence limit.

set.seed(12345)
collective.risk.table = function(frequency, alpha, beta, limit) {
  number.of.claims = rpois(1, frequency)
  result.table = data.table(claim_number = 1:number.of.claims)
  result.table[, claim_amount := rweibull(number.of.claims, shape = alpha, scale = beta)]
  result.table[, deductible_amount := pmin(claim_amount, limit)]
  result.table[, po_excess_amount := claim_amount - deductible_amount]
  return(result.table)
}
test = collective.risk.table(frequency = lambda, alpha = alpha.ult, beta = beta.ult, limit = 250000)
datatable(test)

The following function simulates the given number of accounts, and calculates the total amount within the deductible layer for each account. From this, the aggregate excess is calculated.

simulate.accounts = function(number_of_accounts, frequency, alpha, beta, limit, aggregate_limit) {
  deductible_amount = rep(0, number_of_accounts)
  po_excess_amount = rep(0, number_of_accounts)
  aggregate_excess_amount = rep(0, number_of_accounts)
  for (i in 1:number_of_accounts) {
    account.simulation = collective.risk.table(frequency, alpha, beta, limit)
    deductible_amount[i] = account.simulation[, sum(deductible_amount)]
    po_excess_amount[i] = account.simulation[, sum(po_excess_amount)]
    aggregate_excess_amount[i] = pmax(0, deductible_amount[i] - aggregate_limit)
  }
  results = data.table(deductible_amount, po_excess_amount, aggregate_excess_amount)
  return(results)
}
simulation.test = simulate.accounts(10, lambda, alpha.ult, beta.ult, limit = 250000, aggregate_limit = 500000)
datatable(simulation.test)

By performing the simulations using both the ultimate and 48-month parameters, we can approximate the aggregate excess LDF as the ratio of simulated ultimate to simulated 48-month aggregate excess losses at the given per-occurrence and aggregate limits.

simulate.excess.LDF = function(number_of_simulations, frequency, limit, aggregate_limit) {
  excess_loss_48 = sum(simulate.accounts(number_of_simulations, frequency, alpha.48, beta.48, limit, aggregate_limit)$aggregate_excess_amount)
  excess_loss_ult = sum(simulate.accounts(number_of_simulations, frequency, alpha.ult, beta.ult, limit, aggregate_limit)$aggregate_excess_amount) 
  return(excess_loss_ult / excess_loss_48)
}
LDF.test = simulate.excess.LDF(10000, lambda, 250000, 500000)
print(paste0("The simulated LDF for a per-occurrence limit of $250K and an aggregate limit of $500K is ", LDF.test))
## [1] "The simulated LDF for a per-occurrence limit of $250K and an aggregate limit of $500K is 1.23010044211904"

Apply the simulation to a variety of per-occurrence and aggregate limits:

excess.LDF.table.1 = data.table(po_limit = c(100000, 250000, 500000, 100000, 250000, 500000, 100000, 250000, 500000), aggregate_limit = c(500000, 500000, 500000, 750000, 750000, 750000, 1000000, 1000000, 1000000))
excess.LDF.table.1$excess_LDF = apply(excess.LDF.table.1, 1, function(r) simulate.excess.LDF(number_of_simulations = 10000, frequency = lambda, limit = r['po_limit'], aggregate_limit = r['aggregate_limit']))
datatable(excess.LDF.table.1)

Note that a drawback to using simulations to calculate the average aggregate excess loss at each age in the manner above is that the simulations at 48 months and at ultimate are performed independently, whereas in reality the ultimate value depends on the 48-month value (and in general we expect the ultimate value to be greater than the 48-month value, except in lines where salvage and subrogation are common). With a simulation approach, there is a risk that smaller claims could be generated at ultimate than at the 48-month mark; however, using a large number of simulations appears to mitigate this problem and produce reasonable results. Repeat the above approach for an expected unlimited loss of $2.5M:

lambda = 2500000 / unlimited.severity.ult
excess.LDF.table.2 = data.table(po_limit = c(100000, 250000, 500000, 100000, 250000, 500000, 100000, 250000, 500000), aggregate_limit = c(1000000, 1000000, 1000000, 1250000, 1250000, 1250000, 1500000, 1500000, 1500000))
excess.LDF.table.2$excess_LDF = apply(excess.LDF.table.2, 1, function(r) simulate.excess.LDF(number_of_simulations = 10000, frequency = lambda, limit = r['po_limit'], aggregate_limit = r['aggregate_limit']))
datatable(excess.LDF.table.2)

The second approach determines IBNR from Table M by subtracting insurance charges at different maturities. The process mirrors the pricing process, and applies an adjustment factor \(A\) to expected losses to account for the per-occurrence limits: \[ A = \frac{1 + 0.8\chi}{1 - \chi} \] The adjustment factor increases expected losses, resulting in a less dispersed aggregate loss distribution and hence a smaller insurance charge. Siewert recommends calculating the per-occurrence excess charge \(\chi\) as one minus the ratio of undeveloped limited losses to ultimate unlimited losses, so that the adjustment reflects both development and loss limitation. Using the limited severities calculated earlier, along with parameters provided in Siewert’s Appendix II, the adjustment factors are as follows:

NCCI.aggregate = 1250000
distributional.dev.factors[, chi_48 := 1 - limited_severity_48 / unlimited.severity.ult]
distributional.dev.factors[, chi_ult := 1 - limited_severity_ult / unlimited.severity.ult]
distributional.dev.factors[, A_48 := (1 + 0.8 * chi_48) / (1 - chi_48)]
distributional.dev.factors[, A_ult := (1 + 0.8 * chi_ult) / (1 - chi_ult)]
distributional.dev.factors[, limited_loss_48 := lambda * limited_severity_48]
distributional.dev.factors[, limited_loss_ult := lambda * limited_severity_ult]
distributional.dev.factors[, entry_ratio_48 := NCCI.aggregate / limited_loss_48]
distributional.dev.factors[, entry_ratio_ult := NCCI.aggregate / limited_loss_ult]
distributional.dev.factors[, adjusted_limited_loss_48 := unlimited.severity.ult * lambda * A_48]
distributional.dev.factors[, adjusted_limited_loss_ult := unlimited.severity.ult * lambda * A_ult]
datatable(distributional.dev.factors[, .(limit, chi_48, chi_ult, A_48, A_ult, limited_loss_48, limited_loss_ult, entry_ratio_48, entry_ratio_ult, adjusted_limited_loss_48, adjusted_limited_loss_ult)])

Note that loss development impacts both the entry ratio on the table and the adjusted loss that will be used to select the expected loss group. Looking up the insurance charges on this table, multiplying by the expected limited losses at the corresponding maturity, and taking the difference provides an estimate of IBNR.
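
A minimal sketch of that last step, using hypothetical insurance charges and expected limited losses (not values from an actual Table M lookup):

phi.ult = 0.18   # hypothetical insurance charge looked up at the ultimate entry ratio
phi.48 = 0.10    # hypothetical charge at the (higher) 48-month entry ratio
expected.limited.ult = 1100000  # hypothetical expected limited losses at ultimate
expected.limited.48 = 900000    # hypothetical expected limited losses at 48 months
# Aggregate excess losses implied at each maturity, and their difference as the IBNR estimate
aggregate.excess.ult = phi.ult * expected.limited.ult
aggregate.excess.48 = phi.48 * expected.limited.48
aggregate.excess.IBNR = aggregate.excess.ult - aggregate.excess.48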

Service Revenue Asset

Service revenue is an asset reflecting the revenue associated with servicing claims under a high deductible program. It is typically determined by applying a factor to deductible losses (limited to the aggregate) in a manner similar to a loss conversion factor in a retrospective rating plan. Siewert proposes the following approach to estimating this asset (a brief code sketch follows the list):
  1. Determine ultimate deductible losses at the account level
  2. Subtract deductible losses that are excess of the aggregate
  3. Apply the loss multiplier to the amount determined in step 2 to estimate ultimate recoverables
  4. Subtract known recoveries to determine the asset for future recoveries, and sum the results across all accounts
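
A minimal account-level sketch of these steps, with hypothetical losses, loss multipliers, and known recoveries:

service.revenue = data.table(account = c("A", "B", "C"),
                             ultimate_deductible_losses = c(800000, 1200000, 500000),  # step 1 output
                             losses_excess_of_aggregate = c(0, 150000, 25000),
                             loss_multiplier = c(1.10, 1.12, 1.08),
                             known_recoveries = c(600000, 900000, 400000))
# Step 2: remove deductible losses that are excess of the aggregate limit
service.revenue[, recoverable_losses := ultimate_deductible_losses - losses_excess_of_aggregate]
# Step 3: apply the loss multiplier to estimate ultimate recoverables
service.revenue[, ultimate_recoverables := recoverable_losses * loss_multiplier]
# Step 4: subtract known recoveries and sum across accounts for the total asset
service.revenue[, future_recovery_asset := ultimate_recoverables - known_recoveries]
total.service.revenue.asset = service.revenue[, sum(future_recovery_asset)]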