These study notes are based on the Exam 9 syllabus readings Investments by Bodie, Kane, and Marcus, and The Economics of Structured Finance by Joshua Coval, Jakub Jurek, and Erik Stafford. These readings explain how default risk is assessed, how collateralized debt obligations are structured, and how they contributed to the 2008 financial crisis; they correspond to learning objectives C1 to C4 on the syllabus.
# dplyr for data manipulation, ggplot2 for plotting, copula for simulating correlated defaults
easypackages::packages("dplyr", "ggplot2", "copula")
Default risk, or credit risk, is the risk that a bond issuer may not be able to pay the full promised interest or principal on a bond.
Bond rating agencies (Moody’s, Standard & Poor’s, Fitch) assign ratings to bonds based on default risk.
Bonds rated BBB / Baa or above are considered investment grade.
Lower-rated bonds are speculative grade or junk bonds. These generally have high yields.
The rating agencies use various characteristics to assess bond safety:
A coverage ratio is a ratio of company earnings to fixed costs, e.g. earnings (before interest) divided by interest obligations. Higher ratios are better: in essence, this is the “number of times” that the required interest payments are earned.
The leverage ratio is the debt-to-equity ratio, with lower ratios being better.
Liquidity ratios measure the ability of a firm to pay bills with its most liquid assets; higher ratios are better.
Profitability ratios are indicative of the firm’s overall financial health, as well as its ability to raise money in securities markets because of the attractive returns they offer.
The cash flow-to-debt ratio is the ratio of total cash flow to outstanding debt; higher ratios are better.
One approach to assessing default risk is to produce a Z-score, which is a linear combination of various safety measures. If the score is below a given threshold, the firm is considered vulnerable to bankruptcy; there is also a “grey area” above this threshold, followed by a higher score above which the firm is considered safe.
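A minimal sketch of this idea is shown below, using the well-known Altman (1968) coefficients and cutoffs purely as an illustration; the specific measures, weights, and thresholds are assumptions, not taken from the reading.
# Illustrative only: Altman's (1968) Z-score, a classic example of a linear combination of safety measures.
altman.z.score = function(working.capital, retained.earnings, ebit, market.equity, sales, total.assets, total.liabilities) {
1.2 * working.capital / total.assets +
1.4 * retained.earnings / total.assets +
3.3 * ebit / total.assets +
0.6 * market.equity / total.liabilities +
1.0 * sales / total.assets
}
# Under Altman's cutoffs, scores below roughly 1.81 suggest vulnerability to bankruptcy,
# scores above roughly 2.99 suggest safety, and values in between fall in the "grey area".
# The hypothetical inputs below give a score of about 2.34, i.e. in the grey area.
altman.z.score(working.capital = 25, retained.earnings = 40, ebit = 30, market.equity = 120, sales = 200, total.assets = 250, total.liabilities = 90)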
The bond indenture may contain additional restrictions to improve the safety of the bond. Examples include:
A sinking fund requirement specifies that the issuer must spread the repayment of principal over several years, in order to avoid a large commitment at maturity. Typically, this involves buying back a fraction of the outstanding bonds each year. An alternative solution is a serial bond issue in which the maturity dates of the bonds are staggered.
A subordination clause requires that any debt issued later be subordinated to the original bond, i.e. the original bond takes priority over the new debt. This addresses concerns that the bond's safety could be compromised by later issuance of debt.
Dividend restrictions force the firm to retain assets rather than paying them out to stockholders.
Collateral is an asset that the bondholder receives if the firm defaults on the bond. Examples of collateral include property (mortgage bond), securities (collateral trust bond), or equipment (equipment obligation bond). Bonds without collateral are called debenture bonds.
Default risk requires that we make a distinction between the promised yield to maturity, which is the maximum possible yield, and the expected yield to maturity, which incorporates the risk of default. Typically this involves re-calculating the yield after changing the expected coupon and principal payments to their expected values. To do this, re-use the yield-to-maturity function produced in an earlier notebook:
yield.to.maturity = function(price, coupon, maturity, par.value, semiannual = TRUE) {
number.of.payments = ifelse(semiannual, maturity * 2, maturity)
# Express the bond pricing equation as a polynomial in the per-period discount factor v:
# -price + coupon * v + ... + (coupon + par) * v^n = 0
cash.flow = c(-price, rep(coupon, number.of.payments - 1), coupon + par.value)
roots = polyroot(cash.flow)
# Keep the real root; the per-period yield is 1/v - 1, doubled to annualize semiannual payments
discount.factor = Re(roots[round(Im(roots), 8) == 0])
interest.rate = ifelse(semiannual, 2 * (1 / discount.factor - 1), 1 / discount.factor - 1)
return(interest.rate)
}
Suppose that a $1000 par-value bond with 10 years remaining pays 9% coupons semi-annually. The current price of this bond is $750. Then the promised yield-to-maturity is
promised.yield.example = yield.to.maturity(price = 750, coupon = 45, maturity = 10, par.value = 1000, semiannual = TRUE)
print(paste0("The promised yield to maturity is ", promised.yield.example))
## [1] "The promised yield to maturity is 0.136569038555435"
However, if investors only expect to receive $700 at maturity, then the expected yield is as follows:
expected.yield.example = yield.to.maturity(price = 750, coupon = 45, maturity = 10, par.value = 700, semiannual = TRUE)
print(paste0("The promised yield to maturity is ", expected.yield.example))
## [1] "The promised yield to maturity is 0.116302753396654"
In order to be compensated for the risk of default, investors require a default premium, which can be calculated as the difference between the promised yield on a corporate bond and the yield of an otherwise-equivalent risk-free bond. In general, the lower a bond’s rating, the higher this premium will be. Suppose that a 10-year government bond with 9% coupons paid semiannually has a par value of $1000 and is currently selling for $1100. We can calculate the default premium as follows:
risk.free.yield = yield.to.maturity(price = 1100, coupon = 45, maturity = 10, par.value = 1000, semiannual = TRUE)
print(paste0("The risk-free yield for a comparable bond is ", risk.free.yield, " and the default premium is ", promised.yield.example - risk.free.yield))
## [1] "The risk-free yield for a comparable bond is 0.075570789006481 and the default premium is 0.0609982495489541"
A credit default swap (CDS) is an insurance policy on the default risk of a bond or loan. The purchaser of a CDS pays a percentage of the principal to the seller; in exchange, if the underlying bond defaults, the seller must compensate the buyer for the loss of bond value. This raises the effective quality of the debt, and a fair CDS premium is approximately the difference in yield between the bond at its current (lower) rating and an otherwise-equivalent bond at the higher, effectively insured rating.
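As a rough illustration with hypothetical numbers (the yields below are assumptions, not from the reading):
# Hypothetical illustration: a CDS that effectively raises a bond to the higher rating
# should cost roughly the yield spread between the two ratings.
yield.lower.rated = 0.08   # assumed yield on the lower-rated bond
yield.higher.rated = 0.06  # assumed yield on an otherwise-equivalent higher-rated bond
fair.cds.premium = yield.lower.rated - yield.higher.rated
fair.cds.premium  # about 2% of principal per year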
A mortgage pass-through is a security in which a government agency purchases mortgages from lending institutions, pools them, and sells shares in the pool to investors. The cash flows from the original borrowers pass through the institution to the agency to the investors.
Because the homeowner has the right to prepay the loan at any time, a mortgage pass-through is effectively a callable bond, where the call price is the remaining principal balance on the loan.
Mortgage-backed securities will exhibit the same region of negative convexity as callable bonds: when interest rates are low, borrowers will be more likely to pre-pay their mortgages, and investors lose out on capital gains.
The analogy to callable bonds isn’t exact because homeowners do not refinance their loans as soon as interest rates drop, for a variety of reasons (the hassle and cost of refinancing; plans to move soon; lack of financial sophistication). As a result, the principal balance is only a theoretical call price, not a firm upper limit on the value of the security.
A collateralized mortgage obligation redirects the cash flow from a mortgage-backed security to various investors based on their appetite for interest rate risk. The mortgage pool is separated into tranches based on a proportion of the total principal. For example:
The “Short-pay” tranche is allocated the first $X of principal to be repaid.
The “Intermediate-pay” tranche is allocated the next $Y of principal to be repaid. Only interest payments are received until the short-pay tranche is retired.
The “Long-pay” tranche is allocated the remaining principal. Only receives interest payments until the two earlier tranches are retired.
Each tranche receives the principal payments on its share of the principal as they are made, as well as any interest payments on its share of the outstanding principal. This includes any principal payments made as prepayments. This can accelerate the retirement of the earlier tranches.
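A small sketch of this principal waterfall is shown below; the tranche sizes and the payment stream are hypothetical, chosen only to illustrate how prepayments accelerate the retirement of the earlier tranches.
# Illustrative only: allocate a stream of principal payments to CMO tranches in order.
allocate.cmo.principal = function(principal.payments, tranche.sizes) {
remaining = tranche.sizes
allocation = matrix(0, nrow = length(principal.payments), ncol = length(tranche.sizes))
for (t in seq_along(principal.payments)) {
payment = principal.payments[t]
for (j in seq_along(remaining)) {
# Each tranche absorbs principal only after the earlier tranches are retired
applied = min(payment, remaining[j])
allocation[t, j] = applied
remaining[j] = remaining[j] - applied
payment = payment - applied
}
}
colnames(allocation) = c("short_pay", "intermediate_pay", "long_pay")
allocation
}
# Example: three tranches of $3M each; larger early payments (prepayments) retire the
# short-pay tranche first, then the intermediate-pay, then the long-pay.
allocate.cmo.principal(principal.payments = c(2, 2, 2, 2, 1), tranche.sizes = c(3, 3, 3))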
In the early 2000s, mortgage securitization resulted in significant growth in the issuance of subprime mortgages, which rose from $96.8 billion in 1996 to $600 billion in 2006.
“Private label” securitization allowed for the issuance of mortgages that would not normally be purchased by government agencies.
Demand for assets to construct CDOs fueled growth of this market.
Default risk is correlated with interest rate risk, because default rates increase significantly when the market value of the home falls below the outstanding value of the mortgage.
In contrast to passive bond portfolio management, which aims to immunize the portfolio against interest rate risk, active bond portfolio management aims to achieve abnormal returns either through interest rate forecasting or identification of mispriced bonds. The time period between the identification of a mispriced bond and the realignment of the prices is called the workout period, and it is during this time that abnormal returns are anticipated. These strategies are described as “swaps” in which one bond is swapped out of the portfolio for another – note that these are not “swaps” in the sense of interest rate or currency swaps.
A substitution swap exchanges one bond for a nearly identical substitute in terms of coupon, maturity, quality, contract provisions, etc. This is typically employed when one of the bonds is believed to be underpriced relative to the other.
An intermarket spread swap is used when the yield spread between two sectors of the bond market (e.g. government and corporate) is temporarily out of line. The risk associated with this strategy is that there may be an underlying reason why the yield spread is different than it has been historically.
A rate anticipation swap involves swapping into longer duration bonds if rates are expected to fall, and shorter duration bonds if rates are expected to rise.
A pure yield pickup swap involves holding higher-yield bonds without an identification of mispricing. For example, if the yield curve is sloping upward, this would involve moving into longer-term bonds and taking on more interest rate risk.
A tax swap involves swapping bonds to exploit a tax advantage, for example, swapping from a bond that has decreased in price to another in order to realize the capital loss for tax purposes.
A Collateralized Debt Obligation (CDO) is a structured finance instrument, created as follows:
A large collection of credit-sensitive assets is pooled.
Claims on the cash flow from the underlying asset pool are prioritized into tranches, typically referred to as Junior, Mezzanine, and Senior tranches. Each tranche has attachment points, expressed as percentages of the underlying asset pool, and it is responsible for covering losses in that range. For example, if a tranche has attachment points X% to Y%, it will begin to absorb losses from the underlying asset pool once they reach X%, and continue to do so until the losses reach Y%.
Due to the higher default risk in the junior tranches, they offer higher rates of return than the senior tranches. The protection offered to the senior tranches by the junior tranches is referred to as overcollateralization.
The core ideas behind the construction of a CDO can be illustrated using the following example:
There are two underlying assets \(X\) and \(Y\), with probability of default \(p_X\) and \(p_Y\), respectively. The probability of both defaulting is denoted by \(p_{XY}\).
The correlation between default of the two assets is denoted by \(\rho\).
Each underlying asset pays $0 in the case of default, and $1 otherwise.
There are two tranches in the CDO. The junior tranche absorbs up to $1 in losses (in other words, its attachment point is 0% - 50%). The senior tranche has an attachment point of 50% to 100%.
Given a default correlation \(\rho\), the probability of both underlying assets defaulting can be derived from \[ \rho = \frac{p_{XY} - p_Xp_Y}{\sqrt{p_X(1 - p_X)p_Y(1-p_Y)}} \]
p.X = 0.10
p.Y = 0.20
rho = 0.2
p.XY = rho * sqrt(p.X * (1 - p.X) * p.Y * (1 - p.Y)) + p.X * p.Y
The algorithm for calculating expected outcomes of a two-asset CDO is:
Determine the joint probability distribution of the two assets using the above formula, i.e. the probability of each possible combination of defaults.
Calculate the total loss to the underlying asset pool in each scenario.
Allocate the losses to each tranche.
Calculate the expected loss in each tranche, or the probability of impairment in that tranche (probability that it has a non-zero loss).
In this case, the joint probability distribution is:
CDO.2.example = data.frame(loss_X = c(0, 0, 1, 1), loss_Y = c(0, 1, 0, 1), probability = c(1 - p.X - p.Y + p.XY, p.Y - p.XY, p.X - p.XY, p.XY))
CDO.2.example
## loss_X loss_Y probability
## 1 0 0 0.744
## 2 0 1 0.156
## 3 1 0 0.056
## 4 1 1 0.044
Calculating the total loss and allocating it to tranches:
CDO.2.example = CDO.2.example %>% mutate(total_loss = loss_X + loss_Y, junior_tranche_loss = pmin(1, pmax(total_loss, 0)), senior_tranche_loss = pmin(1, pmax(total_loss - junior_tranche_loss, 0)))
CDO.2.example
## loss_X loss_Y probability total_loss junior_tranche_loss
## 1 0 0 0.744 0 0
## 2 0 1 0.156 1 1
## 3 1 0 0.056 1 1
## 4 1 1 0.044 2 1
## senior_tranche_loss
## 1 0
## 2 0
## 3 0
## 4 1
Calculate the expected loss and probability of impairment in each tranche:
junior.expected.loss = sum(CDO.2.example$junior_tranche_loss * CDO.2.example$probability)
junior.impairment.probability = sum((CDO.2.example$junior_tranche_loss > 0) * CDO.2.example$probability)
print(paste0("The expected loss in the junior tranche is ", junior.expected.loss, " and the probability of impairment is ", junior.impairment.probability))
## [1] "The expected loss in the junior tranche is 0.256 and the probability of impairment is 0.256"
senior.expected.loss = sum(CDO.2.example$senior_tranche_loss * CDO.2.example$probability)
senior.impairment.probability = sum((CDO.2.example$senior_tranche_loss > 0) * CDO.2.example$probability)
print(paste0("The expected loss in the senior tranche is ", senior.expected.loss, " and the probability of impairment is ", senior.impairment.probability))
## [1] "The expected loss in the senior tranche is 0.044 and the probability of impairment is 0.044"
Compare to an example in which both assets have a 10% default probability and are uncorrelated:
p.X = 0.10
p.Y = 0.10
rho = 0
p.XY = rho * sqrt(p.X * (1 - p.X) * p.Y * (1 - p.Y)) + p.X * p.Y
CDO.2.example = data.frame(loss_X = c(0, 0, 1, 1), loss_Y = c(0, 1, 0, 1), probability = c(1 - p.X - p.Y + p.XY, p.Y - p.XY, p.X - p.XY, p.XY))
CDO.2.example = CDO.2.example %>% mutate(total_loss = loss_X + loss_Y, junior_tranche_loss = pmin(1, pmax(total_loss, 0)), senior_tranche_loss = pmin(1, pmax(total_loss - junior_tranche_loss, 0)))
CDO.2.example
## loss_X loss_Y probability total_loss junior_tranche_loss
## 1 0 0 0.81 0 0
## 2 0 1 0.09 1 1
## 3 1 0 0.09 1 1
## 4 1 1 0.01 2 1
## senior_tranche_loss
## 1 0
## 2 0
## 3 0
## 4 1
junior.expected.loss = sum(CDO.2.example$junior_tranche_loss * CDO.2.example$probability)
junior.impairment.probability = sum((CDO.2.example$junior_tranche_loss > 0) * CDO.2.example$probability)
print(paste0("The expected loss in the junior tranche is ", junior.expected.loss, " and the probability of impairment is ", junior.impairment.probability))
## [1] "The expected loss in the junior tranche is 0.19 and the probability of impairment is 0.19"
senior.expected.loss = sum(CDO.2.example$senior_tranche_loss * CDO.2.example$probability)
senior.impairment.probability = sum((CDO.2.example$senior_tranche_loss > 0) * CDO.2.example$probability)
print(paste0("The expected loss in the senior tranche is ", senior.expected.loss, " and the probability of impairment is ", senior.impairment.probability))
## [1] "The expected loss in the senior tranche is 0.01 and the probability of impairment is 0.01"
Note that in the presence of correlation, the performance of the senior tranche deteriorates considerably.
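As a quick illustration of this sensitivity (not part of the original example), we can sweep the correlation from 0 to 1 with both default probabilities held at 10%; for this two-asset structure, the senior tranche's expected loss is simply \(p_{XY}\), the probability of both assets defaulting.
# Sweep rho using the correlation formula above; variable names p.A and p.B are used
# here to avoid overwriting the p.X and p.Y used in the examples.
p.A = 0.10
p.B = 0.10
rho.grid = seq(0, 1, by = 0.1)
# Senior tranche expected loss = probability of both assets defaulting;
# it rises linearly from 0.01 (rho = 0) to 0.10 (rho = 1).
p.AB = rho.grid * sqrt(p.A * (1 - p.A) * p.B * (1 - p.B)) + p.A * p.B
data.frame(rho = rho.grid, senior_expected_loss = p.AB)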
A CDO squared (CDO^2) is a collateralized debt obligation that is formed from the tranches of other CDOs. We can continue the above example by forming a CDO-squared from the junior tranches of two identical versions of the above CDO, assuming no correlation between the underlying asset pools:
p.X = junior.impairment.probability
p.Y = junior.impairment.probability
rho = 0
p.XY = rho * sqrt(p.X * (1 - p.X) * p.Y * (1 - p.Y)) + p.X * p.Y
CDO.squared.example = data.frame(loss_X = c(0, 0, 1, 1), loss_Y = c(0, 1, 0, 1), probability = c(1 - p.X - p.Y + p.XY, p.Y - p.XY, p.X - p.XY, p.XY))
CDO.squared.example = CDO.squared.example %>% mutate(total_loss = loss_X + loss_Y, junior_tranche_loss = pmin(1, pmax(total_loss, 0)), senior_tranche_loss = pmin(1, pmax(total_loss - junior_tranche_loss, 0)))
CDO.squared.example
## loss_X loss_Y probability total_loss junior_tranche_loss
## 1 0 0 0.6561 0 0
## 2 0 1 0.1539 1 1
## 3 1 0 0.1539 1 1
## 4 1 1 0.0361 2 1
## senior_tranche_loss
## 1 0
## 2 0
## 3 0
## 4 1
junior.expected.loss = sum(CDO.squared.example$junior_tranche_loss * CDO.squared.example$probability)
junior.impairment.probability = sum((CDO.squared.example$junior_tranche_loss > 0) * CDO.squared.example$probability)
print(paste0("The expected loss in the junior tranche is ", junior.expected.loss, " and the probability of impairment is ", junior.impairment.probability))
## [1] "The expected loss in the junior tranche is 0.3439 and the probability of impairment is 0.3439"
senior.expected.loss = sum(CDO.squared.example$senior_tranche_loss * CDO.squared.example$probability)
senior.impairment.probability = sum((CDO.squared.example$senior_tranche_loss > 0) * CDO.squared.example$probability)
print(paste0("The expected loss in the senior tranche is ", senior.expected.loss, " and the probability of impairment is ", senior.impairment.probability))
## [1] "The expected loss in the senior tranche is 0.0361 and the probability of impairment is 0.0361"
In this example, we consider a CDO built on an asset pool consisting of three independent assets, with three tranches: a junior tranche (0% - 33%), a mezzanine tranche (33% - 67%), and a senior tranche (67% - 100%).
p.X = 0.1
p.Y = 0.1
p.Z = 0.1
CDO.3.example = merge(merge(data.frame(loss_X = c(0,1)), data.frame(loss_Y = c(0,1))), data.frame(loss_Z = c(0,1)))
CDO.3.example = CDO.3.example %>% mutate(probability = p.X^loss_X * (1 - p.X)^(1 - loss_X) * p.Y ^loss_Y * (1 - p.Y)^(1 - loss_Y) * p.Z^loss_Z * (1 - p.Z)^(1 - loss_Z), total_loss = loss_X + loss_Y + loss_Z, junior_tranche_loss = pmin(1, pmax(total_loss, 0)), mezzanine_tranche_loss = pmin(1, pmax(total_loss - junior_tranche_loss, 0)), senior_tranche_loss = pmin(1, pmax(total_loss - mezzanine_tranche_loss - junior_tranche_loss, 0)))
CDO.3.example
## loss_X loss_Y loss_Z probability total_loss junior_tranche_loss
## 1 0 0 0 0.729 0 0
## 2 1 0 0 0.081 1 1
## 3 0 1 0 0.081 1 1
## 4 1 1 0 0.009 2 1
## 5 0 0 1 0.081 1 1
## 6 1 0 1 0.009 2 1
## 7 0 1 1 0.009 2 1
## 8 1 1 1 0.001 3 1
## mezzanine_tranche_loss senior_tranche_loss
## 1 0 0
## 2 0 0
## 3 0 0
## 4 1 0
## 5 0 0
## 6 1 0
## 7 1 0
## 8 1 1
junior.expected.loss = sum(CDO.3.example$junior_tranche_loss * CDO.3.example$probability)
junior.impairment.probability = sum((CDO.3.example$junior_tranche_loss > 0) * CDO.3.example$probability)
print(paste0("The expected loss in the junior tranche is ", junior.expected.loss, " and the probability of impairment is ", junior.impairment.probability))
## [1] "The expected loss in the junior tranche is 0.271 and the probability of impairment is 0.271"
mezz.expected.loss = sum(CDO.3.example$mezzanine_tranche_loss * CDO.3.example$probability)
mezz.impairment.probability = sum((CDO.3.example$mezzanine_tranche_loss > 0) * CDO.3.example$probability)
print(paste0("The expected loss in the mezzanine tranche is ", mezz.expected.loss, " and the probability of impairment is ", mezz.impairment.probability))
## [1] "The expected loss in the mezzanine tranche is 0.028 and the probability of impairment is 0.028"
senior.expected.loss = sum(CDO.3.example$senior_tranche_loss * CDO.3.example$probability)
senior.impairment.probability = sum((CDO.3.example$senior_tranche_loss > 0) * CDO.3.example$probability)
print(paste0("The expected loss in the senior tranche is ", senior.expected.loss, " and the probability of impairment is ", senior.impairment.probability))
## [1] "The expected loss in the senior tranche is 0.001 and the probability of impairment is 0.001"
The parameter risk associated with CDOs can be assessed by simulating the outcome of a CDO and a CDO squared, then quantifying the change in default rate as the underlying assumptions about the default rate and correlation change.
First, simulate the outcome of 100 of the underlying assets with a given default probability and pairwise correlation coefficient. (The default settings for the normalCopula function assume constant pairwise correlation.)
set.seed(123456)
simulate.underlying = function(default.probability, pairwise.correlation, number.of.assets, default.recovery) {
# Draw correlated uniform(0,1) marginals from a Gaussian copula with constant pairwise correlation
random.result = rCopula(1, normalCopula(pairwise.correlation, dim = number.of.assets))[1,]
# An asset defaults if its draw falls below the default probability; each default loses (1 - recovery)
default = random.result <= default.probability
loss = (1 - default.recovery) * default
return(sum(loss))
}
simulate.underlying(default.probability = 0.05, pairwise.correlation = 0.2, number.of.assets = 100, default.recovery = 0.5)
## [1] 1
simulate.cdo = function(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment) {
loss = simulate.underlying(default.probability, pairwise.correlation, number.of.assets, default.recovery)
# Tranche losses are expressed as a fraction of each tranche's size (the width between its attachment points)
junior.loss = min(mezzanine.attachment, loss / number.of.assets) / mezzanine.attachment
mezzanine.loss = min(senior.attachment - mezzanine.attachment, max(0, loss / number.of.assets - mezzanine.attachment)) / (senior.attachment - mezzanine.attachment)
senior.loss = max(0, loss / number.of.assets - senior.attachment) / (1 - senior.attachment)
return(list('collateral_loss' = loss / number.of.assets, 'junior_loss' = junior.loss, 'mezzanine_loss' = mezzanine.loss, 'senior_loss' = senior.loss))
}
simulate.cdo(default.probability = 0.05, pairwise.correlation = 0.2, number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12)
## $collateral_loss
## [1] 0.03
##
## $junior_loss
## [1] 0.5
##
## $mezzanine_loss
## [1] 0
##
## $senior_loss
## [1] 0
This function iterates the simulation over a large number of CDOs:
iterate.cdo.simulation.df = function(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment, number.of.iterations) {
result = data.frame(simulation.number = 1:number.of.iterations)
result$collateral_loss = rep(NA, number.of.iterations)
result$junior_loss = rep(NA, number.of.iterations)
result$mezzanine_loss = rep(NA, number.of.iterations)
result$senior_loss = rep(NA, number.of.iterations)
for(i in 1:number.of.iterations) {
simulation.result = simulate.cdo(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment)
result$collateral_loss[i] = simulation.result$collateral_loss
result$junior_loss[i] = simulation.result$junior_loss
result$mezzanine_loss[i] = simulation.result$mezzanine_loss
result$senior_loss[i] = simulation.result$senior_loss
}
return(result)
}
CDO.simulation.iteration = iterate.cdo.simulation.df(default.probability = 0.05, pairwise.correlation = 0.2, number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12, number.of.iterations = 1000)
head(CDO.simulation.iteration %>% arrange(-collateral_loss))
## simulation.number collateral_loss junior_loss mezzanine_loss senior_loss
## 1 911 0.185 1 1 0.07386364
## 2 358 0.165 1 1 0.05113636
## 3 775 0.160 1 1 0.04545455
## 4 210 0.145 1 1 0.02840909
## 5 662 0.140 1 1 0.02272727
## 6 879 0.140 1 1 0.02272727
Wrap the above simulation in a function to calculate and return the expected loss and impairment probability for each tranche:
iterate.cdo.simulation.summary = function(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment, number.of.iterations) {
data = iterate.cdo.simulation.df(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment, number.of.iterations)
collateral.expected = mean(data$collateral_loss)
collateral.impairment = sum(data$collateral_loss > 0) / nrow(data)
junior.expected = mean(data$junior_loss)
junior.impairment = sum(data$junior_loss > 0) / nrow(data)
mezz.expected = mean(data$mezzanine_loss)
mezz.impairment = sum(data$mezzanine_loss > 0) / nrow(data)
senior.expected = mean(data$senior_loss)
senior.impairment = sum(data$senior_loss > 0) / nrow(data)
return(list('collateral_expected_loss' = collateral.expected, 'collateral_impairment' = collateral.impairment, 'junior_expected_loss' = junior.expected, 'junior_impairment_prob' = junior.impairment, 'mezzanine_expected_loss' = mezz.expected, 'mezzanine_impairment_prob' = mezz.impairment, 'senior_expected_loss' = senior.expected, 'senior_impairment_prob' = senior.impairment))
}
iterate.cdo.simulation.summary(default.probability = 0.05, pairwise.correlation = 0.2, number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12, number.of.iterations = 1000)
## $collateral_expected_loss
## [1] 0.024855
##
## $collateral_impairment
## [1] 0.85
##
## $junior_expected_loss
## [1] 0.35425
##
## $junior_impairment_prob
## [1] 0.85
##
## $mezzanine_expected_loss
## [1] 0.04708333
##
## $mezzanine_impairment_prob
## [1] 0.099
##
## $senior_expected_loss
## [1] 0.0008806818
##
## $senior_impairment_prob
## [1] 0.018
This simulation appears to produce results that are somewhat more extreme than those reported by Coval et al.; for example, in this simulation the senior tranche has a default probability greater than 1%. The authors also extend the simulation to a CDO-squared; these notes do not reproduce that analysis, but a rough sketch of the mechanics is shown below.
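The sketch pools the mezzanine tranche losses of several independently simulated CDOs and re-tranches them. The assumption that the inner CDOs' collateral pools are independent, and the choice of outer attachment points, are my own simplifications rather than the authors' setup.
simulate.cdo.squared = function(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment, number.of.inner.cdos, outer.mezzanine.attachment, outer.senior.attachment) {
# Simulate the mezzanine tranche loss of each inner CDO (inner pools assumed independent -- a simplification)
mezzanine.losses = replicate(number.of.inner.cdos, simulate.cdo(default.probability, pairwise.correlation, number.of.assets, default.recovery, mezzanine.attachment, senior.attachment)$mezzanine_loss)
# The CDO-squared collateral pool is the equally weighted pool of those mezzanine tranches
pool.loss = mean(mezzanine.losses)
junior.loss = min(outer.mezzanine.attachment, pool.loss) / outer.mezzanine.attachment
mezzanine.loss = min(outer.senior.attachment - outer.mezzanine.attachment, max(0, pool.loss - outer.mezzanine.attachment)) / (outer.senior.attachment - outer.mezzanine.attachment)
senior.loss = max(0, pool.loss - outer.senior.attachment) / (1 - outer.senior.attachment)
return(list('pool_loss' = pool.loss, 'junior_loss' = junior.loss, 'mezzanine_loss' = mezzanine.loss, 'senior_loss' = senior.loss))
}
simulate.cdo.squared(default.probability = 0.05, pairwise.correlation = 0.2, number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12, number.of.inner.cdos = 100, outer.mezzanine.attachment = 0.06, outer.senior.attachment = 0.12)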
The authors test the sensitivity of the model to the pairwise correlation assumption. (The graphs shown below aren’t normalized relative to the baseline simulation as they are in the reading, so these provide a different “absolute” view rather than a relative one.) This can be done by iterating the simulation over a range of values for \(\rho\):
correlation.sensitivity.results = data.frame(rho = 0:100 / 100, collateral_expected_loss = rep(NA, 101), collateral_impairment = rep(NA, 101), junior_expected_loss = rep(NA, 101), junior_impairment_prob = rep(NA, 101), mezzanine_expected_loss = rep(NA, 101), mezzanine_impairment_prob = rep(NA, 101), senior_expected_loss = rep(NA, 101), senior_impairment_prob = rep(NA, 101))
for (i in 1:101) {
sim.results = iterate.cdo.simulation.summary(default.probability = 0.05, pairwise.correlation = correlation.sensitivity.results[i, 'rho'], number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12, number.of.iterations = 1000)
correlation.sensitivity.results[i, 'collateral_expected_loss'] = sim.results['collateral_expected_loss']
correlation.sensitivity.results[i, 'collateral_impairment'] = sim.results['collateral_impairment']
correlation.sensitivity.results[i, 'junior_expected_loss'] = sim.results['junior_expected_loss']
correlation.sensitivity.results[i, 'junior_impairment_prob'] = sim.results['junior_impairment_prob']
correlation.sensitivity.results[i, 'mezzanine_expected_loss'] = sim.results['mezzanine_expected_loss']
correlation.sensitivity.results[i, 'mezzanine_impairment_prob'] = sim.results['mezzanine_impairment_prob']
correlation.sensitivity.results[i, 'senior_expected_loss'] = sim.results['senior_expected_loss']
correlation.sensitivity.results[i, 'senior_impairment_prob'] = sim.results['senior_impairment_prob']
}
In the following graph, the red line is the junior tranche, the blue line is the mezzanine tranche, and the green line is the senior tranche.
ggplot(data = correlation.sensitivity.results, aes(x = rho, y = junior_expected_loss)) + geom_smooth(colour = "red") + geom_smooth(aes(y = mezzanine_expected_loss), colour = "blue") + geom_smooth(aes(y = senior_expected_loss), colour = "green") + labs(title = "Sensitivity of CDO Expected Loss to Default Correlation Assumption", x = "Pairwise Correlation", y = "Expected Loss")
## `geom_smooth()` using method = 'loess'
## `geom_smooth()` using method = 'loess'
## `geom_smooth()` using method = 'loess'
Note that the junior tranche does significantly better, relative to its baseline, than the other tranches at high correlation levels. This is because, in a high-correlation environment, more of the risk is transferred to the mezzanine and, eventually, the senior tranches.
ggplot(data = correlation.sensitivity.results, aes(x = rho, y = junior_impairment_prob)) + geom_smooth(colour = "red") + geom_smooth(aes(y = mezzanine_impairment_prob), colour = "blue") + geom_smooth(aes(y = senior_impairment_prob), colour = "green") + labs(title = "Sensitivity of CDO Default Probability to Default Correlation Assumption", x = "Pairwise Correlation", y = "Probability of Default")
## `geom_smooth()` using method = 'loess'
## `geom_smooth()` using method = 'loess'
## `geom_smooth()` using method = 'loess'
We see a similar effect on the default probability.
The same process can be repeated, varying the default probability assumption rather than the correlation assumption.
default.sensitivity.results = data.frame(p = 0:100 / 100, collateral_expected_loss = rep(NA, 101), collateral_impairment = rep(NA, 101), junior_expected_loss = rep(NA, 101), junior_impairment_prob = rep(NA, 101), mezzanine_expected_loss = rep(NA, 101), mezzanine_impairment_prob = rep(NA, 101), senior_expected_loss = rep(NA, 101), senior_impairment_prob = rep(NA, 101))
for (i in 1:101) {
sim.results = iterate.cdo.simulation.summary(default.probability = default.sensitivity.results[i, 'p'], pairwise.correlation = 0.20, number.of.assets = 100, default.recovery = 0.5, mezzanine.attachment = 0.06, senior.attachment = 0.12, number.of.iterations = 1000)
default.sensitivity.results[i, 'collateral_expected_loss'] = sim.results['collateral_expected_loss']
default.sensitivity.results[i, 'collateral_impairment'] = sim.results['collateral_impairment']
default.sensitivity.results[i, 'junior_expected_loss'] = sim.results['junior_expected_loss']
default.sensitivity.results[i, 'junior_impairment_prob'] = sim.results['junior_impairment_prob']
default.sensitivity.results[i, 'mezzanine_expected_loss'] = sim.results['mezzanine_expected_loss']
default.sensitivity.results[i, 'mezzanine_impairment_prob'] = sim.results['mezzanine_impairment_prob']
default.sensitivity.results[i, 'senior_expected_loss'] = sim.results['senior_expected_loss']
default.sensitivity.results[i, 'senior_impairment_prob'] = sim.results['senior_impairment_prob']
}
ggplot(data = default.sensitivity.results, aes(x = p, y = junior_expected_loss)) + geom_line(colour = "red") + geom_line(aes(y = mezzanine_expected_loss), colour = "blue") + geom_line(aes(y = senior_expected_loss), colour = "green") + labs(title = "Sensitivity of CDO Expected Loss to Default Probability Assumption", x = "Default Probability - Underlying Assets", y = "Expected Loss")
ggplot(data = default.sensitivity.results, aes(x = p, y = junior_impairment_prob)) + geom_line(colour = "red") + geom_line(aes(y = mezzanine_impairment_prob), colour = "blue") + geom_line(aes(y = senior_impairment_prob), colour = "green") + labs(title = "Sensitivity of CDO Default Probability to Default Probability Assumption", x = "Default Probability - Underlying Assets", y = "Default Probability - CDO")
Credit Default Swaps contributed to the crisis as follows:
The crisis was characterized by a widespread lack of confidence in the creditworthiness of counterparties; credit default swaps fostered these doubts.
Many CDS were issued on subprime mortgages, and as this market collapsed, there was doubt that the CDS sellers would be able to honour their commitments.
AIG was a major issuer of CDS contracts, and its insolvency would have triggered the insolvency of other firms that had relied on it for protection against loan defaults; government intervention was required to prevent a chain reaction. Lenders could be exposed to the risk of default of an institution with which they do not directly trade, creating systemic risk.
Lax reporting requirements made it very difficult to assess a firm’s exposure to credit risk.
Problems with Collateralized Debt Obligations that contributed to the crisis include:
The CDO default probabilities are highly sensitive to assumptions about default probability and correlations between the underlying assets.
As a result, there is significant parameter risk associated with the ratings given to these securities.
The problem is compounded by the fact that there is a very small range of default probabilities among investment-grade securities, so very high precision is needed in order to rate investment-grade securities.
The impact of parameter uncertainty is magnified through the construction of CDO-squared.
Once it was realized that the default probabilities and expected recovery values were worse than expected, the sell-off of assets caused a further deterioration in prices due to the “fire sale” effect.
CDOs replace risks that are largely diversifiable (idiosyncratic defaults) with risks that are highly systematic, since the pooled-and-tranched securities tend to lose value precisely when the broader market does.
Senior tranches of CDOs did not offer nearly enough yield spread to compensate for their correlation with the market. In other words, they were overpriced relative to the underlying risks.
In contrast, the junior tranches were underpriced, and purchasers of these tranches were overcompensated; this drove demand for all tranches of a CDO.
Senior tranches of CDOs essentially functioned as economic catastrophe insurance, since they default only in extreme economic downturns.
CDOs were rated on the same scale as single-name corporate bonds, creating an illusion of comparability.
This provided access to a large pool of potential buyers who otherwise would have avoided CDOs; moreover, their yields were generally higher than AAA corporate bonds, making them an attractive investment.
Rating a CDO requires a model of the joint distribution of losses for the entire collateral pool, in contrast to the rating of single-name corporate bonds, which are assessed on a standalone basis.
Credit ratings only consider the security’s expected payoff, without consideration of whether that payoff is correlated with the market; this is a key distinction between credit ratings and the CAPM, which determines expected returns based on an asset’s degree of correlation with the market. This drives yield spreads between assets with the same credit rating.
Both investors and regulators (in setting capital requirements based on ratings) outsourced their due diligence to rating agencies; however, there is a conflict of interest due to the fact that the CDO issuer pays for the rating. The complexity of the securities worsened the situation, because rating agencies essentially had to “become part of the underwriting team” rather than acting as independent agents.
CDOs exacerbated the problems associated with subprime mortgages:
Many CDOs used mortgage-backed securities as the underlying asset, so they were in effect CDO-squared.
A lack of historical data on defaults for subprime mortgages increased the risk of error in the estimation of default probabilities, correlations, and recovery rates.
There was a large degree of correlation between the underlying mortgages, because when home prices stalled, mortgage delinquencies soared.
Correlation was exacerbated by pooling mortgages of similar vintages, and from similar geographic regions.
Demand for collateral drove down borrowing costs and fueled a real estate bubble.