CAS Exam 7 Study Notes: Risk Margins, Stochastic Reserve Models, and Enterprise Risk Management
I've uploaded the last three of my Exam 7 Study Note files, covering the following topics:
- Assessment of Risk Margins, based on a paper by Karl Marshall, Scott Collings, Matt Hodson, and Conor O'Dowd.
- Stochastic Loss Reserving, based on three readings: Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinion by R.J. Verrall, Using the ODP Bootstrap Model: A Practitioner's Guide by M. Shapland, and Stochastic Loss Reserving Using Bayesian MCMC Models by G. Meyers.
- Enterprise Risk Management, based on a reading by Brehm, Gluck, Kreps, Major, Mango, Shaw, Venter, White, and Witcraft.
CAS Exam 7 Study Notes: Unpaid Claims by Layer
My notes on unpaid claims by layer of loss are now available. These notes synthesize the ideas in two syllabus readings. The first, A Model for Reserving Workers Compensation High Deductibles by Jerome Siewert, describes how to relate unlimited development loss development factors to limited and excess development factors. The second, Claims Development by Layer: The Relationship Between Claims Development Patterns, Trend, and Claim Size Models by Rajesh Sahasrabuddhe, generalizes these ideas to include adjustments for accident year and calendar year trends.
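Siewert's relationship between unlimited and limited development factors follows from the definition of the excess ratio. Here is a minimal Python sketch (the notes themselves use R); the factors and excess ratios below are hypothetical, chosen only to illustrate the identity:

```python
# Siewert's identity: limited losses at age t = unlimited_t * (1 - R_t),
# and limited ultimate = unlimited ultimate * (1 - R_ult), so
# LDF_limited(t) = LDF_unlimited(t) * (1 - R_ult) / (1 - R_t),
# where R_t is the excess ratio at age t.

def limited_ldf(ldf_unlimited, r_t, r_ult):
    """Limited LDF implied by an unlimited LDF and excess ratios."""
    return ldf_unlimited * (1 - r_ult) / (1 - r_t)

# Hypothetical example: unlimited LDF of 2.0 at an age where 10% of losses
# are excess of the deductible, maturing to 25% excess at ultimate.
ldf_lim = limited_ldf(2.0, r_t=0.10, r_ult=0.25)
print(round(ldf_lim, 4))  # 2.0 * 0.75 / 0.90 = 1.6667
```

Note that the limited LDF comes out below the unlimited one, as expected: excess losses develop more slowly, so their share of the total grows with age.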
CAS Exam 7 Study Notes: Premium Asset for Retro Policies
I've uploaded my notes on premium estimation for retrospectively rated policies, which are based on a paper by Michael Teng and Miriam Perkins and include a discussion by Sholom Feldblum. The notes demonstrate how to estimate future premium on retrospectively rated policies through the calculation of premium development to loss development (PDLD) ratios.
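The core PDLD calculation can be sketched in a few lines of Python (the notes themselves use R). All ratios and loss amounts below are hypothetical:

```python
# PDLD sketch: expected future premium = PDLD ratio x expected incremental
# loss emergence, summed over future retro adjustment periods.
pdld = [0.85, 0.60, 0.40]          # hypothetical PDLD ratios by adjustment
incr_loss = [500.0, 300.0, 100.0]  # expected incremental loss emergence

future_premium = sum(p * l for p, l in zip(pdld, incr_loss))
print(round(future_premium, 2))  # 425 + 180 + 40 = 645.0
```

The declining PDLD ratios reflect the fact that later loss emergence generates less additional premium, as more policies reach their retro maximums.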
CAS Exam 7 Study Notes: Loss Reserving for Reinsurance
I've uploaded my notes on loss reserving for reinsurance, which are based on a section on reinsurance loss reserving that appears in Foundations of Casualty Actuarial Science by G. S. Patrik. The notes describe some concerns that are specific to reinsurance reserving, and provide details on the Stanard-Buhlmann method, which is often used for long-tailed lines of reinsurance.
CAS Exam 7 Study Notes: Benktander Method and Optimal Credibility
I've uploaded my notes on the Benktander method and optimal credibility. These notes are based on two papers from the syllabus. The first, Credible Claims Reserves: The Benktander Method, by Thomas Mack, illustrates how to apply the methods to a single accident year. The second, Credible Loss Ratio Claims Reserves: The Benktander, Neuhaus, and Mack Methods Revisited, by Werner Hürlimann, addresses the question of how to apply the methods to an entire development triangle, by taking credibility-weighted averages of the individual loss ratio claims reserve and the collective loss ratio claims reserve.
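For a single accident year, the Benktander estimate is simply a second Bornhuetter-Ferguson iteration, using the BF ultimate in place of the prior. A small Python sketch (the notes themselves use R), with hypothetical inputs:

```python
def benktander_ultimate(reported, prior_ultimate, pct_unreported):
    """Iterated BF: use the BF ultimate as the prior for a second BF step."""
    u_bf = reported + pct_unreported * prior_ultimate   # Bornhuetter-Ferguson
    u_gb = reported + pct_unreported * u_bf             # Benktander (GB)
    return u_bf, u_gb

# Hypothetical accident year: 600 reported, prior ultimate 1200, 40% unreported.
u_bf, u_gb = benktander_ultimate(600.0, 1200.0, 0.4)
print(u_bf, u_gb)  # 1080.0 1032.0
```

Here the chain ladder ultimate is 600 / 0.6 = 1000, so the Benktander estimate of 1032 lands between chain ladder (1000) and BF (1080), which is the sense in which it acts as a credibility weighting of the two.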
CAS Exam 7 Study Notes: Testing Assumptions of the Chain Ladder Method
I've uploaded my study notes on the topic of testing the assumptions underlying the chain ladder method. These notes are based on a combination of two closely-related papers, Measuring the Variability of Chain Ladder Reserve Estimates by Thomas Mack, and Testing the Assumptions of Age-to-Age Factors by Gary Venter. The topics addressed include testing the variance assumption underlying the method, testing for calendar year or accident year correlations, and comparing the "direct linear relationship" assumption of the chain ladder method against alternative emergence patterns, such as a linear-plus-constant pattern or a Bornhuetter-Ferguson or Cape Cod emergence pattern.
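One of Venter's comparisons can be sketched as a pair of regressions in Python (the notes themselves use R): the chain ladder assumes the next increment is strictly proportional to prior cumulative losses, while the linear-plus-constant alternative adds an intercept. The data points below are hypothetical:

```python
# Compare a through-origin fit (chain ladder form, y = b*x) against
# ordinary least squares with an intercept (y = a + b*x).
xs = [100.0, 150.0, 200.0, 250.0]   # cumulative losses at age k (hypothetical)
ys = [55.0, 70.0, 95.0, 120.0]      # increments from age k to k+1

n = len(xs)
# Through-origin slope: b = sum(x*y) / sum(x^2)
b0 = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
# Ordinary least squares with intercept
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a1 = ybar - b1 * xbar

rss_origin = sum((y - b0 * x) ** 2 for x, y in zip(xs, ys))
rss_linear = sum((y - (a1 + b1 * x)) ** 2 for x, y in zip(xs, ys))
print(round(rss_origin, 2), round(rss_linear, 2))
```

In Venter's framework the residual sums of squares (adjusted for the extra parameter) are compared to judge whether the intercept is warranted; a materially better intercept fit is evidence against the pure chain ladder emergence assumption.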
CAS Exam 7 Study Notes: Maximum Likelihood Approaches in Reserving
I've uploaded my study notes based on LDF Curve-fitting and Stochastic Reserving: A Maximum Likelihood Approach by David Clark. This paper describes how to fit a smooth curve to the historical loss emergence pattern, under both the Loss Development Factor and Cape Cod reserving methods, and how to obtain estimates for the variance of the reserves resulting from use of this method. In the notes, I demonstrate how to use R to maximize the likelihood functions corresponding to Clark's models and replicate the examples from his paper.
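The shape of the loglogistic growth curve that Clark fits can be illustrated directly. A Python sketch (the notes themselves use R); the parameter values below are hypothetical, not Clark's fitted values:

```python
# Clark fits a smooth growth curve G(x) giving the expected fraction of
# ultimate losses emerged by age x. The loglogistic form is
# G(x) = x**omega / (x**omega + theta**omega).
def loglogistic_growth(x, omega, theta):
    """Expected fraction of ultimate losses emerged by age x (months)."""
    return x ** omega / (x ** omega + theta ** omega)

for age in (12, 24, 36):
    print(age, round(loglogistic_growth(age, omega=1.5, theta=24.0), 3))
```

By construction, G(theta) = 0.5, so theta is the age at which half of ultimate losses are expected to have emerged; omega controls how quickly the curve rises. In the full method these parameters are estimated by maximizing an over-dispersed Poisson likelihood over the incremental triangle.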
CAS Exam 7 Study Notes: Loss Development Using Credibility
I've uploaded my notes on Loss Development Using Credibility, a paper by Eric Brosius. The paper explains how to use linear regression to predict ultimate claim values based on the amount reported as of a given date, and demonstrates that the result has an equivalent interpretation as a credibility-weighted average of the ultimate loss estimates from the link ratio and budgeted loss methods. The notebook illustrates how to replicate the examples from the paper using R. Of particular interest, it shows how to use R to simulate the results of a Poisson process as a means to generate data to illustrate the methodology.
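The regression at the heart of Brosius's method can be sketched with ordinary least squares in Python (the notes themselves use R). The (reported, ultimate) pairs below are hypothetical:

```python
# Brosius: least-squares development L(x) = a + b*x. With a = 0 this reduces
# to the link ratio method; with b = 0 it reduces to the budgeted loss method.
xs = [40.0, 55.0, 50.0, 65.0]       # reported as of the evaluation date
ys = [100.0, 120.0, 110.0, 140.0]   # ultimate values for those years

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

x_new = 60.0  # reported to date for the year being estimated
estimate = a + b * x_new
print(round(estimate, 2))  # 129.62
```

Because 0 < a and 0 < b here, the fitted line sits between the two extremes, which is the credibility-weighting interpretation the paper develops.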
CAS Exam 7 Study Notes: P&C Insurance Company Valuation
As part of my preparation for Exam 7 of the Casualty Actuarial Society, I'm writing my study notes in R notebooks, using R code to illustrate the concepts in the readings. As a result, the notes will sometimes address more than the bare minimum that is needed to pass an exam, but will contain additional information that can be used to put the ideas into practice using contemporary statistical software. Some of the notebooks may be of interest, independent of the subject matter, as illustrations of R techniques.
The first instalment is P&C Company Valuation, based on a reading by Richard Goldfarb. It addresses learning objectives B1-B3 on the exam, explaining several methods for determining the value of an insurance company. Of particular interest is that it shows how R can be used to quickly perform sensitivity testing over a range of assumptions, and visualize the results.
ASNA 2018 - "Analytics-Driven Innovation at Economical Insurance"
I'll be speaking at the 2018 Actuarial Students National Association (ASNA) conference, in Ottawa from January 5-7. The theme of the conference is "Breaking the Paradigm," so I'll be speaking about the innovative work that my colleagues and I have been engaged in. Here's the full abstract:
Economical Insurance, a medium-sized Property and Casualty insurer, is an industry leader in introducing technological innovations and deploying predictive analytics. In this presentation, I'll describe several ways in which Economical has broken traditional insurance paradigms. First, I'll tell you about Sonnet Insurance. Our direct channel offering is Canada's first fully-digital insurance product: customers can get a quote and purchase their policy entirely online, after answering a minimal number of questions. Next, I'll tell you about how our Advanced Analytics team has applied actuarial science to problems outside the traditional practice areas of pricing and reserving. Recent initiatives focused on improving the efficiency of our claims operations. For example: if a property claim requires a field visit by a claim adjuster, how do you determine which adjuster to send? It turns out that the correct answer isn't always as simple as sending the nearest adjuster.
Keywords: Actuarial Science
CAS Annual Meeting, 2017
At this year's Annual Meeting of the Casualty Actuarial Society, Jeffrey Baer and I will be presenting Operations Research and Actuarial Science: Blending the Disciplines. We'll describe two case studies in which we used the results of an actuarial analysis as the inputs to an operations research model. Our objective is to provide actuaries with a general introduction to mathematical optimization models and equip them to identify opportunities within their own organizations that are similar to our case studies. Here's the full abstract:
Operations research develops optimal business processes within an organization. Actuarial science applies statistical concepts to quantify financial and insurance risk. How are these two mathematical disciplines related?
In this session, we will explore the connections between these fields within a P&C insurance context. Through a series of interactive case studies, we will explain how integrating actuarial science and predictive analytics into operations research problems can improve top-line growth, risk management, and the customer experience of an insurance company.
Keywords: Actuarial Science
Filters added to Election Dashboard
I've added the ability to apply filters to the Election Dashboard. Filters remove data from the analysis entirely, in contrast to dimensions, which keep all of the data but subdivide it prior to analysis. Currently, five filters are available:
- Remove all but the "Big Five" parties (Conservative, NDP, Liberal, Bloc Quebecois, and Green)
- Use only incumbent candidates
- Use only non-incumbent candidates
- Use only elected candidates
- Use only losing candidates
Each of the filters can be toggled on or off independently, so for example, you could restrict to incumbents of the Big Five, or look at incumbents who were re-elected. Be careful with the selection of filters, because some combinations (e.g. "only incumbents" and "only non-incumbents") could result in an empty report.
Filters may be used either as a supplement to a dimension, or as a replacement for one. For example, if one of your dimensions was political party, there may be value in applying the "Big Five" filter in order to produce a smaller report. (In fact, this filter is now on by default.) On the other hand, if you were only interested in looking at incumbent data, one solution could be to use incumbency as one of the dimensions and just ignore the non-incumbent parts of the report. A better solution would be to apply the incumbent filter, which would free up one of the dimensions to be used for another variable.
In addition to adding filters, I've made some cosmetic changes to improve the appearance of the report, and added a new metric, Number of Candidates. Because I removed candidate name from the data in order to conserve space, this metric is calculated by counting unique pairs of district number and political party. In most cases, this will work well (since a party only runs one candidate per riding), but may not be accurate for independent candidates.
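The counting rule for the new metric can be sketched in a few lines of Python. The rows below are hypothetical poll-level records of (district number, party):

```python
# Number of Candidates: count distinct (district, party) pairs, since
# candidate names were dropped from the data to conserve space.
polls = [
    (35001, "Liberal"), (35001, "Liberal"), (35001, "NDP"),
    (35002, "Liberal"), (35002, "Independent"), (35002, "Independent"),
]

n_candidates = len(set(polls))
print(n_candidates)  # 4 -- would undercount if a district had two independents
```

As noted above, the pair-counting approach is exact when each party runs one candidate per riding, but collapses multiple independents in the same riding into one.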
Keywords: Election Dashboard
Election Dashboard Now Available
I've recently added a new section to the website, called the Election Dashboard. This page allows you to produce a variety of summary reports based on data from the 2011 Canadian Federal Election. This is a preliminary version that has only basic functionality; over time, I'm planning on adding more features to the dashboard, joining additional data sources, and expanding to the 2015 election.
How to use the dashboard
To use the dashboard, you select the metric you want to calculate, and two dimensions that will be used to group the data prior to calculation of the metric. The primary dimension will form the rows of the report, and the secondary dimension will provide the columns. Dimensions available include:
- Political Party
- Whether or not the candidate is an incumbent (current holder of the seat)
- Whether or not the candidate was victorious in the 2011 election
The two metrics available are the total number of votes cast, and the conversion ratio, which is defined as the total number of votes cast divided by the number of voters who were eligible to vote for that candidate.
Summarization of Conversion Ratio Calculations
Because the metrics are calculated on data that is grouped according to the dimensions, it is important to clarify how the conversion ratio is calculated. Both the number of votes cast and the number of eligible voters are totalled before the ratio is taken. For some dimensions, voters may be counted multiple times if their vote is courted by multiple political parties within the grouping. A typical example occurs when the "Elected Candidate" dimension is used: in the "N" column, because there are many non-elected candidates in each riding, multiple candidates are vying for each elector's vote. The "Y" grouping does not present this problem, since there is a unique winning candidate in each riding. In general, the conversion ratio should be interpreted as the weighted average conversion ratio of all candidates in the grouping defined by the dimensions. The main advantage of this approach is that it allows for fair conversion ratio comparisons between political parties based on the number of ridings they actually contested; this is a particularly relevant consideration for parties such as the Bloc Québécois, which runs candidates only in Quebec.
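The order of operations matters here: totals first, then the ratio. A short Python sketch with hypothetical riding-level figures for one grouping:

```python
# Each tuple is (votes cast for a candidate, eligible voters in that riding).
grouping = [(20000, 70000), (15000, 60000), (5000, 50000)]

# Sum both totals across the grouping before dividing, so the result is the
# eligible-voter-weighted average conversion ratio, not a mean of ratios.
votes = sum(v for v, _ in grouping)
eligible = sum(e for _, e in grouping)
print(round(votes / eligible, 4))  # 40000 / 180000 = 0.2222
```

A simple mean of the three per-candidate ratios would give a different (here, higher) answer, which is why the dashboard totals first.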
Data Sources, Processing, and Reconciliation
- "Void polls" and records where no poll was held were removed
- Province does not appear in the original data, and was defined using the mapping from district number to province provided on the site linked above.
- Accents were removed from Bloc Québécois entries because they were not rendering properly in the dashboard.
- A new variable counting the number of valid votes cast in each polling station was defined as the sum of the votes received by each candidate. (This is not currently available in the dashboard, but will be used to support metrics that will be added in the future.)
Following data processing, top-level reconciliation was performed to validate that the total number of votes cast (14,723,980), the total number of eligible voters (24,257,592), and the number of electoral districts (308) match the values reported by Elections Canada. More granular reconciliation (e.g. at the party and province level) can be performed using the dashboard itself, and matches the results provided in "Table 8" of the Elections Canada report on the 2011 election.
Website Update
I've recently revised my website after having gone several years without an update. The biography page now reflects my current activities, and I've added a new blog page. This is a work in progress, and I hope to add new features and content over the next month.
Keywords: Website news