Ideas from IDSC 2019

About a week ago, I attended the second Insurance Data Science Conference held at ETH Zürich. On a personal note, I am very grateful to the conference organizers for inviting me to give a keynote, and my deck from that presentation is here. Making the conference extra special for me was the opportunity to meet the faculty of ETH Zürich’s RiskLab, who have written some of the best textbooks and papers on the actuarial topics that I deal with in my professional capacity.

This was one of the best organized events I have attended, from the beautiful location of the conference dinner at the Zürich guild house (shown below) to the hard choices between simultaneous sessions at the conference. It was great to see the numerous insurance professionals, academics and students who were present – the growth in the number of conference attendees from previous years is witness to the huge current interest in data science in insurance, which I am sure will help create tangible benefits for the industry, and the policyholders it serves.

In this post I will discuss some of the interesting ideas presented at IDSC 2019 that stand out in my memory. If any of these snippets spark interest, the full presentations can be found at the conference website here.

Evolution of Insurance Modelling

It is interesting to observe the impact on modelling techniques caused by the availability of data at a more granular level than previously, or by a recognition of the potential benefits of better exploiting traditional data. I would categorize this impact as a move towards more empirical modelling, still framed within the classical actuarial models, and I explain this by examining some of the standout talks for me that fell into this category. Within my talk, I showed the following slide, which discusses the split between those actuarial tasks driven primarily by models, versus those driven by empirical relationships found within datasets. Many of the talks I discuss cover proposals to make tasks that are today more model driven, more empirically driven.

One of the sessions was structured with a focus on reserving techniques. Alessandro Carrato presented an interesting technique that adapts the chain ladder method within an unsupervised learning framework. The technique is used for reserving for IBNeR on reported claims and works by clustering claims trajectories in a 2D space comprised of claims paid and outstanding loss reserves. Loss development factors are then calculated from the more developed claims in each cluster. Thus, the traditional approach of finding “homogeneous” lines of business, which is usually done subjectively, is here replaced by unsupervised learning. Another reserving talk, by Jonas Crevecoeur, investigated the possibility of reserving at a more granular level using several GLMs, which were shown to reduce to more traditional techniques depending on the choice of GLM covariates.
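
To make the clustering idea concrete, here is a minimal sketch in Python – my own illustration under simplifying assumptions, not Alessandro Carrato's actual implementation. The claim amounts, the tiny k-means routine and the synthetic next-period payments are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic claim states: cumulative paid to date and case reserves
n = 200
paid = rng.gamma(2.0, 10_000, n)         # cumulative paid amounts
outstanding = rng.gamma(1.5, 8_000, n)   # outstanding loss reserves
X = np.column_stack([paid, outstanding])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# Cluster claim trajectories in the 2D (paid, outstanding) plane
labels = kmeans(X, k=4)

# Pretend the first half of the claims are more developed, so their paid
# amounts one period later are observed (synthetic here); use them to
# estimate a loss development factor per cluster.
developed = np.arange(n) < n // 2
paid_next = paid * rng.uniform(1.0, 1.3, n)

overall = paid_next[developed].sum() / paid[developed].sum()
ldf = {}
for j in range(4):
    mask = developed & (labels == j)
    if mask.any():
        ldf[j] = paid_next[mask].sum() / paid[mask].sum()

# Project the less developed claims with their cluster's factor
# (falling back to the overall factor if a cluster has no developed claims)
projected = np.array([paid_next[i] if developed[i]
                      else paid[i] * ldf.get(int(labels[i]), overall)
                      for i in range(n)])
```

The point of the exercise is the replacement in the last steps: the per-cluster development factors play the role that per-line-of-business factors play in a traditional chain ladder, with the clusters found by the algorithm rather than chosen subjectively.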

Within the field of mortality modelling, Andrew Cairns presented a new dataset covering mortality in the UK, split by small geographic areas. The dataset also includes several static variables describing the circumstances of each area, such as deprivation index, education, weekly income and nursing homes, allowing granular mortality rates to be modelled as a function of these covariates. The presentation took a very interesting approach – firstly, an overall national mortality rate was calculated, and then the mortality rate in each area was compared to the national rate in a typical “actual versus expected” analysis. Models were then estimated to explain this AvE analysis in terms of the covariates, as well as the geographic location of each area. An interesting finding was that income deprivation is an important indicator of excess mortality at the older ages, whereas unemployment is more important at the younger ages.
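
As a small, entirely hypothetical illustration of the actual-versus-expected step: expected deaths in an area are obtained by applying the national mortality rates to the area's exposure, and the ratio of actual to expected deaths flags excess mortality. All numbers below are made up:

```python
# National mortality rates (deaths per person-year) by age - assumed values
national_rate = {60: 0.010, 70: 0.025, 80: 0.060}
# Exposure (person-years) and observed deaths in one small area - assumed
exposure      = {60: 1_000, 70: 800,   80: 500}
actual_deaths = {60: 13,    70: 22,    80: 35}

# Expected deaths = national rate applied to the area's exposure
expected = {age: national_rate[age] * exposure[age] for age in exposure}

# Actual-versus-expected ratio: values above 1 indicate excess mortality
ave = {age: actual_deaths[age] / expected[age] for age in expected}
# e.g. at age 60: 13 actual vs 10 expected gives an A/E ratio of 1.3
```

In the talk, ratios of this kind (across all areas) were then modelled as a function of the area-level covariates and geographic location.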

Another talk on mortality modelling was given by Andrés Villegas, who cast traditional mortality models into what I would call a feature engineering context. In other words, many traditional mortality models, such as the Cairns-Blake-Dowd model, can be expressed as a regression of the mortality rate on a number of features, or basis functions, which represent different combinations of age, period and cohort effects. The method basically proceeds by setting up a very large number of potential features, and then selecting among them using the grouped lasso technique (which gives zero weight to most features, i.e. performs feature selection). A very similar idea has appeared in the reserving literature from Gráinne McGuire, Greg Taylor and Hugh Miller (link). This talk epitomized for me the shift to more empirical techniques within a field that has traditionally been defined by models and competing model specifications (Gompertz vs Kannisto, Lee-Carter vs Cairns-Blake-Dowd, etc.).
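
The feature-selection idea can be sketched as follows. This is my own toy example, not the method from the talk: an ordinary lasso stands in for the grouped lasso, the mortality surface is synthetic, and the candidate basis functions are simple polynomials:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
ages = np.arange(60, 90)
years = np.arange(2000, 2020)
A, Y = np.meshgrid(ages, years, indexing="ij")

# Synthetic log mortality surface: linear in age plus a period trend
y = (-9.0 + 0.09 * A - 0.01 * (Y - 2000)
     + rng.normal(0, 0.02, A.shape)).ravel()

# A dictionary of candidate basis functions in age and period,
# including interactions - in the real method this set is much larger
a = A.ravel().astype(float)
t = (Y.ravel() - 2000).astype(float)
features = np.column_stack([a, t, a**2, t**2, a * t])

# The l1 penalty drives most coefficients to exactly zero,
# i.e. it performs feature selection
X = StandardScaler().fit_transform(features)
model = Lasso(alpha=0.01, max_iter=50_000).fit(X, y)
selected = np.flatnonzero(model.coef_)
```

The regression that survives the penalty is then the "model" – the specification is discovered empirically from the candidate set rather than fixed in advance.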

Keeping it safe

A topic touched on by some speakers was the need to manage new, emerging risks arising due to advanced algorithms and open source software. Jürg Schelldorfer presented an excellent view of how to apply machine learning models within a highly regulated industry such as insurance. Some of his ideas were to focus on prediction uncertainty, and to provide questions to be answered when peer reviewing ML models. I highly recommend this presentation if you are going on the ML journey within an established company!

Jeffrey Bohn also spoke on this theme, emphasizing “algorithmic risks” – risks arising from poor data used to calibrate ML algorithms, or from malpractice during algorithmic design and calibration.

Within this section, I would also mention the amazing morning keynote by Professor Buhmann, who presented an alternative to the paradigm of empirical risk minimization, often used to train ML models. The depth of ML theory shown in this talk was breathtaking, and I am excited to delve into Professor Buhmann’s work in more detail (link). The lesson here for me was that it is a mistake to assume that ML methodology is “cut and dried”, and that by building more knowledge of alternative methods, one can hopefully understand some of the risks implied by these techniques.

R – the language for insurance data science

The IDSC began life as the R in Insurance conference, and in this respect, many interesting talks covered innovative R packages. Within the sessions I attended, Daphné Giorgi presented an R package for simulating human populations based on individuals, which showed excellent performance thanks to the implementation of some of the algorithms in C++. Kornelius Rohmeyer presented a very promising package called DistrFit, which, as the name implies, helps with fitting distributions to insurance claims. This package is a very neat Shiny app that automates some of the drudge work of fitting claims distributions in R. I hope this one gets a public release soon! Other notable contributions were Silvana Pesenti’s SWIM package, which implements methods for sensitivity analysis of stochastic models, and the interesting use of Hawkes processes by Alexandre Boumezoued for predicting cyber claims.
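
The kind of drudge work a tool like DistrFit automates can be sketched by hand. The snippet below is purely illustrative and has no connection to the package's internals: fit a couple of candidate distributions to simulated claim severities and compare them by AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic claim severities drawn from a heavy-tailed lognormal
claims = rng.lognormal(mean=9.0, sigma=1.2, size=2_000)

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma}

aic = {}
for name, dist in candidates.items():
    # Fit by maximum likelihood, fixing the location parameter at zero
    params = dist.fit(claims, floc=0)
    loglik = dist.logpdf(claims, *params).sum()
    # len(params) counts the fixed loc too, but the count is the same
    # for both candidates, so the comparison is unaffected
    aic[name] = 2 * len(params) - 2 * loglik

best = min(aic, key=aic.get)   # distribution with the lowest AIC
```

A Shiny front end essentially wraps this loop – fit, compare, inspect diagnostics – over a much larger catalogue of candidate distributions.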

I would also mention the excellent presentation on TensorFlow Probability by Roland Schmid. TF Probability offers many possibilities for incorporating a probabilistic view into Keras deep learning models (amongst other things), and it is exciting that RStudio is in the process of porting this package from Python to R.

Conclusion

The above is a sample of the excellent talks presented, biased towards my own interests, and I have not done justice to the many other talks at the conference.

I look forward to IDSC 2020 and wish the organizers every success as this conference grows from strength to strength!