Thoughts on the International Congress of Actuaries 2018

I had to get a couple more CPD hours done and the ICA 2018 conference came along at exactly the right time! This time around, a virtual option was offered and the Actuarial Society of South Africa (ASSA) organized access for all of its members – a really great move by ASSA in my opinion, and I hope that others benefited as much as I did from this intellectually stimulating event.

I listened with a focus on P&C insurance (I prefer the American term; the same field is called short-term insurance in SA, general insurance (GI) in the UK, and non-life in Europe), so my comments below don’t take account of the sessions on other actuarial areas, which I have no doubt were also very worthwhile.

In a previous post I advanced the view that actuarial science is not standing still as a discipline, and that comments such as “AI instead of actuaries” are short-sighted. I am glad to report that the discipline is moving rapidly to incorporate machine learning and data science into its toolbox – of the 28 sessions I listened to (I needed a lot of CPD!), 9 mentioned machine learning or data science and offered advice on methods or on integrating ML into actuarial practice. Another good sign is that some of the leading researchers speaking at the conference – such as Paul Embrechts and Mario Wüthrich – gave their (positive) thoughts on integrating ML and data science into actuarial science. The actuarial world is moving forward quickly, and I think the prospects for the profession are good, provided the actuarial associations around the world recognize the trends in time and incorporate ML/data science and more into their curricula.

My standout favourite session was by Mario Wüthrich (whom some actuaries will recognize as a co-author of the Merz-Wüthrich one-year reserve risk formula), who presented his paper “Neural networks applied to Chain-ladder reserving”, available on SSRN. Besides the new method he suggests – which I think is one of the best solutions when an actuary needs to reserve for IBNR by sub-category (LOB, injury code, etc.) – I found fascinating the perspectives on ML he wove into his talk, one example being the connection between neural networks and Hilbert’s 13th problem. One point made was that claims that algorithms can reserve with much less uncertainty than the chain-ladder need to be treated with caution until model risk is dealt with and the underlying assumptions are brought out into the open.
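To make the idea a little more concrete, here is a minimal toy sketch of reserving with a neural network across sub-categories: a single network learns development factors as a function of LOB and development period, so one model serves all sub-categories instead of a separate chain-ladder for each. This is my own illustration under simplified assumptions (simulated triangles, made-up LOBs, a generic scikit-learn network), not the method from the paper, which should be consulted for the actual embedding of the chain-ladder.

```python
# Toy sketch (my own illustration, not Wuthrich's implementation): a small
# neural network learns link ratios by sub-category and development period.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_lob, n_ay, n_dev = 3, 6, 6   # hypothetical: 3 LOBs, 6 accident years, 6 dev periods

# Simulate cumulative claims triangles with LOB-specific development patterns
true_factors = np.array([[1.8, 1.3, 1.1, 1.05, 1.02]]) * rng.uniform(0.9, 1.1, (n_lob, n_dev - 1))
cum = np.empty((n_lob, n_ay, n_dev))
cum[:, :, 0] = rng.uniform(80, 120, (n_lob, n_ay))
for j in range(1, n_dev):
    cum[:, :, j] = cum[:, :, j - 1] * true_factors[:, [j - 1]] * rng.normal(1.0, 0.02, (n_lob, n_ay))

def one_hot(lob, dev):
    """Encode (LOB, development period) as a single feature vector."""
    x = np.zeros(n_lob + n_dev - 1)
    x[lob] = 1.0
    x[n_lob + dev] = 1.0
    return x

# Training data: the observed upper triangle only, target is the link ratio
X, y = [], []
for lob in range(n_lob):
    for ay in range(n_ay):
        for j in range(n_dev - 1):
            if ay + j + 1 < n_dev:
                X.append(one_hot(lob, j))
                y.append(cum[lob, ay, j + 1] / cum[lob, ay, j])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(np.array(X), np.array(y))

# Compare the fitted factor with the simulated "true" one for LOB 0, period 0
print(f"NN factor: {net.predict([one_hot(0, 0)])[0]:.3f}  "
      f"true factor: {true_factors[0, 0]:.3f}")
```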

A brilliant session was given by Paul Glasserman on “Robust model risk assessment” (the paper is ungated if you google it). At the heart of the idea is that model risk can be defined as an alternative probability measure attached to the simulations produced by a stochastic model – if the model generates a baseline probability distribution for a random variable of interest, model risk arises if the variable in fact follows a different distribution – rather than as an issue with the model’s parameters or structure. With this idea in hand, the presentation went on to show how to find the maximally damaging alternative probability measure for a given level of model risk, measured by the relative entropy between the baseline and alternative models. The major benefit for actuaries is that this is simple to implement in practice and gives rise to some interesting insights into what can go wrong with a model.
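The “simple to implement” part can be seen in a small sketch: for an expectation estimated by simulation, the worst-case alternative measure within a relative-entropy budget amounts to exponentially re-weighting the existing simulations, so no re-running of the model is needed. The example below is my own toy illustration (made-up lognormal losses and tilting parameters), not code from the paper.

```python
# Toy sketch: exponential tilting of baseline simulations to find a
# worst-case mean loss for a given relative-entropy (KL) distance.
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # baseline model simulations

def worst_case(theta):
    """Re-weight simulations by exp(theta * loss); return the worst-case mean
    loss and the relative entropy of the tilted measure vs. the baseline."""
    w = np.exp(theta * (losses - losses.max()))   # subtract max for numerical stability
    w /= w.sum()
    mean_loss = np.sum(w * losses)
    kl = np.sum(w * np.log(w * len(losses)))      # KL(alternative || baseline)
    return mean_loss, kl

print(f"baseline mean loss: {losses.mean():.3f}")
for theta in (0.05, 0.1, 0.2):
    m, kl = worst_case(theta)
    print(f"theta={theta:.2f}: worst-case mean {m:.3f} at relative entropy {kl:.3f}")
```

Sweeping the tilting parameter traces out how bad the expectation can get as the allowed relative entropy grows, which is the kind of insight into “what can go wrong” mentioned above.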

Another session that stood out for me was Pietro Parodi and Peter Watson’s “Property Graphs: A Statistical Model for Fire Losses Based on Graph Theory”. The idea is to find a model that helps explain why commercial property losses follow the heavy-tailed severity distributions observed in practice. Often, algorithms and distributions are applied because they work, not because there is a good basis in first principles. This reminded me of the work of Perks and Beard, who proposed first-principles models to explain their mortality laws (my small contribution in this vein is an explanation of the chain-ladder algorithm as a life-table estimator, on this blog). Parodi and Watson use graph theory to represent properties (houses, factories, etc.) and define statistical models of fire events on the graph. After simulation and in aggregate, these models lead to curves for the overall severity of a fire event that are not radically different from the current set of curves used by actuaries.
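To give a flavour of the approach, the sketch below simulates fire spread on a small graph of property units and records the resulting loss; repeating the simulation many times produces a severity distribution. It is a deliberately crude toy of my own (a grid graph, a single spread probability, made-up unit values), not Parodi and Watson’s actual model.

```python
# Toy sketch: percolation-style fire spread on a graph of property units.
import random
import networkx as nx

random.seed(2)

def simulate_fire(G, values, p_spread=0.4):
    """Spread fire from a random ignition node along edges with probability
    p_spread; return the total value of the burnt units."""
    ignition = random.choice(list(G.nodes))
    burnt, frontier = {ignition}, [ignition]
    while frontier:
        node = frontier.pop()
        for nbr in G.neighbors(node):
            if nbr not in burnt and random.random() < p_spread:
                burnt.add(nbr)
                frontier.append(nbr)
    return sum(values[n] for n in burnt)

# Hypothetical property: a 4 x 5 grid of units, each with a random insured value
G = nx.grid_2d_graph(4, 5)
values = {n: random.uniform(10_000, 100_000) for n in G.nodes}

losses = sorted(simulate_fire(G, values) for _ in range(10_000))
print(f"median loss: {losses[5_000]:,.0f}   99th percentile: {losses[9_900]:,.0f}")
```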

Paul Embrechts’s sessions were amazing to listen to, because of his ability to tie together so many disparate strands in actuarial science and quantitative risk management. It was particularly meaningful to see Paul, who is close to retirement, showcasing on stage some of Mario Wüthrich’s work on telematics, and giving his view that the novel application of data science, as embodied by the telematics work, is a direction for the discipline to take.

I also enjoyed Hansjörg Albrecher’s session “Flood risk modelling”, which was a tour de force of various statistical modelling techniques applied to some novel data.

Some other noteworthy sessions which spring to mind:

  • “Data based storm modelling” – a demonstration of how a storm model was built for Germany by a medium-sized consulting firm. I liked the application of mathematics (Delaunay triangulation for mapping the wind events and Singular Value Decomposition for dimensionality reduction) to a relatively big data problem (40m records); a small sketch of both techniques follows this list.
  • “Using Risk Factors in Insurance Analytics: Data Driven Strategies” – how to apply GLMs to sparse and high-dimensional data without breaking the linearity assumptions.
  • “Trend in Marine Insurance” – a great overview of this specialty line.
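As promised above, here is a quick sketch of the two techniques mentioned in the storm modelling session: Delaunay triangulation to build a mesh over wind measurement stations, and a truncated SVD to compress many correlated wind fields into a handful of principal patterns. The data and dimensions are made up by me for illustration; this is not the presenters’ model.

```python
# Toy sketch: Delaunay mesh over stations + SVD-based dimensionality reduction.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)

# Hypothetical station coordinates and a matrix of wind speeds (events x stations)
stations = rng.uniform(0, 100, size=(200, 2))
wind = rng.gamma(shape=2.0, scale=5.0, size=(1_000, 200))

# Triangulate the stations so wind speeds can be interpolated across the mesh
tri = Delaunay(stations)
print(f"{len(tri.simplices)} triangles cover the {len(stations)} stations")

# SVD: keep the first k singular vectors as a low-dimensional event representation
U, s, Vt = np.linalg.svd(wind - wind.mean(axis=0), full_matrices=False)
k = 10
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"first {k} components explain {explained:.1%} of the variance")
```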

I am not sure whether access to the VICA is still open, but if you have any interest in the topics above, I would strongly recommend that you try to view some of the sessions.

 
