The Actuary in an Age of AI: Connecting Classical Principles with Modern Methods

As an actuary and researcher working at the intersection of actuarial science and artificial intelligence, I’ve been exploring how we can enhance our traditional actuarial toolkit with modern deep learning approaches while maintaining our professional standards and ethical principles. Let me share my current thinking on this evolution of our profession.

The Fusion of Classical and Modern Methods

A key insight that has emerged from my research is the distinction between general AI applications, like large language models, and specific AI tools designed for core actuarial tasks. While LLMs generate significant buzz, I believe the more profound impact on actuarial work in the short and medium term will come from specialized deep learning models adapted for insurance applications.

The LocalGLMnet architecture (Richman & Wüthrich, 2023) represents one approach to creating inherently interpretable neural networks. By maintaining the same model structure as a GLM while allowing coefficients to vary smoothly with inputs, this architecture provides both interpretability and strong predictive performance. The LocalGLMnet is thus one example of how deep learning can be specialized for actuarial purposes.
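To make this concrete, the sketch below shows how a LocalGLMnet-style model can be written in PyTorch: a small network produces covariate-dependent coefficients β(x), which then enter a GLM-style linear predictor through a skip connection. The layer sizes, the log link and the naming are my own illustrative choices here, not the published reference implementation.

```python
import torch
import torch.nn as nn

class LocalGLMnet(nn.Module):
    """Minimal LocalGLMnet-style sketch: a small network outputs covariate-dependent
    coefficients beta(x), which enter a GLM-like linear predictor <beta(x), x> + bias."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_features),   # one "regression attention" per feature
        )
        self.bias = nn.Parameter(torch.zeros(1))  # GLM-style intercept

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        beta = self.attention(x)                    # beta_j(x), readable like local GLM coefficients
        linear_predictor = self.bias + (beta * x).sum(dim=1)
        return torch.exp(linear_predictor)          # log link, e.g. for Poisson claim frequencies

# Usage sketch: model = LocalGLMnet(n_features=8); rates = model(torch.randn(16, 8))
```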

The Canadian Institute of Actuaries’ research on mortality table construction (Goulet et al., 2022) provides a fascinating case study of this evolution. When comparing Whittaker-Henderson graduation to neural network approaches, we see that deep learning models can capture more complex relationships while still maintaining the essential characteristics that actuaries expect from mortality tables. The key insight is that we don’t need to choose between traditional actuarial principles and modern methodologies – we can synthesize both approaches to create more powerful and reliable models. This is the approach we took in a recent talk at ASSA’s 2024 Convention – deck attached below.

The core innovation we presented was the incorporation of Whittaker-Henderson (WH) smoothing directly into a neural network architecture through the loss function. This creates what could be called a “smoothness-aware” deep learning model that respects classical actuarial principles while leveraging modern computational capabilities.
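As a rough illustration of what incorporating WH smoothing through the loss function can look like, here is a minimal PyTorch sketch under my own simplifying assumptions (a single population, a vector of fitted rates by age, a Poisson-style data fit). The actual formulation in the talk may differ in detail.

```python
import torch

def wh_smoothness_penalty(rates_by_age: torch.Tensor, order: int = 2) -> torch.Tensor:
    """Whittaker-Henderson style penalty: sum of squared k-th order finite
    differences of the fitted mortality curve across consecutive ages."""
    diffs = rates_by_age
    for _ in range(order):
        diffs = diffs[1:] - diffs[:-1]
    return (diffs ** 2).sum()

def smoothness_aware_loss(pred_rates, deaths, exposure, lam=1.0):
    # Poisson deviance-style data fit (up to constants) plus the WH penalty on the age profile
    poisson_nll = (exposure * pred_rates
                   - deaths * torch.log(exposure * pred_rates + 1e-12)).sum()
    return poisson_nll + lam * wh_smoothness_penalty(pred_rates)
```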

This work represents a broader paradigm shift in actuarial science – one where we don’t simply replace traditional methods with black-box deep learning, but rather thoughtfully integrate classical actuarial principles into modern architectures. The presentation shows how this can be done while maintaining:

  • Professional standards for model interpretability
  • Actuarial judgment in parameter selection
  • Smooth and credible mortality patterns
  • Transfer learning capabilities across populations

This work demonstrates how thoughtful integration of classical actuarial principles with modern deep learning can produce models that are both more powerful and more aligned with actuarial professional standards. It provides a template for similar syntheses in other areas of actuarial work.

Classical actuarial principles, like credibility theory, shouldn’t be discarded as we adopt modern methods. Instead, in our recent work on the Credibility Transformer (Richman, Scognamiglio & Wüthrich, 2024), we show how traditional actuarial concepts can be integrated into state-of-the-art deep learning architectures, enhancing their performance and interpretability. This synthesis of old and new represents an exciting path forward for our profession.
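For readers less familiar with credibility, the core idea being carried over is the classical weighting Z times the data-driven estimate plus (1 - Z) times a prior, with Z = n / (n + k). The toy function below simply illustrates that blend applied to learned representations; it is not the Credibility Transformer's actual architecture, and the names are mine.

```python
import torch

def credibility_weighted_token(data_embedding: torch.Tensor,
                               prior_embedding: torch.Tensor,
                               n_observations: float,
                               k: float) -> torch.Tensor:
    """Classical Buhlmann-style credibility Z = n / (n + k), used here to blend a
    data-driven representation with a prior one (illustrative only)."""
    z = n_observations / (n_observations + k)
    return z * data_embedding + (1.0 - z) * prior_embedding
```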

Professional Considerations and Challenges

However, the adoption of AI methods raises important professional considerations. In recent work with Roseanne Harris and Mario Wüthrich (Harris et al., 2024), we examine how actuaries can embrace AI tools while remaining committed to professional and ethical principles that have long distinguished our field. Key requirements include:

  • Ensuring model understandability and interpretability
  • Avoiding bias and discrimination in model outputs
  • Maintaining strong governance frameworks
  • Adapting professional education to cover new methodologies
  • Exercising appropriate professional judgment

The actuarial profession is at an inflection point. As discussed in my recent essay (Richman, 2024), choices made about embracing AI and ML in the next few years will determine if we thrive or merely survive in the age of AI. The promising results from applying neural networks to mortality modeling in the Canadian Institute of Actuaries’ research (Goulet et al., 2022), where I contributed to developing new approaches, show how we can adapt powerful tools to meet actuarial standards of practice.

The AI-Enhanced Actuary

Looking ahead, I envision what I call the “AI-enhanced actuary” – a professional who leverages both classical actuarial expertise and AI capabilities to:

  • Build more accurate and efficient models
  • Incorporate new data sources
  • Automate routine tasks
  • Focus on high-level strategic decisions
  • Ensure ethical implementation of AI systems

This evolution represents a natural progression that builds upon our foundation of mathematical and statistical techniques while embracing new methodological advances. The integration of AI into actuarial practice creates opportunities for innovation while maintaining our core professional values.

Meeting Professional Standards

A critical aspect of this evolution is ensuring that new methods comply with professional standards. Recent work has shown how we can adapt deep learning approaches to meet key requirements:

  • Model understanding through inherently interpretable architectures
  • Prevention of unwanted bias through specialized constraints
  • Uncertainty quantification through modern techniques
  • Reproducibility through appropriate model design

The Future Path

The actuarial profession has always evolved with new methodological developments. The current AI revolution offers us the chance to enhance our capabilities while remaining true to our professional principles. The key will be thoughtfully embracing these new tools, ensuring they serve our ultimate goal of managing risk and uncertainty for the benefit of society.

Sources

Goulet, S., Balona, C., Richman, R., & Bennet, S. (2022). Canadian mortality table construction alternative methods: Generalized additive model and neural network model. Canadian Institute of Actuaries.

Harris, R., Richman, R., & Wüthrich, M. V. (2024). Reflections on deep learning and the actuarial profession(al). SSRN.

Richman, R. (2024). An AI vision for the actuarial profession. CAS E-Forum.

Richman, R., & Wüthrich, M. V. (2023). LocalGLMnet: Interpretable deep learning for tabular data. Scandinavian Actuarial Journal, 2023(1), 71–95.

Richman, R., Scognamiglio, S., & Wüthrich, M. V. (2024). The credibility transformer. arXiv.

High-Cardinality Categorical Covariates in Network Regressions

A major challenge in actuarial modelling is how to deal with categorical variables with many levels (i.e. high cardinality). This is often encountered with rating factors like car model, which can take on one of thousands of values, some with significant exposure and others with exposure close to zero.

In a new paper with Mario Wüthrich, we show how to incorporate these variables into neural networks using different types of regularized embeddings, including embeddings fitted with variational inference. We consider both standalone variables and variables with a natural hierarchy, which lend themselves to being modelled with recurrent neural networks or Transformers. On a synthetic dataset, the proposed methods provide a significant performance gain compared to other techniques.
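To illustrate one of the simpler variants, the sketch below shows an embedding layer for a high-cardinality factor with a ridge-style shrinkage penalty added to the training loss, so that sparsely observed levels stay close to the portfolio average. The variational and hierarchical versions in the paper are more involved; the class and parameter names here are my own.

```python
import torch
import torch.nn as nn

class RegularizedEmbedding(nn.Module):
    """Illustrative regularized embedding for a high-cardinality rating factor:
    the embedding vectors are shrunk towards zero by an explicit penalty term."""
    def __init__(self, n_levels: int, dim: int = 4, shrinkage: float = 1e-2):
        super().__init__()
        self.embedding = nn.Embedding(n_levels, dim)
        self.shrinkage = shrinkage

    def forward(self, level_ids: torch.Tensor) -> torch.Tensor:
        return self.embedding(level_ids)

    def penalty(self) -> torch.Tensor:
        # add this to the training loss; it acts like a ridge prior on the embeddings
        return self.shrinkage * (self.embedding.weight ** 2).sum()
```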

We show the problem we are trying to solve in the image below, which illustrates how the most detailed covariate in the synthetic dataset – Vehicle Detail – can produce observed values vastly different from the true value due to sampling error.

A special thank you to Michael Mayer, PhD for input into the paper and interesting discussions on the topic!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4549049

Talk on ‘Explainable deep learning for actuarial modelling’

Over the past few days I had the privilege of presenting on "Explainable deep learning for actuarial modelling" to Munich Re's actuarial and data science teams. In the talk I covered several explainable deep learning methods: the CAXNN, LocalGLMnet and ICEnet models.

My slides are attached below if this is of interest.

Smoothness and monotonicity constraints for neural networks using ICEnet

I am pleased to share a new paper on adding smoothness and monotonicity constraints to neural networks. This is joint work with Mario Wüthrich.

In this paper, we propose a novel method for enforcing smoothness and monotonicity constraints within deep learning models used for actuarial tasks, such as pricing. The method is called ICEnet, which stands for Individual Conditional Expectation network. It’s based on augmenting the original data with pseudo-data that reflect the structure of the variables that need to be constrained. We show how to design and train the ICEnet using a compound loss function that balances accuracy and constraints, and we provide example applications using real-world datasets. The structure of the ICEnet is shown in the following figure.
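The following sketch gives a flavour of the constraint side of the compound loss: for a given policy, the constrained variable is varied over a grid of pseudo-data, and the resulting ICE profile is penalized for non-monotone steps and for roughness. The penalty weights and function names are illustrative assumptions, not the paper's exact specification.

```python
import torch

def icenet_constraint_penalty(ice_profile: torch.Tensor,
                              lam_mono: float = 1.0,
                              lam_smooth: float = 1.0) -> torch.Tensor:
    """Penalty on one ICE profile: predictions for a single policy with one
    constrained variable (e.g. vehicle power) varied over a grid of pseudo-data.
    Monotonicity: penalize decreasing steps (flip the sign for decreasing constraints);
    smoothness: penalize second differences."""
    first_diff = ice_profile[1:] - ice_profile[:-1]
    second_diff = first_diff[1:] - first_diff[:-1]
    mono = torch.clamp(-first_diff, min=0.0).pow(2).sum()
    smooth = second_diff.pow(2).sum()
    return lam_mono * mono + lam_smooth * smooth

# Compound loss = usual Poisson deviance on the real data
#                 + this penalty summed over policies and constrained variables.
```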

Applying the model produces predictions that are smooth and vary with risk factors in line with intuition. Below is an example where applying constraints forces a neural network to produce predictions of claims frequency that increase with population density and vehicle power.

You can read the full paper at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4449030 and we welcome any feedback.

New book by Wüthrich and Merz published!

The fantastic new resource from Mario Wüthrich and Michael Merz on statistical learning for actuarial work has just been published by Springer. This is open access and freely available here:

https://link.springer.com/book/10.1007/978-3-031-12409-9

Everyone involved in these areas will find a wealth of information in this book and I give it my highest recommendation!

GIRO 2022

The Institute and Faculty of Actuaries (IFoA) has been key to my journey as an actuary, providing my initial professional education and, subsequently, many great opportunities to contribute and learn more along the way. This made receiving the 2022 Outstanding Achievement award from the IFoA’s GI Board yesterday very special:

https://actuaries.org.uk/news-and-media-releases/news-articles/2023/jan/30-jan-23-gi-outstanding-achievement-award-winner-2022/

The award was given in connection with my research into applying machine and deep learning within actuarial work. My hope is that more actuaries within the vibrant community attending the 2022 GIRO conference will be motivated to apply these techniques in their own work.

Thank you again to the #GIRO2022 organizing committee and the #ifoa for a fantastic event!

Reserving with the Cape Cod Method – OMI/ASABA Masterclass

I was delighted to present the first masterclass in the series as part of the short-term insurance practicing initiative of the Association of South African Black Actuarial Professionals and Old Mutual Insure. The title was “Reserving with the Cape Cod Method” and the attached slides cover everything from the basics all the way up to advanced methods of setting the parameters using machine learning. More materials can be found at the GitHub link on the title slide.
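For readers who have not met the method, the basic Cape Cod calculation is short enough to sketch in a few lines of Python (an illustrative toy, not the masterclass code): a single expected loss ratio is estimated across all origin years, weighting each year's premium by how developed it is, and IBNR is the unreported share of premium priced at that loss ratio.

```python
import numpy as np

def cape_cod_reserves(premium, reported_losses, dev_factors_to_ultimate):
    """Minimal Cape Cod sketch: one portfolio-level expected loss ratio (ELR),
    estimated using 'used-up' premium, then applied to the undeveloped share."""
    premium = np.asarray(premium, dtype=float)
    reported = np.asarray(reported_losses, dtype=float)
    pct_developed = 1.0 / np.asarray(dev_factors_to_ultimate, dtype=float)

    used_up_premium = premium * pct_developed
    elr = reported.sum() / used_up_premium.sum()     # expected loss ratio across origin years
    ibnr = premium * elr * (1.0 - pct_developed)     # unreported part priced at the ELR
    ultimate = reported + ibnr
    return elr, ibnr, ultimate

# Example: elr, ibnr, ult = cape_cod_reserves([100, 100], [60, 30], [1.1, 2.0])
```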

DFIP Old and New – Talk at the 2022 STIC Seminar

I was delighted to speak at the Actuarial Society of South Africa (ASSA)‘s annual short term insurance seminar, on Discrimination Free Insurance Pricing and our new work on multi-task networks. My slides are below.

Thanks so much to Mathias Lindholm, Andreas Tsanakas and Mario Wüthrich for this collaboration!

Discrimination Free Insurance Pricing – new paper

I am very excited to announce our next paper on Discrimination Free Insurance Pricing (DFIP). The first paper introduced a method for removing indirect discrimination from pricing models. The DFIP technique requires that the discriminatory features (e.g. gender) are known for all examples in the data on which the model is trained, as well as for the subsequent policies which will be priced. In this new work, we only require that the discriminatory features are known for a subset of the examples and use a specially designed neural network (with multiple outputs) to handle the examples that are missing this information. In the plot below, we show that this new approach produces excellent approximations to the true discrimination-free price in a synthetic health insurance example.
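For context, the discrimination-free price from the first paper averages the best-estimate price over the protected attribute using its unconditional distribution, so that the other covariates cannot act as a proxy for it. The toy function below sketches that averaging step; the function and argument names are mine, and the new paper's multi-output network for handling missing protected information is not shown.

```python
def discrimination_free_price(best_estimate, x, protected_levels, protected_marginal):
    """Illustrative discrimination-free price: average the best-estimate model
    over the protected attribute D using its unconditional distribution P(D=d).
    `best_estimate(x, d)` can be any fitted pricing model."""
    return sum(
        best_estimate(x, d) * p_d
        for d, p_d in zip(protected_levels, protected_marginal)
    )

# Toy example with a made-up model mu(x, d):
# mu = lambda x, d: 100.0 + 10.0 * x + (5.0 if d == "F" else 0.0)
# price = discrimination_free_price(mu, x=2.0,
#                                   protected_levels=["F", "M"],
#                                   protected_marginal=[0.5, 0.5])
```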

The new paper can be found here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4155585

Thank you to Mathias Lindholm, Andreas Tsanakas and Mario Wüthrich for this wonderful collaboration!

LASSO Regularization within the LocalGLMnet Architecture

We are excited to post a new paper covering feature selection in our explainable deep learning architecture, the LocalGLMnet. Deep learning models are often criticized for being neither explainable nor amenable to variable selection. Here, we show how group LASSO regularization can be implemented within the LocalGLMnet architecture so that we obtain feature sparsity for variable selection. On several examples, we find that the proposed methods successfully identify less important variables, even on smaller datasets! The figure below shows output from the model fit to the famous bike sharing dataset, where randomly permuted variables receive zero importance after regularization.
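As a rough sketch of the mechanism, the group LASSO penalty treats all of the regression attentions belonging to one feature as a group and penalizes the sum of the groups' L2 norms, which pushes entire features towards zero importance. The snippet below applies the penalty directly to the fitted attentions as an illustrative simplification and may differ from the paper's exact implementation.

```python
import torch

def group_lasso_penalty(regression_attentions: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """Group LASSO sketch for a LocalGLMnet: `regression_attentions` holds beta_j(x_i)
    with shape (n_samples, n_features). Each feature's column is one group; penalizing
    the sum of column L2 norms drives whole columns (i.e. features) to zero."""
    group_norms = regression_attentions.pow(2).sum(dim=0).sqrt()   # one norm per feature
    return lam * group_norms.sum()
```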

The paper can be found here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3927187
