As an actuary and researcher working at the intersection of actuarial science and artificial intelligence, I’ve been exploring how we can enhance our traditional actuarial toolkit with modern deep learning approaches while maintaining our professional standards and ethical principles. Let me share my current thinking on this evolution of our profession.
The Fusion of Classical and Modern Methods
A key insight that has emerged from my research is the distinction between general AI applications, like large language models, and specific AI tools designed for core actuarial tasks. While LLMs generate significant buzz, I believe the more profound impact on actuarial work in the short and medium term will come from specialized deep learning models adapted for insurance applications.
The LocalGLMnet architecture (Richman & Wüthrich, 2023) represents one approach to creating inherently interpretable neural networks. By retaining the additive structure of a GLM while allowing the coefficients to vary smoothly with the inputs, this architecture provides both interpretability and strong predictive performance. The LocalGLMnet is thus one example of how deep learning can be specialized for actuarial purposes.
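The core idea can be sketched in a few lines of NumPy: a small network maps the feature vector x to feature-specific coefficients beta(x), which are then combined with x exactly as in a GLM with a log link. All weights, dimensions, and function names below are illustrative placeholders, not the fitted architecture from the paper.

```python
import numpy as np

def localglmnet_predict(x, hidden_w, hidden_b, out_w, out_b, beta0):
    """Sketch of a LocalGLMnet-style prediction for one feature vector x.

    A small network maps x to feature-specific coefficients beta(x);
    the prediction then takes the familiar GLM form
    mu = exp(beta0 + sum_j beta_j(x) * x_j) for a log link.
    """
    h = np.tanh(hidden_w @ x + hidden_b)   # hidden layer producing a representation of x
    beta = out_w @ h + out_b               # regression-attention coefficients beta(x)
    return np.exp(beta0 + beta @ x)        # GLM skeleton with log link

# Illustrative (untrained) weights: 3 features, 5 hidden units
rng = np.random.default_rng(0)
p, m = 3, 5
x = np.array([0.5, -1.0, 2.0])
mu = localglmnet_predict(
    x,
    hidden_w=rng.normal(size=(m, p)), hidden_b=np.zeros(m),
    out_w=rng.normal(size=(p, m)), out_b=np.zeros(p),
    beta0=-2.0,
)
```

Because the prediction keeps the GLM skeleton, the fitted beta(x) values can be inspected directly, which is what makes the approach inherently interpretable: a beta_j(x) near zero everywhere signals that feature j can be dropped.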
The Canadian Institute of Actuaries’ research on mortality table construction (Goulet et al., 2022) provides a fascinating case study of this evolution. When comparing Whittaker-Henderson graduation to neural network approaches, we see that deep learning models can capture more complex relationships while still maintaining the essential characteristics that actuaries expect from mortality tables. The key insight is that we don’t need to choose between traditional actuarial principles and modern methodologies – we can synthesize both approaches to create more powerful and reliable models. This is the approach we took in a recent talk at ASSA’s 2024 Convention – deck attached below.
The core innovation we presented was the incorporation of Whittaker-Henderson (WH) smoothing directly into a neural network architecture through the loss function. This creates what could be called a “smoothness-aware” deep learning model that respects classical actuarial principles while leveraging modern computational capabilities.
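As a rough sketch of the idea (not the exact architecture from the talk), the training loss can combine a Poisson deviance term with a Whittaker-Henderson-style roughness penalty on the fitted mortality curve; the penalty weight lam and all data below are made up for illustration.

```python
import numpy as np

def wh_penalized_loss(rates_fitted, deaths, exposure, lam, order=2):
    """Poisson loss plus a Whittaker-Henderson-style smoothness penalty.

    Sketch of how WH smoothing can enter a neural network's loss: the
    usual Poisson negative log-likelihood is augmented with lam times
    the sum of squared order-th finite differences of the fitted
    mortality rates, so rough fitted curves are penalised in training.
    """
    mu = exposure * rates_fitted
    poisson = np.sum(mu - deaths * np.log(mu))            # Poisson NLL up to a constant
    penalty = np.sum(np.diff(rates_fitted, n=order) ** 2)  # WH roughness term
    return poisson + lam * penalty

# Illustrative mortality data for ages 60-79 (made up for the sketch)
ages = np.arange(60, 80)
rates = 0.01 * np.exp(0.09 * (ages - 60))       # smooth Gompertz-like curve
rough = rates * (1 + 0.2 * np.cos(ages))        # the same curve with wiggles
exposure = np.full(ages.size, 1000.0)
deaths = exposure * rates

# With lam > 0, the wiggly curve incurs a strictly higher loss than the
# smooth one, steering gradient descent toward graduated mortality rates.
smooth_loss = wh_penalized_loss(rates, deaths, exposure, lam=1e5)
rough_loss = wh_penalized_loss(rough, deaths, exposure, lam=1e5)
```

The design choice here mirrors classical graduation: the actuary still selects the smoothing parameter lam and the difference order, so actuarial judgment enters the model through the loss function rather than being replaced by it.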
This work represents a broader paradigm shift in actuarial science – one where we don’t simply replace traditional methods with black-box deep learning, but rather thoughtfully integrate classical actuarial principles into modern architectures. The presentation shows how this can be done while maintaining:
- Professional standards for model interpretability
- Actuarial judgment in parameter selection
- Smooth and credible mortality patterns
- Transfer learning capabilities across populations
Integrating classical principles in this way yields models that are both more powerful and better aligned with actuarial professional standards, and it provides a template for similar syntheses in other areas of actuarial work.
Classical actuarial principles, like credibility theory, shouldn’t be discarded as we adopt modern methods. Instead, in our recent work on the Credibility Transformer (Richman, Scognamiglio & Wüthrich, 2024), we show how traditional actuarial concepts can be integrated into state-of-the-art deep learning architectures, enhancing their performance and interpretability. This synthesis of old and new represents an exciting path forward for our profession.
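For readers less familiar with credibility theory, the classical Bühlmann formula that this line of work draws on blends individual and collective experience with a weight Z = n / (n + k). The sketch below shows only this classical blend, with illustrative numbers, not the Credibility Transformer architecture itself.

```python
def buhlmann_credibility(individual_mean, collective_mean, n, k):
    """Classical Buhlmann credibility blend: Z * own experience + (1 - Z) * portfolio.

    Z = n / (n + k), where n is the volume of individual experience and
    k is the credibility constant (the ratio of within-risk variance to
    between-risk variance). All inputs here are illustrative only.
    """
    z = n / (n + k)
    return z * individual_mean + (1 - z) * collective_mean, z

# A risk with claim frequency 0.12 over n = 400 exposures, against a
# portfolio mean of 0.08 and credibility constant k = 100:
estimate, z = buhlmann_credibility(
    individual_mean=0.12, collective_mean=0.08, n=400, k=100,
)
# z = 400 / 500 = 0.8, so the blended estimate is 0.8*0.12 + 0.2*0.08 = 0.112
```

The appeal of embedding this idea in a transformer is the same as in classical ratemaking: sparse individual experience is shrunk toward the collective, which regularizes the model where data are thin.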
Professional Considerations and Challenges
However, the adoption of AI methods raises important professional considerations. In recent work with Roseanne Harris and Mario Wüthrich (Harris et al., 2024), we examine how actuaries can embrace AI tools while remaining committed to professional and ethical principles that have long distinguished our field. Key requirements include:
- Ensuring model understandability and interpretability
- Avoiding bias and discrimination in model outputs
- Maintaining strong governance frameworks
- Adapting professional education to cover new methodologies
- Exercising appropriate professional judgment
The actuarial profession is at an inflection point. As discussed in my recent essay (Richman, 2024), choices made about embracing AI and ML in the next few years will determine if we thrive or merely survive in the age of AI. The promising results from applying neural networks to mortality modeling in the Canadian Institute of Actuaries’ research (Goulet et al., 2022), where I contributed to developing new approaches, show how we can adapt powerful tools to meet actuarial standards of practice.
The AI-Enhanced Actuary
Looking ahead, I envision what I call the “AI-enhanced actuary” – a professional who leverages both classical actuarial expertise and AI capabilities to:
- Build more accurate and efficient models
- Incorporate new data sources
- Automate routine tasks
- Focus on high-level strategic decisions
- Ensure ethical implementation of AI systems
This evolution represents a natural progression that builds upon our foundation of mathematical and statistical techniques while embracing new methodological advances. The integration of AI into actuarial practice creates opportunities for innovation while maintaining our core professional values.
Meeting Professional Standards
A critical aspect of this evolution is ensuring that new methods comply with professional standards. Recent work has shown how we can adapt deep learning approaches to meet key requirements:
- Model understanding through inherently interpretable architectures
- Prevention of unwanted bias through specialized constraints
- Uncertainty quantification through modern techniques
- Reproducibility through appropriate model design
The Future Path
The actuarial profession has always evolved with new methodological developments. The current AI revolution offers us the chance to enhance our capabilities while remaining true to our professional principles. The key will be thoughtfully embracing these new tools, ensuring they serve our ultimate goal of managing risk and uncertainty for the benefit of society.
Sources
Harris, R., Richman, R., & Wüthrich, M. V. (2024). Reflections on deep learning and the actuarial profession(al). SSRN.
Goulet, S., Balona, C., Richman, R., & Bennet, S. (2022). Canadian mortality table construction alternative methods: Generalized additive model and neural network model. Canadian Institute of Actuaries.
Richman, R. (2024). An AI vision for the actuarial profession. Casualty Actuarial Society E-Forum.
Richman, R., & Wüthrich, M. V. (2023). LocalGLMnet: Interpretable deep learning for tabular data. Scandinavian Actuarial Journal, 2023(1), 71–95.
Richman, R., Scognamiglio, S., & Wüthrich, M. V. (2024). The credibility transformer. arXiv.