A Paradigm Shift in Prediction: Why Actuarial Science Must Redefine Its Understanding of AI

While foundational tools like GLMs are powerful, classifying them as modern AI muddies regulatory waters and blinds us to the representation-learning revolution actively reshaping our world – a revolution characterized not by incremental improvement but by a qualitative transformation in how machines perceive, reason about, and act on reality.

The taxonomy of computational intelligence has sparked a vigorous and necessary debate within the actuarial profession as well as on my LinkedIn feed! This is far more than a squabble over semantics. The words we choose – and the mental models they represent – have profound implications for governance, investment strategy, the evolution of our methods, and our very ability to navigate a world being remade by a new class of technology.

To truly grasp the stakes, I suggest we look beyond our own field to the history of science itself, and specifically to the work of the philosopher Thomas Kuhn.

Understanding Scientific Revolutions: A Kuhnian Primer

For the uninitiated, Thomas Kuhn’s 1962 book, The Structure of Scientific Revolutions, transformed our understanding of how science progresses. He argued that science doesn’t always advance in a smooth, linear fashion. Instead, it moves through long periods of stability punctuated by radical shifts.

  • Normal Science: This is the day-to-day work. Scientists operate within an established paradigm—a shared set of assumptions, theories, and methods (like Newtonian physics or, in our world, traditional actuarial modeling). Their work focuses on “puzzle-solving” within the rules of that paradigm.
  • Anomalies & Crisis: Over time, results emerge that the existing paradigm cannot explain. These are anomalies. As anomalies accumulate, confidence in the old paradigm wavers, leading to a period of crisis.
  • Paradigm Shift: A new paradigm emerges that can explain the anomalies, offering a fundamentally new way of seeing the world. This paradigm shift is a revolution. The new paradigm isn’t just a better version of the old one; it’s often incommensurable – meaning the two are so different in their core assumptions that they can’t be judged by the same standards. Think of the shift from the Earth-centric astronomy of Ptolemy to the Sun-centric model of Copernicus. Astronomers in the new paradigm were not just getting better answers; they were asking different questions.

Today, actuarial science is in the midst of its own Kuhnian crisis. Our “normal science” of statistical modeling is being challenged by anomalies – models that can “see” images, “read” text, and discover patterns of a complexity we’ve never seen before. I certainly wasn’t aware of models that could do this during my early studies! The “facts on the ground” show that Deep Learning and Large Language Models (LLMs) can simply do different things than prior generations of models. This is our paradigm shift.

The Spectrum of Professional Perspectives

This tension is vividly reflected in a dialogue on an earlier LinkedIn post and blog, which has surfaced diverse viewpoints that illuminate the different facets of this challenge:

  • Academic Rigor: Max Martinelli (in private communications) grounds his perspective in foundational texts, noting that Russell & Norvig’s definition encompasses any system that “receives percepts from the environment and performs actions.” From this vantage point, even simple linear models qualify as AI. Yet he rightly acknowledges the nuance: the manual, iterative feature engineering common in traditional ratemaking “seems to violate the spirit of the definition.”
  • Pragmatic Focus: Chris Dolman questions the operational import of these distinctions beyond “marketing or communication problems.” His pragmatic lens rightly focuses on describing a model’s capability rather than debating its categorical boundary.
  • Industry Reality: Andrew Morgan and Gabriel Ryan highlight the tangible problem of “AI washing”—when traditional techniques are marketed as revolutionary, leading to resource misallocation and unfulfilled expectations.
  • Causal Complexity: Professor Fei Huang reminds us that even models aimed at “explaining” do not automatically establish causation, adding a crucial layer of consideration for a profession built on understanding risk drivers.
  • Semantic Evolution: Arthur da Silva captures the temporal nature of these terms by citing Kevin Weil: “AI is whatever hasn’t been done yet… once it’s been done and kind of works, then you call it machine learning.”

The New Paradigm: Representation Learning is the Key Ingredient

My core view is that the revolutionary ingredient fueling this paradigm shift is representation learning. I have been espousing this view since my early paper “AI in Actuarial Science” and built it out in “Believing the Bot”.

This is the ability of a model to ingest raw, unstructured data and autonomously learn its own features, or “representations,” of reality. This marks a fundamental departure from the old paradigm.

Consider this concrete actuarial example:

  • In the old paradigm (GLM), to model auto risk, an actuary must manually hypothesize and engineer features. We must explicitly tell the model to consider “Driver Age,” “Vehicle Type,” and perhaps create an interaction term for “Territory x Vehicle Age,” because we suspect these variables relate in a specific way. The model’s intelligence is constrained by our own imagination.
  • In the new paradigm (Deep Learning), we can feed a neural network raw telematics data – GPS streams, accelerometer readings, gyroscope data. The model itself might discover a complex, non-linear relationship between subtle braking patterns, the sharpness of turns, and time of day that no human would have hypothesized. It learns the features, creating its own high-dimensional representation of “risky driving.”
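To make the contrast concrete, here is a minimal, self-contained sketch of the two workflows in Python. It uses only NumPy, random data, and untrained weights – an illustration of where the features come from in each paradigm, not a production model:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Old paradigm: features are hand-engineered before modelling ---
driver_age = rng.uniform(18, 80, size=500)
vehicle_age = rng.uniform(0, 15, size=500)
territory = rng.integers(0, 3, size=500)

# The actuary explicitly encodes the hypothesised structure, including
# the interaction term, into the design matrix the GLM will see.
X_glm = np.column_stack([
    driver_age,
    vehicle_age,
    territory,
    territory * vehicle_age,  # hand-crafted "Territory x Vehicle Age" interaction
])

# --- New paradigm: the model learns its own features ---
# Raw "telematics" signal: 500 trips x 60 time steps of accelerometer readings.
raw_telematics = rng.normal(size=(500, 60))

# One hidden layer of a neural network. After training, each row of
# `hidden` would be a learned representation of the trip -- features no
# actuary specified in advance. (Weights are random here: this sketches
# the architecture, not a trained model.)
W1 = rng.normal(size=(60, 8)) * 0.1
hidden = np.maximum(0.0, raw_telematics @ W1)  # ReLU activations

print(X_glm.shape)   # (500, 4): four features chosen by the actuary
print(hidden.shape)  # (500, 8): eight features learned by the network
```

The point of the sketch: in the first block every column of the design matrix exists because a human hypothesised it; in the second, the columns of `hidden` are whatever the optimisation process finds useful.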

This is not merely an improvement in accuracy – which deep learning models achieve in many actuarial domains – it is a change in the very nature of discovery and modelling! This shift from hand-crafted features to learned representations is, in my view, the dividing line between traditional ML and the modern AI systems driving the current revolution.

Three Distinct Epochs of Algorithmic Intelligence

This paradigm shift becomes even clearer when we view computational history through three distinct eras:

1. The Calculation Era (Pre-2012)

This era was defined by models executing mathematical operations on human-engineered features. Intelligence meant following sophisticated but predetermined rules, a paradigm exemplified by GLMs.

2. The Perception Era (2012-Present)

This phase was sparked by deep learning, which enabled direct learning from raw sensory data. For the first time, models could develop visual understanding (CNNs) and process sequential patterns (RNNs), constructing their own internal representations of reality.

3. The Reasoning & Generative Era (2018-Present)

The current era, supercharged by Transformer architectures, has unlocked emergent synthesis and generation capabilities. Transformers move beyond being mere models; they are engines for computation over anything that can be embedded. By converting text, images, molecules, or user actions into a unified mathematical space (an embedding space), they can reason across domains in ways previously unimaginable. This is a shift from domain-specific analysis to a universal computational framework.
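A toy sketch of this "unified mathematical space" idea, with hand-set three-dimensional vectors standing in for what a trained Transformer encoder would actually produce (the item names and vector values are purely illustrative):

```python
import numpy as np

# In a real system these vectors would come from a trained encoder;
# here they are hand-set so the example is self-contained.
embeddings = {
    "claim_note:rear-end collision": np.array([0.9, 0.1, 0.0]),
    "photo:damaged bumper":          np.array([0.8, 0.2, 0.1]),
    "claim_note:hail damage":        np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: how close two items sit in the embedding space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Items from different modalities (text vs. image) can be compared
# directly because they live in the same vector space.
a = embeddings["claim_note:rear-end collision"]
b = embeddings["photo:damaged bumper"]
c = embeddings["claim_note:hail damage"]

print(cosine(a, b) > cosine(a, c))  # True: the related note and photo sit closer
```

The design point is that once everything is a vector in one space, a single similarity operation works across text, images, and any other embedded data type.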

Consequences of the Shift: From Operations to Philosophy

Recognizing this as a true paradigm shift has profound consequences.

Governance: From Regulatory Clarity to Philosophical Challenge

On a practical level, classifying every GLM as AI renders new regulations meaningless through over-inclusion. But the challenge runs deeper. The new paradigm forces us to govern systems whose decision-making processes transcend human interpretability. This opacity is not merely technical; it is philosophical. How do we, as fiduciaries, maintain responsibility over systems whose reasoning we cannot fully comprehend? This is a question the old paradigm never had to ask.

Methodology: From Deductive to Inductive Science

The old paradigm was largely deductive. We started with a human-generated hypothesis (“I believe young drivers in sports cars are riskier”) and used a model to test it. The new paradigm is powerfully inductive. It sifts through vast datasets to discover complex correlative patterns that can inspire new causal investigations. This fundamentally changes the actuarial epistemology – our theory of how we come to know things about risk.

Strategy: Avoiding the Next AI Winter

Clear definitions are vital for credible strategy. The history of this field is littered with “AI winters” – periods of disillusionment and funding collapse caused by capabilities failing to match hype. Lumping GLMs and GPT-4/o3-pro under the same “AI” umbrella invites this same disillusionment. Precise language prevents resource misallocation and builds sustainable, realistic roadmaps for innovation. For instance, while a GLM is king for regulatory rate filing in some jurisdictions, GBMs and Deep Learning models are often superior choices for internal applications like fraud detection or marketing optimization, or for pricing where a pure performance lift is the goal and a formal filing is not required.

The Path Forward: A New Toolkit for a New Era

Navigating this new world requires an updated strategy.

A Practical Taxonomy for Actuarial Practice

  • Traditional Statistical Models (e.g., GLMs): rely on hand-engineered features and human hypotheses; no representation learning. Indispensable for regulatory rate filing and explanatory work.
  • Traditional Machine Learning (e.g., GBMs): more flexible, but still operates on features the actuary supplies. Strong choices for internal applications such as fraud detection or marketing optimization.
  • Modern AI – Deep Learning (e.g., CNNs, RNNs): learns its own representations directly from raw data such as telematics streams, images, and text.
  • Foundation Models (e.g., Transformers, LLMs): general-purpose, pre-trained systems that embed many data types into a shared space and are adapted through fine-tuning and prompting rather than built from scratch.

Strategic Recommendations

  1. Audit Your Model Portfolio. Classify systems based on whether they learn representations autonomously. This will clarify your true AI governance surface area.
  2. Embrace a New Validation Mindset. Recognize that new models are incommensurable with old ones. Judging a Transformer on its inferential p-values is like judging a car on how well it eats hay. We need new metrics focused on out-of-sample performance, robustness, and the impact of emergent behaviors.
  3. Govern the Ethics of Discovery. The inductive power of AI will uncover patterns that are predictive but may be ethically problematic or based on unfair proxies. We must proactively build ethical frameworks to govern not just our models, but the patterns they discover.
  4. Prepare for Foundation Models. The next shift is from task-specific models to general-purpose foundation models. Actuaries will increasingly fine-tune these massive pre-trained systems rather than building models from scratch. This requires new skills in prompt engineering, model adaptation, and API integration.
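As an illustration of recommendation 1, a first-pass portfolio audit can be as simple as tagging each model by whether it learns its own representations. The model names and the `learns_representations` field below are hypothetical:

```python
# Hypothetical model inventory: the names and tags are illustrative,
# not a real portfolio.
portfolio = [
    {"name": "motor_rate_glm",    "family": "GLM",           "learns_representations": False},
    {"name": "fraud_gbm",         "family": "GBM",           "learns_representations": False},
    {"name": "telematics_cnn",    "family": "Deep Learning", "learns_representations": True},
    {"name": "claims_triage_llm", "family": "LLM",           "learns_representations": True},
]

# Models that learn representations autonomously define the portfolio's
# true AI governance surface area; the rest fall under traditional
# model-risk management.
ai_governance_surface = [m["name"] for m in portfolio if m["learns_representations"]]
traditional_models    = [m["name"] for m in portfolio if not m["learns_representations"]]

print(ai_governance_surface)  # ['telematics_cnn', 'claims_triage_llm']
print(traditional_models)     # ['motor_rate_glm', 'fraud_gbm']
```

Even a crude split like this makes the governance question tractable: heightened oversight goes where representations are learned, not wherever the word "model" appears.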

Conclusion: Intellectual Honesty in Revolutionary Times

The vigor of this debate confirms we are at a critical juncture. Academic definitions that place GLMs within AI’s historical lineage are valid. Yet to ignore the clear, qualitative chasm between a system that calculates based on our instructions and one that learns to perceive the world on its own is to miss the revolution entirely.

Our task is not to discard the old paradigm – GLMs remain indispensable, elegant tools for the right problems. Rather, our duty is to name the new paradigm accurately, to understand its profound and sometimes unsettling implications, and to build the intellectual and ethical frameworks required to wield its power responsibly. It is time to see the world through the new lens.

A sincere thank you to Max Martinelli, Professor Fei Huang, Chris Dolman, Andrew Morgan, Gabriel Ryan, Davide Radwan, and Arthur da Silva for enriching this crucial discourse. Your perspectives strengthen our collective understanding as we navigate these transformative times.

#ActuarialScience #MachineLearning #AI #DeepLearning #ScientificRevolutions #insureAI #RepresentationLearning #ParadigmShift
