Peeking into the black box – an actuarial perspective.
This blog post is a companion piece to the paper of the same title (Kuo and Lupton 2020), which was recently accepted for publication in Variance. In the paper, we discuss in detail why actuaries need to be able to explain machine learning (ML) models. This post provides an accessible summary of the paper and a few code snippets to show how one can get started with model explanation techniques.
First, some background: why is it important to be able to explain ML models? There are a few obvious reasons:
Before we get started, there’s one other question we need to tackle: what does it mean for a model to be interpretable?
The answer really depends on context and audience. Often the inner workings of ML models are not mysterious: the algorithms that generate the answers are well understood, in much the same way that GLMs are considered well understood. Likewise, it’s worth noting that both GLMs and ML models can pose challenges for interpretation once they become complex.
What qualifies as interpretable, then, really depends on the person asking the questions, the model itself, and the kinds of questions being asked. A seasoned ML or statistics expert will have a very different sense of interpretability than a relative newcomer, simply by virtue of having a deeper understanding of the mechanics of the models in question.
That said, it is possible to sidestep this issue to some extent. The techniques in this paper focus on model-agnostic interpretation strategies, which allow the user to gain a rich understanding of a model’s effects without relying on deep knowledge of its mathematical properties or structure. This broad approach to interpretability lets those unfamiliar with a specific model understand how it generates predictions and whether those predictions are reasonable.
The techniques we introduce here will help to answer three main questions. We pose these questions in the context of ratemaking, though similar questions might be asked in other domains of application. There are many ways to answer these kinds of questions, as we discussed above, but this post will focus on a more limited toolbox of powerful techniques.
The questions and our proposed answers are as follows:
We emphasize that the techniques outlined here are not the only way to do things. In fact, they are the simplest starting points, each with certain drawbacks, and variations and improvements have been built upon them (see the paper for references). You can think of these as the OG approaches.
Let’s take a look at examples of these Q&As. The working example we consider is a personal auto pure premium model based on actual data from Brazil (more specifically, a neural network, though the model explanation techniques are agnostic with respect to the type of model). For those who would like to follow along with code, check out the GitHub repo.
The first technique, the variable importance plot, answers the question, “What are the most important predictors in the model?” or, put another way, which predictors contribute most to the accuracy of predictions?
One way to figure out how much a variable contributes to predictive accuracy would be to compare nested models that include and exclude the variable and check which model performs better on average. However, with complicated models and lots of data, this can be a computationally expensive process. Instead, variable importance plots use a clever workaround: rather than removing a variable entirely and re-fitting, we simply permute the values of the variable, i.e., we randomly rearrange the variable’s values and see how much the model’s predictive performance degrades. The larger the degradation, the more important the variable.
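To make the idea concrete, here is a minimal, model-agnostic sketch in Python (the companion repo may use different languages or tooling). It assumes a fitted model exposing a predict method, a pandas DataFrame of predictors X, a target y, and a loss function metric where lower is better; all of these names are placeholders for illustration.

```python
import numpy as np
import pandas as pd

def permutation_importance(model, X, y, metric, n_repeats=5, random_state=0):
    """Permutation variable importance for any model with a .predict() method.

    `metric(y_true, y_pred)` should be a loss where lower is better,
    e.g. mean squared error.
    """
    rng = np.random.default_rng(random_state)
    baseline = metric(y, model.predict(X))
    importances = {}
    for col in X.columns:
        losses = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Randomly rearrange this column, breaking its relationship with
            # the target while preserving its marginal distribution.
            X_perm[col] = rng.permutation(X_perm[col].values)
            losses.append(metric(y, model.predict(X_perm)))
        # A large increase in loss means the model was relying on this variable.
        importances[col] = np.mean(losses) - baseline
    return pd.Series(importances).sort_values(ascending=False)
```

Calling, say, permutation_importance(model, X_test, y_test, mean_squared_error) returns the variables ranked from most to least important, which is what the variable importance plot visualizes.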
Here, we see that Make contributes most to the accuracy of the model while Sex contributes the least.
The second technique, the partial dependence plot (PDP), answers the question, “How do changes to the inputs affect the output on average?”
For highly non-linear models, it can be important to verify that the relationship between a rating variable and pure premium (for instance) makes sense, and the PDP can be used to review those relationships. It works by considering the model’s prediction at a certain level of a variable, averaged over all other variables. For example, we might consider the average predicted pure premium at different vehicle ages, averaged over all other modeled variables. This yields a curve reflecting how the pure premium changes, on average, as vehicle age changes.
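Here is a minimal sketch of this averaging, again assuming only a generic model with a predict method and a pandas DataFrame X; the column name "vehicle_age" is used purely for illustration.

```python
import numpy as np
import pandas as pd

def partial_dependence(model, X, feature, grid=None):
    """One-dimensional partial dependence of the prediction on `feature`.

    For each grid value, every row of X has `feature` set to that value,
    and predictions are averaged over the observed values of all other
    variables.
    """
    if grid is None:
        grid = np.unique(X[feature])
    avg_preds = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value                                   # hold the feature fixed
        avg_preds.append(float(np.mean(model.predict(X_mod))))   # average over the rest
    return pd.DataFrame({feature: grid, "avg_prediction": avg_preds})

# For example, plotting partial_dependence(model, X_train, "vehicle_age")
# gives the average predicted pure premium by vehicle age.
```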
As the Vehicle Age variable increases, we expect that, on average, our model will predict a lower loss cost. It is worth examining the distribution of the variable alongside its PDP, as unexpected behavior can sometimes appear near values where the data are thin.
The third technique answers a common question: which factors contribute most to the rate a policyholder is being charged? For example, a young driver might get a higher auto insurance premium, but the car on the policy might be associated with a comparatively low pure premium. How do these factors combine to generate the policyholder’s rate?
This technique can be used to verify that predictions make sense, and it can be used to investigate problematic predictions. Technically, it works by looking at how the model prediction changes as each variable is introduced: we start with the average prediction (i.e., the model intercept) and then track how the prediction changes as each variable is moved from its average, or expected, value to the specific value for a given policyholder.
The verbal description is a little confusing, but it is easy to understand with an example:
In this case, starting with a model average prediction of R$735 across the entire dataset, our policy has an expected decrease in loss cost of R$141 due to the vehicle being a Ford, an expected increase in loss cost of R$120 due to the vehicle being relatively new (vehicle_age == 1), and so on, until we arrive at the predicted loss cost of R$847.
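A simplified sketch of this sequential attribution is below, assuming the same generic model and pandas DataFrame X as in the earlier snippets (the paper and companion repo may rely on dedicated packages). Note that the attributions generally depend on the order in which the variables are introduced.

```python
import pandas as pd

def breakdown(model, X, observation, order=None):
    """Sequential attribution of a single prediction.

    Starting from the dataset-average prediction, variables are switched
    one at a time from their observed values in X to the policyholder's
    values in `observation`; the resulting change in the average
    prediction at each step is attributed to that variable.
    """
    if order is None:
        order = list(X.columns)
    X_current = X.copy()
    prev_mean = float(model.predict(X_current).mean())  # the "intercept"
    steps = {"intercept": prev_mean}
    for col in order:
        # Fix this variable at the policyholder's value for every row.
        X_current[col] = observation[col]
        new_mean = float(model.predict(X_current).mean())
        steps[col] = new_mean - prev_mean               # contribution of this variable
        prev_mean = new_mean
    steps["prediction"] = prev_mean
    return pd.Series(steps)

# For example, breakdown(model, X_train, X_train.iloc[0]) returns the intercept,
# one contribution per variable, and the final prediction for that policy.
```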
In this post, we explored three powerful, model-agnostic techniques for understanding, troubleshooting, and communicating model results. It is worth noting that the techniques explored here are just one flavor of a broader set of available techniques, and the paper provides references to alternatives.
Finally, we would like to point out a recent tutorial (Lorentzen and Mayer 2020) made available by others, and encourage academics and practitioners alike to explore this very interesting topic!