Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles

Today Michael Merz, Andreas Tsanakas, Mario Wüthrich and I released a new paper introducing a novel deep learning interpretability technique. The paper can be found on SSRN.

Traditional model interpretability techniques usually operate either at a global level or at the level of individual instances. By contrast, the new technique, which we call Marginal Attribution by Conditioning on Quantiles (MACQ), examines the contributions of variables at each quantile of the model response. This provides significant insight into variable importance and into the relationships between the model's inputs and its predictions. The image above illustrates the output of the MACQ method on the Bike Sharing dataset.
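To give a flavour of the idea, here is a minimal sketch, not the authors' implementation: it uses a toy two-variable model, approximates first-order marginal attributions (sensitivity times input) with finite differences, and then averages those attributions within quantile bands of the model output. All function names, the toy model, and the finite-difference approximation are illustrative assumptions; see the paper for the actual method.

```python
import random

def model(x):
    # Toy "fitted model": nonlinear in x1, linear in x2 (illustrative only).
    return x[0] ** 2 + 0.5 * x[1]

def marginal_attribution(f, x, eps=1e-4):
    # Finite-difference approximation of each partial derivative,
    # multiplied by the input value: a first-order marginal contribution.
    base = f(x)
    attrs = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        grad = (f(xp) - base) / eps
        attrs.append(grad * x[j])
    return attrs

random.seed(0)
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000)]
preds = [model(x) for x in data]

# Condition on quantiles of the response: order instances by prediction
# and split them into equally sized quantile bands.
order = sorted(range(len(data)), key=lambda i: preds[i])
n_bands = 4
bands = [order[k * len(order) // n_bands:(k + 1) * len(order) // n_bands]
         for k in range(n_bands)]

# Average the marginal attributions within each quantile band.
band_means = []
for band in bands:
    avg = [0.0, 0.0]
    for i in band:
        a = marginal_attribution(model, data[i])
        avg = [s + v / len(band) for s, v in zip(avg, a)]
    band_means.append(avg)

for k, avg in enumerate(band_means):
    print(f"quantile band {k}: mean attribution x1={avg[0]:.3f}, x2={avg[1]:.3f}")
```

Reading the per-band averages shows how the drivers of the prediction change across the response distribution: in this toy example, the quadratic variable dominates in the highest quantile band, which is the kind of quantile-dependent variable importance the method is designed to surface.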
