Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

Abstract

We want to explain individual predictions from machine learning models by learning simple, interpretable explanations. The Shapley value is a game-theoretic concept that can be used for this purpose. The Shapley value framework has a series of desirable theoretical properties and can, in principle, handle any predictive model. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like several other existing methods, this approach assumes that the features are independent. When features are in fact dependent, the resulting explanations may be very misleading. We extend the Kernel SHAP method to handle dependent features.
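As a minimal sketch of the setting the abstract describes, the example below runs the standard, independence-based Kernel SHAP approximation (via the Python `shap` package's `KernelExplainer`) on toy data with correlated features. The data, model, and parameter choices are illustrative assumptions, not the paper's experiments; the paper's dependence-aware extension itself is implemented in the R package `shapr`.

```python
# Illustrative sketch: standard Kernel SHAP on correlated features.
# All data and model choices below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data with strongly correlated features -- the case where the
# independence assumption of standard Kernel SHAP is violated.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(
    mean=np.zeros(3),
    cov=[[1.0, 0.7, 0.3],
         [0.7, 1.0, 0.5],
         [0.3, 0.5, 1.0]],
    size=500,
)
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# KernelExplainer fills in "absent" features by sampling from the
# background data as if features were independent -- exactly the
# assumption the paper relaxes.
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 100))
phi = explainer.shap_values(X[:5])
print(phi)  # one Shapley value per feature for each explained instance
```

On data like this, the independence-based attributions can be misleading because the sampled feature combinations may lie far from the data distribution; correcting for this is the contribution of the paper.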

Date: Aug 24, 2021
Location: Online