An increasing number of decisions in our society rely on predictions from complex machine learning models. The importance and complexity of these models drive a need to understand and explain their predictions. The Shapley value is a game-theoretic concept that can be used to explain such individual predictions. The Shapley value framework has a series of desirable theoretical properties and can, in principle, handle any predictive model. In this talk I will provide an introduction to the Shapley value framework for prediction explanation, covering both theoretical and practical aspects. I will highlight two main challenges with the framework and outline approaches to address them. If time allows, I will also showcase how the R package shapr can be used to explain predictive models with Shapley values.
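
For reference, the standard Shapley value formula underlying this framework can be written as follows (the notation is a common convention and not taken from the abstract itself): with $M$ features indexed by $\mathcal{M} = \{1, \dots, M\}$ and a contribution function $v(S)$ giving the value of a feature subset $S$, the Shapley value of feature $i$ is

```latex
\phi_i = \sum_{S \subseteq \mathcal{M} \setminus \{i\}}
  \frac{|S|! \, (M - |S| - 1)!}{M!}
  \left( v(S \cup \{i\}) - v(S) \right),
```

i.e., a weighted average of feature $i$'s marginal contributions over all subsets of the remaining features. In the prediction-explanation setting, $v(S)$ is typically taken to be the expected model prediction conditional on the features in $S$.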