SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. While SHAP can explain the output of any machine learning model, Lundberg and his collaborators have developed a high-speed exact algorithm, Tree SHAP, for tree ensemble methods.
The Tree SHAP Random Forest (Regression) Predictor is a drop-in replacement for the Random Forest Predictor (Regression): simply swap every Random Forest Predictor (Regression) node for this one to get started. If you are using a different tree-based method, consider the other nodes in this package.
The beauty of SHAP values lies in their intuitive interpretation. Every model has an expected output: the average prediction. The model's prediction for a data row is this expected output plus the sum of that row's SHAP values. This yields intuitive explanations; in a customer revenue forecast, for example: "The customer's high interaction with the product website over the last three months contributed 1,200 euros to the predicted revenue for next year."
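The additivity described above (prediction = expected output + sum of SHAP values) can be illustrated with a minimal sketch. The snippet below computes exact Shapley values for a hypothetical three-feature model by brute-force subset enumeration; the model, background data, and feature values are invented for illustration, and this is not the fast Tree SHAP algorithm the node uses.

```python
import itertools
import math

# Hypothetical background dataset (used to estimate the expected output).
background = [
    [0.0, 0.0, 0.0],
    [1.0, 2.0, 0.0],
    [2.0, 1.0, 1.0],
    [3.0, 3.0, 1.0],
]

def model(x):
    # Hypothetical regression model: linear terms plus one interaction.
    return 2.0 * x[0] + 1.5 * x[1] + 4.0 * x[0] * x[2]

def value(S, x):
    # Value of coalition S: average model output with features in S fixed
    # to the explained row x and the remaining features drawn from the
    # background data.
    total = 0.0
    for b in background:
        z = [x[i] if i in S else b[i] for i in range(len(x))]
        total += model(z)
    return total / len(background)

def shapley_values(x):
    # Exact Shapley values via enumeration of all feature subsets.
    # Exponential in the number of features -- fine for 3, not for many.
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(set(S) | {i}, x) - value(set(S), x))
    return phi

x = [2.0, 1.0, 1.0]
expected = value(set(), x)   # the model's average prediction
phi = shapley_values(x)

# Additivity: the prediction equals the expected output plus the SHAP values.
assert abs(model(x) - (expected + sum(phi))) < 1e-9
```

Each SHAP value in phi is one feature's contribution, in the units of the target, to moving the prediction away from the average prediction.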
If you need help integrating explainable machine learning methods in your company, please contact me at email@example.com.
All credit for the original research and the C++ and Python implementations goes to Lundberg and his collaborators.