Explaining the Credit Scoring Model with Model Interpretability Components
This workflow demonstrates how to use the verified components developed for interpreting machine learning models.
In the example, the Credit Scoring data set is partitioned into training and test samples. A black-box model (Neural Network) is then trained on the training data, with standard pre-processing applied, using the AutoML component. The resulting Workflow Object, which captures both the pre-processing and the model, serves as one of the inputs to the model interpretability components.
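For readers outside KNIME, the same pipeline can be approximated in plain Python. The snippet below is a minimal sketch assuming scikit-learn: the synthetic data is a hypothetical stand-in for the Credit Scoring data set, and `MLPClassifier` stands in for the neural network that the AutoML component would select.

```python
# A minimal sketch of the analogous pipeline, assuming scikit-learn.
# The synthetic data is a stand-in for the Credit Scoring data set;
# MLPClassifier stands in for the AutoML-selected neural network.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Partition the data into training and test samples.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# A single object capturing pre-processing and model, roughly mirroring
# the role of the Workflow Object in the KNIME workflow.
model = Pipeline([
    ("scale", StandardScaler()),
    ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])
model.fit(X_train, y_train)
```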
The components support both global explainability techniques (Global Surrogates, Permutation Feature Importance, Partial Dependence Plot) and local ones (Counterfactual Explanations, Local Surrogates, ICE, SHAP); one of the global techniques is sketched below.
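As an illustration of one global technique, the hedged sketch below computes permutation feature importance on the fitted pipeline. It continues from the previous snippet, reusing its `model`, `X_test`, and `y_test`.

```python
# Permutation feature importance on the fitted pipeline from the
# previous snippet (reuses model, X_test, y_test).
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by the mean drop in score when each one is shuffled.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```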