This workflow demonstrates the use of verified components developed to interpret machine learning models. In the example we train a credit scoring model (a neural network). The Workflow Object capturing the pre-processing and the model is provided as one of the inputs to the model interpretability components. The components apply both global (Global Surrogates, Permutation Feature Importance, Partial Dependence Plot) and local (Counterfactual Explanations, Local Surrogates, ICE, SHAP) explainability (XAI) techniques, and also compute fairness measures (demographic parity, equality of opportunity, and equalized odds).
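To give a flavor of two of the techniques named above, here is a minimal Python sketch computing permutation feature importance and the demographic parity difference for a neural-network classifier. It uses a synthetic credit-scoring dataset with made-up feature names (income, debt, and a binary protected attribute); the data, model settings, and threshold are illustrative assumptions, not taken from the workflow itself.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-scoring table (hypothetical features).
rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 8, n)
sex = rng.integers(0, 2, n)  # protected attribute, encoded 0/1
# Creditworthiness depends on income and debt only, not on the protected attribute.
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)
X = np.column_stack([income, debt, sex])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

# Permutation feature importance: the drop in test score
# when a single feature column is randomly shuffled.
r = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt", "sex"], r.importances_mean):
    print(f"{name}: {imp:.3f}")

# Demographic parity difference: the gap in positive-prediction
# rates between the two groups of the protected attribute.
pred = clf.predict(X_te)
g = X_te[:, 2]
dp_diff = abs(pred[g == 0].mean() - pred[g == 1].mean())
print("demographic parity difference:", round(dp_diff, 3))
```

Because the label here ignores the protected attribute, shuffling it should barely move the score, while shuffling income or debt should hurt noticeably; the demographic parity difference should likewise stay small.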
Created with KNIME Analytics Platform version 4.5.1