To decipher the decision-making process of a black-box model, you can use the eXplainable Artificial Intelligence (XAI) View. The view works for machine learning classifiers with binary and multiclass targets. The component generates an interactive dashboard visualizing explanations for a set of instances you provide, as well as other charts and Machine Learning Interpretability (MLI) techniques. The component computes SHAP explanations, a Partial Dependence Plot (PDP), Individual Conditional Expectation (ICE) curves, and a surrogate decision tree view.
- SHAP values explain a prediction by computing the contribution of each feature to it. The SHAP values of a single prediction add up to the difference between that prediction and the average prediction over the provided sample dataset. Each explanation in the view is represented as a bubble, and the aggregated distribution of multiple explanation values as a violin plot.
- A Partial Dependence Plot (PDP) shows the relationship between the target and a single feature as a filled area in a Cartesian graph. Individual Conditional Expectation (ICE) curves in the PDP show how a single prediction reacts when that feature is changed.
- The Surrogate Decision Tree View is the result of overfitting a decision tree on the predictions of the original model instead of the actual ground-truth target. By committing the same mistakes as the original model, the tree explains the black-box model as a hierarchical decision process.
The dashboard is interactive: select explanation bubbles to see the same predictions highlighted in the other views. If the component is used as a nested component, you can also add charts to visualize its output in other ways. The user needs to provide a sample of the dataset used to train the model, the model itself, and a set of instances (rows) from the test set.
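The surrogate-tree idea above can be sketched outside of KNIME as well. The following is a minimal, illustrative example using scikit-learn (the model, dataset, and hyperparameters are assumptions for the sketch, not the component's internal implementation): a simple tree is fit on the black-box model's predictions rather than on the ground truth, so it imitates the model, mistakes included.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# A toy binary-classification dataset standing in for your training sample.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "black box": any classifier whose decisions we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit the surrogate on the black box's predictions, NOT on the ground truth y,
# so the tree reproduces the model's behavior, including its mistakes.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the shallow surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The shallow depth is a deliberate trade-off: a deeper tree would match the black box more closely but would no longer read as a compact hierarchical decision process.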
DATA INPUT REQUIREMENTS
- The two input data tables (top and bottom ports) need to have exactly the same columns (table spec), besides the target column, which can be omitted in the bottom port because you might need to explain instances for which the ground truth is not available.
- The bottom input with instances to be explained can contain at most 100 rows. More instances would clutter the visualization and take even longer to compute.
BLACK-BOX MODEL REQUIREMENTS
We recommend using the "AutoML" component to test the "XAI View", but the component can explain any model as long as it behaves as a black box and is captured with Integrated Deployment. Precise requirements are listed below.
- The model should be captured with Integrated Deployment and have a single input and a single output of type Data.
- All feature columns have to be provided at the input.
- Any additional columns that are not features can also be provided at the input.
- The output should retain all the input data (features and non-features) with the prediction columns appended.
- The output predictions should consist of one String column and “n” Double columns, where “n” is the number of classes in the target column.
- The String prediction column should be named “Prediction ([T])”, where [T] is the name of your target column (e.g. “Prediction (Churn)”).
- The Double prediction columns should be named “P ([T]=[C1])”, “P ([T]=[C2])”, …, “P ([T]=[Cn])”, where [Cn] is the name of the class whose probability is predicted (e.g. “P (Churn=not churned)” and “P (Churn=churned)” in the binary case).
Additionally, if you are not using the AutoML component, you need to provide a flow variable called “target_column” of type String with the name of your ground-truth / target column at the top input of the XAI View component.
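If you are wiring up your own captured workflow instead of AutoML, the naming convention above is easy to get subtly wrong. This hypothetical helper (not part of KNIME, purely for illustration) builds the column names the XAI View expects, so you can compare them against your model's actual output columns:

```python
def expected_prediction_columns(target, classes):
    """Return the prediction column names the XAI View expects,
    given the target column name and the list of class values."""
    cols = [f"Prediction ({target})"]
    cols += [f"P ({target}={c})" for c in classes]
    return cols

cols = expected_prediction_columns("Churn", ["not churned", "churned"])
print(cols)
# ['Prediction (Churn)', 'P (Churn=not churned)', 'P (Churn=churned)']
```

Note the space between the prefix and the opening parenthesis, and the lack of spaces around “=”, both of which match the examples given in the requirements.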
- Type: Table. Sampling Table: provide the table that contains the feature and target columns (Dataset Sample).
- Type: Workflow Port Object. Models: provide the model table from the top output of the "AutoML" component.
- Type: Table. Explainable Instances: provide the table with predictions that have to be explained (Explainable Instances).