XAI View

Draft, latest edits on Sep 30, 2020, 8:03 AM
To decipher the decision-making process of a black-box model, you can use the eXplainable Artificial Intelligence (XAI) View. The view works for machine learning classifiers with binary and multiclass targets. The component generates an interactive dashboard visualizing explanations for a set of instances you provide, as well as other charts and Machine Learning Interpretability (MLI) techniques. It computes SHAP explanations, a Partial Dependence Plot (PDP) with Individual Conditional Expectation (ICE) curves, and a surrogate decision tree view.
  • SHAP values explain a prediction by computing the contribution of each feature to it. The sum of all SHAP values equals the difference between the prediction and the average prediction over the provided sample dataset. Each explanation in the view is shown as a bubble, and the aggregate of many explanations as a violin plot.
  • A Partial Dependence Plot (PDP) shows the relationship between the target and a single feature as a filled area in a Cartesian graph. The Individual Conditional Expectation (ICE) curves in the PDP show how a single prediction reacts when one feature is varied.
  • The surrogate decision tree view is the result of overfitting a decision tree on the predictions of the original model rather than on the actual ground-truth target. Because the surrogate commits the same mistakes as the original model, the tree explains the black-box model as a hierarchical decision process.
The dashboard is interactive: select explanation bubbles to see the corresponding predictions highlighted in the other views. If the component is used as a nested component, you can also add additional charts to visualize its output in other ways. The user needs to provide a sample of the dataset used to train the model, the model itself, and a set of instances (rows) from the test set.
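The additivity property of SHAP values mentioned above can be checked by hand for a linear model, where (with features treated independently) the exact SHAP value of feature i is w_i * (x_i − mean(x_i)). This is a minimal NumPy sketch, not the component's implementation; the data and weights are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sample dataset (100 rows, 3 features) and a linear "model".
X = rng.normal(size=(100, 3))
w = np.array([0.5, -1.2, 2.0])
b = 0.3
predict = lambda data: data @ w + b

# For a linear model with independent features, SHAP values are exact:
# phi_i(x) = w_i * (x_i - E[x_i]).
x = X[0]
phi = w * (x - X.mean(axis=0))

# Additivity: the SHAP values sum to the prediction minus the
# average prediction over the sample dataset.
assert np.isclose(phi.sum(), predict(x) - predict(X).mean())
print(phi)
```

For nonlinear models the individual phi_i values require the full SHAP estimation procedure, but the additivity constraint shown in the assertion holds in exactly the same form.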
DATA INPUT REQUIREMENTS
  • The two input data tables (top and bottom ports) need to have exactly the same columns (table spec), except for the target column, which can be omitted in the bottom port, since you might need to explain instances for which the ground truth is not available.
  • The bottom input with the instances to be explained can contain at most 100 rows. More instances would clutter the visualization and increase computation time.
BLACK-BOX MODEL REQUIREMENTS
We recommend using the "AutoML" component to test the "XAI View", but any model can be explained by the component as long as it behaves as a black box and is captured with Integrated Deployment. The precise requirements are listed below.
  • The model should be captured with Integrated Deployment and have a single input and a single output of type Data.
  • All feature columns have to be provided at the input.
  • Any additional columns that are not features can also be provided at the input.
  • The output should retain all the input data (features and non-features) with the prediction columns attached.
  • The output predictions should be one String column and "n" Double columns, where "n" is the number of classes in the target column.
  • The String prediction column should be named "Prediction ([T])", where [T] is the name of your target column (e.g. "Prediction (Churn)").
  • The Double prediction columns should be named "P ([T]=[C1])", "P ([T]=[C2])", …, "P ([T]=[Cn])", where [Cn] is the name of the class whose probability is predicted (e.g. "P (Churn=not churned)" and "P (Churn=churned)" in the binary case).
Additionally, if you are not using the AutoML component, you need to provide a flow variable called "target_column" of type String with the name of your ground truth / target column at the top input of the XAI View component.
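Outside KNIME, the expected prediction-column layout can be reproduced from any probabilistic classifier. The following is a hedged sketch, assuming a scikit-learn model: the "Churn" target name and class labels are stand-ins, and only the column-naming convention comes from the requirements above.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in data: a binary "Churn" problem with made-up class labels.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
classes = ["not churned", "churned"]
target = "Churn"

model = LogisticRegression().fit(X, y)

features = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

# Attach predictions using the naming convention the component expects:
# one String column "Prediction ([T])" and one Double column
# "P ([T]=[C])" per class, where [T] is the target column name.
out = features.copy()
out[f"Prediction ({target})"] = [classes[i] for i in model.predict(X)]
proba = model.predict_proba(X)
for j, cls in enumerate(classes):
    out[f"P ({target}={cls})"] = proba[:, j]

print(list(out.columns)[-3:])
```

A table shaped like `out` satisfies the naming requirement; in KNIME the same columns would be produced inside the captured workflow segment.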

Component details

Input ports
  1. Type: Table
    Sampling Table
    Provide the table that contains the feature and target columns (Dataset Sample).
  2. Type: Workflow Port Object
    Model
    Provide the captured model from the top output of the "AutoML" component.
  3. Type: Table
    Explainable Instances
    Provide the table with the predictions that have to be explained (Explainable Instances).
Output ports
  1. Type: Table
    Explanations
    The explainable instances are returned with the SHAP explanations and predictions attached.
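The shape of the Explanations output (instances with one explanation column per feature plus the predictions) can be imitated with a crude occlusion-style attribution: replace one feature by its sample mean and measure the change in the predicted probability. This is not SHAP and not the component's algorithm, just a hedged sketch of the output layout; all column names and the model are made up.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=1)
cols = [f"f{i}" for i in range(4)]
sample = pd.DataFrame(X, columns=cols)    # dataset sample (top input)
instances = sample.iloc[:5].copy()        # explainable instances (bottom input)

model = RandomForestClassifier(random_state=1).fit(X, y)

# Occlusion-style attribution: how much does the probability of class 1
# drop when feature j is replaced by its sample mean?
base = model.predict_proba(instances[cols])[:, 1]
for col in cols:
    occluded = instances[cols].copy()
    occluded[col] = sample[col].mean()
    instances[f"Explanation ({col})"] = base - model.predict_proba(occluded)[:, 1]

instances["P (class=1)"] = base
print(instances.filter(like="Explanation").shape)  # one column per feature
```

Unlike true SHAP values, these occlusion scores do not satisfy the additivity property; they only illustrate the "instances plus attached explanation columns" table that the component emits.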

Used extensions & nodes

Created with KNIME Analytics Platform version 4.2.2
  • KNIME Base nodes, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.2
  • KNIME Data Generation, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.0
  • KNIME Integrated Deployment, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.2
  • KNIME JavaScript Views, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.2
  • KNIME Javasnippet, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.0
  • KNIME Machine Learning Interpretability Extension, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.1
  • KNIME Math Expression (JEP), trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.2
  • KNIME Plotly, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.0
  • KNIME Quick Forms, trusted extension, KNIME AG, Zurich, Switzerland, version 4.2.2


Legal

By using or downloading the component, you agree to our terms and conditions.
