
Counterfactual Explanations (Python)

By knime
Version v1.0 (latest), created on Oct 20, 2023, 1:30 PM
Counterfactual explanations describe the smallest changes to the feature values that are required to change the prediction outcome. They are intuitive to interpret, since they point out which feature values would have to change to move an instance from the negative class to the positive class.

This component generates counterfactual explanations for binary classification models using the Python library "alibi" (docs.seldon.io/projects/alibi). The component outputs a table listing, for each input row, a counterfactual explanation followed by the original prediction and the new prediction. The new prediction refers to the counterfactual instance, which is obtained by adding the explanation values to the original instance. If the component cannot find an explanation for an original instance, null values are listed in the corresponding row of the output.

DATA INPUT REQUIREMENTS
- The input data should be the instances that you would like to explain. These instances must contain all the columns that were used while training the model.

DATA PRE-PROCESSING REQUIREMENTS
- The data preprocessing should be provided as a Python pickled object defined by a custom Python class.
- The custom Python class file must be present in the workflow folder where the component is executed.
- The custom Python class file must be named "custom_class_data_processing.py".
- The default Python class, as well as Jupyter Notebooks showing how to use it, are available on the following KNIME Hub space: kni.me/s/hLLRgZLzgSNv8Z6M

BLACK BOX MODEL REQUIREMENTS
- The model should be trained on normalized numerical features only: no other kind of data preparation is supported unless you edit the custom Python class defining the pre-processing.
- The model has to be trained with the Python libraries Keras or scikit-learn.
- The model can be trained in Python outside of KNIME Analytics Platform, or inside it using either the KNIME Deep Learning Integration or the KNIME Python Integration.
- If the model was trained with scikit-learn, it has to be provided as a pickled object.
- If the model was trained with Keras (either TensorFlow 1 or 2), it has to be provided in h5 format (both hand-off formats are sketched after this section).
- The counterfactual library "alibi" only supports differentiable black box models, so scikit-learn tree ensembles (e.g. Random Forest) cannot be explained.
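As a point of reference, the two supported model hand-offs could be produced as in the minimal sketch below. It uses toy data, an illustrative architecture, and placeholder file names ("model.pkl", "model.h5"); it is not the component's own training code.

  import pickle
  import numpy as np
  import tensorflow as tf
  from sklearn.linear_model import LogisticRegression

  # Toy normalized data: 100 rows, 4 numerical features, binary target.
  X_train = np.random.rand(100, 4)
  y_train = (X_train.sum(axis=1) > 2).astype(int)

  # scikit-learn: a differentiable estimator (e.g. LogisticRegression) handed over as a pickled object.
  clf = LogisticRegression().fit(X_train, y_train)
  with open("model.pkl", "wb") as f:
      pickle.dump(clf, f)

  # Keras (TensorFlow 1 or 2): the trained network handed over in h5 format,
  # with a softmax output so the model returns class probabilities.
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
      tf.keras.layers.Dense(2, activation="softmax"),
  ])
  model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
  model.fit(X_train, y_train, epochs=5, verbose=0)
  model.save("model.h5")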

Component details

Input ports
  1. Type: PortObject
    Trained Model
Trained model, provided by the Keras Network Learner, the Python Learner, or the Python Object Reader node.
  2. Type: PortObject
    Pre-Processing Pickled Object
Pre-processing pickled object from the Python Object Reader node or from the Data Preprocessing for Keras Model component (a sketch of such an object follows this list).
  3. Type: Table
    Input Instances
Row instances to be explained. These rows should be in raw format, i.e. not yet normalized, since the pre-processing Python script is applied to them inside the component.
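The pre-processing pickled object mentioned above could be built roughly as outlined below. This is a hypothetical sketch: the actual interface is defined by the default class available on the linked KNIME Hub space, and the names used here (DataProcessing, fit, transform, inverse_transform, "preprocessing.pkl", the CSV path and column names) are illustrative assumptions, not the component's documented API.

  # Hypothetical outline of custom_class_data_processing.py; the real default
  # class on the linked KNIME Hub space may differ from this sketch.
  import pickle
  import pandas as pd
  from sklearn.preprocessing import MinMaxScaler

  class DataProcessing:
      """Normalizes the numerical feature columns the model was trained on."""

      def __init__(self, feature_columns):
          self.feature_columns = feature_columns
          self.scaler = MinMaxScaler()

      def fit(self, df: pd.DataFrame):
          # Learn the normalization parameters from the training data.
          self.scaler.fit(df[self.feature_columns])
          return self

      def transform(self, df: pd.DataFrame):
          # Raw input rows -> normalized numerical array for the model.
          return self.scaler.transform(df[self.feature_columns])

      def inverse_transform(self, X):
          # Normalized values -> original feature scale, e.g. to report counterfactuals.
          return self.scaler.inverse_transform(X)

  if __name__ == "__main__":
      # Fit on the training table and pickle the object for the second input port.
      train_df = pd.read_csv("train.csv")  # placeholder path
      processing = DataProcessing(["feat_1", "feat_2"]).fit(train_df)
      with open("preprocessing.pkl", "wb") as f:
          pickle.dump(processing, f)

Because a pickle only stores a reference to the defining class, the class file has to be available where the object is unpickled, which matches the requirement above that the file sits in the workflow folder.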
Output ports
  1. Type: Table
    Counterfactual Explanations
The component outputs a table listing, for each input row, a counterfactual explanation followed by the original prediction and the new prediction. The new prediction refers to the counterfactual instance, which is obtained by adding the explanation values to the original instance (see the sketch below). If the component cannot find an explanation for an original instance, null values are listed in the corresponding row of the output.
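To illustrate how one output row relates to the alibi library, a minimal sketch is shown below. It assumes a Keras model stored in h5 format and a recent alibi release that exposes the explainer as alibi.explainers.Counterfactual (older releases name it CounterFactual); the parameter values and file names are illustrative, and the component's internal script may differ.

  import numpy as np
  import tensorflow as tf
  from alibi.explainers import Counterfactual

  # alibi's Counterfactual explainer was written for TF1 graph mode; with TF2 this
  # call is typically required before the explainer is built.
  tf.compat.v1.disable_eager_execution()

  model = tf.keras.models.load_model("model.h5")   # differentiable black box model
  x = np.random.rand(1, 4).astype(np.float32)      # one already-normalized input row (toy values)

  cf = Counterfactual(
      model,                  # Keras model returning class probabilities
      shape=x.shape,          # shape of a single instance, e.g. (1, n_features)
      target_class="other",   # flip the binary prediction to the other class
      tol=0.05,               # tolerance around the target probability
  )

  explanation = cf.explain(x)
  if explanation.cf is not None:
      x_cf = explanation.cf["X"]          # counterfactual instance
      delta = x_cf - x                    # the explanation: smallest change that flips the prediction
      new_pred = explanation.cf["class"]  # prediction for the counterfactual instance
      # One output row corresponds to delta, the original prediction for x, and new_pred;
      # the counterfactual instance is recovered by adding delta back to x.
  else:
      # No counterfactual found: the component lists null values in this row.
      delta, new_pred = None, None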

Used extensions & nodes

Created with KNIME Analytics Platform version 4.3.2. Note: not all extensions may be displayed.
  • KNIME Base nodes (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.2
  • KNIME Data Generation (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.0
  • KNIME Deep Learning - Keras Integration (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.1
  • KNIME Deep Learning - TensorFlow 2 Integration (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.1
  • KNIME Javasnippet (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.0
  • KNIME Python Integration, KNIME AG, Zurich, Switzerland, version 4.3.2
  • KNIME Quick Forms (Trusted extension), KNIME AG, Zurich, Switzerland, version 4.3.2


Legal

By using or downloading the component, you agree to our terms and conditions.
