Counterfactual Explanations (Python)

Counterfactual Explanations describe the smallest changes to the feature values that are required to change the prediction outcome. They are an intuitive way to explain a prediction, as they point out which feature values would have to change for an instance to move from the negative class to the positive class.

This component generates Counterfactual Explanations for binary classification models using the Python library “alibi” (docs.seldon.io/projects/alibi). The component outputs a table listing, for each input row, a counterfactual explanation followed by the original prediction and the new prediction. The new prediction refers to the counterfactual instance, which is obtained by adding the explanation values to the original instance. If the component could not find an explanation for an original instance, missing values are listed in the corresponding output row.
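As a minimal, hypothetical sketch of what such a counterfactual search looks like in plain Python with “alibi” (the component performs these steps internally; the model file name, feature count, and parameter choices below are illustrative assumptions, not part of the component):

    import numpy as np
    import tensorflow as tf
    from alibi.explainers import Counterfactual  # named CounterFactual in older alibi releases

    # alibi's Counterfactual explainer runs in TF1 graph mode
    tf.compat.v1.disable_eager_execution()

    model = tf.keras.models.load_model('model.h5')          # hypothetical h5 model file
    x = np.array([[0.2, 0.5, 0.1, 0.9]], dtype=np.float32)  # one normalized instance

    cf = Counterfactual(model,                 # differentiable binary classifier
                        shape=x.shape,         # shape of a single instance
                        target_class='other',  # flip to the other class
                        max_iter=1000)

    explanation = cf.explain(x)
    if explanation.cf is not None:
        x_cf = explanation.cf['X']             # the counterfactual instance
        delta = x_cf - x                       # the explanation: smallest change
        print('original prediction:', model.predict(x))
        print('new prediction:', explanation.cf['proba'])
    else:
        # in this case the component emits missing values for the row
        print('no counterfactual found')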

DATA INPUT REQUIREMENTS
- The input data should contain the instances that you would like to explain. These instances must have all the columns that were used to train the model.

DATA PRE-PROCESSING REQUIREMENTS

- The data pre-processing should be provided as a pickled Python object defined by a custom Python class.
- The file defining the custom Python class must be present in the workflow folder where the component is executed.
- This file must be named “custom_class_data_processing.py”.
- The default Python class, along with Jupyter notebooks showing how to use it, is available in the following KNIME Hub space (an illustrative sketch follows the link below):

---> kni.me/s/hLLRgZLzgSNv8Z6M
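The exact interface of the pre-processing class is defined by the default class in the Hub space above; the following is only a hypothetical sketch of its general shape. The method names and the min-max normalization are assumptions. Because unpickling must import the class, the defining file has to sit in the workflow folder, as stated above.

    import pickle
    import pandas as pd

    class CustomClassDataProcessing:
        """Illustrative only: min-max normalization of numerical features."""

        def fit(self, df: pd.DataFrame):
            # learn per-column statistics from the training data
            self.min_ = df.min()
            self.max_ = df.max()
            return self

        def apply(self, df: pd.DataFrame) -> pd.DataFrame:
            # scale every column to [0, 1] using the learned statistics
            return (df - self.min_) / (self.max_ - self.min_)

    # pickle the fitted object so it can be passed to the component;
    # unpickling requires the defining file in the workflow folder
    processor = CustomClassDataProcessing().fit(
        pd.DataFrame({'feat_1': [0.0, 2.0], 'feat_2': [1.0, 5.0]}))
    with open('preprocessing.pkl', 'wb') as f:
        pickle.dump(processor, f)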

BLACK BOX MODEL REQUIREMENTS

- The model must be trained on normalized numerical features only: no other kind of data preparation is supported, unless you edit the custom Python class defining the pre-processing.
- The model has to be trained with the Python libraries Keras or scikit-learn.
- The model can be trained in Python outside of KNIME Analytics Platform, or inside it using either the KNIME Deep Learning Integration or the KNIME Python Integration.
- If the model was trained with scikit-learn, it has to be provided as a pickled object.
- If the model was trained with Keras (either TensorFlow 1 or 2), it has to be provided in h5 format (see the sketch after this list).
- The counterfactual library “alibi” only supports differentiable black box models; this means you cannot explain any scikit-learn tree ensemble (e.g. Random Forest).
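As an illustration of the two supported hand-over formats, the sketch below trains a small differentiable model with each library and saves it. The file names and model architectures are illustrative assumptions; note that both choices (logistic regression, dense network) are differentiable, unlike tree ensembles.

    import pickle
    import tensorflow as tf
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=4, random_state=0)

    # scikit-learn: hand over the trained model as a pickled object
    sk_model = LogisticRegression().fit(X, y)
    with open('model.pkl', 'wb') as f:
        pickle.dump(sk_model, f)

    # Keras (TensorFlow 1 or 2): hand over the trained model in h5 format
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    keras_model.compile(optimizer='adam', loss='binary_crossentropy')
    keras_model.fit(X, y, epochs=5, verbose=0)
    keras_model.save('model.h5')  # .h5 extension selects the HDF5 format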

Options

Trained Model Type:
Select the Python library that was used to train the binary classification model, whether inside or outside of KNIME Analytics Platform.

Input Ports

- Trained model, either from the Keras Network Learner node, the Python Learner node, or the Python Object Reader node.
- Pickled pre-processing object from the Python Object Reader node or the Data Preprocessing for Keras Model component.
- Row instances to be explained. These rows should be in raw format, i.e. not yet normalized: the component applies the provided pre-processing Python script before computing the explanations.

Output Ports

- Table listing, for each input row, a counterfactual explanation followed by the original prediction and the new prediction. The new prediction refers to the counterfactual instance, which is obtained by adding the explanation values to the original instance. Rows for which no explanation could be found contain missing values.
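For illustration, a counterfactual instance can be reconstructed from the output table by adding the explanation columns to the original feature values. The column names and values below are assumptions:

    import pandas as pd

    # original instances and the explanation (delta) columns from the output table
    original = pd.DataFrame({'feat_1': [0.20, 0.70], 'feat_2': [0.50, 0.10]})
    explanation = pd.DataFrame({'feat_1': [0.15, None], 'feat_2': [-0.05, None]})

    # counterfactual instance = original instance + explanation values;
    # rows where no explanation was found stay missing (NaN)
    counterfactual = original + explanation
    print(counterfactual)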
