Counterfactual explanations describe the smallest changes to the feature values that are required to change the prediction outcome. They make any prediction intuitive to explain, as they point out which feature values would have to change to move an instance from the negative class to the positive class.
This component generates counterfactual explanations for binary classification models using the Python library “alibi” (docs.seldon.io/projects/alibi). The component outputs a table listing, for each input row, a counterfactual explanation followed by the original prediction and the new prediction. The new prediction refers to the counterfactual instance obtained by adding the explanation values to the original instance. If the component cannot find an explanation for an original instance, null values are listed in the corresponding row of the component output.
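For illustration only, the following minimal Python sketch shows how such an explanation can be obtained with alibi. The file names ('model.h5', 'instances.npy') and the parameter values are assumptions, and the explainer class is called Counterfactual in recent alibi versions (CounterFactual in older ones); the component performs these steps internally.

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from alibi.explainers import Counterfactual

tf.compat.v1.disable_eager_execution()      # alibi's Counterfactual runs in graph mode

model = load_model('model.h5')              # black box Keras model (assumed file name)
X = np.load('instances.npy')                # normalized instances to explain (assumed file name)

explainer = Counterfactual(model, shape=(1,) + X.shape[1:],
                           target_proba=0.9, target_class='other')

instance = X[0:1]                           # explain the first row
explanation = explainer.explain(instance)
if explanation.cf is not None:
    diff = explanation.cf['X'] - instance   # the explanation: smallest change found
    new_pred = explanation.cf['proba']      # prediction for the counterfactual instance
else:
    diff, new_pred = None, None             # no explanation found: null values in the output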
DATA INPUT REQUIREMENTS
- The input data should be the instances that you would like to explain. These instances must contain all the columns that were used to train the model.
DATA PRE-PROCESSING REQUIREMENTS
- The data pre-processing should be provided as a Python pickled object defined by a custom Python class (a sketch follows this list).
- The file defining the custom Python class should be present in the workflow folder where the component is executed.
- The file should be called “custom_class_data_processing.py”.
- The default Python class as well as Jupyter Notebooks to understand how to use it are available on the following KNIME Hub space:
---> kni.me/s/hLLRgZLzgSNv8Z6M
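Purely as an illustration, such a pre-processing class could look like the sketch below. The class name, method name, and file names are assumptions; the authoritative default class and example Jupyter notebooks are on the KNIME Hub space linked above.

import pickle
import numpy as np

class CustomDataProcessing:
    """Hypothetical pre-processing: min-max normalization of numerical features."""

    def __init__(self, X_train):
        # remember per-feature minima and maxima of the training data
        self.minima = X_train.min(axis=0)
        self.maxima = X_train.max(axis=0)

    def transform(self, X):
        # normalize new instances exactly as the model's training data was normalized
        return (X - self.minima) / (self.maxima - self.minima)

# The class definition must live in "custom_class_data_processing.py" in the
# workflow folder, so that unpickling can import it there. The fitted object
# itself is handed to the component as a pickle:
X_train = np.random.rand(100, 4)            # placeholder training data
with open('preprocessing.pkl', 'wb') as f:
    pickle.dump(CustomDataProcessing(X_train), f)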
BLACK BOX MODEL REQUIREMENTS
- The model should be trained on normalized numerical features only: no other kind of data preparation is supported unless you edit the custom Python class defining the pre-processing.
- The model has to be trained with the Python libraries Keras or scikit-learn.
- The model can be trained in Python outside of KNIME Analytics Platform, or inside it using either the KNIME Deep Learning Integration or the KNIME Python Integration.
- If the model was trained with scikit-learn, it has to be provided as a pickled object (see the sketch after this list).
- If the model was trained with Keras (either TensorFlow 1 or 2), it has to be provided in h5 format (see the sketch after this list).
- The counterfactual library “alibi” only supports differentiable black box models. This means that, in this case, you cannot explain scikit-learn tree ensembles (e.g. Random Forest).
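A minimal sketch of producing the two accepted model artifacts is shown below. The synthetic dataset, the network architecture, and the file names are placeholders, not part of the component's contract.

import pickle
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

# placeholder training data; features are normalized, as the component requires
X_train, y_train = make_classification(n_samples=200, n_features=4,
                                       n_informative=4, n_redundant=0,
                                       random_state=0)
X_train = MinMaxScaler().fit_transform(X_train)

# scikit-learn: a differentiable model (not a tree ensemble), provided as a pickle
sk_model = LogisticRegression().fit(X_train, y_train)
with open('model.pkl', 'wb') as f:
    pickle.dump(sk_model, f)

# Keras (TensorFlow 1 or 2): provided in h5 format
nn = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(2, activation='softmax'),
])
nn.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
nn.fit(X_train, y_train, epochs=10, verbose=0)
nn.save('model.h5')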