This directory contains 10 workflows.
This component can create a Local Interpretable Model-agnostic Explanation (LIME) to explain the predictions of any machine learning model in […]
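At its core, LIME perturbs a single instance, queries the black-box model on the perturbed points, and fits a weighted linear surrogate whose coefficients serve as the local explanation. The following is only a minimal sketch of that idea, not the component's implementation; the model, feature names, and kernel width are hypothetical, and numerical features are assumed.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_tabular(model_predict, x, X_train, n_samples=5000, kernel_width=None, seed=0):
    """Local linear surrogate around instance x (numerical features only)."""
    rng = np.random.default_rng(seed)
    std = X_train.std(axis=0) + 1e-12
    # 1. Perturb the instance with Gaussian noise scaled by the training std
    Z = x + rng.normal(scale=std, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed points (probability of class 1)
    y = model_predict(Z)[:, 1]
    # 3. Weight samples by proximity to x with an exponential kernel
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(x.shape[0])
    dist = np.linalg.norm((Z - x) / std, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted linear model; its coefficients are the local explanation
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # one attribution per feature

# usage (hypothetical model and data):
# attributions = lime_tabular(clf.predict_proba, X_test[0], X_train)
```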
Counterfactual Explanations describe the smallest changes to the feature values required to change the prediction outcome. Those values should be intuitive […]
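One simple way to search for such a change is a greedy, one-feature-at-a-time edit that keeps increasing the target-class probability until the decision flips. This is only an illustrative sketch; the component may use a different optimizer, and the model, feature ranges, and step counts below are hypothetical.

```python
import numpy as np

def greedy_counterfactual(predict_proba, x, feature_ranges, target=1, steps=20, max_changes=5):
    """Edit one feature at a time, always picking the edit that most raises the
    target-class probability, until the predicted class flips."""
    cf = x.astype(float).copy()
    for _ in range(max_changes):
        current = predict_proba(cf.reshape(1, -1))[0, target]
        if current >= 0.5:
            return cf  # decision flipped: counterfactual found
        best_gain, best_cand = 0.0, None
        for j, (lo, hi) in enumerate(feature_ranges):
            for value in np.linspace(lo, hi, steps):
                cand = cf.copy()
                cand[j] = value
                gain = predict_proba(cand.reshape(1, -1))[0, target] - current
                if gain > best_gain:
                    best_gain, best_cand = gain, cand
        if best_cand is None:
            return None  # no improving single-feature edit found
        cf = best_cand
    return cf if predict_proba(cf.reshape(1, -1))[0, target] >= 0.5 else None
```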
This component computes the following fairness metrics for an input classification model: demographic parity, equal opportunity and […]
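Demographic parity compares positive-prediction rates across the groups of a protected attribute, while equal opportunity compares true positive rates. A minimal sketch of both metrics, with hypothetical input arrays:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive prediction rate between protected groups (0 = parity)."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true positive rate (recall on the positive class) between groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(np.mean(y_pred[mask]))
    return max(tprs) - min(tprs)

# usage with hypothetical binary arrays:
# print(demographic_parity_difference(y_pred, group))
# print(equal_opportunity_difference(y_true, y_pred, group))
```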
This component computes Global Feature Importance for classification models using up to four different techniques. The component additionally offers […]
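Permutation importance is one common global feature importance technique: shuffle a feature and measure how much the model's score drops. The sketch below shows only that one technique, under the assumption of a fitted scikit-learn style classifier; it is not the component's own implementation.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Global importance of each feature: mean drop in accuracy when it is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-target link
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```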
This is an implementation of K-LIME, a model explanation technique developed by H2O.ai, using the KNIME H2O Machine Learning Integration. To find […]
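K-LIME partitions the data with k-means and fits one linear surrogate per cluster to the black-box model's predictions, so each local region gets its own interpretable model. A rough sketch of that idea using scikit-learn in place of the KNIME H2O integration; model, data, and the number of clusters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def k_lime(model_predict, X, k=5, seed=0):
    """Fit one local linear surrogate per k-means cluster of X."""
    y_hat = model_predict(X)[:, 1]  # black-box probability of class 1
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    surrogates = {}
    for c in range(k):
        mask = km.labels_ == c
        reg = Ridge(alpha=1.0).fit(X[mask], y_hat[mask])
        # R^2 of the surrogate indicates how faithful the local explanation is
        surrogates[c] = (reg, reg.score(X[mask], y_hat[mask]))
    return km, surrogates
```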
This component generates an interactive visualization to help the user understand their model’s behavior on a single example data point. It works in two […]
This component generates a view to interactively execute a model on an artificial data point. The view updates a visualization of the model output based on […]
This component is required to sample the data to be visualized in the Partial Dependence/ICE Plot (JavaScript) node. You can select only numerical features […]
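Sampling matters here because Partial Dependence/ICE curves are produced by sweeping one numerical feature over a grid while holding each sampled row fixed, so the model is queried once per row per grid value. A minimal sketch of the computation that follows the sampling step, with hypothetical model and data; it is not the node's implementation.

```python
import numpy as np

def ice_curves(model_predict, X_sample, feature_idx, grid_size=20):
    """ICE: one curve per sampled row; PDP: their pointwise average."""
    lo, hi = X_sample[:, feature_idx].min(), X_sample[:, feature_idx].max()
    grid = np.linspace(lo, hi, grid_size)
    curves = np.empty((X_sample.shape[0], grid_size))
    for i, value in enumerate(grid):
        X_mod = X_sample.copy()
        X_mod[:, feature_idx] = value            # force the feature to the grid value
        curves[:, i] = model_predict(X_mod)[:, 1]
    pdp = curves.mean(axis=0)                    # partial dependence = average ICE curve
    return grid, curves, pdp
```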
This component can be used before the bottom input port of SHAP Loop Start. It uses k-means to summarize the validation set and create a […]
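Summarizing the validation set keeps the SHAP sampling loop tractable: instead of integrating over every validation row, the explainer can use a small set of cluster centers, weighted by how many rows each one represents, as the background data. A rough sketch of that summarization with scikit-learn; the component's exact behavior and the choice of k are assumptions here.

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize_background(X_val, k=25, seed=0):
    """Compress the validation set into k weighted prototypes for SHAP background data."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_val)
    centers = km.cluster_centers_
    # weight each prototype by the share of validation rows it represents
    weights = np.bincount(km.labels_, minlength=k) / X_val.shape[0]
    return centers, weights
```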
To decipher the decision-making process of a black-box model, you can use the eXplainable Artificial Intelligence (XAI) view. The view works for […]