This is an example of computing explanations with LIME.
The AutoML component was used to pick the best model, but any model, together with its Learner and Predictor nodes, can be used instead.
- Read the dataset about wines
- Partition the data into train and test sets
- Pick a few test set rows (instances) to explain
- Create local samples for each instance in the input table (LIME Loop Start)
- Score the samples using a workflow executor together with the AutoML component
- Compute the LIME explanations, i.e. local model-agnostic explanations, by training a local GLM on the samples and extracting its weights (a Python sketch of this procedure follows the list)
- Visualize them in the Composite View (Right Click > Open View)
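Since LIME is model-agnostic, the same procedure can be sketched outside KNIME. Below is a minimal Python illustration of the steps above, assuming scikit-learn is available: its built-in wine dataset and a random forest stand in for the wines table and the model picked by AutoML, and a ridge regression plays the role of the local GLM. This is a sketch of the technique, not the workflow's implementation.

```python
# Minimal LIME-style sketch (assumptions: scikit-learn's wine data and a
# random forest replace the workflow's wines table and AutoML-picked model).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Read the dataset and partition it into train and test sets
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box model works; LIME only needs its prediction function
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def explain_instance(x, predict_proba, n_samples=5000, target_class=0):
    """Local model-agnostic explanation of one instance (one loop iteration)."""
    rng = np.random.default_rng(0)
    # Create local samples by perturbing the instance with Gaussian noise
    # scaled to the training data (the LIME Loop Start analogue)
    scale = X_train.std(axis=0)
    samples = x + rng.normal(0.0, 1.0, size=(n_samples, x.size)) * scale
    # Score the samples with the black-box model (workflow executor analogue)
    preds = predict_proba(samples)[:, target_class]
    # Weight samples by proximity to the explained instance (exponential kernel)
    kernel_width = np.sqrt(x.size) * 0.75
    dist = np.linalg.norm((samples - x) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Train a local GLM (here a ridge regression) and extract its weights
    glm = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return glm.coef_  # one weight per feature: the LIME explanation

# Explain a few test rows, as the workflow does
for row in X_test[:3]:
    print(np.round(explain_instance(row, black_box.predict_proba), 3))
```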
URL: LIME - Local Interpretable Model-Agnostic Explanations, blog post by Marco Tulio Ribeiro: https://homes.cs.washington.edu/~marcotcr/blog/lime/
URL: Verified Components https://www.knime.com/verified-components