This is an example of computing explanations using LIME (Local Interpretable Model-agnostic Explanations).
An XGBoost model was picked, but any model with its corresponding Learner and Predictor nodes can be used.
- Read the wine quality dataset
- Partition the data into train and test sets
- Pick a few test set rows (instances) to explain
- Create local samples for each instance in the input table (LIME Loop Start)
- Score the samples using the Predictor node and the trained model
- Compute the LIME explanations, that is, local model-agnostic explanations, by training a local GLM on the scored samples and extracting its weights (see the sketch after this list)
- Visualize them in the Composite View (Right Click > Open View)
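The same procedure can be sketched outside KNIME. Below is a minimal Python sketch of what the workflow does per instance: create local samples, score them with the black-box model, weight them by proximity, and fit a local GLM whose coefficients serve as the explanation. The function name, the Gaussian perturbation scheme, and the exponential proximity kernel are illustrative assumptions, not a description of the KNIME nodes' exact internals.

```python
# Minimal LIME sketch for one tabular instance (illustrative only).
# Assumes `X_train` is a numeric NumPy matrix and `black_box_predict`
# is any callable returning predictions, e.g. a fitted XGBoost model's
# predict method; both names are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(instance, black_box_predict, X_train,
                     n_samples=1000, kernel_width=0.75):
    """Return per-feature weights of a local GLM fitted around `instance`."""
    rng = np.random.default_rng(42)
    # 1. Create local samples by perturbing the instance with Gaussian noise
    #    scaled to each feature's standard deviation (the "LIME Loop Start" step).
    scale = X_train.std(axis=0)
    scale = np.where(scale == 0, 1.0, scale)
    samples = instance + rng.normal(size=(n_samples, instance.shape[0])) * scale
    # 2. Score the samples with the black-box model (the Predictor node's role).
    preds = black_box_predict(samples)
    # 3. Weight samples by their proximity to the original instance.
    distances = np.linalg.norm((samples - instance) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted local GLM; its coefficients are the explanation.
    glm = Ridge(alpha=1.0)
    glm.fit(samples, preds, sample_weight=weights)
    return glm.coef_

# Example usage with a hypothetical trained XGBoost regressor:
# explanation = explain_instance(X_test[0], xgb_model.predict, X_train)
```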
URL: LIME - Local Interpretable Model-Agnostic Explanations, Marco Tulio Ribeiro, Blog Post https://homes.cs.washington.edu/~marcotcr/blog/lime/
URL: Data Source: UCI - Wine Quality Data Set http://archive.ics.uci.edu/ml/datasets/wine+quality