After creating a model (especially a DL Python Model), that model is used to make predictions on new input data. The need for such predictions can arise much later in a Workflow and in different Python nodes, and loading and reloading the model for each prediction can become costly.
This Workflow introduces a pattern for keeping a DL Python Model in memory so that it can be used to make predictions on demand, without reloading it each time. The pattern consists of three reusable Components (a minimal Python sketch of the underlying idea follows the list):
1. a Component to preserve the model in memory and make it possible to call the model's predict() method on demand;
2. a Component to call the model's predict() method; and
3. a Component to release the model from memory, once it is no longer needed anywhere in the Workflow.
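The sketch below illustrates the caching idea behind these three Components in plain Python. The function names (preserve_model, predict, release_model) and the handle-based cache are illustrative assumptions, not the actual implementation inside the KNIME Components.

```python
# Minimal sketch of the preserve / predict / release pattern, assuming a
# single Python process. The real Components wrap this idea in KNIME nodes.
_model_cache = {}  # maps a string handle to a loaded model

def preserve_model(handle, load_fn):
    """Load the model once and keep it in memory under `handle`."""
    if handle not in _model_cache:
        _model_cache[handle] = load_fn()
    return handle

def predict(handle, inputs):
    """Call the cached model's predict() without reloading the model."""
    return _model_cache[handle].predict(inputs)

def release_model(handle):
    """Drop the model from memory once no downstream node needs it."""
    _model_cache.pop(handle, None)
```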
As a practical demo, this Workflow uses Keras + TensorFlow 2's ResNet50 model to recognize common objects depicted in one or more input images.
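For reference, the kind of prediction the demo performs looks roughly like the following standalone Keras/TensorFlow 2 snippet. It is a sketch of ResNet50 image classification, not the Workflow's own code; the file name "example.jpg" is a placeholder.

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing.image import load_img, img_to_array

model = ResNet50(weights="imagenet")   # loaded once, then kept in memory

img = load_img("example.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 (class_id, label, probability) tuples
```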
To use this Workflow, download it from NodePit and open it in KNIME.