This workflow shows how the new KNIME Keras integration can be used to train and deploy a specialized deep neural network for semantic segmentation.
This means that the network predicts, for each pixel in the input image, the class of object it belongs to.
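The per-pixel prediction described above can be sketched as follows. This is an illustrative NumPy example (not code from the workflow), assuming the network outputs a class-probability map of shape (height, width, num_classes); taking the argmax over the class axis yields the segmentation label map:

```python
import numpy as np

# Hypothetical network output for a 2x2 image with 2 classes:
# one probability per class for every pixel, shape (height, width, num_classes).
probabilities = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.3, 0.7], [0.6, 0.4]],
])

# The predicted class for each pixel is the one with the highest probability.
label_map = np.argmax(probabilities, axis=-1)
print(label_map)
# [[0 1]
#  [1 0]]
```

In the workflow, this argmax step turns the network's raw output into the final segmentation image, with one class label per pixel.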
In order to run the example, please make sure you have the following KNIME extensions installed:
* KNIME Deep Learning - Keras Integration (Labs)
* KNIME Image Processing (Community Contributions Trusted)
* KNIME Image Processing - Deep Learning Extension (Community Contributions Trusted)
* KNIME Streaming Execution (Beta) (Labs)
* KNIME Image Processing - Python Extension (Community Contributions Trusted)
You also need a local Python installation that includes Keras. Please refer to https://www.knime.com/deeplearning#keras for installation recommendations and further information.
Acknowledgements:
The network architecture we use is an adaptation of the U-Net proposed in [1].
The dataset we use is taken from [2].
[1] Ronneberger et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" (https://arxiv.org/abs/1505.04597)
[2] Gould et al., "Decomposing a Scene into Geometric and Semantically Consistent Regions" (http://dags.stanford.edu/projects/scenedataset.html)