This workflow shows one way of applying deep learning to tabular data.
The workflow focuses on data preparation and semi-automatic network creation.
It is semi-automatic because the network structure is created in a data-dependent way, while the user can still specify certain architectural parameters, e.g. the number of hidden layers and the number of neurons per hidden layer.
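As an illustration of this idea (a hedged sketch, not the workflow's actual nodes), the Keras functional API can build such a network: the input width is derived from the data, while the depth and layer width are user-supplied parameters. The function name, parameter names, and the 13-feature/2-class example values below are assumptions chosen for illustration.

```python
# Sketch: data-dependent network creation with user-specified architecture.
# The input size comes from the data; hidden-layer count and width come
# from the user, mirroring the "semi-automatic" idea described above.
from tensorflow import keras


def build_network(n_features, n_classes, n_hidden_layers=2, neurons_per_layer=32):
    """Build a dense classifier whose input shape is data-dependent."""
    inputs = keras.Input(shape=(n_features,))
    x = inputs
    # User-controlled part: how many hidden layers, and how wide each is.
    for _ in range(n_hidden_layers):
        x = keras.layers.Dense(neurons_per_layer, activation="relu")(x)
    outputs = keras.layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)


# Hypothetical example: 13 input features, binary target (as in a
# Census-style income classification task).
model = build_network(n_features=13, n_classes=2,
                      n_hidden_layers=3, neurons_per_layer=64)
```

In a KNIME setting, the same parameters would typically be exposed through node configuration dialogs rather than function arguments, but the underlying construction is analogous.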
This workflow is heavily influenced by TensorFlow's Wide & Deep Learning tutorial (https://www.tensorflow.org/tutorials/wide_and_deep) and also uses the Census dataset.
Please note that both the dataset and the network architecture are just examples and can be swapped out to fit your specific use case.
In order to run the example, please make sure you have the following KNIME extensions installed:
* KNIME Deep Learning - Keras Integration (Labs)
* KNIME Deep Learning - TensorFlow Integration (Labs)
* KNIME JavaScript Views (Labs)
You also need a local Python installation that includes Keras (we recommend version 2.1.6). Please refer to https://www.knime.com/deeplearning#keras for installation recommendations and further information.