This workflow uses a character-level encoder-decoder network of LSTMs.
The encoder network reads the input sentence character by character and summarizes the sentence in its state.
This state is then used as the initial state of the decoder network, which produces the translated sentence one character at a time.
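In Keras code, the architecture looks roughly as follows. This is a minimal sketch based on the linked blog post; the vocabulary sizes and the latent dimension are illustrative placeholders, not values taken from the workflow:

```python
from keras.models import Model
from keras.layers import Input, LSTM, Dense

num_encoder_tokens = 71   # size of the source character set (placeholder)
num_decoder_tokens = 93   # size of the target character set (placeholder)
latent_dim = 256          # dimensionality of the LSTM state (placeholder)

# Encoder: reads the one-hot encoded source characters and keeps only its
# final hidden and cell state as a summary of the sentence.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: starts from the encoder's final state and emits one character
# distribution per time step.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```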
During prediction, the decoder also receives its previous output as input at the next time step.
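A sketch of this prediction loop is shown below, reusing the layers from the model sketch above. The lookup tables target_char_index and reverse_target_char_index are assumed to come from preprocessing, and, following the blog post's convention, '\t' and '\n' serve as start and stop characters:

```python
import numpy as np

# Encoder inference model: input sentence -> summary state.
encoder_model = Model(encoder_inputs, encoder_states)

# Decoder inference model: (previous character, previous state)
# -> (next character distribution, next state).
decoder_state_inputs = [Input(shape=(latent_dim,)),
                        Input(shape=(latent_dim,))]
dec_out, h, c = decoder_lstm(decoder_inputs,
                             initial_state=decoder_state_inputs)
dec_out = decoder_dense(dec_out)
decoder_model = Model([decoder_inputs] + decoder_state_inputs,
                      [dec_out, h, c])

def decode(input_seq, max_len=100):
    # Summarize the input sentence in the encoder's final state.
    states = encoder_model.predict(input_seq)
    # Start decoding with the start-of-sequence character.
    target = np.zeros((1, 1, num_decoder_tokens))
    target[0, 0, target_char_index['\t']] = 1.0
    decoded = ''
    while True:
        probs, h, c = decoder_model.predict([target] + states)
        char = reverse_target_char_index[np.argmax(probs[0, -1, :])]
        if char == '\n' or len(decoded) >= max_len:
            break
        decoded += char
        # Feed the prediction back in as the next input.
        target = np.zeros((1, 1, num_decoder_tokens))
        target[0, 0, target_char_index[char]] = 1.0
        states = [h, c]
    return decoded
```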
For training, we use a technique called "teacher forcing": instead of the previous prediction, we feed the actual previous target character to the decoder, which greatly benefits training.
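Concretely, teacher forcing means the decoder's training targets are its inputs shifted by one time step. A sketch of the data preparation, again following the blog post; target_texts, target_char_index, and encoder_input_data are assumed to come from the workflow's preprocessing:

```python
import numpy as np

max_decoder_seq_length = max(len(t) for t in target_texts)
decoder_input_data = np.zeros(
    (len(target_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros_like(decoder_input_data)

for i, target_text in enumerate(target_texts):
    for t, char in enumerate(target_text):
        # At step t the decoder is fed the true character...
        decoder_input_data[i, t, target_char_index[char]] = 1.0
        if t > 0:
            # ...while the training target at step t - 1 is that same
            # character, i.e. targets are the inputs shifted by one step.
            decoder_target_data[i, t - 1, target_char_index[char]] = 1.0

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=64, epochs=100, validation_split=0.2)
```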
This example is an adaptation of the following Keras blog post to KNIME:
https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
In order to run the example, please make sure you have the following KNIME extensions installed:
* KNIME Deep Learning - Keras Integration (Labs)
You also need a local Python installation that includes Keras. Please refer to https://www.knime.com/deeplearning#keras for installation recommendations and further information.