This workflow uses preprocessed MIDI files to train a many-to-many RNN to generate music.
The brown nodes in the upper part define the network architecture. The chosen network architecture has 5 inputs:
- the notes
- the durations
- the offset differences to the previous note
- the two initial states (hidden state and cell state) of the LSTM
After an LSTM layer the network splits into three parallel feedforward subnetworks with different activation functions:
- one for the notes
- one for the duration
- one for the offset difference
Afterwards the outputs of the three subnetworks are collected.
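The architecture described above can be sketched with the Keras functional API. All sizes (`n_notes`, `seq_len`, `lstm_units`) and the activation functions of the two regression heads are illustrative assumptions, not values read from the workflow:

```python
# Hedged sketch of the described network: five inputs, one LSTM layer,
# then three parallel feedforward subnetworks. Layer sizes are assumptions.
from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

n_notes, seq_len, lstm_units = 128, 32, 64  # assumed, not from the workflow

# Five inputs: notes, durations, offset differences, and the LSTM's
# two initial states (hidden state h and cell state c).
notes_in = Input(shape=(seq_len, n_notes), name="notes_in")
dur_in = Input(shape=(seq_len, 1), name="duration_in")
off_in = Input(shape=(seq_len, 1), name="offset_in")
h_in = Input(shape=(lstm_units,), name="initial_h")
c_in = Input(shape=(lstm_units,), name="initial_c")

# Merge the three feature streams and feed them through the LSTM,
# seeding it with the externally supplied initial states.
x = Concatenate()([notes_in, dur_in, off_in])
x = LSTM(lstm_units, return_sequences=True)(x, initial_state=[h_in, c_in])

# Three parallel feedforward heads with different activations:
# softmax over the note vocabulary, ReLU (assumed) for the two
# non-negative regression targets.
notes_out = Dense(n_notes, activation="softmax", name="notes")(x)
dur_out = Dense(1, activation="relu", name="duration")(x)
off_out = Dense(1, activation="relu", name="offset")(x)

model = Model(
    inputs=[notes_in, dur_in, off_in, h_in, c_in],
    outputs=[notes_out, dur_out, off_out],
)
```

Collecting the three heads into one `Model` is what lets a single learner node train all of them jointly.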
In the Keras Network Learner node the loss function is defined by selecting a loss for each feedforward subnetwork:
- categorical cross entropy for the notes
- MSE for the duration and the offset difference
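In plain Keras, selecting one loss per subnetwork corresponds to compiling the model with a dictionary of losses keyed by output name. The tiny stand-in model and its output names below are assumptions for illustration:

```python
# Sketch of per-output losses, mirroring the per-subnetwork loss selection
# in the Keras Network Learner node. The minimal model is a stand-in.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(16,))
notes = Dense(128, activation="softmax", name="notes")(inp)
duration = Dense(1, name="duration")(inp)
offset = Dense(1, name="offset")(inp)
model = Model(inp, [notes, duration, offset])

# Categorical cross entropy for the note head, MSE for the two
# regression heads (duration and offset difference).
model.compile(
    optimizer="adam",
    loss={
        "notes": "categorical_crossentropy",
        "duration": "mse",
        "offset": "mse",
    },
)
```

Keras sums the three losses into the single objective that is minimized during training.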
To use this workflow, download it from the link below and open it in KNIME.