This workflow uses preprocessed MIDI files to train a many-to-many RNN to generate music.
The brown nodes in the upper part define the network architecture. The chosen network architecture has five inputs (sketched in plain Keras after this list):
- the notes
- the duration
- the offset difference to the previous note
- the initial hidden state and the initial cell state of the LSTM (two inputs)
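The input layout can be approximated in plain Keras as follows. This is a minimal sketch: the sequence length, note vocabulary size, and LSTM unit count are illustrative assumptions, not values taken from the workflow.

```python
from tensorflow import keras

SEQ_LEN = 64     # assumed length of the input sequences
NUM_NOTES = 128  # assumed size of the one-hot note vocabulary
UNITS = 256      # assumed number of LSTM units

notes_in = keras.Input(shape=(SEQ_LEN, NUM_NOTES), name="notes")
duration_in = keras.Input(shape=(SEQ_LEN, 1), name="duration")
offset_in = keras.Input(shape=(SEQ_LEN, 1), name="offset_diff")
# the LSTM's initial states account for two inputs: hidden state h and cell state c
state_h_in = keras.Input(shape=(UNITS,), name="initial_h")
state_c_in = keras.Input(shape=(UNITS,), name="initial_c")
```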
After an LSTM layer, the network splits into three parallel feedforward subnetworks with different activation functions:
- one for the notes
- one for the duration
- one for the offset difference
Afterwards, the outputs of the three subnetworks are collected.
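A rough plain-Keras continuation of the sketch above. Concatenating the three feature inputs before the LSTM, and using a softmax head for the notes with linear heads for the two regression targets, are assumptions consistent with the losses chosen below, not confirmed details of the workflow.

```python
# Merge the three feature inputs and run them through the LSTM,
# seeding it with the supplied initial states.
x = keras.layers.Concatenate()([notes_in, duration_in, offset_in])
x = keras.layers.LSTM(UNITS, return_sequences=True)(
    x, initial_state=[state_h_in, state_c_in])

# Three parallel feedforward heads with different activations:
# softmax over the note classes, linear for the regression targets.
notes_out = keras.layers.Dense(NUM_NOTES, activation="softmax", name="notes_out")(x)
duration_out = keras.layers.Dense(1, activation="linear", name="duration_out")(x)
offset_out = keras.layers.Dense(1, activation="linear", name="offset_out")(x)

# Collect the three subnetworks into one model.
model = keras.Model(
    inputs=[notes_in, duration_in, offset_in, state_h_in, state_c_in],
    outputs=[notes_out, duration_out, offset_out])
```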
In the Keras Network Learner node, the loss function is defined by selecting a loss for each feedforward subnetwork (see the compile sketch after this list):
- Categorical Cross Entropy for the notes
- MSE (mean squared error) for the duration and the offset difference
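In plain Keras, this per-subnetwork loss selection corresponds roughly to compiling the model with one loss per named output; the optimizer choice here is an assumption.

```python
# One loss per output head, matching the selection in the
# Keras Network Learner node.
model.compile(
    optimizer="adam",  # assumed; the workflow does not specify the optimizer here
    loss={
        "notes_out": "categorical_crossentropy",
        "duration_out": "mse",
        "offset_out": "mse",
    })
```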
To use this workflow, download it from the URL below and open it in KNIME.