This workflow uses preprocessed MIDI files to train a many-to-many RNN to generate music.
The brown nodes in the upper part define the network architecture. The chosen architecture has five inputs:
- the notes
- the duration
- the offset difference to the previous note
- the two initial states (hidden and cell state) of the LSTM
After an LSTM layer, the network splits into three parallel feedforward subnetworks with different activation functions:
- one for the notes
- one for the duration
- one for the offset difference
Afterwards, the outputs of the three subnetworks are collected into a single network with three output layers.
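For illustration, here is a minimal Keras sketch of the described architecture (the workflow builds the equivalent network with KNIME's Keras layer nodes). The sequence length, note vocabulary size, number of LSTM units, and the activations of the two regression heads are assumptions, not values taken from the workflow:

```python
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 32    # length of the input sequences (assumption)
N_NOTES = 128   # size of the note vocabulary (assumption)
UNITS = 256     # number of LSTM units (assumption)

# Three sequence inputs: notes (one-hot), durations, offset differences.
notes_in = keras.Input(shape=(SEQ_LEN, N_NOTES), name="notes")
duration_in = keras.Input(shape=(SEQ_LEN, 1), name="duration")
offset_in = keras.Input(shape=(SEQ_LEN, 1), name="offset_diff")

# Two state inputs: the initial hidden and cell state of the LSTM,
# bringing the total number of network inputs to five.
h_in = keras.Input(shape=(UNITS,), name="initial_h")
c_in = keras.Input(shape=(UNITS,), name="initial_c")

# Concatenate the three feature streams and feed them to the LSTM.
x = layers.Concatenate()([notes_in, duration_in, offset_in])
x = layers.LSTM(UNITS, return_sequences=True)(x, initial_state=[h_in, c_in])

# Three parallel feedforward subnetworks with different activations:
# softmax for the note head (implied by the categorical cross entropy
# loss below), ReLU for the two regression heads (assumption).
notes_out = layers.Dense(N_NOTES, activation="softmax", name="notes_out")(x)
duration_out = layers.Dense(1, activation="relu", name="duration_out")(x)
offset_out = layers.Dense(1, activation="relu", name="offset_out")(x)

# Collect the three subnetworks into one model with three outputs.
model = keras.Model(
    inputs=[notes_in, duration_in, offset_in, h_in, c_in],
    outputs=[notes_out, duration_out, offset_out],
)
```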
In the Keras Network Learner node, the loss function is defined by selecting a loss for each feedforward subnetwork:
- Categorical Cross Entropy for the notes
- MSE for the duration and the offset difference
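In plain Keras, this per-output loss selection corresponds to compiling the model above with a loss dictionary keyed by the output layer names (the optimizer choice is an assumption, as the workflow does not state it here):

```python
model.compile(
    optimizer="adam",  # assumption; not specified in the description
    loss={
        "notes_out": "categorical_crossentropy",  # classification over the note vocabulary
        "duration_out": "mse",                    # regression on the duration
        "offset_out": "mse",                      # regression on the offset difference
    },
)
```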
Workflow
Train RNN to generate piano music
Used extensions & nodes
Created with KNIME Analytics Platform version 4.4.1