Keras Network Learner

This node performs supervised learning on a Keras deep learning network.

Options

General Settings

Back end
The deep learning back end which is used to train the input network.
Epochs
The number of iterations over the input training data.
Training batch size
The number of training data rows that are used for a single gradient update during training.
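Together, the number of rows and the batch size determine how many gradient updates happen per epoch. A minimal sketch (the helper name is illustrative, not part of the node):

```python
import math

def updates_per_epoch(n_rows: int, batch_size: int) -> int:
    """Number of gradient updates performed in one epoch:
    one update per batch, with a possibly smaller final batch."""
    return math.ceil(n_rows / batch_size)

# 1000 training rows with a batch size of 32 -> 32 updates per epoch
print(updates_per_epoch(1000, 32))  # -> 32
```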
Validation batch size
The number of validation data rows that are processed at a time during validation. This option is only enabled if the node's validation data input port is connected.
Shuffle training data before each epoch
Shuffling the training data often improves the learning process, because updating the network with the same batches in the same order in each epoch can have a detrimental effect on the convergence speed of the training.
Use random seed
If the checkbox is selected, the random seed displayed in the field on the right is used to perform the shuffling of the training data. Clicking the "New seed" button generates a new random seed. Leaving the checkbox unselected corresponds to creating a new seed for each execution of the node. NOTE: If your network contains weights that are initialized randomly, we currently don't seed this initialization. This means that you will very likely receive slightly different results for multiple model runs even though you are using the random seed for the shuffling of the training data.
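Conceptually, seeded shuffling means the order of training batches is reproducible across runs. A plain-Python sketch of the idea (the function is illustrative, not the node's implementation):

```python
import random

def shuffled_batches(n_rows: int, batch_size: int, seed=None):
    """Shuffle row indices (reproducibly if a seed is given)
    and group them into batches for one epoch."""
    indices = list(range(n_rows))
    random.Random(seed).shuffle(indices)  # fixed seed -> same order every run
    return [indices[i:i + batch_size] for i in range(0, n_rows, batch_size)]

a = shuffled_batches(10, 4, seed=42)
b = shuffled_batches(10, 4, seed=42)
assert a == b  # the same seed reproduces the same batch order
```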

Optimizer Settings

Optimizer
The optimization algorithm used to train the network. Please refer to the Keras documentation for further information on the available optimizers and their parameterization.
Clip norm
If checked, gradients whose L2 norm exceeds the given norm will be clipped to that norm.
Clip value
If checked, gradients whose absolute value exceeds the given value will be clipped to that value (or the negated value, respectively).
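The two clipping options behave differently: clip norm rescales the whole gradient vector, while clip value clamps each component independently. A plain-Python sketch of both (illustrative only; Keras applies the equivalent tensor operations internally):

```python
import math

def clip_by_norm(grads, max_norm):
    """Rescale the gradient vector if its L2 norm exceeds max_norm ("Clip norm")."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grads]
    return grads

def clip_by_value(grads, max_value):
    """Clamp each component to [-max_value, max_value] ("Clip value")."""
    return [max(-max_value, min(max_value, g)) for g in grads]

print(clip_by_norm([3.0, 4.0], 1.0))    # norm 5 exceeds 1 -> rescaled to unit norm
print(clip_by_value([3.0, -4.0], 2.5))  # each component clamped independently
```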

Learning Behavior

Terminate on NaN loss
If checked, training is terminated if a NaN (not a number) training loss is encountered. Corresponds to the TerminateOnNaN Keras callback.
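The check itself is simple: after each batch the training loss is tested for NaN. A sketch of the condition:

```python
import math

def should_terminate(loss: float) -> bool:
    """TerminateOnNaN-style check: stop training when the loss is NaN,
    since further updates with a NaN loss cannot recover the network."""
    return math.isnan(loss)

assert should_terminate(float("nan"))
assert not should_terminate(0.25)
```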
Terminate on training stagnation (early stopping)
If checked, training is terminated if the monitored quantity has stopped improving.
  • Monitored quantity: the quantity on which early stopping is evaluated. Validation quantities are available for selection if the node's validation data input port is connected.
  • Min. delta: minimum change of the monitored quantity that qualifies as an improvement. Absolute changes below this value are considered stagnation.
  • Patience: number of epochs with no improvements after which training will be stopped.
Corresponds to the EarlyStopping Keras callback.
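A simplified plain-Python sketch of how min. delta and patience interact for a quantity that is minimized (the exact Keras callback semantics may differ slightly between versions):

```python
def early_stopping(losses, min_delta=0.0, patience=0):
    """Return the epoch index at which training would halt,
    or None if stopping never triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if best - loss > min_delta:  # improvement large enough to count
            best = loss
            wait = 0
        else:                        # stagnation
            wait += 1
            if wait >= patience:
                return epoch
    return None

# improvement stops after epoch 2; with patience=2 training halts at epoch 4
print(early_stopping([1.0, 0.8, 0.7, 0.7, 0.7, 0.7], min_delta=0.01, patience=2))
```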
Reduce learning rate on training stagnation
If checked, the learning rate is reduced if the monitored quantity has stopped improving.
  • Monitored quantity: the quantity on which learning rate reduction is evaluated. Validation quantities are available for selection if the node's validation data input port is connected.
  • Factor: factor by which the learning rate will be reduced (new_lr = lr * factor).
  • Patience: number of epochs with no improvements after which the learning rate will be reduced.
  • Epsilon: threshold for measuring the new optimum, to only focus on significant changes.
  • Cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced.
  • Min. learning rate: lower bound of the learning rate. The learning rate is not reduced below this value.
Corresponds to the ReduceLROnPlateau Keras callback.

Input Data

Conversion
The converter that is used to transform the selected input columns into a format that is accepted by the respective network input specification.
Input columns
The table columns that are part of the respective network input. The availability of a column depends on the currently selected input converter.

Target Data

Conversion
The converter that is used to transform the selected target columns into a format that is accepted by the respective network target specification.
Target columns
The table columns that are part of the respective network target. The availability of a column depends on the currently selected target converter.
Standard loss function
Choose one of the loss functions provided by Keras for your target.
Custom loss function
Define your own loss function as a Python snippet. The function custom_loss is used as the loss function for the target. It must always be present, so its signature is not editable.
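A snippet of the expected shape might look like the following. This is a plain-Python sketch computing mean squared error for illustration; in the node, y_true and y_pred are Keras tensors, so backend tensor operations would be used instead of Python arithmetic:

```python
def custom_loss(y_true, y_pred):
    """Mean squared error, written element-wise for illustration.
    In the node's snippet, this body would use Keras backend ops on tensors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(custom_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # mean of the squared errors
```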

GPU Selection

CUDA visible devices
Content of the environment variable CUDA_VISIBLE_DEVICES, which identifies the GPUs that are visible to the node. If no value is given, the environment variable is not set, which results in all GPUs being visible to the node. Otherwise, the value should be a comma-separated list of GPU identifiers (see CUDA Environment Variables).
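Outside the node, the same effect can be achieved by setting the variable before the deep learning back end initializes, for example:

```python
import os

# Restrict this process to GPUs 0 and 2 (identifiers as reported by nvidia-smi).
# An empty string would hide all GPUs and force CPU execution.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```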

Input Ports

Icon
The input Keras deep learning network.
Icon
The training data table that contains training and target columns.
Icon
The validation data table (optional). Must have the same column names and types in the same order as the training data table.

Output Ports

Icon
The trained output Keras deep learning network.

Views

Learning Monitor
Shows information about the current learning run and provides an option to stop training early. If training is stopped before it is finished, the model is saved in its current state.
