DL4J Feedforward Learner (Classification) (legacy)

This node performs supervised training of a feedforward deep learning model for classification. The learning procedure can be adjusted using several training methods and parameters, which can be customized in the node dialog. Additionally, the node supplies further methods for regularization, gradient normalization, and learning refinements. The learner node automatically adds an output layer to the network configuration, which can also be configured in the node dialog. For classification, the output layer always uses 'softmax' as the activation function, and the number of outputs is automatically set to match the number of unique labels used for training. The output of the node is a trained deep learning model that can be used to predict the labels of unseen data instances.
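For orientation, the snippet below is a minimal sketch of a roughly comparable configuration written directly against the Deeplearning4j Java API. It is not the node's source code; the class name, hidden layer size, updater values, and regularization coefficient are illustrative assumptions.

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.nn.weights.WeightInit;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Nesterovs;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class FeedforwardClassifierSketch {
        public static MultiLayerNetwork build(int numFeatures, int numClasses) {
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .seed(123)                        // random seed for reproducible training runs
                    .updater(new Nesterovs(0.1, 0.9)) // NESTEROVS updater: learning rate and momentum
                    .l2(1e-4)                         // L2 regularization coefficient
                    .list()
                    .layer(0, new DenseLayer.Builder()   // hidden layer, normally configured by upstream layer nodes
                            .nIn(numFeatures).nOut(100)
                            .activation(Activation.RELU)
                            .weightInit(WeightInit.XAVIER)
                            .build())
                    .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                            .activation(Activation.SOFTMAX) // classification output always uses softmax
                            .nOut(numClasses)               // one output per unique label
                            .build())
                    .build();
            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            return net;  // net.fit(trainingData) would run the supervised training
        }
    }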

The KNIME Deeplearning4J Integration has been marked as legacy with KNIME Analytics Platform 5.0 and will be deprecated in a future version. If you are using this extension in a production workflow, consider switching to one of the other deep learning integrations available in KNIME Analytics Platform.

Options

Learning Parameters

Number of Training Iterations
The number of parameter updates that will be done on one batch of input data.
Optimization Algorithm
The type of optimization method to use. The following algorithms are available:


For Line Gradient Descent, Conjugate Gradient Descent, and LBFGS the maximum number of line search iterations can be specified.
Do Finetuning?
Whether to do finetuning. If this option is chosen, the learner will perform Stochastic Gradient Descent on the output layer of the network. This is usually done after the network has been pretrained using an unsupervised learning algorithm. An example of a corresponding network architecture would be a Deep Belief Network consisting of RBMs.
Updater
The type of updater to use. These specify how the raw gradients will be modified. If this option is unchecked, the node tries to use an updater from a previously trained network if available. If none is available, the default (NESTEROVS) will be used. Some of the updater types have additional coefficients which can be adjusted. The following methods are available:

  • SGD
  • ADAM (ADAM Mean Decay, ADAM Var Decay)
  • ADADELTA (RHO)
  • NESTEROVS (Momentum, Schedule)
    Nesterovs Schedule:
    Schedule - Schedule for momentum value change during training. This is specified in the following format:
    'iteration':'momentum rate','iteration':'momentum rate' ...
    This creates a map which maps each iteration to the momentum rate that should be used. E.g. '2:0.8' means that the rate '0.8' should be used in iteration '2' (a multi-entry example is shown after this list). Leave empty if you do not want to use a schedule.
  • ADAGRAD
  • RMSPROP (RMS Decay)

An explanation of these methods and their coefficients can be found in the Deeplearning4j documentation.
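For instance, a schedule string with several entries could look like the following; the iteration numbers and momentum rates are illustrative:

    0:0.5,5:0.8,20:0.9,100:0.99

This maps iteration 0 to the rate 0.5, iteration 5 to 0.8, iteration 20 to 0.9, and iteration 100 to 0.99.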
Random Seed
The seed value used for random number generation, so that training runs can be reproduced and compared. Any integer may be used.
Regularization
The L1 and L2 regularization coefficients.
Gradient Normalization
Gradient normalization strategies. These are applied to the raw gradients before the gradients are passed to the updater. An explanation can be found in the Deeplearning4j documentation.

  • Renormalize L2 Per Layer
  • Renormalize L2 Per Param Type
  • Clip Element Wise Absolute Value
  • Clip L2 Per Layer
  • Clip L2 Per Param Type

For 'Clip Element Wise Absolute Value', 'Clip L2 Per Layer', and 'Clip L2 Per Param Type' you can additionally specify a threshold value.
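As a rough illustration of the clipping strategies with a threshold, the sketch below shows how such a setting could be expressed directly in the Deeplearning4j API; the class name and the threshold value of 1.0 are illustrative assumptions.

    import org.deeplearning4j.nn.conf.GradientNormalization;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;

    public class GradientClippingSketch {
        public static NeuralNetConfiguration.Builder clippedBuilder() {
            // Clip the per-layer L2 norm of the raw gradients to the threshold
            // before they are handed to the updater (SGD, ADAM, ...).
            return new NeuralNetConfiguration.Builder()
                    .gradientNormalization(GradientNormalization.ClipL2PerLayer)
                    .gradientNormalizationThreshold(1.0);
        }
    }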

Global Parameters

Global Learning Rate
The learning rate for the whole network. If this option is not used, the learning rate specified in each layer will be used.
Global Drop-Out Rate
The drop-out rate for the whole network. If this option is not used, the drop-out rate specified in each layer will be used.
Use Drop-Connect?
Whether to use Drop Connect.
Global Weight Initialization Strategy
The weight initialization strategy to use for the whole network.
Global Bias - Learning Rate
The bias learning rate for the whole network if you want to use a different learning rate for the bias.
Global Bias - Initialization
The value to initialize all biases with.
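To illustrate how global settings relate to per-layer settings, the sketch below shows where network-wide defaults and layer-specific values live in a plain Deeplearning4j configuration; the class name and all numeric values are illustrative assumptions, and the precedence rules applied by the node are the ones described above.

    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.weights.WeightInit;
    import org.nd4j.linalg.activations.Activation;

    public class GlobalParameterSketch {
        public static NeuralNetConfiguration.ListBuilder configure() {
            return new NeuralNetConfiguration.Builder()
                    .weightInit(WeightInit.XAVIER)   // network-wide weight initialization strategy
                    .dropOut(0.5)                    // network-wide drop-out rate
                    .biasInit(0.0)                   // network-wide bias initialization
                    .list()
                    .layer(0, new DenseLayer.Builder()
                            .nIn(10).nOut(20)
                            .activation(Activation.RELU)
                            .dropOut(0.2)            // layer-specific drop-out rate
                            .build());
        }
    }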

Data Parameters

Batch Size
The number of examples used for one minibatch.
Epochs
The number of epochs to train the network, i.e. the number of training passes over the whole data set.
Size of Input Image
If the input table contains images and a convolutional network is used, the dimensionality of the images needs to be specified. The value consists of three numbers separated by commas, specifying the dimensionality of the used images (size x, size y, number of channels), e.g. 64,64,3.
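For example, the value 64,64,3 describes 64 pixels in x, 64 pixels in y, and three channels (e.g. RGB). In a plain Deeplearning4j configuration this roughly corresponds to an input type declaration like the one below; the class name is an illustrative assumption.

    import org.deeplearning4j.nn.conf.inputs.InputType;

    public class InputSizeSketch {
        // "64,64,3" in the node dialog: width 64, height 64, 3 channels.
        // InputType.convolutional expects (height, width, channels).
        public static InputType imageInput() {
            return InputType.convolutional(64, 64, 3);
        }
    }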

Column Selection

Label Column
The column of the input table containing labels for supervised learning.
Feature Column Selection
The columns of the input table containing the training data for the network.

Output Layer Parameter

Learning Rate
The learning rate that should be used for this layer.
Weight Initialization Strategy
The strategy which will be used to set the initial weights for this layer.
Loss Function
The type of loss function that should be used for this layer.

Input Ports

Finished configuration of a deep learning network.
Data table containing training data.

Output Ports

Trained Deep Learning Model

Views

Learning Status
Shows information about the current learning run and offers an option to stop training early. If training is stopped before the last epoch, the model is saved in its current state.
