DL4J Feedforward Learner (Pretraining) (legacy)

This node performs unsupervised pretraining of a feedforward deep learning model. The learning procedure can be adjusted using several training methods and parameters, which can be customized in the node dialog. Additionally, the node supplies further methods for regularization, gradient normalization, and learning refinements. The learner node automatically adds an output layer to the network configuration, which can also be configured in the node dialog. For pretraining, the network architecture needs to contain layers that can be trained in an unsupervised manner, for example an RBM or an Autoencoder layer. Usually, this node is used together with a classification learner node, which performs finetuning of the output layer after the network has been pretrained. The output of the node is a pretrained deep learning model.
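
As a rough illustration only, the sketch below shows the kind of Deeplearning4J (0.x) configuration such a pretraining setup corresponds to: an unsupervised-trainable layer (here an RBM) followed by an output layer, with pretraining enabled. The layer sizes, parameter values, and the exact builder calls are illustrative assumptions, not the node's actual implementation.

    import org.deeplearning4j.nn.api.OptimizationAlgorithm;
    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.conf.layers.RBM;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class PretrainingSketch {
        public static MultiLayerNetwork buildNetwork() {
            // Hypothetical sizes: 784 input features, 256 hidden units, 10 output units.
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .seed(123)          // "Random Seed"
                    .iterations(1)      // "Number of Training Iterations" per batch
                    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
                    .learningRate(0.01) // "Global Learning Rate"
                    .list()
                    // A layer that can be trained unsupervised (an Autoencoder layer would also work).
                    .layer(0, new RBM.Builder()
                            .nIn(784).nOut(256)
                            .lossFunction(LossFunctions.LossFunction.KL_DIVERGENCE)
                            .build())
                    // Output layer, corresponding to the one the learner node adds automatically.
                    .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                            .nIn(256).nOut(10)
                            .build())
                    .pretrain(true)   // unsupervised pretraining enabled
                    .backprop(false)  // finetuning is left to the classification learner node
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            return net;
        }
    }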

The KNIME Deeplearning4J Integration has been marked as legacy with KNIME Analytics Platform 5.0 and will be deprecated in a future version. If you are using this extension in a production workflow, consider switching to one of the other deep learning integrations available in KNIME Analytics Platform.

Options

Learning Parameters

Number of Training Iterations
The number of parameter updates that will be done on one batch of input data.
Optimization Algorithm
The type of optimization method to use. The following algorithms are available:


For Line Gradient Descent, Conjugate Gradient Descent, and LBFGS the maximum number of line search iterations can be specified.
Updater
The type of updater to use. The updater specifies how the raw gradients are modified. If this option is unchecked, the node tries to use the updater from a previously trained network, if available; otherwise the default (NESTEROVS) is used. Some of the updater types have additional coefficients which can be adjusted. The following methods are available:

  • SGD
  • ADAM (ADAM Mean Decay, ADAM Var Decay)
  • ADADELTA (RHO)
  • NESTEROVS (Momentum, Schedule)
    Nesterovs Schedule:
    Schedule - Schedule for momentum value change during training. This is specified in the following format:
    'iteration':'momentum rate','iteration':'momentum rate' ...
    This creates a map from iteration to the momentum rate that should be used. E.g. '2:0.8' means that the rate '0.8' should be used in iteration '2' (see the sketch after this list). Leave empty if you do not want to use a schedule.
  • ADAGRAD
  • RMSPROP (RMS Decay)

An explanation of these methods and their coefficients can be found here.
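
As an illustration of the momentum schedule format, the following minimal sketch parses a schedule string such as "2:0.8,5:0.5" into the iteration-to-momentum map it describes. The parsing code is a hypothetical example, not the node's implementation.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MomentumScheduleExample {

        /** Parses a schedule string such as "2:0.8,5:0.5" into an iteration-to-momentum map. */
        static Map<Integer, Double> parseSchedule(String schedule) {
            Map<Integer, Double> result = new LinkedHashMap<>();
            if (schedule == null || schedule.trim().isEmpty()) {
                return result; // an empty schedule means "do not use a schedule"
            }
            for (String entry : schedule.split(",")) {
                String[] parts = entry.trim().split(":");
                result.put(Integer.valueOf(parts[0].trim()), Double.valueOf(parts[1].trim()));
            }
            return result;
        }

        public static void main(String[] args) {
            // '2:0.8' means rate 0.8 is used in iteration 2, '5:0.5' means rate 0.5 in iteration 5.
            System.out.println(parseSchedule("2:0.8,5:0.5")); // prints {2=0.8, 5=0.5}
        }
    }
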
Random Seed
The seed value to use so that training runs can be compared. Any integer may be used.
Regularization
The L1 and L2 regularization coefficients.
Gradient Normalization
Gradient normalization strategies. These are applied on raw gradients, before the gradients are passed to the updater. An explanation can be found here.

  • Renormalize L2 Per Layer
  • Renormalize L2 Per Param Type
  • Clip Element Wise Absolute Value
  • Clip L2 Per Layer
  • Clip L2 Per Param Type

For 'Clip Element Wise Absolute Value', 'Clip L2 Per Layer', and 'Clip L2 Per Param Type' you can additionally specify a threshold value.
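
For reference, the sketch below shows how L1/L2 regularization coefficients and a gradient normalization strategy with a threshold would be set on a plain Deeplearning4J (0.x) configuration builder; the concrete values are illustrative assumptions, not recommendations from this node.

    import org.deeplearning4j.nn.conf.GradientNormalization;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;

    public class RegularizationSketch {
        public static NeuralNetConfiguration.Builder baseBuilder() {
            return new NeuralNetConfiguration.Builder()
                    .regularization(true) // enable the penalty terms
                    .l1(1e-5)             // L1 coefficient (illustrative value)
                    .l2(1e-4)             // L2 coefficient (illustrative value)
                    // The clipping strategies additionally take a threshold value.
                    .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
                    .gradientNormalizationThreshold(1.0);
        }
    }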

Global Parameters

Global Learning Rate
The learning rate for the whole network. If not set, the learning rate specified in each layer will be used.
Global Drop-Out Rate
The drop-out rate for the whole network. If not set, the drop-out rate specified in each layer will be used.
Use Drop-Connect?
Whether to use Drop-Connect.
Global Weight Initialization Strategy
The weight initialization strategy to use for the whole network.
Global Bias - Learning Rate
The bias learning rate for the whole network, in case you want to use a different learning rate for the bias.
Global Bias - Initialization
The value to initialize all biases with.
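
The sketch below shows how these global settings would map onto a plain Deeplearning4J (0.x) configuration builder; all values are illustrative assumptions, and the node dialog ultimately controls which of them are applied.

    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.weights.WeightInit;

    public class GlobalParameterSketch {
        public static NeuralNetConfiguration.Builder globalDefaults() {
            return new NeuralNetConfiguration.Builder()
                    .learningRate(0.01)            // Global Learning Rate
                    .dropOut(0.5)                  // Global Drop-Out Rate
                    .useDropConnect(true)          // Use Drop-Connect?
                    .weightInit(WeightInit.XAVIER) // Global Weight Initialization Strategy
                    .biasLearningRate(0.02)        // Global Bias - Learning Rate
                    .biasInit(0.0);                // Global Bias - Initialization
        }
    }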

Data Parameters

Batch Size
The number of examples used for one minibatch.
Epochs
The number of epochs to train the network, i.e. the number of passes over the whole data set.
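
For illustration, a minimal training loop over epochs and minibatches in plain Deeplearning4J (0.x) could look like the sketch below; the MnistDataSetIterator serves only as a placeholder data source, and batch size and epoch count are made-up values.

    import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

    public class TrainingLoopSketch {
        public static void train(MultiLayerNetwork net) throws Exception {
            int batchSize = 64; // "Batch Size": examples per minibatch
            int epochs = 10;    // "Epochs": passes over the whole data set

            // MNIST iterator used purely as a placeholder data source.
            DataSetIterator data = new MnistDataSetIterator(batchSize, true, 123);
            for (int epoch = 0; epoch < epochs; epoch++) {
                data.reset();  // start the next pass at the beginning of the data set
                net.fit(data); // one epoch: iterates over all minibatches
            }
        }
    }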

Column Selection

Feature Column Selection
The columns of the input table containing the training data for the network.

Output Layer Parameter

Number of Output Units
The number of output units of the output layer. This value specifies the length of the output vector of the network.
Learning Rate
The learning rate that should be used for this layer.
Weight Initialization Strategy
The strategy which will be used to set the initial weights for this layer.
Loss Function
The type of loss function that should be used for this layer.
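
As a sketch, an output layer with these parameters would be configured roughly as follows in plain Deeplearning4J (0.x); the values shown are illustrative assumptions, not the node's defaults.

    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.weights.WeightInit;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class OutputLayerSketch {
        public static OutputLayer buildOutputLayer(int numInputs) {
            return new OutputLayer.Builder(LossFunctions.LossFunction.MSE) // Loss Function
                    .nIn(numInputs)
                    .nOut(10)                      // Number of Output Units (length of the output vector)
                    .learningRate(0.01)            // Learning Rate for this layer
                    .weightInit(WeightInit.XAVIER) // Weight Initialization Strategy
                    .build();
        }
    }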

Input Ports

  • Finished configuration of a deep learning network.
  • Data table containing training data.

Output Ports

  • Trained Deep Learning Model

Views

Learning Status
Shows information about the current learning run. Has an option for early stopping of training. If training is stopped before the last epoch, the model is saved in its current state.
