RProp MLP Learner

Implementation of the RProp algorithm for multilayer feedforward networks. RPROP performs a local adaptation of the weight updates according to the behavior of the error function. For further details see: Riedmiller, M., Braun, H.: "A direct adaptive method for faster backpropagation learning: the RPROP algorithm", Proceedings of the IEEE International Conference on Neural Networks (ICNN) (Vol. 16, pp. 586-591). Piscataway, NJ: IEEE. This node provides a view of the error plot.
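The local adaptation mentioned above works per weight: each weight keeps its own step size, which grows while the error gradient keeps its sign and shrinks when the sign flips. A minimal NumPy sketch of one such update (an iRprop--style variant; the function name and the hyperparameter defaults η+ = 1.2, η- = 0.5 and the step-size bounds are the commonly cited values from the paper, not taken from this node's implementation):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, delta,
               eta_plus=1.2, eta_minus=0.5,
               delta_max=50.0, delta_min=1e-6):
    """One RPROP update for an array of weights (illustrative sketch)."""
    sign_change = grad * prev_grad          # > 0: same sign, < 0: sign flipped
    grew = sign_change > 0
    shrank = sign_change < 0
    # Adapt each weight's individual step size from the gradient's sign behavior
    delta = np.where(grew, np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(shrank, np.maximum(delta * eta_minus, delta_min), delta)
    grad = np.where(shrank, 0.0, grad)      # skip the update after a sign flip
    w = w - np.sign(grad) * delta           # step depends only on the sign
    return w, grad, delta
```

Note that the magnitude of the gradient never enters the weight update, only its sign; this is what makes RPROP robust to badly scaled error surfaces.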

Options

Maximum number of iterations
The maximum number of learning iterations.
Number of hidden layers
Specifies the number of hidden layers in the architecture of the neural network.
Number of hidden neurons per layer
Specifies the number of neurons contained in each hidden layer.
Class column
Choose the column that contains the target variable: it can either be nominal or numerical. All nominal class values are extracted and assigned to output neurons. If you use a numerical target variable (regression), please make sure it is normalized!
Ignore missing values
If this checkbox is set, rows with missing values will not be used for training.
Use seed for random initialization
If this checkbox is set, a seed (see next field) is used to initialize the weights and thresholds.
Random seed
Seed for the random number generator.
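Regarding the normalization required for a numerical target (see the Class column option above), a minimal sketch of one common choice, min-max scaling into [0, 1] (the function name is hypothetical; this is not the node's own normalization, which would typically be done with a separate normalizer node):

```python
import numpy as np

def min_max_normalize(y):
    """Scale a numerical target column into the range [0, 1]."""
    y = np.asarray(y, dtype=float)
    lo, hi = y.min(), y.max()
    return (y - lo) / (hi - lo)
```

Remember to apply the inverse transformation to the network's predictions to recover values on the original scale.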

Input Ports

Data table with training data

Output Ports

RProp-trained neural network

Views

Error Plot
Displays the error for each iteration.
