Spark Logistic Regression Learner (MLlib)

This node applies the Apache Spark MLlib Logistic Regression algorithm. It outputs the learned model for later application.

Please note that all data must be numeric, including the label column. Use the Spark Category To Number nodes to convert nominal values to numeric columns.
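For illustration only (this is not the node's implementation), converting a nominal column to numeric indices in Spark boils down to something like the following Scala sketch, where sc is an assumed SparkContext:

  val colors  = sc.parallelize(Seq("red", "green", "blue", "red"))
  val index   = colors.distinct().zipWithIndex().collectAsMap()  // e.g. red -> 0, green -> 1, blue -> 2
  val numeric = colors.map(c => index(c).toDouble)               // numeric column usable for learning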

For more details about linear methods in Spark see the Linear methods section of the MLlib documentation.

Use the Spark Predictor node to apply the learned model to unseen data.
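The sketch below shows roughly what this node does under the hood with the RDD-based MLlib API (here the L-BFGS variant; the option values are placeholders rather than the node's defaults, and trainingData is an assumed RDD of LabeledPoint):

  import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
  import org.apache.spark.mllib.regression.LabeledPoint
  import org.apache.spark.rdd.RDD

  def learn(trainingData: RDD[LabeledPoint]) = {
    val learner = new LogisticRegressionWithLBFGS()
    learner.setIntercept(true)        // "Add intercept" option
    learner.setValidateData(true)     // "Validate data" option
    learner.optimizer
      .setNumIterations(100)          // "Number of iterations"
      .setRegParam(0.01)              // "Regularization" parameter r
      .setConvergenceTol(1e-4)        // "Tolerance" (L-BFGS only)
      .setNumCorrections(10)          // "Number of corrections" (L-BFGS only)
    learner.run(trainingData)         // returns the learned LogisticRegressionModel
  }

The resulting model can then be applied to new feature vectors via model.predict(features), which is what the Spark Predictor node does for you.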

Options

Regularizer
The purpose of the regularizer is to encourage simple models and avoid overfitting. For more details on supported regularizers see the Regularizers section of the MLlib documentation.
Regularization
The fixed regularization parameter r >= 0 defines the trade-off between the two goals of minimizing the loss (i.e., the training error) and minimizing model complexity (i.e., avoiding overfitting).
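Concretely, the MLlib linear-methods documentation formulates the training objective as (writing r for the regularization parameter, as above):

  f(w) = r \, R(w) + \frac{1}{n} \sum_{i=1}^{n} L(w; x_i, y_i), \qquad r \ge 0

where L is the loss function evaluated on the n training examples and R(w) is the regularizer; a larger r shifts the trade-off towards simpler models.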
Number of iterations
The number of iterations the method should run.
Optimization Method
Under the hood, linear methods use convex optimization to minimize the objective function. MLlib uses two methods, Stochastic Gradient Descent (SGD) and Limited-memory BFGS (L-BFGS), which are described in detail in the MLlib optimization section.
Number of corrections
The number of corrections used in the L-BFGS update. Only available for L-BFGS.
Tolerance
The convergence tolerance. The tolerance determines when the iteration terminates, based on the following logic (illustrated by the sketch below):
  • If the norm of the new solution vector is > 1, the difference between solution vectors is compared to the relative tolerance, i.e. it is normalized by the norm of the new solution vector.
  • If the norm of the new solution vector is <= 1, the difference between solution vectors is compared to the absolute tolerance, i.e. without normalization.
Only available for L-BFGS.
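A minimal sketch of this check in plain Scala (illustration only, not MLlib's actual code):

  // oldW / newW are the solution vectors of two consecutive iterations
  def converged(oldW: Array[Double], newW: Array[Double], tol: Double): Boolean = {
    def norm(v: Array[Double]) = math.sqrt(v.map(x => x * x).sum)
    val diffNorm = norm(newW.zip(oldW).map { case (a, b) => a - b })
    if (norm(newW) > 1.0) diffNorm < tol * norm(newW)  // relative tolerance
    else diffNorm < tol                                // absolute tolerance
  }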
Loss function
For more details on the supported loss functions see the Loss function section of the MLlib documentation.
Step size
The initial step size for the first SGD step. In subsequent steps, the step size decreases as stepSize/sqrt(t). Only available for SGD.
Fraction
The fraction of data to be used for each SGD iteration. The default of 1.0 corresponds to deterministic/classical gradient descent. Only available for SGD.
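Taken together, the SGD-specific options map onto the MLlib optimizer roughly as in this sketch (values are placeholders, not the node's defaults):

  import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
  import org.apache.spark.mllib.optimization.SquaredL2Updater

  val sgdLearner = new LogisticRegressionWithSGD()
  sgdLearner.optimizer
    .setNumIterations(100)             // "Number of iterations"
    .setStepSize(1.0)                  // "Step size" (SGD only)
    .setMiniBatchFraction(1.0)         // "Fraction" (SGD only); 1.0 = classical gradient descent
    .setRegParam(0.01)                 // "Regularization"
    .setUpdater(new SquaredL2Updater)  // e.g. an L2 "Regularizer"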
Use feature scaling
Select this option to perform feature scaling before model training. Scaling reduces the condition number of the problem, which can help the optimizer converge significantly faster. The scaling correction is translated back into the resulting model weights, so it is transparent to users.
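Conceptually this corresponds to standardizing the feature vectors before training, e.g. with MLlib's StandardScaler. The node handles this internally, so the sketch below (reusing the assumed trainingData RDD[LabeledPoint] from above) is illustration only:

  import org.apache.spark.mllib.feature.StandardScaler
  import org.apache.spark.mllib.regression.LabeledPoint

  val scaler = new StandardScaler(withMean = true, withStd = true)
    .fit(trainingData.map(_.features))   // learn per-feature mean and standard deviation
  val scaled = trainingData.map(p => LabeledPoint(p.label, scaler.transform(p.features)))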
Add intercept
Select this option to add an intercept term to the model.
Validate data
Select this option if the algorithm should validate data before training.
Class column
The classification column. Must be numeric.
Feature Columns
The feature columns to learn the model from. Supports only numeric columns.
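In MLlib terms, the selected class column becomes the label and the selected feature columns become the feature vector of each LabeledPoint. A sketch, assuming rows is an RDD[Array[Double]] with the class value in the first position:

  import org.apache.spark.mllib.regression.LabeledPoint
  import org.apache.spark.mllib.linalg.Vectors

  val trainingData = rows.map { values =>
    LabeledPoint(values(0), Vectors.dense(values.drop(1)))  // label = class column, features = remaining columns
  }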

Input Ports

Input Spark DataFrame/RDD

Output Ports

Spark MLlib Logistic Regression Model

Views

This node has no views
