Spark Gradient Boosted Trees Learner (Regression)

Gradient Boosted Trees are ensembles of Decision Trees that are trained iteratively to minimize a loss function. This node uses the spark.ml Gradient Boosted Trees implementation to train a regression model in Spark. The target column must be numerical, whereas the feature columns can be either nominal or numerical.

Use the Spark Predictor (Regression) node to apply the learned model to unseen data.
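For orientation, the following is a minimal PySpark sketch of what this Learner and the Predictor node do with the spark.ml API directly. The DataFrames train_df and test_df and the column names "target" and "features" are hypothetical placeholders; the node assembles the feature vector for you.

```python
from pyspark.ml.regression import GBTRegressor

# Hypothetical inputs: train_df / test_df already contain a numeric
# "target" column and an assembled "features" vector column.
gbt = GBTRegressor(
    labelCol="target",       # numeric target column
    featuresCol="features",  # feature vector column
    maxIter=20,              # "Number of models": trees are trained sequentially
)
model = gbt.fit(train_df)               # what this Learner node computes
predictions = model.transform(test_df)  # what the Predictor node computes
predictions.select("target", "prediction").show(5)
```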

Please refer to the Spark documentation for a full description of the underlying algorithm.

This node requires at least Apache Spark 2.0.

Options

Settings

Target column
A numeric column that contains the values to train with. Rows with missing values in this column will be ignored during model training.
Feature Columns
The feature columns to learn the model with. Both nominal and numeric columns are supported. The dialog allows you to select the columns manually (by moving them to the right panel) or via a wildcard/regex selection (all columns whose names match the wildcard/regex are used for learning). In case of manual selection, the behavior for new columns (i.e. columns that were not available when you configured the node) can be specified as either Enforce exclusion (new columns are excluded and therefore not used for learning) or Enforce inclusion (new columns are included and therefore used for learning).
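Because spark.ml tree learners expect a single vector column, nominal columns have to be indexed and all features assembled before training; the node takes care of this internally. A sketch of the equivalent preprocessing, with hypothetical column names "color", "width" and "height":

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.regression import GBTRegressor

indexer = StringIndexer(inputCol="color", outputCol="color_idx")  # nominal -> index
assembler = VectorAssembler(
    inputCols=["color_idx", "width", "height"],  # indexed nominal + numeric columns
    outputCol="features",
)
gbt = GBTRegressor(labelCol="target", featuresCol="features")
pipeline = Pipeline(stages=[indexer, assembler, gbt])
model = pipeline.fit(train_df)  # train_df is a hypothetical training DataFrame
```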
Number of models
The number of Decision Tree models in the ensemble model. Increasing this number makes the model more expressive and improves training data accuracy. However, increasing it too much may lead to overfitting. Also, increasing this number directly increases the time required to train the ensemble, because the trees need to be trained sequentially.
Loss function
The loss function that the learning algorithm minimizes:
  • squared (default): Squared Error, also known as L2 loss.
  • absolute: Absolute Error, also known as L1 loss.
More information is available in the Spark documentation.
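In spark.ml this choice corresponds to the lossType parameter, as in this sketch (reusing the hypothetical columns from above):

```python
from pyspark.ml.regression import GBTRegressor

gbt = GBTRegressor(labelCol="target", featuresCol="features",
                   lossType="squared")  # L2 loss (default); "absolute" selects L1
```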
Max tree depth
Maximum depth of the Decision Trees. Must be >= 1.
Min rows per tree node
Minimum number of rows each tree node must have. If a split causes the left or right child node to have fewer rows, the split will be discarded as invalid. Must be >= 1.
Min information gain per split
Minimum information gain for a split to be considered.
Max number of bins
Number of bins to use when discretizing continuous features. Increasing the number of bins means that the algorithm will consider more split candidates and make more fine-grained decisions on how to split. However, it also increases the amount of computation and communication that needs to be performed and hence increases training time. Additionally, the number of bins must be at least the maximum number of distinct values for any nominal feature.
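The four tree-shape options above map onto spark.ml parameters roughly as follows (illustrative values, hypothetical column names as before):

```python
from pyspark.ml.regression import GBTRegressor

gbt = GBTRegressor(
    labelCol="target",
    featuresCol="features",
    maxDepth=5,              # "Max tree depth"
    minInstancesPerNode=1,   # "Min rows per tree node"
    minInfoGain=0.0,         # "Min information gain per split"
    maxBins=32,              # "Max number of bins"; must be at least the max
                             # number of distinct values of any nominal feature
)
```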

Advanced

Learning rate
Learning rate in the interval (0, 1] that shrinks the contribution of each Decision Tree in the ensemble. This parameter rarely needs tuning; decreasing it may help if the algorithm's behavior seems unstable (see the sketch at the end of this section).
Data sampling (rows)
Sampling the rows is also known as bagging, a very popular ensemble learning strategy. If sampling is disabled (default), then each Decision Tree is trained on the full data set. Otherwise each tree is trained with a different data sample that contains the configured fraction of rows of the original data.
Feature sampling
Feature sampling is also called the random subspace method or attribute bagging. Its best-known application is the Random Forest, but it can also be used for Gradient Boosted Trees. This option specifies the sample size for each split at a tree node (see the sketch after the random seed option below):
  • Auto: If "Number of models" is one, this is the same as "All"; otherwise "Square root" is used.
  • All (default): Each sample contains all features.
  • Square root: Sample size is sqrt(number of features).
  • Log2: Sample size is log2(number of features).
  • One third: Sample size is 1/3 of the number of features.
Use static random seed
Seed for generating random numbers. Randomness is used when sampling rows and features, as well as when binning numeric features during splitting.
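The Advanced options correspond to the following spark.ml parameters; a sketch with illustrative values (note that featureSubsetStrategy is exposed on GBTRegressor only in more recent Spark releases, and the column names remain hypothetical):

```python
from pyspark.ml.regression import GBTRegressor

gbt = GBTRegressor(
    labelCol="target",
    featuresCol="features",
    stepSize=0.1,                  # "Learning rate", in (0, 1]
    subsamplingRate=0.8,           # "Data sampling (rows)": fraction of rows per tree
    featureSubsetStrategy="sqrt",  # "Feature sampling": "auto", "all",
                                   # "sqrt", "log2" or "onethird"
    seed=42,                       # "Use static random seed"
)
```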

Input Ports

Input Spark DataFrame with training data.

Output Ports

Table with estimates of the importance of each feature. The features are listed in order of decreasing importance and are normalized to sum up to 1.
Spark ML Gradient Boosted Trees model (regression)
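The importance table can be reproduced from a fitted spark.ml model via its featureImportances attribute; a sketch (model is the fitted GBTRegressionModel from the first example; for a PipelineModel, use model.stages[-1]):

```python
feature_names = ["color_idx", "width", "height"]  # hypothetical, as above
importances = model.featureImportances.toArray()  # normalized to sum to 1
for name, score in sorted(zip(feature_names, importances),
                          key=lambda pair: -pair[1]):
    print(f"{name}\t{score:.4f}")
```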

Views

This node has no views


