XGBoost Linear Ensemble Learner (Regression)

This Node Is Deprecated. It is kept for backwards compatibility, but its use in new workflows is no longer recommended. The documentation below may be outdated.

Learns an XGBoost model with a linear booster for regression. XGBoost is a popular machine learning library based on the ideas of boosting. Check out the official documentation for tutorials on how XGBoost works. Since XGBoost requires its features to be single-precision floats, we automatically cast double-precision values to float, which can cause problems for numbers of extreme magnitude.
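As a rough illustration of what this node configures, here is a minimal sketch using the Python xgboost package (data and parameter values are illustrative, not the node's internal code), including the cast to single precision mentioned above:

```python
import numpy as np
import xgboost as xgb

# Toy regression data. XGBoost trains on single-precision floats,
# so double-precision inputs are cast down, which can lose precision
# for extreme values.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)).astype(np.float32)
y = (X @ np.array([1.0, -2.0, 0.5])).astype(np.float32)

params = {
    "booster": "gblinear",            # linear models instead of trees
    "objective": "reg:squarederror",  # named "reg:linear" in older releases
}
model = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=10)
print(model.predict(xgb.DMatrix(X))[:5])
```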

Options

Objective
One of
  • linear
  • logistic
  • gamma
  • poisson
  • tweedie
Tweedie regression variance
Controls the variance of the Tweedie distribution. Must be in the range (1, 2); the default is 1.5. Only used with the tweedie objective.
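In the xgboost library these choices correspond to the objective parameter ("reg:linear"/"reg:squarederror", "reg:logistic", "reg:gamma", "count:poisson", "reg:tweedie") plus tweedie_variance_power for the Tweedie case; a minimal sketch, assuming the Python package:

```python
# Tweedie regression: the variance power must lie strictly between 1 and 2.
params = {
    "booster": "gblinear",
    "objective": "reg:tweedie",
    "tweedie_variance_power": 1.5,  # the node's default
}
```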
Target column
The column containing the regression target.
Weight column
The column containing the row weights (also called sample weights or instance weights). Note that the selected column must not contain missing values.
Feature columns
Allows selecting which columns are used as features during training. Note that the domain of nominal features must contain all possible values; otherwise the node cannot be executed. Use the Domain Calculator node to compute any missing possible-value sets.
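Row weights correspond to xgboost's sample weights; a hedged sketch continuing the first example above, where w is a hypothetical weight vector:

```python
# w holds hypothetical per-row weights; it must not contain missing values.
w = np.ones(len(y), dtype=np.float32)
dtrain = xgb.DMatrix(X, label=y, weight=w)
```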
Boosting rounds
The number of models to train in the boosting ensemble.
Base score
The initial prediction score of all instances; this global bias will have little effect for a sufficiently large number of iterations.
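These two settings map to num_boost_round and base_score in the Python package; continuing the sketches above:

```python
params = {
    "booster": "gblinear",
    "objective": "reg:squarederror",
    "base_score": 0.5,  # global bias; matters little with many rounds
}
# Each boosting round fits one more additive update of the linear model.
model = xgb.train(params, dtrain, num_boost_round=100)
```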
Use static random seed
If checked, the seed displayed in the text field is used as the seed for randomized operations such as sampling. Otherwise a new seed is generated for each node execution. Note that the Shotgun updater is always non-deterministic, even if a static seed is set.
Manual number of threads
Allows specifying the number of threads to use for training. If the checkbox is not selected, the number of available cores is used.
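Both settings have direct xgboost counterparts; a minimal sketch, assuming the Python package:

```python
params = {
    "booster": "gblinear",
    "seed": 42,    # static seed; the shotgun updater stays nondeterministic anyway
    "nthread": 4,  # omit to use all available cores
}
```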

Booster

Lambda
L2 regularization term on weights. Increasing this value makes the model more conservative. Normalized to the number of training examples.
Alpha
L1 regularization term on weights. Increasing this value makes the model more conservative. Normalized to the number of training examples.
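In the gblinear booster these are the lambda and alpha parameters (reg_lambda and reg_alpha in the scikit-learn wrapper); a minimal sketch:

```python
params = {
    "booster": "gblinear",
    "lambda": 1.0,  # L2 penalty, normalized by the number of training rows
    "alpha": 0.1,   # L1 penalty, likewise normalized
}
```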
Updater
Choice of algorithm to fit the linear model.
  • Shotgun: Parallel coordinate descent algorithm based on the shotgun algorithm. Uses ‘hogwild’ parallelism and therefore produces a nondeterministic solution on each run, regardless of whether a static random seed is set.
  • CoordDescent: Ordinary coordinate descent algorithm. Also multithreaded but still produces a deterministic solution.
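This corresponds to xgboost's updater parameter; if reproducible results matter, the sketch below picks coordinate descent:

```python
# "shotgun" (the default) is hogwild-parallel and nondeterministic;
# "coord_descent" is multithreaded yet deterministic.
params = {"booster": "gblinear", "updater": "coord_descent"}
```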
Feature selector
Feature selection and ordering method.
  • Cyclic: Deterministic selection by cycling through features one at a time.
  • Shuffle: Similar to cyclic, but with random feature shuffling prior to each update.
  • Random: Randomly selects coordinates, with replacement.
  • Greedy: Selects the coordinate with the greatest gradient magnitude. Fully deterministic, but quadratic in the number of features. The selection can be restricted to the top k features per group with the largest magnitude of univariate weight change by setting the top k parameter, which reduces the complexity to O(num_feature * top_k).
  • Thrifty: Approximately-greedy feature selector. Prior to cyclic updates, reorders features in descending magnitude of their univariate weight changes. This operation is multithreaded and is a linear-complexity approximation of the quadratic greedy selection. The selection can likewise be restricted to the top k features per group by setting the top k parameter.
Top k
The number of top features to select in the greedy and thrifty feature selectors. A value of 0 corresponds to using all features.
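Feature selector and top k map to xgboost's feature_selector and top_k parameters; note that the greedy and thrifty selectors require the coordinate-descent updater. A minimal sketch:

```python
params = {
    "booster": "gblinear",
    "updater": "coord_descent",    # required for greedy/thrifty selectors
    "feature_selector": "thrifty",
    "top_k": 10,                   # 0 (the default) uses all features
}
```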

Input Ports

The data to learn from.

Output Ports

The trained model.

Views

This node has no views
