Spark Decision Tree Learner

This node uses the spark.ml Decision Tree implementation to train a Decision Tree classification model in Spark. The underlying algorithm performs a recursive binary partitioning of the feature space, choosing at each tree node the candidate split that maximizes the information gain. It supports binary and multiclass classification. The target column must be nominal, whereas the feature columns can be either nominal or numerical.
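
For orientation, the following is a minimal PySpark sketch of the kind of spark.ml pipeline the node drives. It is an illustration under assumptions, not the node's actual code; the input path, column names, and feature list are hypothetical:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.getOrCreate()
train = spark.read.parquet("train.parquet")    # hypothetical training data

feature_cols = ["f1", "f2", "f3"]              # hypothetical feature columns
indexer = StringIndexer(inputCol="class", outputCol="label")  # nominal target -> numeric label
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")

model = Pipeline(stages=[indexer, assembler, dt]).fit(train)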

Use the Spark Predictor (Classification) node to apply the learned model to unseen data.

Please refer to the Spark documentation for a full description of the underlying algorithm.

This node requires at least Apache Spark 2.0.

Options

Settings

Target column
A nominal column that contains the labels to train with. Rows with missing values in this column will be ignored during model training.
Feature columns
The feature columns to learn the model with. Both nominal and numeric columns are supported. The dialog allows you to select the columns either manually (by moving them to the right panel) or via a wildcard/regex pattern (all columns whose names match the pattern are used for learning). For manual selection, the behavior for new columns (i.e. columns that are not available at the time you configure the node) can be specified as either Enforce exclusion (new columns are excluded and therefore not used for learning) or Enforce inclusion (new columns are included and therefore used for learning).
Quality measure
Measure to use for the information gain calculation when evaluating splits. The available methods are "gini" (recommended) and "entropy"; see the impurity formulas after this list. For more details on the available methods see the Spark documentation.
Max tree depth
Maximum depth of the Decision Tree. Must be >= 1.
Min rows per tree node
Minimum number of rows each tree node must have. If a split causes the left or right child node to have fewer rows, the split will be discarded as invalid. Must be >= 1.
Min information gain per split
Minimum information gain for a split to be considered.
Max number of bins
Number of bins to use when discretizing continuous features. Increasing the number of bins means that the algorithm will consider more split candidates and make more fine-grained decisions on how to split. However, it also increases the amount of computation and communication that needs to be performed and hence increases training time. Additionally, the number of bins must be at least the maximum number of distinct values for any nominal feature.
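
For reference, the two quality measures are the standard impurity definitions, where $p_i$ is the fraction of rows of class $i$ at a node and $C$ is the number of classes (the base of the logarithm does not change which split wins):

$$\mathrm{Gini} = 1 - \sum_{i=1}^{C} p_i^2 \qquad\qquad \mathrm{Entropy} = -\sum_{i=1}^{C} p_i \log p_i$$

The information gain of a candidate split is the impurity of the parent node minus the row-weighted impurities of the two children; "Min information gain per split" is a threshold on this quantity:

$$\mathrm{IG} = I(\mathrm{parent}) - \frac{N_{\mathrm{left}}}{N}\, I(\mathrm{left}) - \frac{N_{\mathrm{right}}}{N}\, I(\mathrm{right})$$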

Advanced

Use static random seed
Seed for generating random numbers; using a fixed seed makes results reproducible across runs. Randomness is used when binning numeric features during splitting.
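
As a rough guide, the dialog settings above correspond to parameters of the underlying spark.ml learner. The values below are illustrative choices, not the node's defaults:

from pyspark.ml.classification import DecisionTreeClassifier

dt = DecisionTreeClassifier(
    labelCol="label",            # Target column (after indexing)
    featuresCol="features",      # assembled feature columns
    impurity="gini",             # Quality measure: "gini" or "entropy"
    maxDepth=10,                 # Max tree depth
    minInstancesPerNode=1,       # Min rows per tree node
    minInfoGain=0.0,             # Min information gain per split
    maxBins=32,                  # Max number of bins
    seed=42,                     # static random seed
)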

Input Ports

Input Spark DataFrame with training data.

Output Ports

Table with estimates of the importance of each feature. The features are listed in order of decreasing importance, and the importances are normalized to sum to 1; a sketch of how to read them via spark.ml follows this list. Note that feature importances for single Decision Trees can have high variance due to correlated predictor variables. Consider using the Spark Random Forest Learner to determine feature importance instead.
Spark ML Decision Tree model (classification)
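
As an illustration, the importance table could be reproduced from a fitted spark.ml pipeline roughly as follows, reusing the hypothetical model and feature_cols from the training sketch above:

dt_model = model.stages[-1]                     # fitted DecisionTreeClassificationModel
imp = dt_model.featureImportances.toArray()     # normalized; entries sum to 1
for name, score in sorted(zip(feature_cols, imp), key=lambda t: t[1], reverse=True):
    print(f"{name}\t{score:.4f}")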

Views

This node has no views
