Learns a random forest* (an ensemble of decision trees) for regression. Each decision tree is built on a different set of rows (records), and for each split within a tree a randomly chosen set of columns (describing attributes) is used. The row sets for the individual trees are created by bootstrapping and have the same size as the original input table. The attribute set for an individual split is determined by randomly selecting sqrt(m) of the available attributes, where m is the total number of learning columns. The attributes can also be provided as a bit (fingerprint), byte, or double vector. The output model describes an ensemble of regression tree models and is applied in the corresponding predictor node.
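The sampling scheme can be made concrete with a minimal Python sketch (illustrative only, under assumed table sizes; not KNIME's implementation):

```python
import math
import random

def bootstrap_rows(n_rows, rng):
    """Bootstrap sample: n_rows row indices drawn with replacement,
    so the sample has the same size as the original table."""
    return [rng.randrange(n_rows) for _ in range(n_rows)]

def split_candidates(attributes, rng):
    """Randomly pick sqrt(m) of the m attributes for one split."""
    k = max(1, round(math.sqrt(len(attributes))))
    return rng.sample(attributes, k)

rng = random.Random(42)                        # a static seed gives reproducible results
rows = bootstrap_rows(150, rng)                # rows for one tree of the ensemble
cols = split_candidates(list(range(16)), rng)  # e.g. 4 of 16 columns for one split
```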

In a regression tree the predicted value for a leaf node is the mean target value of the records within the leaf. Hence the predictions are best (with respect to the training data) if the variance of target values within a leaf is minimal. This is achieved by splits that minimize the sum of squared errors in their respective children.
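To make the split criterion concrete, here is a minimal Python sketch that scores every threshold on a single numeric attribute by the combined SSE of the two children (illustrative only, not the node's code):

```python
def sse(values):
    """Sum of squared errors around the mean -- the leaf prediction is the mean."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(xs, ys):
    """Find the threshold on one numeric attribute that minimizes
    the combined SSE of the two child nodes."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = (float("inf"), None)
    for cut in range(1, len(order)):
        left = [ys[i] for i in order[:cut]]
        right = [ys[i] for i in order[cut:]]
        total = sse(left) + sse(right)
        if total < best[0]:
            threshold = (xs[order[cut - 1]] + xs[order[cut]]) / 2
            best = (total, threshold)
    return best  # (combined child SSE, split threshold)

# Example: target values grow with x, so the best split lands between the groups.
print(best_split([1, 2, 3, 10, 11, 12], [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]))
```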

For a more general description and suggested default parameters, see the description of the classification *Random Forest Learner* node.

This node provides a subset of the functionality of the *Tree Ensemble Learner (Regression)*. If you need additional functionality, please check out the *Tree Ensemble Learner (Regression)* node.

(*) RANDOM FORESTS is a registered trademark of Minitab, LLC and is used with Minitab’s permission.

- Target Column
- Select the column containing the value to be learned. Rows with missing values in this column are ignored during the learning process.
- Attribute Selection
- Select the attributes on which the model should be learned. You can choose from two modes:

  - *Fingerprint attribute*: Uses a fingerprint/vector column (bit, byte, and double vectors are possible) to learn the model by treating each entry of the vector as a separate attribute (e.g. a bit vector of length 1024 is expanded into 1024 binary attributes; see the sketch after this option list). The node requires all vectors to be of the same length.
  - *Column attributes*: Uses ordinary columns in your table (e.g. String, Double, Integer, etc.) as attributes to learn the model on. The dialog allows you to select the columns manually (by moving them to the right panel) or via a wildcard/regex selection (all columns whose names match the wildcard/regex are used for learning). In case of manual selection, the behavior for new columns (i.e. columns that are not available at the time you configure the node) can be specified as either *Enforce exclusion* (new columns are excluded and therefore not used for learning) or *Enforce inclusion* (new columns are included and therefore used for learning).
- Enable Highlighting (#patterns to store)
- If selected, the node stores the selected number of rows and allows highlighting them in the node view.
- Limit number of levels (tree depth)
- Number of tree levels to be learned. For instance, a value of 1 would only split the (single) root node (decision stump).
- Minimum child node size
- Minimum number of records in child nodes.
- Number of models
- The number of regression trees to be learned. A "reasonable" value can range from very few (say 10) to many thousands - although a value between 100 and 500 suffices for most datasets.
- Use static random seed
- Choose a seed to get reproducible results.
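To illustrate the *Fingerprint attribute* mode referenced above, here is a minimal Python sketch of the expansion (illustrative only; the column naming is made up, not what the node produces):

```python
# Expanding a bit-vector column into individual attributes: each vector entry
# becomes a separate (here binary) learning column.
fingerprints = ["1010", "0110", "1100"]   # bit vectors, all of the same length

columns = [f"bit_{i}" for i in range(len(fingerprints[0]))]
table = [{col: int(bit) for col, bit in zip(columns, fp)} for fp in fingerprints]

print(columns)   # ['bit_0', 'bit_1', 'bit_2', 'bit_3']
print(table[0])  # {'bit_0': 1, 'bit_1': 0, 'bit_2': 1, 'bit_3': 0}
```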

- The input data with the out-of-bag predictions, i.e. for each input row the mean and variance of the outputs of all models that did not use the row for training. The appended columns are equivalent to the columns appended by the corresponding predictor node. There is one additional column, *model count*, which contains the number of models used for the voting (the number of models that did not use the row during learning). The out-of-bag predictions can be used to get an estimate of the generalization ability of the random forest by feeding them into the Numeric Scorer node (see the sketch after this list).
- A statistics table on the attributes used in the different tree learners. Each row represents one training attribute with these statistics: *#splits (level x)* is the number of models that use the attribute as a split on level *x* (with level 0 as the root split); *#candidates (level x)* is the number of times an attribute was in the attribute sample for level *x* (in a random forest setup these samples differ from node to node). If no attribute sampling is used, *#candidates* equals the number of models. Note that these numbers are uncorrected: if an attribute is selected on level 0 and is also in the candidate set of level 1 (but is not split on level 1 because it was already split one level up), the #candidates number still counts the attribute as a candidate.
- The trained model.
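As a rough sketch of how the out-of-bag columns could be computed (illustrative Python, not the node's implementation; `in_bag_sets` and `per_tree_predictions` are assumed inputs):

```python
def oob_prediction(row, in_bag_sets, per_tree_predictions):
    """Out-of-bag mean/variance for one row: aggregate only the trees whose
    bootstrap sample (in_bag_sets[t]) did not contain the row."""
    outputs = [pred[row] for bag, pred in zip(in_bag_sets, per_tree_predictions)
               if row not in bag]
    if not outputs:                      # row happened to be in every bootstrap sample
        return None, None, 0
    mean = sum(outputs) / len(outputs)
    var = sum((o - mean) ** 2 for o in outputs) / len(outputs)
    return mean, var, len(outputs)       # len(outputs) corresponds to *model count*

bags = [{0, 1}, {1, 2}, {0, 2}]          # rows used to train each of 3 trees
preds = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [0.9, 1.8, 3.1]]  # each tree's outputs
print(oob_prediction(0, bags, preds))    # only the second tree is OOB for row 0
```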

- Tree Views
- A decision tree viewer for all the trained models. Use the spinner to iterate through the different models.

- 01.RF_Son (KNIME Hub)
- 02_AutoML_Regression_and_Classification_Examples (KNIME Hub)
- 02_Learning_a_Random_Forest (KNIME Hub)
- 02_SHAP_and_Shapley_Values (KNIME Hub)


To use this node in KNIME, install the extension *KNIME Ensemble Learning Wrappers* from the update site, following the NodePit Product and Node Installation Guide.

