Learns an ensemble of decision trees (such as random forest variants). Each decision tree is learned on a different set of rows (records) and/or a different set of columns (describing attributes); the latter can also be a bit/byte/double-vector descriptor (e.g. a molecular fingerprint). The output model describes an ensemble of decision tree models and is applied in the corresponding predictor node, which uses the selected aggregation mode to combine the votes of the individual trees.
The node can be configured to learn a model similar to the random forest™ classifier described by Leo Breiman and Adele Cutler: draw a bootstrap sample of the rows for each tree and sample a different set of candidate attributes at each tree node (see the attribute sampling options below).
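As a rough illustration (not KNIME's implementation), the following sketch emulates that procedure with scikit-learn's DecisionTreeClassifier as a stand-in tree learner; the function names, the sqrt(m) attribute sample size, and majority voting as the aggregation mode are illustrative assumptions.

```python
# Minimal sketch of the ensemble procedure, NOT KNIME's implementation:
# each tree sees a bootstrap sample of the rows, sqrt(m) candidate
# attributes are drawn at every split (via max_features), and the
# predictor aggregates the trees by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in tree learner

def learn_ensemble(X, y, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    trees = []
    n_rows = X.shape[0]
    for _ in range(n_trees):
        rows = rng.integers(0, n_rows, size=n_rows)          # sample rows with replacement
        tree = DecisionTreeClassifier(max_features="sqrt")   # sample attributes per split
        trees.append(tree.fit(X[rows], y[rows]))
    return trees

def predict_majority(trees, X):
    votes = np.stack([t.predict(X) for t in trees])  # shape: (n_trees, n_records)
    # majority vote per record; assumes integer class labels
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```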
The decision tree construction takes place in main memory (all data and all models are kept in memory).
Missing values are handled as follows: for each split, the node tries sending the missing values in each direction, and the direction yielding the best result (i.e. the largest gain) is used. If no missing values are present during training, the direction that most records follow at a split is chosen as the direction for missing values during prediction.
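A minimal sketch of that idea, assuming Gini impurity as the split criterion (the source does not name the criterion) and hypothetical function names:

```python
# Hedged sketch of the missing-value handling described above, assuming
# Gini impurity as the split criterion; names are illustrative.
import numpy as np

def gini(labels):
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gain(parent, left, right):
    n = parent.size
    return gini(parent) - (left.size / n) * gini(left) - (right.size / n) * gini(right)

def best_missing_direction(y, goes_left, is_missing):
    """Try sending the missing rows left, then right; keep the larger gain."""
    left = y[goes_left & ~is_missing]
    right = y[~goes_left & ~is_missing]
    missing = y[is_missing]
    gain_left = gain(y, np.concatenate([left, missing]), right)
    gain_right = gain(y, left, np.concatenate([right, missing]))
    return ("left", gain_left) if gain_left >= gain_right else ("right", gain_right)
```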
The tree ensemble nodes also support binary splits for nominal columns. Depending on the kind of problem (two-class or multi-class), different algorithms are implemented to compute these splits efficiently.
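The source does not say which algorithms are used; for the two-class case, a classical approach (Breiman et al., 1984) orders the nominal values by their positive-class fraction so that only a linear number of binary partitions needs to be evaluated. A hedged sketch:

```python
# The two-class trick sketched here (Breiman et al., 1984) is a classical
# way to make binary splits on nominal columns efficient; whether KNIME
# uses exactly this algorithm is an assumption.
import numpy as np

def candidate_binary_splits(categories, y):
    """Yield (left, right) value sets for a nominal column and binary labels y.

    Ordering the values by their positive-class fraction reduces the
    2**(k-1) - 1 possible partitions to just k - 1 candidates.
    """
    values = np.unique(categories)
    pos_frac = [np.mean(y[categories == v]) for v in values]
    order = values[np.argsort(pos_frac)]
    for cut in range(1, len(order)):
        yield set(order[:cut]), set(order[cut:])
```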
Select the attributes on which the model should be learned. You can choose from two modes.
Fingerprint attribute: Uses a fingerprint/vector column (bit, byte, and double vectors are possible) to learn the model, treating each entry of the vector as a separate attribute (e.g. a bit vector of length 1024 is expanded into 1024 binary attributes; see the sketch after these two modes). The node requires all vectors to be of the same length.
Column attributes: Uses ordinary columns in your table (e.g. String, Double, Integer) as attributes to learn the model on. The dialog allows you to select the columns manually (by moving them to the right panel) or via a wildcard/regex selection (all columns whose names match the wildcard/regex are used for learning). In case of manual selection, the behavior for new columns (i.e. columns that are not available at the time you configure the node) can be specified as either Enforce exclusion (new columns are excluded and therefore not used for learning) or Enforce inclusion (new columns are included and therefore used for learning).
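Here is a small sketch of the fingerprint expansion mentioned above; the helper name and the string encoding of the fingerprints are illustrative assumptions:

```python
# Small sketch of the fingerprint mode: each entry of a bit vector becomes
# a separate binary attribute. The helper name and the string encoding of
# the fingerprints are illustrative assumptions.
import numpy as np

def expand_fingerprints(bitstrings):
    """Expand equal-length bit strings (e.g. '1011') into a binary matrix
    with one column per bit position."""
    if len({len(s) for s in bitstrings}) != 1:
        raise ValueError("all fingerprint vectors must have the same length")
    return np.array([[int(b) for b in s] for s in bitstrings], dtype=np.uint8)

X = expand_fingerprints(["1011", "0010", "1110"])  # shape (3, 4): 4 binary attributes
```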
Use same set of attributes for each tree means that the attributes are sampled once per tree and this sample is then used to construct the whole tree.
Use different set of attributes for each tree node means that a different set of candidate attributes is sampled at each tree node, from which the best one is chosen to perform the split. Both modes are contrasted in the sketch below.
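A brief sketch contrasting the two sampling modes, with illustrative numbers (100 attributes, samples of size 10):

```python
# Sketch contrasting the two attribute sampling modes with illustrative
# numbers (100 attributes, samples of size 10).
import numpy as np

rng = np.random.default_rng(42)
N_ATTRIBUTES, SAMPLE_SIZE = 100, 10

# "Use same set of attributes for each tree":
# sampled once, then reused at every split of that tree.
per_tree_sample = rng.choice(N_ATTRIBUTES, size=SAMPLE_SIZE, replace=False)

def candidates_same_set():
    return per_tree_sample                 # identical at every tree node

def candidates_per_node():
    # "Use different set of attributes for each tree node":
    # a fresh sample is drawn whenever a node is split.
    return rng.choice(N_ATTRIBUTES, size=SAMPLE_SIZE, replace=False)
```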
To use this node in KNIME, install the extension KNIME Ensemble Learning Wrappers from the update site listed below, following the NodePit Product and Node Installation Guide.