
Tree Ensemble Learner

Tags: Analytics, Mining, Decision Tree Ensemble, Random Forest, Classification

Learns an ensemble of decision trees (such as random forest* variants). Typically, each tree is built with a different set of rows (records) and/or columns (attributes); see the Data Sampling and Attribute Sampling options for details. The attributes can also be provided as bit (fingerprint), byte, or double vector. The output model describes an ensemble of decision tree models and is applied in the corresponding predictor node, which uses the selected aggregation mode to combine the votes of the individual trees.
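
The row sampling and vote aggregation described above can be sketched as follows (a minimal illustration, not KNIME's implementation; `bootstrap_sample` and `majority_vote` are hypothetical helper names):

```python
import random
from collections import Counter

def bootstrap_sample(rows, rng):
    """Draw len(rows) rows with replacement (bootstrapping),
    so each tree sees a different set of records."""
    return [rng.choice(rows) for _ in rows]

def majority_vote(votes):
    """Aggregate the class votes of the individual trees."""
    return Counter(votes).most_common(1)[0][0]

rng = random.Random(42)
sample = bootstrap_sample(["r1", "r2", "r3", "r4"], rng)  # training rows for one tree
prediction = majority_vote(["yes", "no", "yes"])          # ensemble vote -> "yes"
```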

The following configuration settings learn a model similar to the Random Forests™ classifier described by Leo Breiman and Adele Cutler:

  • Tree Options - Split Criterion: Gini Index
  • Tree Options - Limit number of levels (tree depth): unlimited
  • Tree Options - Minimum node size: unlimited
  • Ensemble Configuration - Number of models: Arbitrary (arguably, random forest does not overfit)
  • Ensemble Configuration - Data Sampling: Use all rows (fraction = 1) but choose sampling with replacement (bootstrapping)
  • Ensemble Configuration - Attribute Sampling: Sample using a different set of attributes for each tree node split; usually square root of number of attributes - but this can vary
Experiments have shown that results on different datasets are very similar to those of the random forest implementation available in R.
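
The per-split attribute sampling from the list above (square root of the number of attributes, drawn anew at each tree node) can be sketched as follows (`sample_attributes` is an illustrative helper, not the node's API):

```python
import math
import random

def sample_attributes(attributes, rng):
    """Draw floor(sqrt(p)) candidate attributes for a single node split,
    the usual random-forest default; a fresh sample is drawn per split."""
    k = max(1, math.isqrt(len(attributes)))
    return rng.sample(attributes, k)

attrs = [f"a{i}" for i in range(16)]
candidates = sample_attributes(attrs, random.Random(0))  # 4 of the 16 attributes
```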

Decision tree construction takes place in main memory (all data and all models are kept in memory).

The missing value handling corresponds to the method described here. The basic idea is, for each split, to try sending the missing values in every possible direction; the direction yielding the best result (i.e. the largest gain) is then used. If no missing values are present during training, missing values encountered during testing follow the direction taken by the majority of records at that split.
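
A minimal sketch of that idea for a single numeric split, using Gini-based gain (function names are illustrative, not KNIME's internals):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gain(left, right, parent):
    """Impurity reduction achieved by splitting `parent` into `left`/`right`."""
    n = len(parent)
    return gini(parent) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

def best_missing_direction(values, labels, threshold):
    """For a numeric split at `threshold`, try sending rows with value None
    (missing) left, then right, and keep the direction with the larger gain."""
    left = [y for v, y in zip(values, labels) if v is not None and v <= threshold]
    right = [y for v, y in zip(values, labels) if v is not None and v > threshold]
    missing = [y for v, y in zip(values, labels) if v is None]
    gain_l = split_gain(left + missing, right, labels)
    gain_r = split_gain(left, right + missing, labels)
    return ("left", gain_l) if gain_l >= gain_r else ("right", gain_r)
```

Here, sending the missing rows toward the side whose known labels they match yields the larger gain, so that direction is stored with the split.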

The tree ensemble nodes also support binary splits for nominal columns. Depending on the kind of problem (two-class or multi-class), different algorithms are implemented to enable the efficient calculation of splits.

  • For two-class classification problems the method described in section 9.4 of "Classification and Regression Trees" by Breiman et al. (1984) is used.
  • For multi-class classification problems the method described in "Partitioning Nominal Attributes in Decision Trees" by Coppersmith et al. (1999) is used.
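
For the two-class case, the key observation in section 9.4 of Breiman et al. is that it suffices to order the categories by their fraction of one class and then evaluate only the p−1 "prefix" cuts of that ordering, instead of all 2^(p−1)−1 subsets. A rough sketch under that assumption (illustrative names, not the node's internals):

```python
from collections import defaultdict

def order_categories(rows, positive):
    """rows: (category, label) pairs. Return categories ordered by the
    fraction of `positive` labels; optimal binary splits are prefixes
    of this ordering (Breiman et al. 1984, section 9.4)."""
    totals = defaultdict(int)
    pos = defaultdict(int)
    for cat, label in rows:
        totals[cat] += 1
        pos[cat] += (label == positive)
    return sorted(totals, key=lambda c: pos[c] / totals[c])

def candidate_splits(ordered):
    """The p-1 left/right partitions induced by the ordering."""
    return [(set(ordered[:i]), set(ordered[i:])) for i in range(1, len(ordered))]
```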

(*) RANDOM FORESTS is a registered trademark of Minitab, LLC and is used with Minitab’s permission.

Node details

Input ports
  1. Type: Table
    Input Data
    The data to be learned from. It must contain at least one nominal target column and either a fingerprint (bit/byte/double vector) column or another numeric or nominal column.
Output ports
  1. Type: Table
    Out-of-bag Predictions
The input data with the out-of-bag predictions, i.e. for each input row the majority vote of all models that did not use that row during training. If all of the data was used to train each individual model, this output contains the input data with missing values in the prediction columns. The appended columns are equivalent to those appended by the corresponding predictor node, plus one additional column, model count, which contains the number of models used for voting (the number of models that did not use the row during learning). The out-of-bag predictions can be used to estimate the generalization error of the tree ensemble by feeding them into the Scorer node.
  2. Type: Table
    Attribute Statistics
A statistics table on the attributes used in the different trees. Each row represents one training attribute with these statistics: #splits (level x) is the number of models that use the attribute as the split on level x (level 0 being the root split); #candidates (level x) is the number of times the attribute was in the attribute sample for level x (in a random forest setup these samples differ from node to node; if no attribute sampling is used, #candidates equals the number of models). Note that these numbers are uncorrected: if an attribute is selected on level 0 but also appears in the candidate set of level 1 (without being split on level 1, because it was already split one level up), the #candidates number still counts it as a candidate.
  3. Type: Tree Ensembles
    Tree Ensemble Model
    The trained model.
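
The out-of-bag output described above can be sketched as follows (a simplified illustration; the real node appends the predictor node's columns plus the model count column):

```python
from collections import Counter

def oob_predictions(n_rows, used_rows, predictions):
    """used_rows[m]: set of row indices model m was trained on.
    predictions[m][i]: class predicted by model m for row i.
    Returns one (prediction, model count) pair per row, where the count
    is the number of models that did not see the row during training
    and the prediction is None (missing) when no such model exists."""
    out = []
    for i in range(n_rows):
        votes = [predictions[m][i] for m in range(len(used_rows))
                 if i not in used_rows[m]]
        label = Counter(votes).most_common(1)[0][0] if votes else None
        out.append((label, len(votes)))
    return out
```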


Related workflows & nodes

  1. Learner workflow for a call workflow demo
     This workflow learns a tree ensemble model on German credit data and writes the trained m…
     knime > Examples > 50_Applications > 08_RESTDemo > _workflows > _Learner_Flow
  2. Exercise 4_1 - Forest Type Prediction
     dataminer_1 > Public > UCI Applications Project > Workflows > Exercise 4_1 - Forest Type Prediction
  3. How to use the Tree Ensemble nodes
     This workflow shows how the tree ensemble nodes can be used for regression and classifica…
     knime > Examples > 04_Analytics > 13_Meta_Learning > 03_Learning_a_Tree_Ensemble_Model
  4. Exercise 3_2 - Insurance Analysis
     dataminer_1 > Public > UCI Applications Project > Workflows > Exercise 3_2 - Insurance Analysis
  5. Random Forest, Gradient Boosted Trees, and TreeEnsemble
     Tags: Classification, Machine learning, Prediction
     This workflow solves a binary classification problem on the adult dataset using more adva…
     knime > Academic Alliance > Guide to Intelligent Data Science > Example Workflows > Chapter9 > 04_TreeEnsemble
  6. TreeSHAP Example Workflow
     Tags: Explainable-ml, SHAP, Explainable
     An overview of the functions for the Tree SHAP nodes for the newly released TreeSHAP pack…
     morris_kurz > Public > TreeSHAP Example Workflow
  7. Amazon.com - Employee Access Challenge--Kaggle
     Tags: Categorical features encoding
     This workflow demonstrates use of Auto Categorical Features Embedding component. The comp…
     ashokharnal > Collection of Components and Workflows > feature-generation > Categorical Features Encoding--III
  8. Random Forest, Gradient Boosted Trees, and TreeEnsemble
     Tags: Classification, Machine learning, Prediction
     This workflow solves a binary classification problem on the adult dataset using more adva…
     rs1 > Public > Tree_Ensembles
  9. Text classification using glove word2vec model
     Tags: Glove word2vec model, Word Vector Model Reader
     The workflow uses glove's word2vec model. Unzip glove's file 'glove.6B.zip'. The file con…
     ashokharnal > Collection of Components and Workflows > text classification > word2vec--sentiment analysis--I
  10. Sentiment analysis using Word2Vec learner
     Tags: Text classification, Word2Vec learner
     The workflow demonstrates as to how to use Word2Vec learner for processing text for class…
     ashokharnal > Collection of Components and Workflows > text classification > word2vec--sentiment analysis--II
