This workflow demonstrates the use of the Spark MLlib Decision Tree Learner and Spark Predictor. It also demonstrates the conversion of categorical columns into numerical columns, which is necessary because the MLlib algorithms support only numerical features and labels.
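For readers who want to see what the workflow does outside of KNIME, a rough PySpark equivalent of the categorical-to-numerical conversion followed by decision tree training could look like the sketch below. The column names are hypothetical, since the workflow's data set is not described here; this is an illustration of the general MLlib pattern, not the workflow's exact implementation.

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

# Hypothetical column names for illustration only.
categorical_cols = ["workclass", "education"]
numeric_cols = ["age", "hours_per_week"]
label_col = "income"

# Index string-valued columns, because MLlib tree learners expect numeric inputs.
indexers = [
    StringIndexer(inputCol=c, outputCol=c + "_idx", handleInvalid="keep")
    for c in categorical_cols + [label_col]
]

# Combine the indexed categorical columns and the numeric columns into one feature vector.
assembler = VectorAssembler(
    inputCols=[c + "_idx" for c in categorical_cols] + numeric_cols,
    outputCol="features",
)

tree = DecisionTreeClassifier(labelCol=label_col + "_idx", featuresCol="features")

pipeline = Pipeline(stages=indexers + [assembler, tree])
# model = pipeline.fit(train_df)          # train_df: a Spark DataFrame loaded beforehand
# predictions = model.transform(test_df)  # test_df: held-out data for prediction
```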
The workflow uses the Create Local Big Data Environment node to create a Spark context. To connect to a remote cluster instead, swap this node for a Create Spark Context (Livy) node.
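Outside of KNIME, the local Spark context provided by the Create Local Big Data Environment node corresponds roughly to a local SparkSession. The sketch below assumes a local[*] master and default settings, which may differ from the node's actual configuration.

```python
from pyspark.sql import SparkSession

# A minimal local Spark context for experimenting without a cluster
# (assumption: local[*] master, default settings).
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("local-big-data-environment-demo")
    .getOrCreate()
)
```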
Created with KNIME Analytics Platform version 4.1.0