This workflow demonstrates the usage of the Spark MLlib to PMML node. Together with the Compiled Model Predictor and the JSON Input/Output nodes, it can be used to model a so-called lambda architecture, which learns a machine learning model at scale on historical data offline and predicts incoming events online using the learned model.
The workflow uses the Create Local Big Data Environment node to create a local Spark context. To connect to a remote cluster instead, swap this node for the Create Spark Context (Livy) node.
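Conceptually, the Spark MLlib to PMML node relies on the PMML export that Spark's RDD-based MLlib API already provides for supported model types. The following Scala sketch illustrates that mechanism outside KNIME, assuming a local Spark installation; the toy data, cluster count, and output path are placeholders for illustration, and inside KNIME these details are handled by the workflow's nodes.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object PmmlExportSketch {
  def main(args: Array[String]): Unit = {
    // Local Spark context, analogous to the Create Local Big Data Environment node.
    val sc = new SparkContext("local[*]", "pmml-export-sketch")

    // Toy data standing in for the historical data set (placeholder values).
    val data = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
    ))

    // Learn a model offline at scale (here: k-means with 2 clusters, 20 iterations).
    val model = KMeans.train(data, 2, 20)

    // Export the learned model as PMML so a lightweight scorer
    // (e.g. the Compiled Model Predictor) can apply it online.
    model.toPMML("/tmp/kmeans-model.pmml")

    sc.stop()
  }
}
```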
Used extensions & nodes
Created with KNIME Analytics Platform version 4.1.0