This workflow demonstrates the Spark Compiled Model Predictor node, which converts a given PMML model into machine code and uses the compiled model to score large amounts of data in parallel within Apache Spark.
The workflow makes use of the Create Local Big Data Environment node to create a Spark context. You can swap this node out for a Create Spark Context (Livy) node to connect to a remote cluster.
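For context, PMML (Predictive Model Markup Language) is an XML standard for describing trained models, which is what the Spark Compiled Model Predictor node consumes. A minimal sketch of such a document, here a simple linear regression with hypothetical field names `x` and `y` and made-up coefficients, might look like:

```xml
<PMML xmlns="http://dmg.org/pmml/v4-3" version="4.3">
  <Header/>
  <!-- Declares the fields the model knows about -->
  <DataDictionary numberOfFields="2">
    <DataField name="x" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <!-- A linear regression: y = 1.0 + 2.0 * x -->
  <RegressionModel functionName="regression">
    <MiningSchema>
      <MiningField name="x"/>
      <MiningField name="y" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="1.0">
      <NumericPredictor name="x" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
```

In KNIME, such a document is typically produced by a learner node (e.g. a Linear Regression Learner) rather than written by hand; the predictor node compiles it once and applies it to each Spark partition.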
Used extensions & nodes
Created with KNIME Analytics Platform version 4.1.0
-
KNIME Core
KNIME AG, Zurich, Switzerland
Version 4.1.0
-
KNIME Ensemble Learning Wrappers
KNIME AG, Zurich, Switzerland
Version 4.1.0
-
KNIME Extension for Apache Spark
KNIME AG, Zurich, Switzerland
Version 4.1.0
-
KNIME Extension for Local Big Data Environments
KNIME AG, Zurich, Switzerland
Version 4.1.0
Legal
By downloading the workflow, you agree to our terms and conditions.
License: CC-BY-4.0