This workflow demonstrates how to use the different Spark Java Snippet nodes to read a text file from HDFS, parse it, filter it, and write the result back to HDFS. You might also want to look at the snippet templates that each node provides: open the configuration dialog of a Spark Java Snippet node and go to the Templates tab. Note that this workflow requires access to a Hadoop cluster running Apache Spark 1.2.1 or newer.
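The parse-and-filter logic that would live inside the Spark Java Snippet nodes can be sketched in plain Java. This is a minimal illustration only: the tab-separated record format and the "first field non-empty" filter predicate are hypothetical assumptions, and in the actual workflow the input lines would come from a text file on HDFS and the result would be written back to HDFS via the Spark nodes.

```java
import java.util.List;
import java.util.stream.Collectors;

public class SnippetSketch {
    // Hypothetical parse step: split a tab-separated line into fields.
    static String[] parse(String line) {
        return line.split("\t");
    }

    // Hypothetical filter step: keep rows whose first field is non-empty.
    static boolean keep(String[] row) {
        return row.length > 0 && !row[0].isEmpty();
    }

    public static void main(String[] args) {
        // In the workflow, these lines would be read from HDFS.
        List<String> lines = List.of("a\t1", "\t2", "b\t3");
        List<String> kept = lines.stream()
                .map(SnippetSketch::parse)
                .filter(SnippetSketch::keep)
                .map(r -> String.join(",", r))
                .collect(Collectors.toList());
        // In the workflow, the filtered rows would be written back to HDFS.
        System.out.println(kept);
    }
}
```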
Created with KNIME Analytics Platform version 4.1.0