This workflow demonstrates how to use the Hive to Spark and Spark to Hive nodes, which transfer data between Apache Hive and Apache Spark.
To run this workflow on a remote cluster, replace the Create Local Big Data Environment node with an HDFS Connection node, a Hive Connector node, and a Create Spark Context (Livy) node (all available in the KNIME Big Data Connectors Extension).
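Conceptually, these nodes perform the same round trip as Spark's built-in Hive integration: read a Hive table into a Spark DataFrame, transform it, and write it back as a Hive table. Below is a minimal PySpark sketch of that round trip, not the workflow itself; the table names and app name are illustrative assumptions, and it presumes a Spark installation configured with Hive support (which the KNIME nodes set up for you).

```python
from pyspark.sql import SparkSession

# Build a Spark session with Hive support enabled. In KNIME, the
# Create Local Big Data Environment or Create Spark Context (Livy)
# node provides this context instead.
spark = (
    SparkSession.builder
    .appName("hive-spark-roundtrip")  # illustrative app name
    .enableHiveSupport()
    .getOrCreate()
)

# "Hive to Spark": read a Hive table into a Spark DataFrame.
# `example_table` is a hypothetical table name.
df = spark.table("example_table")

# ... Spark-side transformations would go here ...

# "Spark to Hive": persist the DataFrame back as a Hive table.
df.write.mode("overwrite").saveAsTable("example_table_copy")

spark.stop()
```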
Used extensions & nodes
Created with KNIME Analytics Platform version 4.4.0