An overview of KNIME-based functions to access big data systems, using KNIME's local big data environment. The workflow uses SQL with Impala/Hive, Spark, and also PySpark to access and manipulate data on a big data system. The example data comes from the classic Microsoft "Northwind" database. Thanks to J. Thelen for input from the SQL lecture.

REMEMBER: Spark uses lazy evaluation. That means it will not do anything besides *planning* and preparing the transformations *until* you force it to produce a result. So the initial load of Spark may take some time (setting up the environment), while the next steps might seem super fast (they just structure RDDs and create empty placeholders). The moment you want to get data back, Spark springs into action and delivers the results.
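To make the lazy-evaluation behavior concrete, here is a minimal pure-Python analogy (not actual PySpark code, and the class name `LazyDataset` is invented for illustration): transformations are only *recorded* in a plan, and nothing executes until an action such as `collect()` forces it. Real Spark transformations (`filter`, `select`, `withColumn`) and actions (`collect`, `count`, writing output) behave the same way.

```python
# Minimal pure-Python analogy of Spark's lazy evaluation (hypothetical
# LazyDataset class, NOT PySpark): transformations only extend a plan;
# work happens when an action like collect() is called.

class LazyDataset:
    def __init__(self, data, plan=None):
        self._data = data
        self._plan = plan or []          # recorded transformations

    # --- transformations: cheap, they just extend the plan ---
    def map(self, fn):
        return LazyDataset(self._data, self._plan + [("map", fn)])

    def filter(self, pred):
        return LazyDataset(self._data, self._plan + [("filter", pred)])

    # --- action: this is where Spark would actually run the job ---
    def collect(self):
        rows = iter(self._data)
        for kind, fn in self._plan:
            rows = map(fn, rows) if kind == "map" else filter(fn, rows)
        return list(rows)

orders = LazyDataset(range(10))
pipeline = orders.filter(lambda x: x % 2 == 0).map(lambda x: x * 10)
# At this point nothing has been computed; only the plan exists.
result = pipeline.collect()   # the "action" executes the whole plan
print(result)                 # [0, 20, 40, 60, 80]
```

This is why, in the workflow, the Spark nodes that define transformations finish almost instantly, while the node that fetches data back into KNIME carries the real execution time.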
Created with KNIME Analytics Platform version 4.4.1