This workflow trains classification models on the Airlines Delay dataset using H2O AutoML on Spark. The dataset is expected to be stored on S3 in parquet format. It is first read into the Spark cluster and preprocessed with Spark (missing value handling, normalization, etc.). Sparkling Water is then used to train both binary and multiclass classification models on the dataset with H2O AutoML. Finally, the models are scored on the previously partitioned test data.
The Airlines Delay dataset and its description can be found here: https://www.kaggle.com/giovamata/airlinedelaycauses
You can use the Parquet Writer node to write the dataset to S3, or replace the Parquet to Spark node with, e.g., the CSV Reader and Table to Spark nodes (note that parquet yields better performance for the whole process).
Increasing or removing the runtime limit of the H2O AutoML Learner nodes may yield better models.
Workflow
H2O AutoML on Spark
External resources
Used extensions & nodes
Created with KNIME Analytics Platform version 4.4.0
Legal
By using or downloading the workflow, you agree to our terms and conditions.