This workflow trains classification models for the Airlines Delay dataset using H2O AutoML on Spark. The dataset is expected to be stored on S3 in Parquet format. It is first read into the Spark cluster and preprocessed there (missing-value handling, normalization, etc.). Sparkling Water is then used to train both binary and multiclass classification models on the dataset with H2O AutoML. Finally, the models are scored on the previously partitioned test data.
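The preprocessing in this workflow is done with KNIME Spark nodes; as a rough illustration of what "missing-value handling" and "normalization" mean here, the following is a minimal pure-Python sketch (function names are hypothetical, not part of the workflow) of mean imputation and min-max scaling applied to one numeric column:

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the present values."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in values]

def min_max_normalize(values):
    """Scale values linearly to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

col = impute_mean([10.0, None, 30.0])   # -> [10.0, 20.0, 30.0]
norm = min_max_normalize(col)           # -> [0.0, 0.5, 1.0]
```

In the workflow itself, the equivalent operations run distributed on the Spark cluster rather than on plain Python lists.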
The Airlines Delay dataset and its description can be found here: https://www.kaggle.com/giovamata/airlinedelaycauses
You can use the Parquet Writer node to write the dataset to S3, or replace the Parquet to Spark node with, e.g., the CSV Reader and Table to Spark nodes (note that using Parquet yields better overall performance).
Increasing or removing the runtime limit of the H2O AutoML Learner nodes may yield better models.
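For readers familiar with H2O's Python API, the runtime limit set in the KNIME node corresponds to the `max_runtime_secs` parameter of `H2OAutoML`. A hedged configuration sketch (the concrete values below are illustrative assumptions, not the workflow's defaults):

```python
# Illustrative H2O AutoML settings; in H2O's Python API these would be
# passed as H2OAutoML(**automl_settings). In this KNIME workflow the same
# limit is configured in the H2O AutoML Learner node dialog instead.
automl_settings = {
    "max_runtime_secs": 3600,  # in H2O, 0 means no time limit
    "max_models": 20,          # optional cap on the number of trained models
    "seed": 42,                # for reproducibility
}
```

Raising `max_runtime_secs` (or setting it to 0) lets AutoML explore more models and hyperparameters, typically at the cost of longer training time.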