This workflow illustrates how to estimate test error using cross-validation.
The workflow reads in the dataset, splits it into training and test sets multiple times (cross-validation), trains a polynomial regression model on each training set, predicts outcomes on the corresponding test set, and then evaluates prediction accuracy. By repeating this process for different model complexities, it helps identify which model best balances fit and generalization, reducing the risk of overfitting.
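The loop the workflow performs can be sketched in Python with scikit-learn. This is an illustrative sketch, not the KNIME workflow itself: the synthetic noisy-sine dataset, the degree range 1–10, and the 5-fold split are assumptions chosen for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Illustrative dataset: noisy samples of a sine curve (an assumption,
# standing in for the dataset the workflow reads in).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(scale=0.3, size=100)

# For each model complexity (polynomial degree), estimate test error
# by averaging the mean squared error over 5 cross-validation folds.
cv_mse = {}
for degree in range(1, 11):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    cv_mse[degree] = -scores.mean()

# The degree with the lowest estimated test error balances fit and
# generalization; very high degrees tend to overfit the training folds.
best_degree = min(cv_mse, key=cv_mse.get)
print("estimated test MSE per degree:", cv_mse)
print("best degree:", best_degree)
```

Each candidate degree is trained and evaluated on disjoint train/test splits, so the averaged MSE approximates out-of-sample error rather than training fit.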
To use this workflow in KNIME, download it from the below URL and open it in KNIME:
Download Workflow