Learning objective: In this exercise you will learn to build a data pipeline for production and to use Call Workflow Service nodes.
Workflow description: This workflow showcases the important aspects of building a data pipeline for production. It performs the following three major tasks:
1. The data is read from a real data source: weekly data is updated in a Google Sheet, and we read it using the Google Sheets Reader node.
2. The data transformation workflow (created in exercise 1a) is then invoked via the Call Workflow Service node, which transforms the data.
3. Finally, the transformed data is written to a production database, which allows us to check the consistency of the processed data.
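The three stages above are built with KNIME nodes (Google Sheets Reader, Call Workflow Service, database writer), not code. Purely as an illustration, the same read-transform-write pattern can be sketched in Python with hypothetical stand-in functions; the transformation rule and the in-memory SQLite "production" table below are invented for the example and are not part of the exercise:

```python
import sqlite3

def read_weekly_data():
    # Stand-in for the Google Sheets Reader node: returns the rows
    # that would arrive from the weekly-updated sheet.
    return [("2024-W01", 120), ("2024-W02", 135)]

def transform(rows):
    # Stand-in for the Call Workflow Service node invoking the
    # transformation workflow from exercise 1a. The flag column
    # added here is a made-up example transformation.
    return [(week, value, value > 130) for week, value in rows]

def write_to_db(rows, conn):
    # Stand-in for writing to the production database
    # (an in-memory SQLite table substitutes for the real target).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS weekly (week TEXT, value INT, high INT)"
    )
    conn.executemany("INSERT INTO weekly VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
write_to_db(transform(read_weekly_data()), conn)
count = conn.execute("SELECT COUNT(*) FROM weekly").fetchone()[0]
```

Keeping each stage behind its own function mirrors the KNIME design: the transformation is a separately maintained workflow that the pipeline merely calls, so it can be updated without touching the read or write stages.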
To use this workflow, download it from the URL below and open it in KNIME:
Download Workflow