This workflow demonstrates several methods to import one or more CSV files into Hive.
Demonstrated are direct uploads, where you create a Hive table with KNIME nodes, and an approach where you copy your files to an /upload/ folder and point an external table at them. If they all have the same structure, Hive will read them, and you can then use this external table to further process your files.
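As a sketch of the external-table approach described above (the table name, column list, and path are hypothetical placeholders, not taken from the workflow), the Hive DDL could look like:

```sql
-- Hypothetical example: point an external table at a folder of
-- identically structured CSV files. Table name, columns, and
-- LOCATION are placeholders -- adapt them to your own files.
CREATE EXTERNAL TABLE IF NOT EXISTS csv_upload (
  id     INT,
  name   STRING,
  amount DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/upload/';
```

Every CSV file placed in the /upload/ folder with this column structure then becomes visible through the external table, without any further load step.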
If the files are very large, you might have to use partitions. The Hive drivers in the KNIME installation have a problem with the headers of the CSV files; the workflow also demonstrates how to get rid of them.
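A hedged sketch of these two steps (all names are hypothetical): Hive's `skip.header.line.count` table property can drop the header row of each CSV file at the external table, and the data can then be copied into a partitioned table for large volumes:

```sql
-- Hypothetical example: skip the CSV header row on read.
-- If a driver does not honor this property, a fallback is to
-- filter the header row out in the SELECT below.
CREATE EXTERNAL TABLE IF NOT EXISTS csv_upload_raw (
  id     INT,
  name   STRING,
  amount DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/upload/'
TBLPROPERTIES ('skip.header.line.count'='1');

-- Hypothetical partitioned target table for large data volumes.
CREATE TABLE IF NOT EXISTS sales_partitioned (
  id     INT,
  name   STRING,
  amount DOUBLE
)
PARTITIONED BY (load_date STRING)
STORED AS ORC;

-- Copy the external data into one partition.
INSERT INTO TABLE sales_partitioned PARTITION (load_date = '2024-01-01')
SELECT id, name, amount FROM csv_upload_raw;
```

Partitioning by a column such as a load date lets Hive prune whole directories at query time instead of scanning every file.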
Please familiarize yourself with the concepts of big data and partitions in order to use this. And please note: KNIME's local big data environment is only there to demonstrate the usage. It might work with your large files, but it is called Big Data for a reason ...
https://hub.knime.com/mlauber71/spaces/Public/latest/kn_example_hive_school_of?u=mlauber71
To use this workflow in KNIME, download it from the URL above and open it in KNIME.