Impala Loader (legacy)

This Node Is Deprecated — This node is kept for backwards compatibility, but its use in new workflows is no longer recommended. The documentation below may contain further details.

This node is part of the legacy database framework. For more information on how to migrate to the new database framework see the migration section of the database documentation.

This node loads a KNIME data table into Impala. Impala requires imported data to be present in the HDFS file system on the Impala server; therefore, this node first copies the data onto the Impala server. You can use the HDFS Connection node to establish a connection to the HDFS file system. The data is then loaded into an Impala table and the uploaded file is deleted.
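Behind the scenes, the import roughly corresponds to the following Impala statements (a sketch only; the table name, HDFS path, and column types are placeholders, and the statements the node actually issues may differ):

```sql
-- Hypothetical table matching the KNIME table's columns
CREATE TABLE IF NOT EXISTS my_table (
  id INT,
  name STRING,
  amount DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Load the uploaded file from the target folder into the table.
-- LOAD DATA INPATH moves (rather than copies) the file into the
-- table's directory, which is why the uploaded file is gone afterwards.
LOAD DATA INPATH '/tmp/knime_upload/my_table.tsv' INTO TABLE my_table;
```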

Additionally, the data can be partitioned by selecting one or more compatible columns (e.g. integer or string). The node relies on Impala's dynamic partitioning.


Target folder
A folder on the server into which the temporary copy of the data is uploaded. The Impala server user needs read and write access to this folder.
Table name
The name of the new table in the database.
Drop existing table
If this option is selected, an existing table is dropped and re-created. Otherwise the data is loaded into the existing table. Note that in this case the import may fail if the table structure and partitioning information do not match.
Partition columns
Here you can select one or more columns that should be used to partition the data in the table. A partitioned table requires a two-step import via a temporary table and is therefore much slower than an import into an unpartitioned table.
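The two-step import can be sketched in Impala SQL as follows (table and column names are placeholders; the node's actual statements may differ):

```sql
-- Step 1: load the uploaded file into an unpartitioned temporary table.
CREATE TABLE tmp_my_table (id INT, amount DOUBLE, year INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
LOAD DATA INPATH '/tmp/knime_upload/my_table.tsv' INTO TABLE tmp_my_table;

-- Step 2: insert into the partitioned target table. With dynamic
-- partitioning, Impala derives each row's partition from the trailing
-- column(s) of the SELECT list.
CREATE TABLE my_table (id INT, amount DOUBLE) PARTITIONED BY (year INT);
INSERT INTO my_table PARTITION (year)
SELECT id, amount, year FROM tmp_my_table;

-- Clean up the temporary table.
DROP TABLE tmp_my_table;
```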

Input Ports

A HDFS connection to the remote Impala server
The data table that should be loaded into Impala
A connection to an Impala database

Output Ports

A database connection with the imported table


This node has no views




