Database to Spark

This Node Is Deprecated — This node is kept for backwards compatibility, but its use in new workflows is no longer recommended. The documentation below is retained for reference.
Reads a database query or table into a Spark DataFrame/RDD. See the Spark documentation for more information.

Note: This feature requires Apache Spark 1.5 or later.

Options

Driver
Upload a local JDBC driver (the one used by this KNIME instance) or rely on the driver provided on the cluster side.
Fetch size
Optional: the JDBC fetch size, which determines how many rows are fetched per round trip. This can improve performance with JDBC drivers that default to a low fetch size (e.g. Oracle, which fetches 10 rows at a time). See the sketch after this list.
Partition column, lower bound, upper bound, num partitions
These options must all be specified if any of them is specified. They describe how to partition the table when reading in parallel from multiple workers. partitionColumn must be a numeric column of the table in question. Note that lowerBound and upperBound are only used to decide the partition stride, not to filter the rows of the table; all rows in the table are partitioned and returned. The partitioning options also appear in the sketch after this list.
Query DB for upper and lower count
Fetch the bounds via a min/max query, or use manually entered bounds.
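
The options above map onto the options of Spark's JDBC data source. As a rough illustration of what the node configures, here is a minimal Scala sketch using Spark's public DataFrameReader JDBC API (SparkSession is the Spark 2.x+ entry point; on Spark 1.5 the equivalent is sqlContext.read). The connection URL, table name, credentials, column names, and bounds are placeholder assumptions, and the matching JDBC driver is assumed to be on the classpath.

    import org.apache.spark.sql.SparkSession

    object DatabaseToSparkSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DatabaseToSparkSketch")
          .master("local[*]") // for a standalone test run
          .getOrCreate()

        val df = spark.read
          .format("jdbc")
          // Hypothetical connection details; replace with your own database.
          .option("url", "jdbc:postgresql://db-host:5432/sales")
          .option("dbtable", "orders")
          .option("user", "reader")
          .option("password", "secret")
          // Fetch size: rows per round trip; counters drivers with low
          // defaults (e.g. Oracle's 10 rows).
          .option("fetchsize", "1000")
          // Partitioned parallel read: all four options must be set together.
          // lowerBound/upperBound only determine the partition stride; rows
          // outside the bounds are still read.
          .option("partitionColumn", "order_id") // must be a numeric column
          .option("lowerBound", "1")
          .option("upperBound", "1000000")
          .option("numPartitions", "8")
          .load()

        df.show(5)
        spark.stop()
      }
    }

With numPartitions = 8 and the bounds above, Spark issues eight concurrent queries, each covering a stride of roughly 125,000 order_id values, which is why the bounds should approximate the column's actual min and max (the "Query DB for upper and lower count" option fetches them for you).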

Input Ports

Input query
Optional Spark context. If not connected, a context is created based on the settings on the Spark preferences page.

Output Ports

Spark DataFrame/RDD

Views

This node has no views
