Note: To avoid an accidental cluster startup, this node creates dummy DB and Spark ports if it is loaded in an executed state from a stored workflow. Reset and execute the node to start the cluster and create a Spark execution context.
Cluster access control: KNIME uploads additional libraries to the cluster. If your cluster is secured with access control, this requires the cluster-level Can Manage permission. See the Databricks documentation on how to set up the permission.
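For example, if you manage permissions through the Databricks Permissions REST API rather than the workspace UI, granting a user the Can Manage permission on a cluster might look like the following sketch. The workspace host, cluster ID, and user name are placeholders, and the token is read from an environment variable:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: grant a user the Can Manage permission on a cluster via the
// Databricks Permissions API (PATCH /api/2.0/permissions/clusters/{id}).
public class GrantClusterManagePermission {
    public static void main(String[] args) throws Exception {
        String host = "https://<your-workspace>.cloud.databricks.com"; // placeholder
        String clusterId = "<cluster-id>";                             // placeholder
        String token = System.getenv("DATABRICKS_TOKEN");

        String body = "{\"access_control_list\": ["
            + "{\"user_name\": \"jane.doe@example.com\","   // placeholder user
            + " \"permission_level\": \"CAN_MANAGE\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(host + "/api/2.0/permissions/clusters/" + clusterId))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/json")
            .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}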
This tab allows you to define JDBC driver connection parameters. The value of a parameter can be a constant, variable, credential user, credential password, or KNIME URL.
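For illustration, the sketch below shows how such driver parameters typically end up in a plain JDBC connection outside of KNIME, assuming the legacy Simba Spark JDBC driver. The workspace host, org ID, cluster ID, and token are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch: driver connection parameters passed to a plain JDBC connection,
// assuming the legacy Simba Spark driver is on the classpath.
public class DatabricksJdbcSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:spark://<your-workspace>.cloud.databricks.com:443/default"
            + ";transportMode=http;ssl=1"
            + ";httpPath=sql/protocolv1/o/<org-id>/<cluster-id>";

        Properties props = new Properties();
        props.setProperty("AuthMech", "3");  // 3 = username/password authentication
        props.setProperty("UID", "token");   // literal "token" when using a personal access token
        props.setProperty("PWD", System.getenv("DATABRICKS_TOKEN"));

        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductName());
        }
    }
}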
This tab allows you to define KNIME framework properties such as connection handling, advanced SQL dialect settings or logging options. The available properties depend on the selected database type and driver.
This tab allows you to define rules to map from database types to KNIME types.
This tab allows you to define rules to map from KNIME types to database types.
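As a purely hypothetical illustration of what such rules express (this is not KNIME's actual mapping API), the sketch below pairs JDBC database types with KNIME-style column types for the input direction and back again for the output direction:

import java.sql.JDBCType;
import java.util.Map;

// Hypothetical illustration only, not KNIME's actual mapping API.
// Input rules map database types to KNIME column types; output rules
// map KNIME column types back to database types.
public class TypeMappingSketch {
    static final Map<JDBCType, String> DB_TO_KNIME = Map.of(
        JDBCType.VARCHAR,   "String",
        JDBCType.INTEGER,   "Number (integer)",
        JDBCType.DOUBLE,    "Number (double)",
        JDBCType.TIMESTAMP, "Local Date Time");

    static final Map<String, JDBCType> KNIME_TO_DB = Map.of(
        "String",           JDBCType.VARCHAR,
        "Number (integer)", JDBCType.INTEGER,
        "Number (double)",  JDBCType.DOUBLE,
        "Local Date Time",  JDBCType.TIMESTAMP);

    public static void main(String[] args) {
        System.out.println(DB_TO_KNIME.get(JDBCType.TIMESTAMP)); // Local Date Time
        System.out.println(KNIME_TO_DB.get("Number (double)"));  // DOUBLE
    }
}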
To use this node in KNIME, install the extension KNIME Extension for Apache Spark (legacy) from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud, or on-premises with our brand-new NodePit Runner. Try NodePit Runner!
Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or something is missing? Then please get in touch! Alternatively, you can send us an email to mail@nodepit.com.
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.