This node allows you to execute arbitrary Java code to manipulate or create Spark DataFrames. Simply enter the Java code in the text area.
Note that this node also supports flow variables as input to your Spark job. To use a flow variable, simply double-click on the variable in the "Flow Variable List".
It is also possible to use external Java libraries. To include such external JAR or ZIP files, add their locations in the
"Additional Libraries" tab using the control buttons.
For details see the "Additional Libraries" tab description below.
The libraries you use must be present on your cluster and added to the class path of your Spark job server.
They are not uploaded automatically!
You can define reusable templates with the "Create templates..." button. Templates are stored in the user's workspace by default and can be accessed via the "Templates" tab. For details see the "Templates" tab description below.
For Spark 2.2 and above, this node compiles the snippet code with Java 8 support; otherwise it uses Java 7.
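For illustration, a simple snippet might filter the incoming DataFrame and add a derived column. The following is a minimal sketch, not generated by the node itself: the "value" and "doubled" column names are hypothetical, the wrapper class and method only stand in for the code the node generates around your snippet, and the parameter names spark, dataFrame1, and dataFrame2 are explained in the dialog options below.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

// Stand-in wrapper; the node generates the actual class and method signature.
public class SnippetSketch {
    public Dataset<Row> apply(SparkSession spark, Dataset<Row> dataFrame1,
                              Dataset<Row> dataFrame2) {
        // Keep rows whose hypothetical "value" column is positive and
        // append a derived "doubled" column to the first input.
        return dataFrame1
                .filter(col("value").gt(0))
                .withColumn("doubled", col("value").multiply(2));
    }
}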
Enter your Java code here.
The SparkSession can be accessed via the method parameter spark. The input Dataset<Row> objects can be accessed via the method parameters dataFrame1 and dataFrame2, where dataFrame2 is null if the corresponding input port is not connected (see the sketch after this section).
Flow variables:
You can access input flow variables by defining them in the Input table.
To define a flow variable, simply double-click on the variable in the "Flow Variable List".
You can hit Ctrl+Space to get an auto-completion box with all available classes, methods, and fields. When you select a class and hit Enter, an import statement is generated if it is missing.
Note that the snippet allows you to define custom global variables and custom imports. To view the hidden editor parts, simply click on the plus symbols in the editor.
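The sketch below illustrates a two-input snippet. Only the parameter names spark, dataFrame1, and dataFrame2 come from the node description; the wrapper class, the union logic, and the temporary view name input1 are illustrative assumptions.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Stand-in wrapper; the node generates the actual class and method signature.
public class TwoInputSnippetSketch {
    public Dataset<Row> apply(SparkSession spark, Dataset<Row> dataFrame1,
                              Dataset<Row> dataFrame2) {
        if (dataFrame2 != null) {
            // Both input ports connected: append the rows of the second input
            // (union matches columns by position, not by name).
            return dataFrame1.union(dataFrame2);
        }
        // Second input port not connected: dataFrame2 is null.
        // Example use of the SparkSession: query the first input with SQL.
        dataFrame1.createOrReplaceTempView("input1");
        return spark.sql("SELECT * FROM input1 LIMIT 100");
    }
}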
Allows you to add additional JAR files to the Java snippet class path.
The libraries you use must be present on your cluster and added to the class path of your Spark job server.
They are not uploaded automatically!
Provides predefined templates and allows you to define new reusable templates by saving the current snippet state.