This node allows you to execute arbitrary Java code to create a Spark RDD, e.g. by reading a file from HDFS (see the provided templates). Simply enter the Java code in the text area.
Note that this node also supports flow variables as input to your Spark job. To use a flow variable, simply double-click the variable in the "Flow Variable List".
It is also possible to use external Java libraries. To include such external JAR or ZIP files, add their locations in the "Additional Libraries" tab using the control buttons. For details, see the "Additional Libraries" tab description below.
The libraries used need to be present on your cluster and added to the class path of your Spark job server. They are not automatically uploaded!
You can define reusable templates with the "Create templates..." button. Templates are stored in the user's workspace by default and can be accessed via the "Templates" tab. For details, see the "Templates" tab description below.
Enter your Java code here. The JavaSparkContext can be accessed via the method input parameter sc.
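For illustration, a minimal snippet body might look like the following sketch. It assumes the node wraps this code in a method that receives the JavaSparkContext as sc and expects a JavaRDD<Row> to be returned; the HDFS path and the column layout are placeholders.

    // Hypothetical example: read a CSV-like text file from HDFS and turn each
    // line into a Spark SQL Row. The path and the column layout are placeholders.
    JavaRDD<String> lines = sc.textFile("hdfs:///path/to/input.csv");
    JavaRDD<Row> rows = lines.map(new Function<String, Row>() {
        @Override
        public Row call(final String line) throws Exception {
            final String[] cols = line.split(",");
            // First column kept as a string, second column parsed as a double
            return RowFactory.create(cols[0], Double.parseDouble(cols[1]));
        }
    });
    return rows;

The required imports (org.apache.spark.api.java.JavaRDD, org.apache.spark.api.java.function.Function, org.apache.spark.sql.Row, org.apache.spark.sql.RowFactory) can be added via the auto-completion described below.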
Output Schema:
The schema (i.e. the data table specification) of the returned JavaRDD<Row> is by default derived automatically by inspecting the first 10 rows of the returned RDD. However, you can also specify the schema programmatically by overriding the getSchema() method. For an example of how to implement this method, have a look at the "Create result schema manually" template in the "Templates" tab.
Flow variables:
You can access input flow variables by defining them in the Input table. To define a flow variable, simply double-click the variable in the "Flow Variable List".
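For example, assuming a string flow variable holding an input path was added to the Input table and bound to a Java field, say v_inputPath (the actual field name is shown in the Input table; the name here is hypothetical), it could be used in the snippet like this:

    // Hypothetical use of a flow variable field; "v_inputPath" stands for the field
    // name that the Input table assigns to the flow variable.
    JavaRDD<String> lines = sc.textFile(v_inputPath);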
You can hit Ctrl+Space to get an auto-completion box with all available classes, methods, and fields. When you select a class and hit Enter, an import statement will be generated if it is missing.
Note that the snippet allows you to define custom global variables and custom imports. To view the hidden editor parts, simply click the plus symbols in the editor.
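For instance, a custom import and a custom global variable entered in those hidden editor sections might look like this; the names are purely illustrative:

    // Custom imports section (hidden by default):
    import java.util.regex.Pattern;

    // Custom global variables section (hidden by default):
    private static final Pattern SEPARATOR = Pattern.compile(";");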
Allows you to add additional JAR files to the Java snippet class path.
The libraries used need to be present on your cluster and added to the class path of your Spark job server. They are not automatically uploaded!
Provides predefined templates and allows you to define new reusable templates by saving the current snippet state.
To use this node in KNIME, install the KNIME Extension for Apache Spark (legacy) extension following the NodePit Product and Node Installation Guide.