This workflow demonstrates how to use the different Spark Java Snippet nodes to read a text file from HDFS, parse and filter it, and write the result back to HDFS.
You might also want to have a look at the snippet templates that each of these nodes provides. To do so, simply open the configuration dialog of a Spark Java Snippet node and go to the Templates tab.
Note that this workflow requires access to a Hadoop cluster running Apache Spark 1.2.1 or newer.
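Inside a Spark Java Snippet node, logic like this runs distributed over a JavaRDD or DataFrame; as a rough, Spark-free sketch of the parse-and-filter step, the following plain Java shows the kind of per-line code such a snippet might contain. The tab-separated log format and the ERROR filter are assumptions for illustration, not taken from the actual workflow.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative stand-in for the per-line logic of a parse-then-filter
// Spark Java Snippet chain. Format and filter condition are assumed.
public class SnippetSketch {

    // Parse each raw line into (level, message) on a tab separator,
    // keep only ERROR entries, and return the message column.
    public static List<String> parseAndFilter(List<String> rawLines) {
        return rawLines.stream()
                .map(line -> line.split("\t", 2))        // parse: split level from message
                .filter(parts -> parts.length == 2
                        && parts[0].equals("ERROR"))     // filter: keep ERROR rows only
                .map(parts -> parts[1])                  // project the message column
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "INFO\tstartup complete",
                "ERROR\tdisk full",
                "ERROR\tconnection lost");
        System.out.println(parseAndFilter(lines));       // prints [disk full, connection lost]
    }
}
```

In the real workflow the surrounding reads and writes against HDFS are handled by the Spark nodes themselves, so the snippet body only has to express the transformation.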
06_Modularized_Spark_Scripting consists of the following 15 nodes:
06_Modularized_Spark_Scripting contains nodes provided by the following 4 plugins: