Create Local Big Data Environment

Creates a fully functional local big data environment including Apache Hive, Apache Spark and HDFS.

The Spark WebUI of the created local Spark context is available via the Spark context outport view. Simply click the Click here to open link to open the Spark WebUI in the internal web browser.

Note: Executing this node only creates a new Spark context if no local Spark context with the same Context name currently exists. Resetting the node does not destroy the context. Whether closing the KNIME workflow destroys the context depends on the configured Action to perform on dispose. Spark contexts created by this node can be shared between KNIME workflows.

Options

Context name
The unique name of the context. Only one Spark context will be created when you execute several Create Local Big Data Environment nodes with the same context name.
Number of threads
The number of threads the local Spark runtime can use.
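
For orientation, the thread count corresponds to the thread setting of a plain local Spark master. A minimal sketch in PySpark, assuming 4 threads (illustration only, not the node's internal API):

    from pyspark.sql import SparkSession

    # Sketch only: a local Spark master with 4 worker threads, comparable to
    # setting "Number of threads" to 4 in this node.
    spark = SparkSession.builder.master("local[4]").getOrCreate()
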
Action to perform on dispose
Determines what happens to the Spark context when the workflow or KNIME is closed.
  • Destroy Spark context: Will destroy the Spark context and free up all allocated resources.
  • Delete Spark DataFrames: Will delete all Spark DataFrames but keep the Spark context and all allocated resources open.
  • Do nothing: Leaves the Spark context and all created Spark DataFrames as is.
Use custom Spark settings
Select this option to specify additional Spark settings. For more details, see the Custom Spark settings description.
Custom Spark settings
Allows you to pass arbitrary settings to the Spark context. This is especially useful if you want to add additional JARs, e.g. to test your own UDFs, as illustrated in the sketch below.
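
As an illustration, the custom settings correspond to ordinary Spark configuration properties. A minimal PySpark sketch with placeholder values (the JAR path and partition count are assumptions, not defaults of this node):

    from pyspark.sql import SparkSession

    # Sketch only: custom settings expressed as plain Spark configuration
    # properties; the JAR path and partition count are placeholders.
    spark = (SparkSession.builder
             .master("local[4]")
             .config("spark.jars", "/path/to/my-udfs.jar")    # extra JARs, e.g. for custom UDFs
             .config("spark.sql.shuffle.partitions", "8")     # any other Spark property
             .getOrCreate())
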
SQL Support
  • Spark SQL only: The Spark SQL node will only support Spark SQL syntax. The Hive connection port will be disabled.
  • HiveQL: The Spark SQL node will support HiveQL syntax. The Hive connection port will be disabled.
  • HiveQL and provide JDBC connection: The Spark SQL node will support HiveQL syntax. The Hive connection port will be enabled, which allows you to also work with a local Hive instance using the KNIME database nodes.
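
Conceptually, the HiveQL variants behave like a Spark session with Hive support enabled. A minimal PySpark sketch of that idea (an assumption about the underlying mechanism, not this node's implementation):

    from pyspark.sql import SparkSession

    # Sketch only: with Hive support enabled, spark.sql() accepts HiveQL and
    # tables are managed through a Hive metastore.
    spark = (SparkSession.builder
             .master("local[2]")
             .enableHiveSupport()
             .getOrCreate())
    spark.sql("CREATE TABLE IF NOT EXISTS demo (id INT, name STRING)")
    spark.sql("SHOW TABLES").show()
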
Use custom Hive data folder (Metastore DB & Warehouse)
If selected, the Hive table definitions and data files are stored in the specified location and will also be available after a KNIME restart. If not selected, all Hive-related information is stored in a temporary location that is deleted when the local Spark context is destroyed.
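
One way to picture a persistent Hive data folder is through the Spark/Hive warehouse and metastore properties. A hedged PySpark sketch, assuming the placeholder folder /data/hive (an illustration of the idea, not the node's exact configuration):

    from pyspark.sql import SparkSession

    # Sketch only: point the warehouse and the embedded Derby metastore to a
    # fixed folder so table definitions and data survive a restart.
    spark = (SparkSession.builder
             .config("spark.sql.warehouse.dir", "/data/hive/warehouse")
             .config("spark.hadoop.javax.jdo.option.ConnectionURL",
                     "jdbc:derby:;databaseName=/data/hive/metastore_db;create=true")
             .enableHiveSupport()
             .getOrCreate())
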
Hide warning about an existing local Spark context
Enable this option to suppress a warning message shown when the Spark context to be created by this node already exists. For further details see the Context name option.

Input Ports

This node has no input ports

Output Ports

  • JDBC connection to a local Hive instance. This port can be connected to the KNIME database nodes.
  • HDFS connection that points to the local file system. This port can be connected, for example, to Spark nodes that read/write files.
  • Local Spark context that can be connected to all Spark nodes.
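
For orientation, the HDFS connection of this node resolves to the local file system, so Spark reads and writes behave like ordinary local file access. A minimal PySpark sketch with a placeholder path (illustration only):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[2]").getOrCreate()

    # Placeholder path: with the local environment, file:// URIs on the local
    # file system are what the Spark nodes effectively read and write.
    df = spark.read.csv("file:///tmp/example.csv", header=True)
    df.show()
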

Views

This node has no views
