
JSON to Spark

KNIME Extension for Apache Spark core infrastructure version 4.3.1.v202101261633 by KNIME AG, Zurich, Switzerland

This node supports the path flow variable. For further information about file handling in general see the File Handling Guide.

Creates a Spark DataFrame/RDD from a given JSON file. See the Jackson JSON parser documentation for more information.
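Spark's JSON data source reads input in the JSON Lines form by default: one complete JSON record per line, rather than one large JSON array. A minimal stdlib Python sketch of that shape (the sample data is illustrative, not tied to this node):

```python
import json

# Illustrative sample in the one-record-per-line (JSON Lines) shape
# that Spark's JSON reader expects by default.
records = """\
{"id": 1, "name": "alice"}
{"id": 2, "name": "bob"}
"""

# Each line is a self-contained JSON document, so lines can be
# parsed (and, in Spark, partitioned) independently.
rows = [json.loads(line) for line in records.splitlines() if line.strip()]
```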

Notice: This feature requires at least Apache Spark 1.5.



Select whether you want to read a file or a folder.
Enter the input path. The required syntax of a path depends on the connected file system. The node description of the respective connector node describes the required path format. You can also choose a previously selected file from the drop-down list, or select a location from the "Browse..." dialog. Note that browsing is disabled in some cases:
  • Browsing is disabled if the connector node hasn't been executed since the workflow was opened. (Re)execute the connector node to enable browsing.
The location can be exposed as or automatically set via a path flow variable.
Sampling ratio
Infer the type of a collection of JSON records in three stages:
  1. Sample the given fraction of records and infer their types.
  2. Merge types by choosing the lowest type necessary to cover equal keys.
  3. Replace any remaining null fields with string, the top type.
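The three stages above can be sketched in plain Python. The type names and the widening table here are illustrative assumptions; Spark's actual inference uses a richer type lattice, but the shape of the algorithm is the same: type each sampled record, merge per-key types to the lowest covering type, and fall back to string as the top type.

```python
# Stage 2 helper: the lowest type that covers both operands.
# Illustrative lattice only (real Spark widening is richer).
WIDEN = {
    ("int", "float"): "float",
    ("float", "int"): "float",
}

def field_type(value):
    """Stage 1 helper: infer a primitive type for one JSON value."""
    if value is None:
        return None                       # unknown for now (stage 3 resolves it)
    if isinstance(value, bool):           # check bool before int: bool is an int subclass
        return "boolean"
    if isinstance(value, int):
        return "int"
    if isinstance(value, float):
        return "float"
    return "string"

def merge(a, b):
    """Stage 2: choose the lowest type necessary to cover both."""
    if a is None:
        return b
    if b is None:
        return a
    if a == b:
        return a
    return WIDEN.get((a, b), "string")    # otherwise fall back to the top type

def infer_schema(sampled_records):
    schema = {}
    for record in sampled_records:        # stage 1 + 2: type and merge each record
        for key, value in record.items():
            schema[key] = merge(schema.get(key), field_type(value))
    # Stage 3: any field that was always null becomes string, the top type.
    return {key: (t or "string") for key, t in schema.items()}

sample = [
    {"id": 1, "score": 2.5, "tag": None},
    {"id": 7, "score": 3,   "tag": "a"},
]
print(infer_schema(sample))  # {'id': 'int', 'score': 'float', 'tag': 'string'}
```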
  • Convert primitives into Strings.
  • Allow comments.
  • Allow unquoted field names.
  • Allow single quotes: Feature that determines whether the parser will allow the use of single quotes (apostrophe, character ') for quoting Strings (names and String values).
  • Allow numeric leading zeros: Feature that determines whether the parser will allow JSON integral numbers to start with additional (ignorable) zeroes (like: 000001).
  • Allow non numeric numbers: Feature that allows the parser to recognize a set of "Not-a-Number" (NaN) tokens as legal floating-point values (similar to how many other data formats and programming languages allow it). Supported tokens: INF, -INF, Infinity, -Infinity and NaN.
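Python's stdlib json module illustrates why these relaxation options exist: a standards-strict parser rejects comments, unquoted field names, single quotes, and leading zeros, while the NaN/Infinity tokens are a common extension that Python's json happens to accept by default. The snippets below are illustrative only and do not use this node's parser:

```python
import json
import math

# "Allow non numeric numbers": NaN/Infinity tokens parsed as float values.
# (Python's json accepts these by default; strict JSON does not define them.)
value = json.loads('{"x": NaN, "y": -Infinity}')
assert math.isnan(value["x"]) and value["y"] == float("-inf")

def strict_rejects(text):
    """Return True if a standards-strict parse of `text` fails."""
    try:
        json.loads(text)
        return False
    except ValueError:
        return True

# Each of these inputs needs one of the relaxed options above:
assert strict_rejects('{"a": 1} // comment')   # Allow comments
assert strict_rejects('{a: 1}')                # Allow unquoted field names
assert strict_rejects("{'a': 1}")              # Allow single quotes
assert strict_rejects('{"a": 000001}')         # Allow numeric leading zeros
```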

Input Ports

Spark compatible connection (HDFS, WebHDFS, HttpFS, S3, Blob Storage, ...)
Spark context

Output Ports

Spark DataFrame/RDD


To use this node in KNIME, install KNIME Extension for Apache Spark from the following update site:


