Spark to Parquet

Converts an incoming Spark DataFrame/RDD into a Parquet file.

Notice: This feature requires at least Apache Spark 1.5.

Options

Save mode
Specifies how existing data at the target location is handled.
Partitions
Overrides the default partition count. This can be useful to reduce the number of output files, e.g. down to a single file.
Warning: This might result in serious performance issues on huge data sets. Use with caution!
See the Spark documentation for more information.
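
Conceptually, the two options correspond to the save mode and partition count of a plain Spark Parquet write. The following is a minimal sketch of such a write, not the node's actual implementation; the DataFrame, target path, and chosen option values are placeholders.

    import org.apache.spark.sql.{DataFrame, SaveMode}

    object SparkToParquetSketch {
      def writeParquet(df: DataFrame, path: String): Unit = {
        df
          .coalesce(1)                  // "Partitions": reduce the output to a single file
          .write
          .mode(SaveMode.Overwrite)     // "Save mode": how to handle existing data at the path
          .parquet(path)                // write the DataFrame as Parquet to the target file system
      }
    }

As with the Partitions option itself, coalescing to one partition forces the data through a single task and can severely slow down writes of large data sets.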

Input Ports

Spark compatible connection (HDFS, WebHDFS, HttpFS, S3, Blob Storage, ...)
Spark DataFrame/RDD

Output Ports

This node has no output ports

Views

This node has no views
