Spark to CSV

Writes a Spark DataFrame/RDD to a CSV file. See the CSV Data Source documentation for more information.

Note: This feature requires at least Apache Spark 1.5.

Options

Save mode
How to handle existing data (Spark save modes such as overwrite, append, ignore, or fail if data exists).
Driver
Upload the data source driver, or rely on the driver provided on the cluster side.
Partitions
Overrides the default partition count. This can be useful to reduce the number of output files, e.g. to a single file.
Warning: This might cause serious performance issues on huge data sets. Use with caution!
See the Spark documentation for more information.
Header
If enabled, the column names are written as the first line.
Delimiter
Character used as delimiter between columns (supports escape sequences, e.g. \t or \u0123).
Quote character
Quote character (delimiters inside quotes are ignored).
Escape character
Escape character (escaped quote characters are ignored).
Null value
Specifies a string that indicates a null value; null values in the DataFrame are written as this string.
Date format
Specifies a string that indicates the date format to use when writing dates or timestamps. Custom date formats follow the patterns of java.text.SimpleDateFormat. This applies to both DateType and TimestampType. By default it is null, in which case the default java.sql.Timestamp/java.sql.Date string representation is used.
Codec
Compression codec to use when saving to file.
Quote mode
When to quote fields (ALL, MINIMAL (default), NON_NUMERIC, NONE), see Quote Modes.
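The quoting options above (delimiter, quote character, null value, quote mode) behave much like their counterparts in standard CSV libraries. As an illustrative sketch only, not the node's implementation, Python's stdlib csv module (with made-up sample rows standing in for a DataFrame) shows how the quote modes differ:

```python
import csv
import io

# Hypothetical sample rows; None models a DataFrame null value.
rows = [
    ["id", "name", "score"],
    [1, "Smith, Jane", 3.5],
    [2, None, 7.0],
]

def write_csv(rows, quoting, delimiter=",", quotechar='"', null_value=""):
    """Render rows to a string using settings analogous to the node's options."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter, quotechar=quotechar,
                        quoting=quoting)
    for row in rows:
        # Replace nulls with the configured null-value string.
        writer.writerow([null_value if v is None else v for v in row])
    return buf.getvalue()

# MINIMAL (the node's default): quote only fields that need it,
# e.g. those containing the delimiter.
print(write_csv(rows, csv.QUOTE_MINIMAL))

# ALL: quote every field, including numeric ones.
print(write_csv(rows, csv.QUOTE_ALL))

# NON_NUMERIC: quote everything that is not a number.
print(write_csv(rows, csv.QUOTE_NONNUMERIC))
```

With MINIMAL, only "Smith, Jane" is quoted because it contains the delimiter; with ALL, even the numeric fields are quoted; NONE (csv.QUOTE_NONE in Python) would instead rely on the escape character for embedded delimiters.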

Input Ports

Spark compatible connection (HDFS, WebHDFS, HttpFS, S3, Blob Storage, ...)
Spark DataFrame/RDD

Output Ports

This node has no output ports

Views

This node has no views
