CSV to Spark

Creates a Spark DataFrame/RDD from a given CSV file. See the CSV Data Source documentation for more information.

Notice: This feature requires at least Apache Spark 1.5.
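
Under the hood, the node issues a regular Spark CSV read on the cluster. The following is a minimal sketch of the equivalent raw Spark 1.x code, assuming the Databricks spark-csv package is available on the classpath; the application name and the HDFS path are placeholders, and in a KNIME workflow the contexts are managed by the Spark context input port instead:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  // Spark 1.x entry points (placeholder app name)
  val sc = new SparkContext(new SparkConf().setAppName("CsvToSpark"))
  val sqlContext = new SQLContext(sc)

  // Read the CSV file through the external spark-csv data source
  val df = sqlContext.read
    .format("com.databricks.spark.csv")  // requires the spark-csv package
    .option("header", "true")            // treat the first line as column names
    .load("hdfs:///path/to/example.csv") // placeholder path

  df.printSchema()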

Options

Driver
Upload the data source driver, or depend on a driver provided on the cluster side.
Header
The first line of the files is used to name the columns and is not included in the data.
Delimiter
Character used as a delimiter between columns (escape sequences such as \t or \u0123 are supported).
Quote character
Quote character (delimiters inside quotes are ignored).
Escape character
Escape character (escaped quote characters are ignored).
Mode
Determines the parsing mode. By default it is PERMISSIVE.
  • PERMISSIVE: tries to parse all lines: nulls are inserted for missing tokens and extra tokens are ignored.
  • DROPMALFORMED: drops lines that have fewer or more tokens than expected, or whose tokens do not match the schema.
  • FAILFAST: aborts with a RuntimeException if it encounters any malformed line.
Charset
Valid charset name (see java.nio.charset.Charset).
Schema
If enabled, column types are inferred automatically, which requires one extra pass over the data. Otherwise, all columns are assumed to be of type string.
Comments
Skip lines beginning with this character.
Null value
Specifies a string that indicates a null value; any field matching this string is set to null in the DataFrame.
Date format
Specifies a string that indicates the date format to use when reading dates or timestamps. Custom date formats follow the patterns of java.text.SimpleDateFormat. This applies to both DateType and TimestampType. By default, it is null, which means dates and timestamps are parsed by java.sql.Timestamp.valueOf() and java.sql.Date.valueOf().
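
Taken together, these dialog settings correspond to reader options of the underlying CSV data source. The sketch below shows the same configuration expressed directly against the spark-csv API; the option names come from the spark-csv package, while the concrete values and the file path are only illustrative:

  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")           // Header
    .option("delimiter", "\t")          // Delimiter
    .option("quote", "\"")              // Quote character
    .option("escape", "\\")             // Escape character
    .option("mode", "PERMISSIVE")       // Mode: PERMISSIVE, DROPMALFORMED, or FAILFAST
    .option("charset", "UTF-8")         // Charset
    .option("inferSchema", "true")      // Schema: infer column types (one extra pass)
    .option("comment", "#")             // Comments: skip lines starting with '#'
    .option("nullValue", "NA")          // Null value
    .option("dateFormat", "yyyy-MM-dd") // Date format (SimpleDateFormat pattern)
    .load("hdfs:///path/to/example.csv")

In the node dialog, the same values are entered literally (e.g. \t for a tab) and passed through to the data source when the workflow is executed.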

Input Ports

Spark-compatible connection (HDFS, WebHDFS, HttpFS, S3, Blob Storage, ...)
Required Spark context.

Output Ports

Spark DataFrame/RDD

Views

This node has no views

