
Spark Column Rename (Regex)

KNIME Extension for Apache Spark core infrastructure version 4.2.0.v202007072005 by KNIME AG, Zurich, Switzerland

Renames all columns based on a regular expression search & replace pattern. The search pattern is a regular expression, possibly containing groups for further back referencing in the replace field.

In the simplest case, you can search and replace string literals. For example, if the input columns are called "Foo 1", "Foo 2", "Foo 3", etc., the search string is "Foo", and the replacement is "Bar", the output columns will be "Bar 1", "Bar 2", "Bar 3".

More complicated cases involve capturing groups, i.e. expressions in parentheses that, if matched in a column name, are saved. The groups can be referenced in the replacement string using $g, where g is a digit from 0 to 9. These placeholders are replaced by the corresponding match from the input column name. For instance, to rename the columns produced by the Data Generator node (they follow the scheme Universe_<number1>_<number2>) to <number2> (Uni <number1>), use "Universe_(\d+)_(\d+)" as the search string and "$2 (Uni $1)" as the replacement.
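
As an illustration, the following Scala sketch performs the same group-based rename directly on a Spark DataFrame. The helper renameByRegex and the sample DataFrame are hypothetical stand-ins for the node's configuration, assuming plain java.util.regex semantics via String.replaceAll; this is not the node's actual implementation.

    import org.apache.spark.sql.{DataFrame, SparkSession}

    object RenameByRegexSketch {

      // Apply the search & replace pattern to every column name. Names that do
      // not match the pattern are returned unchanged by String.replaceAll.
      def renameByRegex(df: DataFrame, search: String, replacement: String): DataFrame =
        df.columns.foldLeft(df) { (current, name) =>
          current.withColumnRenamed(name, name.replaceAll(search, replacement))
        }

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("rename-sketch").getOrCreate()
        import spark.implicits._

        // Column names follow the Data Generator scheme Universe_<number1>_<number2>.
        val df = Seq((0.1, 0.2)).toDF("Universe_0_0", "Universe_0_1")

        // "Universe_0_1" -> "1 (Uni 0)": group 2 comes first, group 1 goes into "(Uni ...)".
        val renamed = renameByRegex(df, """Universe_(\d+)_(\d+)""", "$2 (Uni $1)")
        renamed.printSchema() // columns are now "0 (Uni 0)" and "1 (Uni 0)"

        spark.stop()
      }
    }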

The special sequence $i represents the current column index (unless escaped with a '\' (backslash)). For example, to precede each column name with its column index, use "(^.+$)" as the search string, capturing the entire column name in a group, and "$i: $1" as the replacement.
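
Outside the node, the same effect can be approximated by zipping the column names with their index, as in this continuation of the sketch above (df is the hypothetical DataFrame from that example, e.g. inside its main method):

    // Rough equivalent of search "(^.+$)" with replacement "$i: $1": the whole
    // column name is captured and prefixed with the column's index
    // (zero-based here; the node's numbering may differ).
    val prefixed = df.columns.zipWithIndex.foldLeft(df) { case (current, (name, i)) =>
      current.withColumnRenamed(name, s"$i: $name")
    }
    // "Universe_0_0" becomes "0: Universe_0_0", "Universe_0_1" becomes "1: Universe_0_1".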

Further documentation regarding regular expressions can be found in the Java API documentation, in particular the classes Pattern and Matcher.

Options

Search String (regexp)
The search pattern, which may contain capturing groups for use in back references.
Replacement
The replacement string. Use $1, $2, etc. to address the groups defined in the search pattern.
Case Insensitive
Enables case-insensitive matching.
Literal
When this option is enabled, the search string is treated as a sequence of literal characters; metacharacters and escape sequences are given no special meaning (see the sketch after this list).
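
The two options correspond closely to standard java.util.regex flags. The following REPL-style Scala snippet shows one plausible mapping; compileSearch is a hypothetical helper, and the node's actual implementation may differ.

    import java.util.regex.Pattern

    // Plausible mapping of the dialog options onto java.util.regex flags.
    def compileSearch(search: String, caseInsensitive: Boolean, literal: Boolean): Pattern = {
      var flags = 0
      if (caseInsensitive) flags |= Pattern.CASE_INSENSITIVE
      if (literal) flags |= Pattern.LITERAL // metacharacters and escapes lose their special meaning
      Pattern.compile(search, flags)
    }

    // With literal = true, "Foo (1)" matches only the exact text "Foo (1)",
    // because "(" and ")" are no longer treated as group delimiters.
    val pattern = compileSearch("Foo (1)", caseInsensitive = false, literal = true)
    println(pattern.matcher("Foo (1) extra").replaceAll("Bar")) // prints "Bar extra"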

Input Ports

Arbitrary input Spark DataFrame/RDD.

Output Ports

Input Spark DataFrame/RDD with renamed columns according to configuration parameters.

Installation

To use this node in KNIME, install KNIME Extension for Apache Spark from the following update site:

KNIME 4.2

A zipped version of the software site can be downloaded here.

Not sure what to do with this link? Read our NodePit Product and Node Installation Guide, which explains in detail how to install nodes in your KNIME Analytics Platform.

Wait a sec! You want to explore and install nodes even faster? We highly recommend our NodePit for KNIME extension for your KNIME Analytics Platform. Browse NodePit from within KNIME, install nodes with just one click and share your workflows with NodePit Space.

Developers

You want to see the source code for this node? Click the following button and we’ll use our super-powers to find it for you.