There are 96 nodes that can be used as a predecessor
for a node with an input port of type Spark Context.
Execute arbitrary Python code in Spark.
Repartitions a Spark DataFrame.
Executes a Spark SQL query statement.
Concatenates Spark DataFrame/RDDs row-wise; inputs are optional.
The Spark GroupBy node groups the data by the selected columns and outputs aggregated data for the generated groups.
Splits input data into two partitions.
Pivots and groups the given Spark DataFrame/RDD by the selected pivot and grouping columns, and performs the chosen aggregations for each pivot value.