Hierarchical Clustering

Hierarchically clusters the input data.
Note: This node is only suitable for small data sets, since it keeps the entire data in memory and has cubic time complexity.
There are two methods to do hierarchical clustering:

  • Top-down or divisive, i.e. the algorithm starts with all data points in one large cluster and repeatedly divides the most dissimilar data points into subclusters until each cluster consists of exactly one data point.
  • Bottom-up or agglomerative, i.e. the algorithm starts with every data point as its own cluster and repeatedly merges the most similar ones into superclusters until it ends up with one large cluster containing all data points.
This node implements the agglomerative (bottom-up) method; a minimal sketch follows below.
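
A minimal sketch of the agglomerative procedure in Python (illustrative only, not the node's actual implementation; the helper names are made up):

    import math

    def euclidean(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def cluster_distance(c1, c2):
        # single linkage for this sketch: minimum pointwise distance
        return min(euclidean(x, y) for x in c1 for y in c2)

    def agglomerate(points):
        clusters = [[p] for p in points]  # every point starts as its own cluster
        merges = []                       # record the distance of each fusion
        while len(clusters) > 1:
            # find the pair of clusters with the smallest linkage distance
            i, j = min(
                ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                key=lambda ab: cluster_distance(clusters[ab[0]], clusters[ab[1]]),
            )
            merges.append(cluster_distance(clusters[i], clusters[j]))
            merged = clusters[i] + clusters[j]
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        return merges

    print(agglomerate([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]))  # fusion distances, smallest first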

In order to determine the distance between clusters, a measure has to be defined. Essentially, there are three methods to compare two clusters (all three are sketched after the list):

  • Single Linkage: defines the distance between two clusters c1 and c2 as the minimal distance between any two points x, y with x in c1 and y in c2.
  • Complete Linkage: defines the distance between two clusters c1 and c2 as the maximal distance between any two points x, y with x in c1 and y in c2.
  • Average Linkage: defines the distance between two clusters c1 and c2 as the mean distance over all pairs of points x, y with x in c1 and y in c2.
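
Illustrative implementations of the three linkage criteria (the function names are assumptions, not the node's API):

    import math

    def dist(p, q):  # Euclidean point distance
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def single_linkage(c1, c2):
        return min(dist(x, y) for x in c1 for y in c2)

    def complete_linkage(c1, c2):
        return max(dist(x, y) for x in c1 for y in c2)

    def average_linkage(c1, c2):
        return sum(dist(x, y) for x in c1 for y in c2) / (len(c1) * len(c2))

    c1, c2 = [(0, 0), (1, 0)], [(4, 0), (6, 0)]
    print(single_linkage(c1, c2), complete_linkage(c1, c2), average_linkage(c1, c2))  # 3.0 6.0 4.5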

In order to measure the distance between two points, a distance measure is necessary. You can choose between the Manhattan distance and the Euclidean distance, which correspond to the L1 and L2 norms, respectively.
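
For example, the two measures can be written as follows (hypothetical helper names):

    def manhattan(p, q):  # L1 norm of the difference vector
        return sum(abs(a - b) for a, b in zip(p, q))

    def euclidean(p, q):  # L2 norm of the difference vector
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    print(manhattan((0, 0), (3, 4)))  # 7
    print(euclidean((0, 0), (3, 4)))  # 5.0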

The output is the same data as the input with one additional column containing the name of the cluster each data point is assigned to. Since a hierarchical clustering algorithm produces a whole series of cluster results, the number of clusters for the output has to be defined in the dialog.
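
The same end-to-end behavior can be sketched with SciPy (assumed tooling, not the node's implementation): build the full hierarchy, then cut it at a fixed number of clusters to get one label per row, analogous to the node's output column.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])
    Z = linkage(X, method="average", metric="cityblock")  # average linkage, Manhattan distance
    labels = fcluster(Z, t=3, criterion="maxclust")       # request 3 output clusters
    print(labels)  # one cluster id per input row, e.g. [1 1 2 2 3]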

Options

Number output cluster
The number of clusters to output, i.e. the level of the hierarchy at which the cluster tree is cut for the output column.
Distance function
Which distance measure to use for the distance between points.
Linkage type
Which method to use to measure the distance between clusters (as described above).
Distance cache
Caching the distances between the data points drastically improves performance, especially for high-dimensional data sets. However, it requires a lot of memory, so you can switch it off for large data sets (see the sketch below).
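
A sketch of the trade-off behind this option: precompute all pairwise distances once (O(n^2) memory) instead of recomputing them in every merge step. The code below uses SciPy as assumed tooling, not the node's own cache.

    from scipy.spatial.distance import pdist, squareform

    X = [[0.0, 0.0], [1.0, 1.0], [4.0, 0.0]]
    cache = squareform(pdist(X, metric="euclidean"))  # n x n matrix, kept in memory
    print(cache[0, 2])  # distance between points 0 and 2, looked up instead of recomputed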

Input Ports

The data that should be clustered using hierarchical clustering. Only numeric columns are considered; nominal columns are ignored.

Output Ports

The input data with an additional column containing the name of the cluster each data point is assigned to.

Views

Dendrogram/Distance View
  • Dendrogram: This view shows a dendrogram, which displays the whole cluster hierarchy. All data points are at the bottom. The closest data points are connected, and the height of a connection shows the distance between them. Thus, the y coordinate displays the distance of the fusions and thereby also the hierarchy level. The x axis is nominal and displays the single data points with their RowID. Each cluster can be selected and hilited; all contained subclusters are hilited, too.
  • Distance plot: The distance plot displays the distances between the clusters for each number of clusters. This view can help to determine a "good" number of clusters, since there are sudden jumps in the level of similarity as dissimilar groups are fused. The y coordinate is the distance of a fusion, the x axis the number of the fusion, i.e. the hierarchy level. The tooltip over a data point provides detailed information about that point, where "x" is the hierarchy level and "y" the distance of that fusion. The points cannot be hilited, since the distances correspond to the heights in the dendrogram, not to any data points. The appearance tab lets you adjust the view by hiding or displaying the dots and changing the line thickness and the dot size. (Both views are sketched after this list.)
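
Both views can be approximated with SciPy and matplotlib (assumed tooling, not the node's own rendering): the dendrogram draws each fusion at its distance, and the distance plot shows the fusion distances in merge order, where a sudden jump suggests a good cluster count.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    X = np.random.RandomState(0).rand(12, 3)
    Z = linkage(X, method="average")

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
    dendrogram(Z, ax=ax1)                          # fusion height = cluster distance
    ax1.set_title("Dendrogram")
    ax2.plot(range(1, len(Z) + 1), Z[:, 2], "o-")  # distance of each fusion, in merge order
    ax2.set_title("Distance plot")
    ax2.set_xlabel("fusion number (hierarchy level)")
    ax2.set_ylabel("fusion distance")
    plt.show()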
