
**KNIME Base Nodes** version **4.3.2.v202103021015** by **KNIME AG, Zurich, Switzerland**

Hierarchically clusters the input data.

Note: This node works only on small data sets. It keeps the entire data
in memory and has cubic complexity.

There are two methods to do hierarchical clustering:

- Top-down or divisive: the algorithm starts with all data points in one large cluster and repeatedly splits off the most dissimilar data points into subclusters until each cluster consists of exactly one data point.
- Bottom-up or agglomerative: the algorithm starts with every data point as its own cluster and repeatedly merges the most similar clusters into superclusters until one cluster contains all data points.
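As an illustration of the bottom-up approach (a minimal Python sketch, not the node's actual implementation), the following merges the closest pair of clusters under single linkage with Euclidean distance until the requested number of clusters remains. The full pairwise scan repeated for every merge is what gives the naive algorithm its cubic complexity:

```python
import math

def euclidean(x, y):
    # L2 distance between two points given as coordinate tuples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def agglomerate(points, n_clusters):
    # start with every data point in its own cluster
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        # find the pair of clusters with the smallest single-linkage distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(x, y)
                        for x in clusters[i] for y in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # merge the closest pair into one supercluster
        clusters[i].extend(clusters.pop(j))
    return clusters

pts = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
print(agglomerate(pts, 2))  # two well-separated groups
```

Stopping the loop at `n_clusters` corresponds to picking one level of the hierarchy, which is what the node's "Number output cluster" option does.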

In order to determine the distance between clusters, a measure has to be defined. There are three common methods to compare two clusters:

- Single Linkage: defines the distance between two clusters c1 and c2 as the minimum distance between any two points x, y with x in c1 and y in c2.
- Complete Linkage: defines the distance between two clusters c1 and c2 as the maximum distance between any two points x, y with x in c1 and y in c2.
- Average Linkage: defines the distance between two clusters c1 and c2 as the mean distance over all pairs of points x in c1 and y in c2.
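The three linkage criteria can be sketched directly from their definitions (illustrative Python, not the node's code; `dist` stands for any point-to-point distance function, here absolute difference on 1-D points to keep the example small):

```python
def single_linkage(c1, c2, dist):
    # distance between the closest pair of points across the two clusters
    return min(dist(x, y) for x in c1 for y in c2)

def complete_linkage(c1, c2, dist):
    # distance between the farthest pair of points across the two clusters
    return max(dist(x, y) for x in c1 for y in c2)

def average_linkage(c1, c2, dist):
    # mean distance over all cross-cluster pairs of points
    return sum(dist(x, y) for x in c1 for y in c2) / (len(c1) * len(c2))

c1, c2 = [1.0, 2.0], [4.0, 6.0]
dist = lambda a, b: abs(a - b)
print(single_linkage(c1, c2, dist))    # 2.0
print(complete_linkage(c1, c2, dist))  # 5.0
print(average_linkage(c1, c2, dist))   # 3.5
```

Single linkage tends to produce elongated, chain-like clusters, while complete linkage favors compact ones; average linkage sits between the two.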

In order to measure the distance between two points, a distance measure is necessary. You can choose between the Manhattan distance and the Euclidean distance, which correspond to the L1 and L2 norms, respectively.
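The two metrics differ only in how coordinate differences are aggregated (a small Python sketch for illustration):

```python
import math

def manhattan(x, y):
    # L1 norm: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    # L2 norm: square root of the sum of squared coordinate differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

p, q = (0.0, 0.0), (3.0, 4.0)
print(manhattan(p, q))  # 7.0
print(euclidean(p, q))  # 5.0
```

Because both metrics are scale-sensitive, normalizing the numeric columns first (e.g. with the Normalizer node) usually gives more meaningful cluster distances.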

The output is the input data with one additional column containing the name of the cluster each data point is assigned to. Since a hierarchical clustering algorithm produces a series of cluster results, the number of clusters for the output has to be specified in the dialog.

- **Number output cluster**: Which level of the hierarchy to use for the output column, i.e. the number of clusters in the output.
- **Distance function**: Which distance measure to use for the distance between points.
- **Linkage type**: Which method to use to measure the distance between clusters (as described above).
- **Distance cache**: Caching the distances between the data points drastically improves performance, especially for high-dimensional data sets. However, it needs a lot of memory, so you can switch it off for large data sets.

- The data that should be clustered using hierarchical clustering. Only numeric columns are considered; nominal columns are ignored.

- Dendrogram/Distance View
- Dendrogram: The view shows a dendrogram which displays the whole cluster hierarchy. At the bottom are all data points. The closest data points are connected, and the height of each connection shows the distance between them. Thus, the y coordinate displays the distance of the fusions and thereby also the hierarchy level. The x axis is nominal and displays the single data points with their row IDs. Each cluster can be selected and hilited. All contained subclusters will be hilited, too.
- Distance plot: The distance plot displays the distances between the clusters for each number of clusters. This view can help to determine a "good" number of clusters, since there will be sudden jumps in the level of similarity as dissimilar groups are fused. The y coordinate is the distance of the fusion; the x axis shows the number of the fusion, i.e. the hierarchy level. The tooltip over a data point provides detailed information about that point, where "x" is the hierarchy level and "y" the distance of that fusion. The points cannot be hilited, since the distances correspond to the heights in the dendrogram, not to any data points. The appearance tab lets you adjust the view by hiding or displaying the dots and changing the line thickness and the dot size.

- Normalizer (13 %)
- Column Filter (7 %) Streamable
- File Reader (7 %) Streamable
- ~~CSV Reader~~ (6 %) Deprecated
- Color Manager (5 %)

- Color Manager (23 %)
- Hierarchical Cluster View (18 %)
- Denormalizer (4 %) Streamable
- k-Means (4 %)
- Hierarchical Clustering (DistMatrix) (4 %)

- 01_HierarchicalClustering (KNIME Hub)
- House_Prices 7 (KNIME Hub)
- ML Fall Summit 19.knwf (KNIME Hub)
- OK (KNIME Hub)

To use this node in KNIME, install KNIME Base nodes from the following update site:

KNIME 4.3

A zipped version of the software site can be downloaded here.

You don't know what to do with this link? Read our NodePit Product and Node Installation Guide, which explains in detail how to install nodes to your KNIME Analytics Platform.

You want to see the source code for this node? Click the following button and we’ll use our super-powers to find it for you.

Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or something is missing? Then please get in touch! Alternatively, you can send us an email to mail@nodepit.com, follow @NodePit on Twitter, or chat on Gitter!

Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.