Hierarchically clusters the input data.
Note: This node works only on small data sets, because it keeps the entire data set in memory and has cubic runtime complexity.
There are two approaches to hierarchical clustering:
- Top-down, or divisive: the algorithm starts with all data points in one large cluster and repeatedly splits the most dissimilar data points into subclusters, until each cluster consists of exactly one data point.
- Bottom-up, or agglomerative: the algorithm starts with every data point as its own cluster and repeatedly merges the most similar clusters into superclusters, until one large cluster contains all data points (this procedure is sketched below).
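For illustration, the bottom-up procedure can be sketched in a few lines of Python. The sketch below assumes single linkage and the Euclidean distance; the function names and the returned merge history are placeholders, not the node's actual implementation.

```python
# A minimal sketch of agglomerative clustering, assuming single linkage
# and Euclidean distance. Illustrative only.
from itertools import combinations
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def agglomerate(points):
    # Start with every data point as its own cluster.
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters with the smallest single-linkage distance.
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: min(
                euclidean(x, y)
                for x in clusters[ij[0]]
                for y in clusters[ij[1]]
            ),
        )
        # Merge the two most similar clusters into a supercluster.
        merges.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges  # the full merge history, i.e. the dendrogram
```

The exhaustive pairwise search inside the merge loop is what makes the naive algorithm so expensive, which is why the node is restricted to small data sets.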
To determine the distance between clusters, a linkage criterion has to be defined. Three methods to compare two clusters are available (see the sketch after this list):
- Single Linkage: defines the distance between two clusters c1 and c2 as the minimal distance between any two points x, y with x in c1 and y in c2.
- Complete Linkage: defines the distance between two clusters c1 and c2 as the maximal distance between any two points x, y with x in c1 and y in c2.
- Average Linkage: defines the distance between two clusters c1 and c2 as the mean distance over all pairs of points x, y with x in c1 and y in c2.
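The three criteria translate directly into code. In the following sketch, clusters are assumed to be plain lists of points and dist to be any point-to-point distance function; both names are illustrative.

```python
# Sketches of the three linkage criteria. c1 and c2 are lists of points,
# dist is any point-to-point distance function.
def single_linkage(c1, c2, dist):
    # Minimal distance between any x in c1 and y in c2.
    return min(dist(x, y) for x in c1 for y in c2)

def complete_linkage(c1, c2, dist):
    # Maximal distance between any x in c1 and y in c2.
    return max(dist(x, y) for x in c1 for y in c2)

def average_linkage(c1, c2, dist):
    # Mean distance over all pairs x in c1, y in c2.
    return sum(dist(x, y) for x in c1 for y in c2) / (len(c1) * len(c2))
```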
To measure the distance between two points, a distance measure is necessary. You can choose between the Manhattan distance and the Euclidean distance, which correspond to the L1 and the L2 norm, respectively.
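Written out explicitly, the two measures look as follows (function names are illustrative):

```python
# The two point distances offered in the dialog, written out explicitly.
import math

def manhattan(p, q):
    # L1 norm: sum of absolute coordinate differences.
    return sum(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    # L2 norm: square root of the sum of squared coordinate differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```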
The output is the same data as the input, extended by one additional column containing the name of the cluster each data point is assigned to. Since a hierarchical clustering algorithm produces a series of cluster results, the number of clusters for the output has to be specified in the dialog.
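A comparable result can be reproduced outside the node, for example with SciPy's hierarchical clustering routines. In the sketch below, data and n_clusters are placeholder assumptions standing in for the input table and the cluster count set in the dialog, and average linkage with Euclidean distance is one possible choice of settings.

```python
# Building the full hierarchy, then cutting it at a fixed number of clusters.
# `data` and `n_clusters` are placeholders for the input and dialog setting.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

data = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.2, 7.9]])
n_clusters = 2

# Build the full merge hierarchy (here: average linkage, Euclidean distance),
# then cut it so that exactly n_clusters clusters remain.
tree = linkage(data, method="average", metric="euclidean")
labels = fcluster(tree, t=n_clusters, criterion="maxclust")

# Append the cluster name as one additional column.
names = np.array([f"cluster_{i}" for i in labels])
print(names)  # e.g. ['cluster_1' 'cluster_1' 'cluster_2' 'cluster_2']
```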