Hierarchical Clustering (DistMatrix)
Hierarchically clusters the input data using a distance matrix.
Note: This node is suitable only for small data sets, as the clustering algorithm has cubic runtime complexity.
There are two methods to do hierarchical clustering:
- Top-down or divisive, i.e. the algorithm starts with all data points in one huge cluster, and the most dissimilar data points are split off into subclusters until each cluster consists of exactly one data point.
- Bottom-up or agglomerative, i.e. the algorithm starts with every data point as a single cluster and repeatedly merges the most similar clusters into superclusters until it ends up with one huge cluster containing all subclusters.
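The agglomerative strategy can be sketched outside of KNIME as well. The following is a minimal illustration using SciPy (the library choice and the toy data are assumptions for demonstration, not part of this node):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Small toy data set: 6 two-dimensional points forming two groups.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2], [5.2, 4.9]])

# Condensed pairwise Euclidean distance matrix.
dists = pdist(points)

# Agglomerative clustering: start with singleton clusters and
# repeatedly merge the two closest ones until a single cluster remains.
tree = linkage(dists, method="single")

# Cut the resulting cluster tree into two flat clusters.
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels.tolist())
```

The `tree` object is the analogue of the cluster tree this node produces: a full merge history that can be cut at any level afterwards.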
In order to determine the distance between clusters, a measure has to be defined. Three methods are commonly used to compare two clusters:
- Single Linkage: defines the distance between two clusters c1 and c2 as the minimal distance between any two points x, y with x in c1 and y in c2.
- Complete Linkage: defines the distance between two clusters c1 and c2 as the maximal distance between any two points x, y with x in c1 and y in c2.
- Average Linkage: defines the distance between two clusters c1 and c2 as the mean distance over all pairs of points x, y with x in c1 and y in c2.
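The three linkage definitions above can be computed directly from the pairwise distances. A short NumPy sketch (the two toy clusters are assumptions for illustration):

```python
import numpy as np

# Two hypothetical clusters of 2-D points.
c1 = np.array([[0.0, 0.0], [1.0, 0.0]])
c2 = np.array([[3.0, 0.0], [5.0, 0.0]])

# Euclidean distance between every x in c1 and every y in c2.
pairwise = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)

single   = pairwise.min()   # single linkage: closest pair, here 2.0
complete = pairwise.max()   # complete linkage: farthest pair, here 5.0
average  = pairwise.mean()  # average linkage: (3 + 5 + 2 + 4) / 4 = 3.5
print(single, complete, average)
```

Single linkage tends to produce elongated "chained" clusters, while complete linkage favors compact ones; average linkage is a compromise between the two.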
The distance information used by this node is either read from a distance vector column that must be present in the input data or computed on the fly using a connected distance measure. You can always calculate the distance matrix beforehand with the corresponding calculate node.
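For intuition, precomputing such a distance matrix looks like the following SciPy sketch (again an assumption for illustration; in KNIME the matrix column is produced by a dedicated node, not by Python code):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy data: three 2-D points.
data = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])

condensed = pdist(data)         # upper-triangle pairwise distances
matrix = squareform(condensed)  # full symmetric distance matrix,
                                # zeros on the diagonal
print(matrix)
```

Storing the full matrix is what makes the approach memory-hungry: n data points require n(n-1)/2 distances, which is why this node is restricted to small data sets.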
- Type: Data Contains the data that should be clustered using hierarchical clustering and the optional distance matrix.
- Type: Distance Measure Optional distance measure, which renders the distance matrix at Port 0 unnecessary.
- Type: Cluster Tree The hierarchical cluster tree that can be fed into the Hierarchical Cluster View node or the Hierarchical Cluster Assigner node.