In data mining and statistics, '''hierarchical clustering''' (also called '''hierarchical cluster analysis''' or '''HCA''') is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:
{| class="table"
|-
! Type !! Description
|-
| Agglomerative || This is a bottom-up approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy
|-
| Divisive || This is a top-down approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy
|}
The results of hierarchical clustering<ref>{{cite book |first=Frank |last=Nielsen | title=Introduction to HPC with MPI for Data Science | year=2016 | publisher=Springer |isbn=978-3-319-21903-5 |pages=195–211 |chapter=8. Hierarchical Clustering | url=https://www.springer.com/gp/book/9783319219028 |chapter-url=https://www.researchgate.net/publication/314700681 }}</ref> are usually presented in a dendrogram (see the examples below).
== Cluster dissimilarity ==
In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate metric (a measure of distance between pairs of observations), and a linkage criterion which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets.
=== Metric ===
The choice of an appropriate metric will influence the shape of the clusters, as some elements may be relatively closer to one another under one metric than another. For example, in two dimensions, under the Manhattan distance metric, the distance between the origin (0,0) and (0.5, 0.5) is the same as the distance between the origin and (0, 1), while under the Euclidean distance metric the latter is strictly greater.
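The difference is easy to check numerically. A minimal Python sketch (assuming NumPy is available; the variable names are illustrative):
<syntaxhighlight lang="python">
import numpy as np

origin = np.array([0.0, 0.0])
p = np.array([0.5, 0.5])
q = np.array([0.0, 1.0])

# Manhattan (L1) distances: both points lie at distance 1 from the origin
print(np.abs(p - origin).sum(), np.abs(q - origin).sum())      # 1.0 1.0

# Euclidean (L2) distances: (0, 1) is strictly farther than (0.5, 0.5)
print(np.linalg.norm(p - origin), np.linalg.norm(q - origin))  # 0.707... 1.0
</syntaxhighlight>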
Some commonly used metrics for hierarchical clustering are:<ref>{{cite web | title=The DISTANCE Procedure: Proximity Measures | url=https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_distance_sect016.htm | work=SAS/STAT 9.2 Users Guide | publisher= [[SAS Institute|SAS Institute]] | access-date=2009-04-26}}</ref>
{| class="table"
! Names
! Formula
|-
| Euclidean distance
| <math> \|a-b \|_2 = \sqrt{\sum_i (a_i-b_i)^2} </math>
|-
| Squared Euclidean distance
| <math> \|a-b \|_2^2 = \sum_i (a_i-b_i)^2 </math>
|-
| Manhattan (or city block) distance
| <math> \|a-b \|_1 = \sum_i |a_i-b_i| </math>
|-
| Maximum distance (or Chebyshev distance)
| <math> \|a-b \|_\infty = \max_i |a_i-b_i| </math>
|-
| Mahalanobis distance
| <math> \sqrt{(a-b)^{\top}S^{-1}(a-b)} </math> where ''S'' is the covariance matrix
|}
For text or other non-numeric data, metrics such as the [[wikipedia:Hamming distance|Hamming distance]] or [[wikipedia:Levenshtein distance|Levenshtein distance]] are often used.
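For instance, a minimal Hamming-distance helper for equal-length strings could look like this (the function and the sample strings are illustrative, not taken from the cited sources):
<syntaxhighlight lang="python">
def hamming(s, t):
    """Number of positions at which two equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(x != y for x, y in zip(s, t))

print(hamming("GATTACA", "GACTATA"))  # 2
</syntaxhighlight>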
=== Linkage criteria ===
The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations.
Some commonly used linkage criteria between two sets of observations <math>A</math> and <math>B</math> are:<ref>{{cite web | title=The CLUSTER Procedure: Clustering Methods | url=https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_cluster_sect012.htm | work=SAS/STAT 9.2 Users Guide | publisher= [[SAS Institute|SAS Institute]] | access-date=2009-04-26}}</ref><ref>{{cite journal |last1=Székely |first1=G. J. |last2=Rizzo |first2=M. L. |year=2005 |title=Hierarchical clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method |journal=Journal of Classification |volume=22 |issue=2 |pages=151–183 |doi=10.1007/s00357-005-0012-9 |s2cid=206960007 }}</ref>
{| class="table"
! Names
! Formula
|-
| Maximum or [[#Complete Linkage Clustering|complete-linkage clustering]]
| <math> \max \, \{\, d(a,b) : a \in A,\, b \in B \,\}. </math>
|-
| Minimum or [[#Single-linkage Clustering|single-linkage clustering]]
| <math> \min \, \{\, d(a,b) : a \in A,\, b \in B \,\}. </math>
|-
| Unweighted average linkage clustering
| <math> \frac{1}{|A|\cdot|B|} \sum_{a \in A }\sum_{ b \in B} d(a,b). </math>
|-
| Weighted average linkage clustering
| <math> d(i \cup j, k) = \frac{d(i, k) + d(j, k)}{2}. </math>
|-
| Centroid linkage clustering
| <math> d(c_A,c_B) </math> where <math>c_A</math> and <math>c_B</math> are the centroids of clusters <math>A</math> and <math>B</math>, respectively.
|-
| Minimum energy clustering
| <math> \frac {2}{nm}\sum_{i,j=1}^{n,m} \|a_i- b_j\|_2 - \frac {1}{n^2}\sum_{i,j=1}^{n} \|a_i-a_j\|_2 - \frac{1}{m^2}\sum_{i,j=1}^{m} \|b_i-b_j\|_2 </math>
|}
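To make these definitions concrete, the following Python sketch evaluates three of the criteria on two small clusters (the point sets and function names are illustrative assumptions):
<syntaxhighlight lang="python">
import numpy as np

def pairwise_distances(A, B, metric=lambda a, b: np.linalg.norm(a - b)):
    """All distances d(a, b) for a in A and b in B, as an |A| x |B| array."""
    return np.array([[metric(a, b) for b in B] for a in A])

def complete_linkage(A, B):
    return pairwise_distances(A, B).max()    # maximum linkage

def single_linkage(A, B):
    return pairwise_distances(A, B).min()    # minimum linkage

def average_linkage(A, B):
    return pairwise_distances(A, B).mean()   # unweighted average linkage

A = [np.array([0.0, 0.0]), np.array([0.0, 1.0])]
B = [np.array([3.0, 0.0]), np.array([4.0, 1.0])]
print(single_linkage(A, B), average_linkage(A, B), complete_linkage(A, B))
</syntaxhighlight>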
==Complete Linkage Clustering==
'''Complete-linkage clustering''' is one of several methods of agglomerative hierarchical clustering. At the beginning of the process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters until all elements end up being in the same cluster. The method is also known as '''farthest neighbour clustering'''. The result of the clustering can be visualized as a dendrogram, which shows the sequence of cluster fusion and the distance at which each fusion took place.<ref>{{Cite journal| vauthors = Sorensen T |year = 1948|title = A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. |journal = Biologiske Skrifter|volume=5|pages = 1–34}}</ref><ref>{{cite book | vauthors = Legendre P, Legendre L | year = 1998 | title = Numerical Ecology | edition = Second English | pages = 853 }}</ref><ref>{{Cite book| first1 = Brian S. | last1 = Everitt | first2 = Sabine | last2 = Landau | authorlink2=Sabine Landau | first3 = Morven | last3 = Leese | name-list-style = vanc |title = Cluster Analysis |edition = Fourth|publisher = Arnold |location = London|year = 2001|isbn = 0-340-76119-9}}</ref>
=== Clustering procedure ===
At each step, the two clusters separated by the shortest distance are combined. The definition of 'shortest distance' is what differentiates the agglomerative clustering methods. In complete-linkage clustering, the link between two clusters contains all element pairs, and the distance between clusters equals the distance between those two elements (one in each cluster) that are '''farthest away''' from each other. The shortest of these links that remains at any step causes the fusion of the two clusters whose elements are involved.
Mathematically, the complete linkage function — the distance <math>D(X,Y)</math> between clusters <math>X</math> and <math>Y</math> — is described by the following expression:
<math display = "block">D(X,Y)= \max_{x\in X, y\in Y} d(x,y)</math>
where <math>d(x,y)</math> is the distance between elements <math>x \in X</math> and <math>y \in Y</math>, and <math>X</math> and <math>Y</math> are two sets of elements (clusters).
=== Naive scheme ===
The following algorithm is an agglomerative scheme that erases rows and columns in a proximity matrix as old clusters are merged into new ones. The <math>N \times N</math> proximity matrix <math>D</math> contains all distances <math>d(i,j)</math>. The clusterings are assigned sequence numbers <math>0,1,\ldots,N-1</math> and <math>L(k)</math> is the level of the <math>k</math>-th clustering. A cluster with sequence number <math>m</math> is denoted (<math>m</math>) and the proximity between clusters (<math>r</math>) and (<math>s</math>) is denoted <math>d[(r),(s)]</math>.
The complete linkage clustering algorithm consists of the following steps (a Python sketch of this naive scheme follows the procedure):
<proc label = "Complete Linkage Clustering">
# Begin with the disjoint clustering having level <math>L(0) = 0</math> and sequence number <math>m=0</math>.
# Find the most similar pair of clusters in the current clustering, say pair <math>(r), (s)</math>, according to <math>d[(r),(s)] = \min d[(i),(j)]</math>, where the minimum is over all pairs of clusters in the current clustering.
# Increment the sequence number: <math>m = m + 1</math>. Merge clusters <math>(r)</math> and <math>(s)</math> into a single cluster to form the next clustering <math>m</math>. Set the level of this clustering to <math>L(m) = d[(r),(s)]</math>.
# Update the proximity matrix, <math>D</math>, by deleting the rows and columns corresponding to clusters <math>(r)</math> and <math>(s)</math> and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted <math>(r,s)</math>, and old cluster <math>(k)</math> is defined as <math>d[(r,s),(k)] = \max \{d[(k),(r)], d[(k),(s)] \}</math>.
# If all objects are in one cluster, stop. Else, go to step 2.
</proc>
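A minimal Python rendering of this naive scheme is sketched below. It keeps the cluster-to-cluster proximities in a dictionary rather than an explicit matrix, and the <code>reducer</code> argument (an assumption of this sketch, not part of the procedure above) selects the update rule of step 4: <code>max</code> gives complete linkage and <code>min</code> gives single linkage.
<syntaxhighlight lang="python">
import numpy as np

def naive_agglomerative(D, reducer=max):
    """Naive agglomerative scheme on a full n x n distance matrix D.

    reducer=max reproduces complete linkage, reducer=min single linkage.
    Returns one (merged_cluster, level L(m)) pair per merge.
    """
    D = np.asarray(D, dtype=float)
    clusters = [(i,) for i in range(len(D))]
    # proximity d[(r),(s)], stored once per unordered pair of clusters
    dist = {(a, b): D[a[0], b[0]]
            for i, a in enumerate(clusters) for b in clusters[i + 1:]}

    def pop_pair(a, b):
        return dist.pop((a, b)) if (a, b) in dist else dist.pop((b, a))

    merges = []
    while len(clusters) > 1:
        (r, s), level = min(dist.items(), key=lambda kv: kv[1])   # step 2
        del dist[(r, s)]
        clusters = [c for c in clusters if c not in (r, s)]
        new = r + s                                               # step 3
        for k in clusters:                                        # step 4
            dist[(new, k)] = reducer(pop_pair(k, r), pop_pair(k, s))
        clusters.append(new)
        merges.append((new, level))                               # level L(m)
    return merges

# Tiny usage example: three points on a line at coordinates 0, 1 and 5.
D = [[0, 1, 5],
     [1, 0, 4],
     [5, 4, 0]]
print(naive_agglomerative(D, reducer=max))   # [((0, 1), 1.0), ((0, 1, 2), 5.0)]
</syntaxhighlight>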
=== Working example ===
The working example is based on a [[wikipedia:Models_of_DNA_evolution#JC69_model_(Jukes_and_Cantor_1969)|JC69]] genetic distance matrix computed from the [[wikipedia:5S ribosomal RNA|5S ribosomal RNA]] sequence alignment of five bacteria: ''[[wikipedia:Bacillus subtilis|Bacillus subtilis]]'' (<math>a</math>), ''[[wikipedia:Bacillus stearothermophilus|Bacillus stearothermophilus]]'' (<math>b</math>), ''[[wikipedia:Weissella|Lactobacillus]] viridescens'' (<math>c</math>), ''[[wikipedia:Acholeplasma|Acholeplasma]] modicum'' (<math>d</math>), and ''[[wikipedia:Micrococcus luteus|Micrococcus luteus]]'' (<math>e</math>).<ref name="Erdmann1986">{{cite journal | vauthors = Erdmann VA, Wolters J | title = Collection of published 5S, 5.8S and 4.5S ribosomal RNA sequences | journal = Nucleic Acids Research | volume = 14 Suppl | issue = Suppl | pages = r1-59 | date = 1986 | pmid = 2422630 | pmc = 341310 | doi=10.1093/nar/14.suppl.r1}}</ref><ref name="Olsen1988">{{cite journal | vauthors = Olsen GJ | title = Phylogenetic analysis using ribosomal RNA | journal = Methods in Enzymology | volume = 164 | pages = 793–812 | date = 1988 | pmid = 3241556 | doi = 10.1016/s0076-6879(88)64084-5 }}</ref>
==== First step ====
'''First clustering'''
Let us assume that we have five elements <math>(a,b,c,d,e)</math> and the following matrix <math>D_1</math> of pairwise distances between them:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | a
! style="width: 50px;" | b
! style="width: 50px;" | c
! style="width: 50px;" | d
! style="width: 50px;" | e
|-
! a
| 0 || style=background:#dff4fe; | 17 || 21 || 31 || 23
|-
! b
| style=background:#dff4fe; | 17 || 0 || 30 || 34 || 21
|-
! c
| 21 || 30 || 0 || 28 || 39
|-
! d
| 31 || 34 || 28 || 0 || 43
|-
! e
| 23 || 21 || 39 || 43 || 0
|}
In this example, <math>D_1 (a,b)=17</math> is the smallest value of <math>D_1</math>, so we join elements <math>a</math> and <math>b</math>.
'''First branch length estimation'''
Let <math>u</math> denote the node to which <math>a</math> and <math>b</math> are now connected. Setting <math>\delta(a,u)=\delta(b,u)=D_1(a,b)/2</math> ensures that elements <math>a</math> and <math>b</math> are equidistant from <math>u</math>. This corresponds to the expectation of the [[wikipedia:ultrametricity|ultrametricity]] hypothesis.
The branches joining <math>a</math> and <math>b</math> to <math>u</math> then have lengths <math>\delta(a,u)=\delta(b,u)=17/2=8.5</math>
'''First distance matrix update'''
We then proceed to update the initial proximity matrix <math>D_1</math> into a new proximity matrix <math>D_2</math> (see below), reduced in size by one row and one column because of the clustering of <math>a</math> with <math>b</math>.
Bold values in <math>D_2</math> correspond to the new distances, calculated by retaining the '''maximum distance''' between each element of the first cluster <math>(a,b)</math> and each of the remaining elements:
<math display="block">\begin{array}{lllllll}
D_2((a,b),c)&=&\max(D_1(a,c),D_1(b,c))&=&\max(21,30)&=&30
\\
D_2((a,b),d)&=&\max(D_1(a,d),D_1(b,d))&=&\max(31,34)&=&34
\\
D_2((a,b),e)&=&\max(D_1(a,e),D_1(b,e))&=&\max(23,21)&=&23
\end{array}</math>
Italicized values in <math>D_2</math> are not affected by the matrix update as they correspond to distances between elements not involved in the first cluster.
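The updated row of <math>D_2</math> can be checked with a few lines of Python (the dictionary simply transcribes the upper triangle of <math>D_1</math>):
<syntaxhighlight lang="python">
# First complete-linkage update of the worked example: D2((a,b),k) for k in c, d, e
D1 = {("a", "b"): 17, ("a", "c"): 21, ("a", "d"): 31, ("a", "e"): 23,
      ("b", "c"): 30, ("b", "d"): 34, ("b", "e"): 21,
      ("c", "d"): 28, ("c", "e"): 39, ("d", "e"): 43}

for k in ("c", "d", "e"):
    print(k, max(D1[("a", k)], D1[("b", k)]))   # c 30, d 34, e 23
</syntaxhighlight>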
==== Second step ====
'''Second clustering'''
We now reiterate the three previous steps, starting from the new distance matrix <math>D_2</math>:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | (a,b)
! style="width: 50px;" | c
! style="width: 50px;" | d
! style="width: 50px;" | e
|-
! (a,b)
| 0 || '''30''' || '''34''' || style=background:#dff4fe; | '''23'''
|-
! c
| '''30''' || 0 || ''28'' || ''39''
|-
! d
| '''34''' || ''28'' || 0 || ''43''
|-
! e
| style=background:#dff4fe; | '''23''' || ''39'' || ''43'' || 0
|}
Here, <math>D_2 ((a,b),e)=23</math> is the lowest value of <math>D_2</math>, so we join cluster <math>(a,b)</math> with element <math>e</math>.
'''Second branch length estimation'''
Let <math>v</math> denote the node to which <math>(a,b)</math> and <math>e</math> are now connected. Because of the ultrametricity constraint, the branches joining <math>a</math> or <math>b</math> to <math>v</math>, and <math>e</math> to <math>v</math>, are equal and have the following total length:
<math>\delta(a,v)=\delta(b,v)=\delta(e,v)=23/2=11.5</math>
We deduce the missing branch length:
<math>\delta(u,v)=\delta(e,v)-\delta(a,u)=\delta(e,v)-\delta(b,u)=11.5-8.5=3</math>
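Under the ultrametric assumption, every tip of a cluster sits at height (merge level)/2, so an internal branch is simply the difference between the two half-levels it connects. In Python terms (values taken from the steps above; the variable names are illustrative):
<syntaxhighlight lang="python">
height_u = 17 / 2            # 8.5, height of u above tips a and b
height_v = 23 / 2            # 11.5, height of v above tips a, b and e
print(height_v - height_u)   # 3.0, length of the internal branch u -> v
</syntaxhighlight>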
'''Second distance matrix update'''
We then proceed to update the <math>D_2</math> matrix into a new distance matrix <math>D_3</math> (see below), reduced in size by one row and one column because of the clustering of <math>(a,b)</math> with <math>e</math>:
<math>D_3(((a,b),e),c)=\max(D_2((a,b),c),D_2(e,c))=\max(30,39)=39</math>
<math>D_3(((a,b),e),d)=\max(D_2((a,b),d),D_2(e,d))=\max(34,43)=43</math>
==== Third step ====
'''Third clustering'''
We again reiterate the three previous steps, starting from the updated distance matrix <math>D_3</math>.
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | ((a,b),e)
! style="width: 50px;" | c
! style="width: 50px;" | d
|-
! ((a,b),e)
| 0 || '''39''' || '''43'''
|-
! c
| '''39''' || 0 || style=background:#dff4fe; | ''28''
|-
! d
| '''43''' || style=background:#dff4fe; | ''28'' || 0
|}
Here, <math>D_3 (c,d)=28</math> is the smallest value of <math>D_3</math>, so we join elements <math>c</math> and <math>d</math>.
'''Third branch length estimation'''
Let <math>w</math> denote the node to which <math>c</math> and <math>d</math> are now connected.
The branches joining <math>c</math> and <math>d</math> to <math>w</math> then have lengths <math>\delta(c,w)=\delta(d,w)=28/2=14</math> (''[[#The complete-linkage dendrogram|see the final dendrogram]]'')
'''Third distance matrix update'''
There is a single entry to update:
<math>D_4((c,d),((a,b),e))=\max(D_3(c,((a,b),e)), D_3(d,((a,b),e)))=\max(39, 43)=43</math>
==== Final step ====
The final <math>D_4</math> matrix is:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | ((a,b),e)
! style="width: 50px;" | (c,d)
|-
! ((a,b),e)
| 0 || style=background:#dff4fe; | '''43'''
|-
! (c,d)
| style=background:#dff4fe; | '''43''' || 0
|}
So we join clusters <math>((a,b),e)</math> and <math>(c,d)</math>. Let <math>r</math> denote the (root) node to which <math>((a,b),e)</math> and <math>(c,d)</math> are now connected. The branches joining <math>((a,b),e)</math> and <math>(c,d)</math> to <math>r</math> then have lengths:
<math>\delta(((a,b),e),r)=\delta((c,d),r)=43/2=21.5</math>
We deduce the two remaining branch lengths:
<math>\delta(v,r)=\delta(((a,b),e),r)-\delta(e,v)=21.5-11.5=10</math>
<math>\delta(w,r)=\delta((c,d),r)-\delta(c,w)=21.5-14=7.5</math>
==== The complete-linkage dendrogram ====
<div class="text-center mw-d3" id="d3-dendo-complete"></div>
The dendrogram is now complete. It is ultrametric because all tips (<math>a</math> to <math>e</math>) are equidistant from <math>r</math>:
<math>\delta(a,r)=\delta(b,r)=\delta(e,r)=\delta(c,r)=\delta(d,r)=21.5</math>
The dendrogram is therefore rooted by <math>r</math>, its deepest node.
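The whole complete-linkage example can be cross-checked with SciPy's agglomerative clustering (a sketch assuming SciPy is installed; <code>scipy.cluster.hierarchy.dendrogram</code> could additionally be used to draw the tree):
<syntaxhighlight lang="python">
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

D1 = np.array([[ 0, 17, 21, 31, 23],
               [17,  0, 30, 34, 21],
               [21, 30,  0, 28, 39],
               [31, 34, 28,  0, 43],
               [23, 21, 39, 43,  0]], dtype=float)

# linkage() expects the condensed (upper-triangular) form of the matrix.
Z = linkage(squareform(D1), method="complete")
print(Z[:, 2])   # merge distances: [17. 23. 28. 43.]
# These levels match the worked example; the branch lengths in the text are
# half of each merge distance (e.g. 17/2 = 8.5), since the dendrogram is
# drawn as an ultrametric tree with tips at height 0.
</syntaxhighlight>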
==Single-linkage Clustering==
In statistics, '''single-linkage clustering''' is one of several methods of hierarchical clustering. It is based on grouping clusters in bottom-up fashion (agglomerative clustering), at each step combining two clusters that contain the closest pair of elements not yet belonging to the same cluster as each other.
A drawback of this method is that it tends to produce long thin clusters in which nearby elements of the same cluster have small distances, but elements at opposite ends of a cluster may be much farther from each other than two elements of other clusters. This may lead to difficulties in defining classes that could usefully subdivide the data.<ref name = "Everitt"> {{cite book | last = Everitt | first = Brian | name-list-style = vanc | title = Cluster analysis | publisher = Wiley | location = Chichester, West Sussex, U.K | year = 2011 | isbn = 9780470749913 }} </ref>
=== Overview of agglomerative clustering methods ===
In the beginning of the agglomerative clustering process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters, until all elements end up being in the same cluster. At each step, the two clusters separated by the shortest distance are combined. The function used to determine the distance between two clusters, known as the ''linkage function'', is what differentiates the agglomerative clustering methods.
In single-linkage clustering, the distance between two clusters is determined by a single pair of elements: those two elements (one in each cluster) that are closest to each other. The shortest of these pairwise distances that remain at any step causes the two clusters whose elements are involved to be merged. The method is also known as ''nearest neighbour clustering''. The result of the clustering can be visualized as a dendrogram, which shows the sequence in which clusters were merged and the distance at which each merge took place.<ref>{{cite book | vauthors = Legendre P, Legendre L | date = 1998 | title = Numerical Ecology | edition = Second English | series = Developments in Environmental Modelling | volume = 20 | publisher = Elsevier | location = Amsterdam }}</ref>
Mathematically, the linkage function – the distance <math>D(X,Y)</math> between clusters <math>X</math> and <math>Y</math> – is described by the expression
<math display="block">D(X,Y)=\min_{x\in X, y\in Y} d(x,y),</math>
where <math>X</math> and <math>Y</math> are any two sets of elements considered as clusters, and <math>d(x,y)</math> denotes the distance between the two elements <math>x</math> and <math>y</math>.
=== Naive algorithm ===
The following algorithm is an agglomerative scheme that erases rows and columns in a proximity matrix as old clusters are merged into new ones. The <math>N \times N</math> proximity matrix <math>D</math> contains all distances <math>d(i,j)</math>. The clusterings are assigned sequence numbers <math>0,1, \ldots, N-1</math> and <math>L(k)</math> is the level of the <math>k</math>-th clustering. A cluster with sequence number <math>m</math> is denoted (<math>m</math>) and the proximity between clusters <math>(r)</math> and <math>(s)</math> is denoted <math>d[(r),(s)]</math>.
The single linkage algorithm is composed of the following steps (a compact Python sketch follows the procedure):
<proc label = "Single Linkage Algorithm">
# Begin with the disjoint clustering having level <math>L(0) = 0</math> and sequence number <math>m=0</math>.
# Find the most similar pair of clusters in the current clustering, say pair <math>(r), (s)</math>, according to <math>d[(r),(s)] = \min d[(i),(j)]</math>, where the minimum is over all pairs of clusters in the current clustering.
# Increment the sequence number: <math>m = m + 1</math>. Merge clusters <math>(r)</math> and <math>(s)</math> into a single cluster to form the next clustering <math>m</math>. Set the level of this clustering to <math>L(m) = d[(r),(s)]</math>.
# Update the proximity matrix, <math>D</math>, by deleting the rows and columns corresponding to clusters <math>(r)</math> and <math>(s)</math> and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted <math>(r,s)</math>, and old cluster <math>(k)</math> is defined as <math>d[(r,s),(k)] = \min \{d[(k),(r)], d[(k),(s)] \}</math>.
# If all objects are in one cluster, stop. Else, go to step 2.
</proc>
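A compact NumPy sketch of this naive single-linkage scheme is given below; it updates the distance matrix in place by taking element-wise minima, and is applied directly to the distance matrix of the working example that follows (the function name and output format are illustrative choices):
<syntaxhighlight lang="python">
import numpy as np

def single_linkage_levels(D, labels):
    """Naive single-linkage clustering on a full distance matrix D.

    At each step the two closest clusters are merged; the row/column of the
    merged cluster becomes the element-wise minimum of the two old ones.
    Returns one (cluster members, merge level) pair per merge.
    """
    D = np.asarray(D, dtype=float).copy()
    np.fill_diagonal(D, np.inf)                         # ignore self-distances
    clusters = [frozenset([l]) for l in labels]
    merges = []
    while len(clusters) > 1:
        r, s = np.unravel_index(np.argmin(D), D.shape)  # step 2
        if r > s:
            r, s = s, r
        level = D[r, s]
        D[r, :] = np.minimum(D[r, :], D[s, :])          # step 4 (min update)
        D[:, r] = D[r, :]
        D[r, r] = np.inf
        D = np.delete(np.delete(D, s, axis=0), s, axis=1)
        clusters[r] = clusters[r] | clusters[s]         # step 3
        del clusters[s]
        merges.append((set(clusters[r]), level))
    return merges

D1 = [[ 0, 17, 21, 31, 23],
      [17,  0, 30, 34, 21],
      [21, 30,  0, 28, 39],
      [31, 34, 28,  0, 43],
      [23, 21, 39, 43,  0]]
for members, level in single_linkage_levels(D1, "abcde"):
    print(sorted(members), level)
# ['a', 'b'] at 17.0, then c and e join at 21.0 (as two binary merges), then d at 28.0
</syntaxhighlight>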
=== Working example ===
This working example is based on a [[wikipedia:Models_of_DNA_evolution#JC69_model_(Jukes_and_Cantor_1969)|JC69]] genetic distance matrix computed from the [[wikipedia:5S ribosomal RNA|5S ribosomal RNA]] sequence alignment of five bacteria: ''[[wikipedia:Bacillus subtilis|Bacillus subtilis]]'' (<math>a</math>), ''[[wikipedia:Bacillus stearothermophilus|Bacillus stearothermophilus]]'' (<math>b</math>), ''[[wikipedia:Weissella|Lactobacillus]] viridescens'' (<math>c</math>), ''[[wikipedia:Acholeplasma|Acholeplasma]] modicum'' (<math>d</math>), and ''[[wikipedia:Micrococcus luteus|Micrococcus luteus]]'' (<math>e</math>).<ref name="Erdmann1986" /><ref name="Olsen1988" />
==== First step ====
'''First clustering'''
Let us assume that we have five elements <math>(a,b,c,d,e)</math> and the following matrix <math>D_1</math> of pairwise distances between them:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | a
! style="width: 50px;" | b
! style="width: 50px;" | c
! style="width: 50px;" | d
! style="width: 50px;" | e
|-
! a
| 0 || style=background:#dff4fe; | 17 || 21 || 31 || 23
|-
! b
| style=background:#dff4fe; | 17 || 0 || 30 || 34 || 21
|-
! c
| 21 || 30 || 0 || 28 || 39
|-
! d
| 31 || 34 || 28 || 0 || 43
|-
! e
| 23 || 21 || 39 || 43 || 0
|}
In this example, <math>D_1 (a,b)=17</math> is the lowest value of <math>D_1</math>, so we cluster elements <math>a</math> and <math>b</math>.
'''First branch length estimation'''
Let <math>u</math> denote the node to which <math>a</math> and <math>b</math> are now connected. Setting <math>\delta(a,u)=\delta(b,u)=D_1(a,b)/2</math> ensures that elements <math>a</math> and <math>b</math> are equidistant from <math>u</math>. This corresponds to the expectation of the [[wikipedia:ultrametricity|ultrametricity]] hypothesis.
The branches joining <math>a</math> and <math>b</math> to <math>u</math> then have lengths <math>\delta(a,u)=\delta(b,u)=17/2=8.5</math> (''[[#The single-linkage dendrogram|see the final dendrogram]]'')
'''First distance matrix update'''
We then proceed to update the initial proximity matrix <math>D_1</math> into a new proximity matrix <math>D_2</math> (see below), reduced in size by one row and one column because of the clustering of <math>a</math> with <math>b</math>.
Bold values in <math>D_2</math> correspond to the new distances, calculated by retaining the '''minimum distance''' between each element of the first cluster <math>(a,b)</math> and each of the remaining elements:
<math display="block">\begin{array}{lllllll}
D_2((a,b),c)&=&\min(D_1(a,c),D_1(b,c))&=&\min(21,30)&=&21
\\
D_2((a,b),d)&=&\min(D_1(a,d),D_1(b,d))&=&\min(31,34)&=&31
\\
D_2((a,b),e)&=&\min(D_1(a,e),D_1(b,e))&=&\min(23,21)&=&21
\end{array}</math>
Italicized values in <math>D_2</math> are not affected by the matrix update as they correspond to distances between elements not involved in the first cluster.
==== Second step ====
'''Second clustering'''
We now reiterate the three previous actions, starting from the new distance matrix <math>D_2</math>:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | (a,b)
! style="width: 50px;" | c
! style="width: 50px;" | d
! style="width: 50px;" | e
|-
! (a,b)
| 0 || style=background:#dff4fe; | '''21''' || '''31''' || style=background:#dff4fe; | '''21'''
|-
! c
| style=background:#dff4fe; | '''21''' || 0 || ''28'' || ''39''
|-
! d
| '''31''' || ''28'' || 0 || ''43''
|-
! e
| style=background:#dff4fe; | '''21''' || ''39'' || ''43'' || 0
|}
Here, <math>D_2 ((a,b),c)=21</math> and <math>D_2 ((a,b),e)=21</math> are the lowest values of <math>D_2</math>, so we join cluster <math>(a,b)</math> with element {{mvar|c}} and with element {{mvar|e}}.
'''Second branch length estimation'''
Let <math>v</math> denote the node to which <math>(a,b)</math>, {{mvar|c}} and {{mvar|e}} are now connected. Because of the ultrametricity constraint, the branches joining <math>a</math> or <math>b</math> to <math>v</math>, and {{mvar|c}} to <math>v</math>, and also {{mvar|e}} to <math>v</math> are equal and have the following total length:
<math display="block">\delta(a,v)=\delta(b,v)=\delta(c,v)=\delta(e,v)=21/2=10.5</math>
We deduce the missing branch length:
<math display="block">\delta(u,v)=\delta(c,v)-\delta(a,u)=\delta(c,v)-\delta(b,u)=10.5-8.5=2</math>
'''Second distance matrix update'''
We then proceed to update the <math>D_2</math> matrix into a new distance matrix <math>D_3</math> (see below), reduced in size by two rows and two columns because of the clustering of <math>(a,b)</math> with {{mvar|c}} and with {{mvar|e}}:
<math display="block">D_3(((a,b),c,e),d)=\min(D_2((a,b),d),D_2(c,d),D_2(e,d))=\min(31,28,43)=28</math>
==== Final step ====
The final <math>D_3</math> matrix is:
{| class="table table-bordered" style="text-align: center;"
|-
! style="width: 50px;" |
! style="width: 50px;" | ((a,b),c,e)
! style="width: 50px;" | d
|-
! ((a,b),c,e)
| 0 || style=background:#dff4fe; | '''28'''
|-
! d
| style=background:#dff4fe; | '''28''' || 0
|}
So we join clusters <math>((a,b),c,e)</math> and <math>d</math>. Let <math>r</math> denote the (root) node to which <math>((a,b),c,e)</math> and <math>d</math> are now connected. The branches joining <math>((a,b),c,e)</math> and <math>d</math> to <math>r</math> then have lengths:
<math>\delta(((a,b),c,e),r)=\delta(d,r)=28/2=14</math>
We deduce the remaining branch length:
<math>\delta(v,r)=\delta(a,r)-\delta(a,v)=\delta(b,r)-\delta(b,v)=\delta(c,r)-\delta(c,v)=\delta(e,r)-\delta(e,v)=14-10.5=3.5</math>
==== The single-linkage dendrogram ====
<div class="mw-d3 text-center" id = "d3-dendo-single"></div>
The dendrogram is now complete. It is ultrametric because all tips (<math>a</math>, <math>b</math>, <math>c</math>, <math>e</math>, and <math>d</math>) are equidistant from <math>r</math>:
<math>\delta(a,r)=\delta(b,r)=\delta(c,r)=\delta(e,r)=\delta(d,r)=14</math>
The dendrogram is therefore rooted by <math>r</math>, its deepest node.
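As with complete linkage, the single-linkage result can be cross-checked with SciPy (a sketch assuming SciPy is available):
<syntaxhighlight lang="python">
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

D1 = np.array([[ 0, 17, 21, 31, 23],
               [17,  0, 30, 34, 21],
               [21, 30,  0, 28, 39],
               [31, 34, 28,  0, 43],
               [23, 21, 39, 43,  0]], dtype=float)

Z = linkage(squareform(D1), method="single")
print(Z[:, 2])   # merge distances: [17. 21. 21. 28.]
# SciPy performs binary merges, so joining (a,b) with c and with e shows up as
# two successive merges at distance 21; the tip-to-root height 28/2 = 14
# matches the ultrametric branch lengths derived above.
</syntaxhighlight>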
==References==
{{Reflist}}
==Wikipedia References==
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Hierarchical_clustering&oldid=1096531593|title= Hierarchical clustering | author = Wikipedia contributors |website= Wikipedia |publisher= Wikipedia |access-date = 17 August 2022 }}
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Complete-linkage_clustering&oldid=1070593186|title= Single-linkage clustering | author = Wikipedia contributors |website= Wikipedia |publisher= Wikipedia |access-date = 17 August 2022 }}
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Complete-linkage_clustering&oldid=1070593186|title= Complete-linkage clustering | author = Wikipedia contributors |website= Wikipedia |publisher= Wikipedia |access-date = 17 August 2022 }}