'''k-means clustering''' is a method of [[vector quantization|vector quantization]], originally from [[signal processing|signal processing]], that aims to partition <math>n</math> observations into <math>k</math> clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into [[Voronoi cell|Voronoi cell]]s. <math>k</math>-means clustering minimizes within-cluster variances ([[squared Euclidean distance|squared Euclidean distance]]s), but not regular Euclidean distances, which would be the more difficult [[wikipedia:Weber problem|Weber problem]]: the mean optimizes squared errors, whereas only the [[wikipedia:geometric median|geometric median]] minimizes Euclidean distances. For instance, better Euclidean solutions can be found using [[wikipedia:K-medians clustering|k-medians]] and [[wikipedia:k-medoids|k-medoids]].
 
The problem is computationally difficult ([[wikipedia:NP-hardness|NP-hard]]); however, efficient [[wikipedia:heuristic algorithm|heuristic algorithm]]s converge quickly to a [[wikipedia:local optimum|local optimum]]. These are usually similar to the [[wikipedia:expectation-maximization algorithm|expectation-maximization algorithm]] for [[wikipedia:Mixture model|mixtures]] of [[wikipedia:Gaussian distribution|Gaussian distribution]]s, as both <math>k</math>-means and Gaussian mixture modeling employ an iterative refinement approach. Both use cluster centers to model the data; however, <math>k</math>-means clustering tends to find clusters of comparable spatial extent, while the Gaussian mixture model allows clusters to have different shapes.
 
The unsupervised k-means algorithm has a loose relationship to the [[wikipedia:K-nearest neighbor|<math>k</math>-nearest neighbor classifier]], a popular supervised [[wikipedia:machine learning|machine learning]] technique for classification that is often confused with <math>k</math>-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained by <math>k</math>-means classifies new data into the existing clusters. This is known as [[wikipedia:nearest centroid classifier|nearest centroid classifier]] or [[wikipedia:Rocchio algorithm|Rocchio algorithm]].
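As a minimal illustration of this relationship, the following sketch (assuming NumPy is available; the arrays <code>centroids</code> and <code>new_points</code> are hypothetical placeholders) assigns new observations to the nearest <math>k</math>-means centroid, which is exactly the nearest centroid classification described above.

<syntaxhighlight lang="python">
import numpy as np

def nearest_centroid(new_points, centroids):
    """Assign each new point to the cluster of its nearest centroid (1-NN on the centroids)."""
    # Squared Euclidean distance from every point to every centroid: shape (n_points, k)
    d2 = ((new_points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)  # index of the closest centroid for each point

# Hypothetical centroids found by k-means, plus two unseen observations
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
new_points = np.array([[0.5, -0.2], [4.8, 5.1]])
print(nearest_centroid(new_points, centroids))  # [0 1]
</syntaxhighlight>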


== Description ==


Given a set of observations <math>\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n</math>, where each observation is a <math>d</math>-dimensional real vector, <math>k</math>-means clustering aims to partition the <math>n</math> observations into <math>k \leq n</math> sets <math>S=\{S_1,S_2,\ldots,S_k\}</math> so as to minimize the within-cluster sum of squares (WCSS) (i.e. [[variance|variance]]). Formally, the objective is to find:<math display="block">\underset{\mathbf{S}} {\operatorname{arg\,min}}  \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \underset{\mathbf{S}} {\operatorname{arg\,min}}  \sum_{i=1}^k |S_i| \operatorname{Var} S_i </math> where <math>\boldsymbol\mu_i</math> is the mean of points in <math>S_i</math>. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:


<math display = "block">\underset{\mathbf{S}} {\operatorname{arg\,min}}  \sum_{i=1}^{k} \, \frac{1}{ |S_i|} \, \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2</math>


The equivalence can be deduced from the identity <math display ="block">|S_i|\sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \sum_{\mathbf{x}\neq\mathbf{y} \in S_i}\left\|\mathbf x -  \mathbf y\right\|^2.</math> Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in ''different'' clusters (between-cluster sum of squares, BCSS).<ref name=":12">{{cite journal |last1=Kriegel |first1=Hans-Peter |author-link=Hans-Peter Kriegel |last2=Schubert |first2=Erich |last3=Zimek |first3=Arthur |author-link3=Arthur Zimek |year=2016 |title=The (black) art of runtime evaluation: Are we comparing algorithms or implementations? |journal=Knowledge and Information Systems |volume=52 |issue=2 |pages=341–378 |doi=10.1007/s10115-016-1004-2 |s2cid=40772241 |issn=0219-1377 }}</ref> This deterministic relationship is also related to the [[wikipedia:law of total variance|law of total variance]] in probability theory.
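The objective can be evaluated directly for any candidate partition. The following NumPy sketch (the names <code>X</code>, <code>labels</code> and <code>k</code> are illustrative assumptions) computes the within-cluster sum of squares defined above.

<syntaxhighlight lang="python">
import numpy as np

def wcss(X, labels, k):
    """Within-cluster sum of squares: for each cluster, sum the squared distances to its mean."""
    total = 0.0
    for i in range(k):
        cluster = X[labels == i]
        if len(cluster) == 0:
            continue                              # skip empty clusters
        mu = cluster.mean(axis=0)                 # cluster centroid
        total += ((cluster - mu) ** 2).sum()      # squared deviations from the mean
    return total

# Toy data: two well-separated pairs of points, labelled by hand
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
print(wcss(X, labels, k=2))  # 1.0  (0.5 per cluster)
</syntaxhighlight>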


== History ==
The term "<math>k</math>-means" was first used by James MacQueen in 1967,<ref name="macqueen19672">{{cite conference |last=MacQueen |first=J. B. |year=1967 |title=Some Methods for classification and Analysis of Multivariate Observations |url=http://projecteuclid.org/euclid.bsmsp/1200512992 |conference=Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability |publisher=University of California Press |volume=1 |pages=281&ndash;297 |mr=0214227 |zbl=0214.46201 |access-date=2009-04-07 }}</ref> though the idea goes back to [[wikipedia:Hugo Steinhaus|Hugo Steinhaus]] in 1956.<ref>{{cite journal |last=Steinhaus |first=Hugo |author-link=wikipedia:Hugo Steinhaus |year=1957 |title=Sur la division des corps matériels en parties |journal=Bull. Acad. Polon. Sci. |language=fr |volume=4 |issue=12 |pages=801&ndash;804 |mr=0090073 |zbl=0079.16403 }}</ref> The standard algorithm was first proposed by Stuart Lloyd of [[wikipedia:Bell Labs|Bell Labs]] in 1957 as a technique for [[wikipedia:pulse-code modulation|pulse-code modulation]], although it was not published as a journal article until 1982.<ref name="lloyd19572">{{cite journal |last=Lloyd |first=Stuart P. |year=1957 |title=Least square quantization in PCM |journal=Bell Telephone Laboratories Paper }} Published in journal much later: {{cite journal |last=Lloyd |first=Stuart P. |year=1982 |title=Least squares quantization in PCM |url=http://www.cs.toronto.edu/~roweis/csc2515-2006/readings/lloyd57.pdf |journal=[[wikipedia:IEEE Transactions on Information Theory|IEEE Transactions on Information Theory]] |volume=28 |issue=2 |pages=129&ndash;137 |doi=10.1109/TIT.1982.1056489 |access-date=2009-04-15 |citeseerx=10.1.1.131.1338 |s2cid=10833328 }}</ref> In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.<ref name="forgy652">{{Cite journal |first=Edward W. |last=Forgy |year=1965 |title=Cluster analysis of multivariate data: efficiency versus interpretability of classifications |journal=Biometrics |volume=21 |issue=3 |pages=768–769 |jstor=2528559 }}</ref>
The term "<math>k</math>-means" was first used by James MacQueen in 1967,<ref name="macqueen19672">{{cite conference |last=MacQueen |first=J. B. |year=1967 |title=Some Methods for classification and Analysis of Multivariate Observations |url=http://projecteuclid.org/euclid.bsmsp/1200512992 |conference=Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability |publisher=University of California Press |volume=1 |pages=281&ndash;297 |mr=0214227 |zbl=0214.46201 |access-date=2009-04-07 }}</ref> though the idea goes back to [[Hugo Steinhaus|Hugo Steinhaus]] in 1956.<ref>{{cite journal |last=Steinhaus |first=Hugo |author-link=Hugo Steinhaus |year=1957 |title=Sur la division des corps matériels en parties |journal=Bull. Acad. Polon. Sci. |language=fr |volume=4 |issue=12 |pages=801&ndash;804 |mr=0090073 |zbl=0079.16403 }}</ref> The standard algorithm was first proposed by Stuart Lloyd of [[Bell Labs|Bell Labs]] in 1957 as a technique for [[pulse-code modulation|pulse-code modulation]], although it was not published as a journal article until 1982.<ref name="lloyd19572">{{cite journal |last=Lloyd |first=Stuart P. |year=1957 |title=Least square quantization in PCM |journal=Bell Telephone Laboratories Paper }} Published in journal much later: {{cite journal |last=Lloyd |first=Stuart P. |year=1982 |title=Least squares quantization in PCM |url=http://www.cs.toronto.edu/~roweis/csc2515-2006/readings/lloyd57.pdf |journal=[[IEEE Transactions on Information Theory|IEEE Transactions on Information Theory]] |volume=28 |issue=2 |pages=129&ndash;137 |doi=10.1109/TIT.1982.1056489 |access-date=2009-04-15 |citeseerx=10.1.1.131.1338 |s2cid=10833328 }}</ref> In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.<ref name="forgy652">{{Cite journal |first=Edward W. |last=Forgy |year=1965 |title=Cluster analysis of multivariate data: efficiency versus interpretability of classifications |journal=Biometrics |volume=21 |issue=3 |pages=768–769 |jstor=2528559 }}</ref>


== Algorithms ==
=== Standard algorithm (naive k-means) ===


The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the <math>k</math>-means algorithm"; it is also referred to as [[Lloyd's algorithm|Lloyd's algorithm]], particularly in the computer science community. It is sometimes also referred to as "naïve <math>k</math>-means", because there exist much faster alternatives.<ref>{{Cite journal|last1=Pelleg|first1=Dan|last2=Moore|first2=Andrew|date=1999|title=Accelerating exact k -means algorithms with geometric reasoning|url=http://portal.acm.org/citation.cfm?doid=312129.312248|journal=Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '99|language=en|location=San Diego, California, United States|publisher=ACM Press|pages=277–281|doi=10.1145/312129.312248|isbn=9781581131437|s2cid=13907420}}</ref>


Given an initial set of <math>k</math> means <math>m_1^{(1)},\ldots,m_k^{(1)}</math> (see below), the algorithm proceeds by alternating between two steps:<ref>{{Cite book |url=http://www.inference.phy.cam.ac.uk/mackay/itila/book.html |title=Information Theory, Inference and Learning Algorithms |last=MacKay |first=David |publisher=Cambridge University Press |year=2003 |isbn=978-0-521-64298-9 |pages=284&ndash;292 |chapter=Chapter 20. An Example Inference Task: Clustering |mr=2012999 |ref=mackay2003 |author-link=David MacKay (scientist) |chapter-url=http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/284.292.pdf }}</ref>


<proc label = "K-means Algorithm">


#'''Assignment step''': Assign each observation to the cluster with the nearest mean: that with the least squared [[Euclidean distance|Euclidean distance]].<ref>Since the square root is a monotone function, this also is the minimum Euclidean distance assignment.</ref> (Mathematically, this means partitioning the observations according to the [[Voronoi diagram|Voronoi diagram]] generated by the means.)<math display = "block">S_i^{(t)} = \left \{ x_p : \left \| x_p - m^{(t)}_i \right \|^2 \le \left \| x_p - m^{(t)}_j \right \|^2 \ \forall j, 1 \le j \le k \right\},</math> where each <math>x_p</math> is assigned to exactly one <math>S^{(t)}</math>, even if it could be assigned to two or more of them.
#'''Update step''': Recalculate means ([[centroids|centroids]]) for observations assigned to each cluster.
<math display = "block">m^{(t+1)}_i = \frac{1}{\left|S^{(t)}_i\right|} \sum_{x_j \in S^{(t)}_i} x_j </math>
</proc>
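The procedure above can be written as a short reference implementation. The sketch below (assuming NumPy; the function and variable names are illustrative only) initializes the means with the Forgy method of picking <math>k</math> random observations, discussed below, and then alternates the assignment and update steps.

<syntaxhighlight lang="python">
import numpy as np

def lloyd_kmeans(X, k, max_iter=100, seed=None):
    """Naive k-means (Lloyd's algorithm): alternate assignment and update until assignments stop changing."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)].astype(float)  # Forgy-style initialization
    labels = None
    for _ in range(max_iter):
        # Assignment step: each observation goes to the mean with the least squared Euclidean distance
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        new_labels = d2.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                                  # converged: assignments no longer change
        labels = new_labels
        # Update step: recompute each mean as the centroid of its assigned observations
        for i in range(k):
            members = X[labels == i]
            if len(members) > 0:                   # keep the old mean if a cluster becomes empty
                means[i] = members.mean(axis=0)
    return labels, means
</syntaxhighlight>

Each iteration of this naive version requires on the order of <math>nk</math> distance computations in <math>d</math> dimensions, which is why the faster alternatives mentioned above exist.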




The algorithm has converged when the assignments no longer change. The algorithm is not guaranteed to find the optimum.<ref name="hartigan19792">{{Cite journal |last1=Hartigan |first1=J. A. |last2=Wong |first2=M. A. |year=1979 |title=Algorithm AS 136: A <math>k</math>-Means Clustering Algorithm |journal=[[Journal of the Royal Statistical Society, Series C|Journal of the Royal Statistical Society, Series C]] |volume=28 |issue=1 |pages=100&ndash;108 |jstor=2346830 }}</ref>


The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications of <math>k</math>-means such as spherical <math>k</math>-means and [[K-medoids|<math>k</math>-medoids]] have been proposed to allow using other distance measures.
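As an illustration of such a modification, the sketch below outlines spherical <math>k</math>-means with cosine similarity (assuming NumPy; this is only one common formulation, and the names are illustrative): observations are projected onto the unit sphere, assigned to the most similar mean, and each mean is re-normalized after the update step.

<syntaxhighlight lang="python">
import numpy as np

def spherical_kmeans(X, k, max_iter=100, seed=None):
    """Sketch of spherical k-means: cosine similarity on unit vectors, with re-normalized means."""
    rng = np.random.default_rng(seed)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)    # project observations onto the unit sphere
    means = U[rng.choice(len(U), size=k, replace=False)]
    labels = np.zeros(len(U), dtype=int)
    for _ in range(max_iter):
        # Assignment: pick the mean with the largest cosine similarity (dot product of unit vectors)
        labels = (U @ means.T).argmax(axis=1)
        # Update: sum the members and re-normalize the result to unit length
        for i in range(k):
            members = U[labels == i]
            if len(members) > 0:
                direction = members.sum(axis=0)
                means[i] = direction / np.linalg.norm(direction)
    return labels, means
</syntaxhighlight>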


====Initialization methods====
Commonly used initialization methods are Forgy and Random Partition.<ref name="hamerly4">{{Cite conference |last1=Hamerly |first1=Greg |last2=Elkan |first2=Charles |year=2002 |title=Alternatives to the <math>k</math>-means algorithm that find better clusterings |url=http://people.csail.mit.edu/tieu/notebook/kmeans/15_p600-hamerly.pdf |book-title=Proceedings of the eleventh international conference on Information and knowledge management (CIKM) }}</ref> The Forgy method randomly chooses <math>k</math> observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al.,<ref name="hamerly4" /> the Random Partition method is generally preferable for algorithms such as the <math>k</math>-harmonic means and fuzzy <math>k</math>-means. For expectation maximization and standard <math>k</math>-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al.,<ref>{{cite journal |last1=Celebi |first1=M. E. |last2=Kingravi |first2=H. A. |last3=Vela |first3=P. A. |year=2013 |title=A comparative study of efficient initialization methods for the <math>k</math>-means clustering algorithm |journal=[[Expert Systems with Applications|Expert Systems with Applications]] |volume=40 |issue=1 |pages=200&ndash;210 |arxiv=1209.1960 |doi=10.1016/j.eswa.2012.07.021 |s2cid=6954668 }}</ref> however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach<ref>{{Cite conference |last1=Bradley |first1=Paul S. |last2=Fayyad |first2=Usama M. |author-link2=Usama Fayyad |year=1998 |title=Refining Initial Points for <math>k</math>-Means Clustering |book-title=Proceedings of the Fifteenth International Conference on Machine Learning }}</ref> performs "consistently" in "the best group" and [[wikipedia:K-means++|<math>k</math>-means++]] performs "generally well".
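The two schemes can be sketched directly (assuming NumPy; the names are illustrative, and the Random Partition sketch assumes every cluster receives at least one observation, which is very likely when <math>n \gg k</math>):

<syntaxhighlight lang="python">
import numpy as np

def forgy_init(X, k, rng):
    """Forgy: pick k distinct observations at random and use them as the initial means."""
    return X[rng.choice(len(X), size=k, replace=False)].astype(float)

def random_partition_init(X, k, rng):
    """Random Partition: assign every observation to a random cluster, then take each cluster's centroid."""
    labels = rng.integers(0, k, size=len(X))
    return np.array([X[labels == i].mean(axis=0) for i in range(k)])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
print(forgy_init(X, 3, rng))             # these means tend to be spread across the data
print(random_partition_init(X, 3, rng))  # these means tend to lie near the overall centroid of the data
</syntaxhighlight>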


<gallery class="center" widths="150px" caption="Demonstration of the standard algorithm">
File:K Means Example Step 1.svg|1. <math>k</math> initial "means" (in this case <math>k</math>=3) are randomly generated within the data domain (shown in color).
File:K Means Example Step 2.svg|2. <math>k</math> clusters are created by associating every observation with the nearest mean. The partitions here represent the [[Voronoi diagram|Voronoi diagram]] generated by the means.
File:K Means Example Step 3.svg|3. The [[centroid|centroid]] of each of the <math>k</math> clusters becomes the new mean.
File:K Means Example Step 4.svg|4. Steps 2 and 3 are repeated until convergence has been reached.
</gallery>
== Discussion ==
Three key features of <math>k</math>-means that make it efficient are often regarded as its biggest drawbacks:


* Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
* The number of clusters <math>k</math> is an input parameter: an inappropriate choice of <math>k</math> may yield poor results.
* Convergence to a local minimum may produce counterintuitive ("wrong") results (see the example in the figure below).


[[File:K-means convergence to a local minimum.png|center|thumb|650x650px|A typical example of <math>k</math>-means convergence to a local minimum. In this example, the result of <math>k</math>-means clustering (the rightmost figure) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four-ray stars are the centroids (means). The initial configuration is shown in the leftmost figure. The algorithm converges after five iterations, shown in the figures from left to right. The illustration was prepared with the Mirkes Java applet.<ref name="Mirkes20112">{{cite web |url=http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html |title=K-means and <math>k</math>-medoids applet |last1=Mirkes |first1=E. M. |access-date=2 January 2016 }}</ref>]]


A key limitation of <math>k</math>-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When, for example, <math>k</math>-means with a value of <math>k=3</math> is applied to the well-known [[wikipedia:Iris flower data set|Iris flower data set]], the result often fails to separate the three Iris species contained in the data set. With <math>k=2</math>, the two visible clusters (one containing two species) will be discovered, whereas with <math>k=3</math> one of the two clusters will be split into two even parts. In fact, <math>k=2</math> is more appropriate for this data set, despite the data set containing 3 ''classes''. As with any other clustering algorithm, the <math>k</math>-means result relies on the assumption that the data satisfy certain criteria. It works well on some data sets, and fails on others.
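This behaviour can be checked empirically. A sketch along the following lines (assuming scikit-learn is installed; the exact cluster–species agreement will vary with initialization) compares <math>k=2</math> and <math>k=3</math> on the Iris data using the adjusted Rand index against the species labels.

<syntaxhighlight lang="python">
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X, species = load_iris(return_X_y=True)

for k in (2, 3):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Adjusted Rand index: agreement between the k-means clusters and the three species labels
    print(k, adjusted_rand_score(species, labels))
</syntaxhighlight>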


[[File:Iris_Flowers_Clustering_kMeans.svg|thumb|center|450x450px|<math>k</math>-means clustering result for the [[Iris flower data set|Iris flower data set]] and actual species visualized using [[Environment for DeveLoping KDD-Applications Supported by Index-Structures|ELKI]]. Cluster means are marked using larger, semi-transparent symbols.]]


The result of <math>k</math>-means can be seen as the [[Voronoi diagram|Voronoi cells]] of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example. The Gaussian models used by the [[wikipedia:expectation-maximization algorithm|expectation-maximization algorithm]] (arguably a generalization of <math>k</math>-means) are more flexible by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than <math>k</math>-means, as well as correlated clusters (not in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly conditioned covariance matrices.
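For a concrete comparison of the two models, a sketch like the following (assuming scikit-learn; the synthetic blobs are only a stand-in for the "mouse" data set shown below) fits both methods to one large and two small Gaussian clusters:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative stand-in for the "mouse": one large and two small Gaussian blobs
X = np.vstack([
    rng.normal([0.0, 0.0], 2.0, size=(400, 2)),   # large "face"
    rng.normal([-3.0, 3.0], 0.5, size=(100, 2)),  # small "ear"
    rng.normal([3.0, 3.0], 0.5, size=(100, 2)),   # small "ear"
])

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
# k-means tends to cut into the large blob to keep cluster extents comparable,
# whereas the Gaussian mixture can assign the blobs different variances.
</syntaxhighlight>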


[[File:ClusterAnalysis_Mouse.svg|thumb|450x450px|center|<math>k</math>-means clustering vs. [[EM clustering|EM clustering]] on an artificial dataset ("mouse"). The tendency of <math>k</math>-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radii present in the data set.]]


==References==

