
K-means Clustering

k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition [math]n[/math] observations into [math]k[/math] clusters in which each observation belongs to the cluster with the nearest mean (cluster center or cluster centroid), which serves as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. [math]k[/math]-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.

Description

Given a set of observations [math]\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n[/math], where each observation is a [math]d[/math]-dimensional real vector, [math]k[/math]-means clustering aims to partition the [math]n[/math] observations into [math]k \leq n[/math] sets [math]S=\{S_1,S_2,\ldots,S_k\}[/math] so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

[[math]]\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^k |S_i| \operatorname{Var} S_i [[/math]]

where [math]\boldsymbol\mu_i[/math] is the mean of points in [math]S_i[/math]. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

[[math]]\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \, \frac{1}{ |S_i|} \, \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2[[/math]]

The equivalence can be deduced from the identity

[[math]]|S_i|\sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \sum_{\mathbf{x}\neq\mathbf{y} \in S_i}\left\|\mathbf x - \mathbf y\right\|^2.[[/math]]

Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS).[1] This deterministic relationship is also related to the law of total variance in probability theory.
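
The identity, and hence the equivalence of the two objectives, can be checked numerically. The following is a minimal sketch in Python (assuming NumPy; the random cluster and its size are arbitrary):

  import numpy as np

  rng = np.random.default_rng(0)
  S = rng.normal(size=(50, 3))   # one cluster S_i: 50 points in R^3
  mu = S.mean(axis=0)            # cluster mean (centroid)

  # Left-hand side: |S_i| times the sum of squared distances to the centroid
  lhs = len(S) * np.sum((S - mu) ** 2)

  # Right-hand side: sum of squared distances over unordered pairs x != y.
  # The tensor below holds all ordered pairwise differences, so each unordered
  # pair is counted twice; the factor 1/2 corrects for that.
  diffs = S[:, None, :] - S[None, :, :]
  rhs = 0.5 * np.sum(diffs ** 2)

  print(np.isclose(lhs, rhs))    # True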

History

The term "[math]k[/math]-means" was first used by James MacQueen in 1967,[2] though the idea goes back to Hugo Steinhaus in 1956.[3] The standard algorithm was first proposed by Stuart Lloyd of Bell Labs in 1957 as a technique for pulse-code modulation, although it was not published as a journal article until 1982.[4] In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.[5]

Algorithms

Standard algorithm (naive k-means)

The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the [math]k[/math]-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naïve [math]k[/math]-means", because there exist much faster alternatives.[6]

Given an initial set of [math]k[/math] means [math]m_1^{(1)},\ldots,m_k^{(1)}[/math] (see below), the algorithm proceeds by alternating between two steps:[7]

K-means Algorithm
  1. Assignment step: Assign each observation to the cluster with the nearest mean: that with the least squared Euclidean distance.[8] (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.)
    [[math]]S_i^{(t)} = \left \{ x_p : \left \| x_p - m^{(t)}_i \right \|^2 \le \left \| x_p - m^{(t)}_j \right \|^2 \ \forall j, 1 \le j \le k \right\},[[/math]]
    where each [math]x_p[/math] is assigned to exactly one [math]S^{(t)}[/math], even if it could be assigned to two or more of them.
  2. Update step: Recalculate means (centroids) for observations assigned to each cluster.

[[math]]m^{(t+1)}_i = \frac{1}{\left|S^{(t)}_i\right|} \sum_{x_j \in S^{(t)}_i} x_j [[/math]]


The algorithm has converged when the assignments no longer change. The algorithm is not guaranteed to find the global optimum.[9]
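
A minimal sketch of the standard algorithm in Python follows (assuming NumPy; the function name is illustrative, initialization follows the Forgy method described below, and a cluster that becomes empty simply keeps its previous mean):

  import numpy as np

  def lloyd_kmeans(X, k, n_iter=100, seed=0):
      """Naive k-means (Lloyd's algorithm) on an (n, d) data matrix X."""
      rng = np.random.default_rng(seed)
      # Forgy initialization: pick k distinct observations as the initial means
      means = X[rng.choice(len(X), size=k, replace=False)]
      for _ in range(n_iter):
          # Assignment step: label each point with the index of its nearest mean
          dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
          labels = dists.argmin(axis=1)
          # Update step: recompute each mean as the centroid of its assigned points
          new_means = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                                else means[i] for i in range(k)])
          if np.allclose(new_means, means):   # means stable, so assignments stable
              break
          means = new_means
      return means, labels

For example, means, labels = lloyd_kmeans(X, 3) returns the final means together with each observation's cluster index.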

The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging, because the update step is guaranteed not to increase the objective only when cluster scatter is measured by squared Euclidean distances, for which the arithmetic mean is the minimizer. Various modifications of [math]k[/math]-means such as spherical [math]k[/math]-means and [math]k[/math]-medoids have been proposed to allow using other distance measures.

Initialization methods

Commonly used initialization methods are Forgy and Random Partition.[10] The Forgy method randomly chooses [math]k[/math] observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly and Elkan,[10] the Random Partition method is generally preferable for algorithms such as [math]k[/math]-harmonic means and fuzzy [math]k[/math]-means. For expectation maximization and the standard [math]k[/math]-means algorithm, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al.,[11] however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach[12] performs "consistently" in "the best group" and [math]k[/math]-means++ performs "generally well".
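
The two simple methods can be sketched as follows (a Python illustration assuming NumPy; both functions return [math]k[/math] initial means for a data matrix X, and Random Partition assumes every cluster receives at least one point):

  import numpy as np

  def forgy_init(X, k, rng):
      # Forgy: use k randomly chosen observations as the initial means
      return X[rng.choice(len(X), size=k, replace=False)]

  def random_partition_init(X, k, rng):
      # Random Partition: assign every observation to a random cluster, then
      # take each cluster's centroid as its initial mean (these centroids all
      # lie near the overall data mean, as noted above)
      labels = rng.integers(k, size=len(X))
      return np.array([X[labels == i].mean(axis=0) for i in range(k)])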

Discussion

Three key features of [math]k[/math]-means that make it efficient are often regarded as its biggest drawbacks:

  • Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
  • The number of clusters [math]k[/math] is an input parameter: an inappropriate choice of [math]k[/math] may yield poor results.
  • Convergence to a local minimum may produce counterintuitive ("wrong") results (see the example described below).
A typical example of [math]k[/math]-means converging to a local minimum. In this example, the result of [math]k[/math]-means clustering (the figure on the right) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four-ray stars are the centroids (means). The initial configuration is shown in the figure on the left. The algorithm converges after five iterations, shown in the figures from left to right. The illustration was prepared with the Mirkes Java applet.[13]

A key limitation of [math]k[/math]-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When, for example, [math]k[/math]-means with a value of [math]k=3[/math] is applied to the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With [math]k=2[/math], the two visible clusters (one containing two species) will be discovered, whereas with [math]k=3[/math] one of the two clusters will be split into two even parts. In fact, [math]k=2[/math] is more appropriate for this data set, despite the data set containing three classes. As with any other clustering algorithm, the [math]k[/math]-means result relies on the data satisfying the assumptions made by the algorithm. It works well on some data sets, and fails on others.

[math]k[/math]-means clustering result for the Iris flower data set and actual species visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.
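The behaviour described above can be explored with a few lines of scikit-learn (a sketch, assuming scikit-learn is installed; the adjusted Rand index merely quantifies how much each partition agrees with the species labels, and the exact values depend on initialization):

  from sklearn.cluster import KMeans
  from sklearn.datasets import load_iris
  from sklearn.metrics import adjusted_rand_score

  X, species = load_iris(return_X_y=True)

  for k in (2, 3):
      # Run k-means with several restarts and compare the resulting partition
      # with the three true species labels
      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
      print(k, adjusted_rand_score(species, labels))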

The result of [math]k[/math]-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example. The Gaussian models used by the expectation-maximization algorithm (arguably a generalization of [math]k[/math]-means) are more flexible because they have both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than [math]k[/math]-means, as well as correlated clusters (not present in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly-conditioned covariance matrices.

[math]k[/math]-means clustering vs. EM clustering on an artificial dataset ("mouse"). The tendency of [math]k[/math]-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radii present in the data set.
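
The contrast can be illustrated with scikit-learn on synthetic data standing in for the "mouse" set (a sketch; the blob positions, sizes, and standard deviations below are arbitrary choices, not the original data):

  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.mixture import GaussianMixture

  rng = np.random.default_rng(0)
  # One large and two small Gaussian blobs, loosely mimicking the "mouse" layout
  X = np.vstack([rng.normal([0.0, 0.0], 0.9, size=(300, 2)),   # large "face"
                 rng.normal([-1.5, 1.5], 0.3, size=(60, 2)),   # small "ear"
                 rng.normal([1.5, 1.5], 0.3, size=(60, 2))])   # small "ear"

  km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
  gm = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

  # k-means tends to cut into the large blob to keep cluster extents similar,
  # whereas the Gaussian mixture can give each component its own variance.
  print(np.bincount(km), np.bincount(gm))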

References

  1. "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?" (2016). Knowledge and Information Systems 52 (2): 341–378. doi:10.1007/s10115-016-1004-2. ISSN 0219-1377. 
  2. MacQueen, J. B. (1967). Some Methods for classification and Analysis of Multivariate Observations. Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. 1. University of California Press. pp. 281–297. MR 0214227. Zbl 0214.46201. Retrieved 2009-04-07.
  3. Steinhaus, Hugo (1957). "Sur la division des corps matériels en parties" (in French). Bull. Acad. Polon. Sci. 4 (12): 801–804.
  4. Lloyd, Stuart P. (1957). "Least square quantization in PCM". Bell Telephone Laboratories Paper. Published much later as: Lloyd, Stuart P. (1982). "Least squares quantization in PCM". IEEE Transactions on Information Theory 28 (2): 129–137. doi:10.1109/TIT.1982.1056489.
  5. Forgy, Edward W. (1965). "Cluster analysis of multivariate data: efficiency versus interpretability of classifications". Biometrics 21 (3): 768–769. 
  6. "Accelerating exact k -means algorithms with geometric reasoning" (in en) (1999). Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '99: 277–281. San Diego, California, United States: ACM Press. doi:10.1145/312129.312248. 
  7. MacKay, David (2003). "Chapter 20. An Example Inference Task: Clustering" (PDF). Information Theory, Inference and Learning Algorithms. Cambridge University Press. pp. 284–292. ISBN 978-0-521-64298-9. MR 2012999.
  8. Since the square root is a monotone function, this also is the minimum Euclidean distance assignment.
  9. "Algorithm AS 136: A [math]k[/math]-Means Clustering Algorithm" (1979). Journal of the Royal Statistical Society, Series C 28 (1): 100–108. 
  10. Hamerly, Greg; Elkan, Charles (2002). "Alternatives to the [math]k[/math]-means algorithm that find better clusterings" (PDF). Proceedings of the eleventh international conference on Information and knowledge management (CIKM).
  11. "A comparative study of efficient initialization methods for the [math]k[/math]-means clustering algorithm" (2013). Expert Systems with Applications 40 (1): 200–210. doi:10.1016/j.eswa.2012.07.021. 
  12. Bradley, Paul S.; Fayyad, Usama M. (1998). "Refining Initial Points for [math]k[/math]-Means Clustering". Proceedings of the Fifteenth International Conference on Machine Learning.
  13. Mirkes, E. M. "K-means and [math]k[/math]-medoids applet". Retrieved 2 January 2016.
