The '''principal components''' of a collection of points in a [[real coordinate space|real coordinate space]] are a sequence of <math>p</math> [[unit vector|unit vector]]s, where the <math>i</math>-th vector is the direction of a line that best fits the data while being [[orthogonal|orthogonal]] to the first <math>i-1</math> vectors. Here, a best-fitting line is defined as one that minimizes the average squared [[Distance from a point to a line|distance from the points to the line]]. These directions constitute an [[orthonormal basis|orthonormal basis]] in which different individual dimensions of the data are [[Linear correlation|linearly uncorrelated]]. '''Principal component analysis''' ('''PCA''') is the process of computing the principal components and using them to perform a [[change of basis|change of basis]] on the data, sometimes using only the first few principal components and ignoring the rest.


In data analysis, the first principal component of a set of  <math>p</math> variables, presumed to be jointly normally distributed, is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through  <math>p</math> iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.


PCA is used in [[exploratory data analysis|exploratory data analysis]] and for making [[predictive modeling|predictive models]]. It is commonly used for [[dimensionality reduction|dimensionality reduction]] by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The <math>i</math>-th principal component can be taken as a direction orthogonal to the first <math>i-1</math> principal components that maximizes the variance of the projected data.


For either objective, it can be shown that the principal components are [[eigenvectors|eigenvectors]] of the data's [[covariance matrix|covariance matrix]]. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or [[singular value decomposition|singular value decomposition]] of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to [[factor analysis|factor analysis]].  


== Intuition ==
PCA can be thought of as fitting a <math>p</math>-dimensional [[ellipsoid|ellipsoid]] to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.


To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the [[covariance matrix|covariance matrix]] of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
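
The procedure above can be sketched in a few lines of NumPy (an informal illustration rather than a reference implementation; the toy data matrix <code>X</code>, with rows as observations and columns as variables, is assumed here):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])   # toy data, n rows and p columns

Xc = X - X.mean(axis=0)               # center each variable on 0
C = np.cov(Xc, rowvar=False)          # p-by-p covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh handles symmetric matrices, ascending eigenvalues
order = np.argsort(eigvals)[::-1]     # re-sort into descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# eigh already returns unit-norm eigenvectors; each column is one axis of the fitted ellipsoid
explained = eigvals / eigvals.sum()   # proportion of variance carried by each component
print(explained)
</syntaxhighlight>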


== Details ==
PCA is defined as an [[orthogonal transformation|orthogonal]] [[linear transformation|linear transformation]] that transforms the data to a new [[coordinate system|coordinate system]] such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.<ref name="Jolliffe2002">{{Cite book |last=Jolliffe |first=I. T.  |url=http://link.springer.com/10.1007/b98835 |title=Principal Component Analysis |date=2002 |publisher=Springer-Verlag |isbn=978-0-387-95442-4 |series=Springer Series in Statistics |location=New York |language=en |doi=10.1007/b98835}}</ref>


Consider an <math>n \times p</math> data [[Matrix (mathematics)|matrix]], <math>\mathbf{X}</math>, with column-wise zero [[empirical mean|empirical mean]] (the sample mean of each column has been shifted to zero), where each of the <math>n</math> rows represents a different repetition of the experiment, and each of the <math>p</math> columns gives a particular kind of feature (say, the results from a particular sensor).


Mathematically, the transformation is defined by a set of size <math>l</math> of <math>p</math>-dimensional vectors of weights or coefficients <math>\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)} </math> that map each row vector <math>\mathbf{x}_{(i)}</math> of <math>\mathbf{X}</math> to a new vector of principal component ''scores'' <math>\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}</math>, given by <math display = "block">{t_{k}}_{(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)} \qquad \mathrm{for} \qquad i = 1,\dots,n \qquad k = 1,\dots,l </math>
in such a way that the individual variables <math>\mathbf{t}_1, \dots, \mathbf{t}_l</math> of <math>\mathbf{t}</math> considered over the data set successively inherit the maximum possible variance from <math>\mathbf{X}</math>, with each coefficient vector <math>\mathbf{w}</math> constrained to be a [[unit vector|unit vector]] (where <math>l</math> is usually selected to be strictly less than <math>p</math> to reduce dimensionality).


=== First component ===
In order to maximize variance, the first weight vector <math>\mathbf{w}_{(1)}</math> thus has to satisfy
<math display = "block">\mathbf{w}_{(1)} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i (t_1)^2_{(i)} \right\} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w} \right)^2 \right\}</math>

Equivalently, writing this in matrix form gives
<math display = "block">\mathbf{w}_{(1)} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{Xw} \right\|^2 \right\} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w} \right\}</math>

Since <math>\mathbf{w}_{(1)}</math> has been defined to be a unit vector, it equivalently also satisfies
<math display = "block">\mathbf{w}_{(1)} = \arg\max \left\{ \frac{\mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w}}{\mathbf{w}^\mathsf{T} \mathbf{w}} \right\}</math>


A standard result for a [[positive semidefinite matrix|positive semidefinite matrix]] such as <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> is that the quotient's maximum possible value is the largest [[eigenvalue|eigenvalue]] of the matrix, which occurs when <math>\mathbf{w}</math> is the corresponding [[eigenvector|eigenvector]].


With <math>\mathbf{w}_{(1)}</math> found, the first principal component of a data vector <math>\mathbf{x}_{(i)}</math> can then be given as a score <math>t_{1(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)}</math> in the transformed coordinates, or as the corresponding vector in the original variables, <math>\{\mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)}\} \mathbf{w}_{(1)}</math>.
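
As a rough illustration of this variance-maximisation view, the leading eigenvector of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> can be approximated by power iteration (a minimal sketch under assumed toy data; the data matrix is taken to be centered and the iteration count is arbitrary):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4)) * np.array([5.0, 2.0, 1.0, 0.5])  # anisotropic toy data
X = X - X.mean(axis=0)

S = X.T @ X                                # proportional to the covariance matrix
w = rng.normal(size=S.shape[0])
w /= np.linalg.norm(w)
for _ in range(100):                       # power iteration converges to the top eigenvector
    w = S @ w
    w /= np.linalg.norm(w)

t1 = X @ w                                 # scores on the first principal component
w_ref = np.linalg.eigh(S)[1][:, -1]        # reference: eigenvector of the largest eigenvalue
print(np.allclose(np.abs(w), np.abs(w_ref)))   # agrees up to sign
</syntaxhighlight>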

=== Further components ===
The <math>k</math>-th component can be found by subtracting the first <math>k-1</math> principal components from <math>\mathbf{X}</math>:
<math display = "block">\mathbf{\hat{X}}_k = \mathbf{X} - \sum_{s = 1}^{k - 1} \mathbf{X} \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\mathsf{T}} </math>
and then finding the weight vector which extracts the maximum variance from this new data matrix
<math display = "block">\mathbf{w}_{(k)} = \mathop{\operatorname{arg\,max}}_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{\hat{X}}_{k} \mathbf{w} \right\|^2 \right\} = \arg\max \left\{ \tfrac{\mathbf{w}^\mathsf{T} \mathbf{\hat{X}}_{k}^\mathsf{T} \mathbf{\hat{X}}_{k} \mathbf{w}}{\mathbf{w}^{\mathsf{T}} \mathbf{w}} \right\}</math>
It turns out that this gives the remaining eigenvectors of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>.

The <math>k</math>-th principal component of a data vector <math>\mathbf{x}_{(i)}</math> can therefore be given as a score <math>t_{k(i)} = \mathbf{x}_{(i)}\cdot \mathbf{w}_{(k)}</math> in the transformed coordinates, or as the corresponding vector in the space of the original variables, <math>\{\mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}\} \mathbf{w}_{(k)}</math>, where <math>\mathbf{w}_{(k)}</math> is the <math>k</math>-th eigenvector of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>. The full principal components decomposition of <math>\mathbf{X}</math> can therefore be given as
<math display = "block">\mathbf{T} = \mathbf{X} \mathbf{W}</math>
where <math>\mathbf{W}</math> is a <math>p</math>-by-<math>p</math> matrix of weights whose columns are the eigenvectors of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>. Columns of <math>\mathbf{W}</math> multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called ''loadings'' in PCA or in Factor analysis.
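
A brief sketch of the deflation idea and of the loadings (illustrative only; eigenvector signs are arbitrary, and the factor of <math>n-1</math> relating <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> to the sample covariance is ignored here):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3)) * np.array([4.0, 2.0, 1.0])
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)        # eigenvectors of X^T X, ascending order
eigvals, W = eigvals[::-1], W[:, ::-1]      # reorder to descending eigenvalues

# Deflation: subtract the first component and check that the leading eigenvector
# of the deflated matrix matches the second column of W (up to sign).
X_hat = X - np.outer(X @ W[:, 0], W[:, 0])
w2 = np.linalg.eigh(X_hat.T @ X_hat)[1][:, -1]
print(np.allclose(np.abs(w2), np.abs(W[:, 1])))

T = X @ W                                   # full principal components decomposition
loadings = W * np.sqrt(eigvals)             # columns scaled by square-rooted eigenvalues
</syntaxhighlight>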


=== Covariances ===
<math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> itself can be recognized as proportional to the empirical sample [[covariance matrix|covariance matrix]] of the dataset <math>\mathbf{X}</math>.<ref name="Jolliffe2002"/>{{rp|30–31}}


The sample covariance <math>Q</math> between two of the different principal components over the dataset is given by:
<math display = "block">\begin{align*}
Q(\mathrm{PC}_{(j)}, \mathrm{PC}_{(k)}) & \propto (\mathbf{X}\mathbf{w}_{(j)})^\mathsf{T} (\mathbf{X}\mathbf{w}_{(k)}) \\
& = \mathbf{w}_{(j)}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{w}_{(k)} \\
& = \mathbf{w}_{(j)}^\mathsf{T} \lambda_{(k)} \mathbf{w}_{(k)} \\
& = \lambda_{(k)} \mathbf{w}_{(j)}^\mathsf{T} \mathbf{w}_{(k)}
\end{align*}</math>
where the eigenvalue property of <math>\mathbf{w}_{(k)}</math> has been used to move from line 2 to line 3. However eigenvectors <math>\mathbf{w}_{(j)}</math> and <math>\mathbf{w}_{(k)}</math> corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.

Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.

In matrix form, the empirical covariance matrix for the original variables can be written
<math display = "block">\mathbf{Q} \propto \mathbf{X}^\mathsf{T} \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^\mathsf{T}</math>

The empirical covariance matrix between the principal components becomes
<math display = "block">\mathbf{W}^\mathsf{T} \mathbf{Q} \mathbf{W} \propto \mathbf{W}^\mathsf{T} \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^\mathsf{T} \mathbf{W} = \mathbf{\Lambda} </math>
where <math>\mathbf{\Lambda}</math> is the diagonal matrix of eigenvalues <math>\lambda_{(k)}</math> of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>; <math>\lambda_{(k)}</math> is equal to the sum of the squares over the dataset associated with each component <math>k</math>, that is,
<math display = "block">\lambda_{(k)} = \sum_i t_{k(i)}^2 = \sum_i (\mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)})^2.</math>

=== Dimensionality reduction ===
The transformation <math>\mathbf{T} = \mathbf{X}\mathbf{W}</math> maps a data vector <math>\mathbf{x}_{(i)}</math> from an original space of <math>p</math> variables to a new space of <math>p</math> variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first <math>L</math> principal components, produced by using only the first <math>L</math> eigenvectors, gives the truncated transformation
<math display = "block">\mathbf{T}_L = \mathbf{X} \mathbf{W}_L</math>
where the matrix <math>\mathbf{T}_L</math> now has <math>n</math> rows but only <math>L</math> columns. In other words, PCA learns a linear transformation <math> t = W_L^\mathsf{T} x, x \in \mathbb{R}^p, t \in \mathbb{R}^L,</math> where the columns of the <math>p \times L</math> matrix <math>W_L</math> form an orthogonal basis for the <math>L</math> features (the components of representation ''t'') that are decorrelated.<ref>{{Cite journal |author=Bengio, Y.|year=2013|title=Representation Learning: A Review and New Perspectives |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=35 |issue=8 |pages=1798–1828 |doi=10.1109/TPAMI.2013.50|pmid=23787338|display-authors=etal|arxiv=1206.5538|s2cid=393948}}</ref> By construction, of all the transformed data matrices with only <math>L</math> columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error <math>\|\mathbf{T}\mathbf{W}^{\mathsf{T}} - \mathbf{T}_L\mathbf{W}^{\mathsf{T}}_L\|_2^2</math> or <math>\|\mathbf{X} - \mathbf{X}_L\|_2^2</math>.
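
A short sketch of the truncation and of the reconstruction error it minimises (the choice of <math>L</math> and the toy data are arbitrary):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5)) * np.array([5.0, 3.0, 1.0, 0.5, 0.1])
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
W = W[:, ::-1]                        # descending eigenvalue order

L = 2
W_L = W[:, :L]
T_L = X @ W_L                         # n-by-L score matrix
X_L = T_L @ W_L.T                     # rank-L reconstruction in the original variables

err = np.linalg.norm(X - X_L) ** 2    # total squared reconstruction error (Frobenius norm)
print(err, np.sum(np.sort(eigvals)[: X.shape[1] - L]))   # equals the sum of discarded eigenvalues
</syntaxhighlight>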


Such [[dimensionality reduction|dimensionality reduction]] can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting <math>L=2</math> and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data are most spread out, so if the data contain [[Cluster analysis|clusters]], these too will be most spread out and therefore most visible in a two-dimensional plot. By contrast, if two directions through the data (or two of the original variables) are chosen at random, the clusters may be far less separated and may in fact substantially overlay each other, making them indistinguishable.


Similarly, in [[regression analysis|regression analysis]], the larger the number of [[explanatory variable|explanatory variable]]s allowed, the greater is the chance of [[overfitting|overfitting]] the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called [[principal component regression|principal component regression]].
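
A minimal sketch of principal component regression along these lines (made-up data; not the implementation of any particular library):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, p, L = 200, 10, 3
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)    # strongly correlated explanatory variables
y = X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=n)

Xc = X - X.mean(axis=0)
eigvals, W = np.linalg.eigh(Xc.T @ Xc)
W_L = W[:, ::-1][:, :L]                # keep the first L principal directions
T_L = Xc @ W_L                         # principal component scores

gamma, *_ = np.linalg.lstsq(T_L, y - y.mean(), rcond=None)   # regress y on the scores
beta = W_L @ gamma                     # coefficients mapped back to the original variables
print(beta)
</syntaxhighlight>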


Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of <math>\mathbf{T}</math> will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix <math>\mathbf{W}</math>, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher [[signal-to-noise ratio|signal-to-noise ratio]]. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using [[Bootstrapping (statistics)#Parametric bootstrap|parametric bootstrap]], as an aid in determining how many principal components to retain.<ref>{{Cite journal|author=Forkman J., Josse, J., Piepho, H. P. |year=2019 |title= Hypothesis tests for principal component analysis when variables are standardized |journal= Journal of Agricultural, Biological, and Environmental Statistics|volume=24 |issue=2 |pages= 289–308 |doi=10.1007/s13253-019-00355-5|doi-access=free }}</ref>
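
The effect of noise can be illustrated with a rough sketch (assumed toy signal and noise levels; this is not the bootstrap test of the cited reference):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 20
signal = rng.normal(size=(n, 2)) @ rng.normal(size=(2, p)) * 3.0   # low-rank signal
noise = rng.normal(size=(n, p))                                    # i.i.d. Gaussian noise
X = signal + noise
X = X - X.mean(axis=0)

component_var = np.linalg.eigvalsh(X.T @ X)[::-1] / (n - 1)   # component variances, descending
print(component_var[:4])   # the first two stand well above the roughly flat noise floor
</syntaxhighlight>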


=== Singular value decomposition ===


The principal components transformation can also be associated with another matrix factorization, the [[singular value decomposition|singular value decomposition]] (SVD) of <math>\mathbf{X}</math>,
<math display = "block">\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^{\mathsf{T}}</math>
Here <math>\mathbf{\Sigma}</math> is an <math>n</math>-by-<math>p</math> [[Diagonal matrix|rectangular diagonal matrix]] of positive numbers <math>\sigma_{(k)}</math>, called the singular values of <math>\mathbf{X}</math>; <math>\mathbf{U}</math> is an <math>n</math>-by-<math>n</math> matrix, the columns of which are orthogonal unit vectors of length <math>n</math> called the left singular vectors of <math>\mathbf{X}</math>; and <math>\mathbf{W}</math> is a <math>p</math>-by-<math>p</math> matrix whose columns are orthogonal unit vectors of length <math>p</math>, called the right singular vectors of <math>\mathbf{X}</math>.


In terms of this factorization, the matrix <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> can be written
<math display = "block">\begin{align*}
\mathbf{X}^{\mathsf{T}}\mathbf{X} & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{U}^\mathsf{T} \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \\
& = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma} \mathbf{W}^\mathsf{T} \\
& = \mathbf{W}\mathbf{\hat{\Sigma}}^2 \mathbf{W}^\mathsf{T}
\end{align*}</math>
where <math>\mathbf{\hat{\Sigma}}</math> is the square diagonal matrix with the singular values of <math>\mathbf{X}</math> and the excess zeros chopped off, which satisfies <math>\mathbf{\hat{\Sigma}}^2 = \mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma}</math>. Comparison with the eigenvector factorization of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math> establishes that the right singular vectors <math>\mathbf{W}</math> of <math>\mathbf{X}</math> are equivalent to the eigenvectors of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>, while the singular values <math>\sigma_{(k)}</math> of <math>\mathbf{X}</math> are equal to the square roots of the eigenvalues <math>\lambda_{(k)}</math> of <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>.

Using the singular value decomposition the score matrix <math>\mathbf{T}</math> can be written
<math display = "block">\begin{align*}
\mathbf{T} & = \mathbf{X} \mathbf{W} \\
& = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \mathbf{W} \\
& = \mathbf{U}\mathbf{\Sigma}
\end{align*}</math>
so each column of <math>\mathbf{T}</math> is given by one of the left singular vectors of <math>\mathbf{X}</math> multiplied by the corresponding singular value. This form is also the [[polar decomposition|polar decomposition]] of <math>\mathbf{T}</math>.


Efficient algorithms exist to calculate the SVD of <math>\mathbf{X}</math> without having to form the matrix <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required.
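
A short NumPy sketch of the SVD route and of its agreement with the eigendecomposition (illustrative; note that NumPy's <code>svd</code> returns <math>\mathbf{W}^\mathsf{T}</math> rather than <math>\mathbf{W}</math>):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 4)) * np.array([3.0, 2.0, 1.0, 0.5])
X = X - X.mean(axis=0)

U, sigma, Wt = np.linalg.svd(X, full_matrices=False)   # X = U diag(sigma) W^T
T = X @ Wt.T                                           # scores via the right singular vectors

eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]            # eigenvalues of X^T X, descending
print(np.allclose(T, U * sigma))                       # T = U Sigma
print(np.allclose(sigma ** 2, eigvals))                # squared singular values = eigenvalues
</syntaxhighlight>
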
As with the eigen-decomposition, a truncated <math>n \times L </math> score matrix <math>\mathbf{T}_L</math> can be obtained by considering only the first <math>L</math> largest singular values and their singular vectors:
<math display = "block">\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L </math>
Truncating a matrix <math>\mathbf{M}</math> or <math>\mathbf{T}</math> using a truncated singular value decomposition in this way produces a matrix of [[Rank (linear algebra)|rank]] <math>L</math> that is as close as possible to the original matrix, in the sense that the difference between the two has the smallest possible [[Frobenius norm|Frobenius norm]], a result known as the Eckart–Young theorem (1936).
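
A hedged numerical illustration of the Eckart–Young property, comparing the truncated-SVD reconstruction with one arbitrary alternative rank-<math>L</math> matrix (a single example, of course, not a proof):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(30, 8))
U, s, Wt = np.linalg.svd(X, full_matrices=False)

L = 3
X_L = (U[:, :L] * s[:L]) @ Wt[:L, :]    # rank-L truncated SVD reconstruction

B = rng.normal(size=(8, L))
P = B @ np.linalg.pinv(B)               # projector onto a random L-dimensional subspace
X_rand = X @ P                          # a competing rank-L approximation

# The truncated SVD gives the smaller Frobenius-norm error of the two.
print(np.linalg.norm(X - X_L), np.linalg.norm(X - X_rand))
</syntaxhighlight>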


== Further considerations ==


The singular values (in <math>\mathbf{\Sigma}</math>) are the square roots of the [[eigenvalue|eigenvalue]]s of the matrix <math>\mathbf{X}^\mathsf{T}\mathbf{X}</math>. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions therefore tend to be small and may be dropped with minimal loss of information (see [[Principal Component Analysis#PCA and information theory|below]]). PCA is often used in this manner for [[dimensionality reduction|dimensionality reduction]]. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements when compared, for example and when applicable, to the [[discrete cosine transform|discrete cosine transform]], and in particular to the DCT-II, which is simply known as the "DCT". [[Nonlinear dimensionality reduction|Nonlinear dimensionality reduction]] techniques tend to be more computationally demanding than PCA.


PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same [[sample variance|sample variance]] and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
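
The scaling sensitivity can be seen in a small sketch (toy variables; the factor of 100 follows the example above, and eigenvector signs are arbitrary):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(8)
a = rng.normal(size=300)
b = a + 0.1 * rng.normal(size=300)            # two almost completely correlated variables
X = np.column_stack([a, b])
X = X - X.mean(axis=0)

def first_direction(M):
    return np.linalg.eigh(M.T @ M)[1][:, -1]  # eigenvector of the largest eigenvalue

print(first_direction(X))                            # roughly (0.71, 0.71): a 45-degree direction
print(first_direction(X * np.array([100.0, 1.0])))   # almost entirely along the rescaled variable
</syntaxhighlight>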


Mean subtraction (a.k.a. "mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the [[Minimum mean square error|mean square error]] of the approximation of the data.<ref>A. A. Miranda, Y. A. Le Borgne, and G. Bontempi. [http://www.ulb.ac.be/di/map/yleborgn/pub/NPL_PCA_07.pdf New Routes from Minimal Approximation Error to Principal Components], Volume 27, Number 3 / June, 2008, Neural Processing Letters, Springer</ref>
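
A small sketch of why centering matters (the offset of 10 and the variances are arbitrary, and eigenvector signs may differ):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(200, 3)) * np.array([1.0, 3.0, 1.0]) + 10.0   # data far from the origin

w_raw = np.linalg.eigh(X.T @ X)[1][:, -1]                      # no mean subtraction
w_centered = np.linalg.eigh(np.cov(X, rowvar=False))[1][:, -1]

print(w_raw)        # dominated by the mean: close to the all-ones direction
print(w_centered)   # aligned with the direction of largest variance (the second variable)
</syntaxhighlight>
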
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: ''Pearson Product-Moment Correlation''). Also see the article by Kromrey & Foster-Johnson (1998) on ''"Mean-centering in Moderated Regression: Much Ado About Nothing"''. Since [[Covariance matrix#Relation to the correlation matrix|correlations are the covariances of standardized variables]] ([[Standard score#Calculation|Z- or standard-scores]]), a PCA based on the correlation matrix of <math>\mathbf{X}</math> is [[Equality (mathematics)|equal]] to a PCA based on the covariance matrix of <math>\mathbf{Z}</math>, the standardized version of <math>\mathbf{X}</math>.
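
This equality can be checked numerically in a few lines (a sketch with made-up data; <code>ddof=1</code> keeps the two normalisations consistent):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(10)
X = rng.normal(size=(150, 4)) * np.array([10.0, 1.0, 0.3, 2.0]) + np.array([5.0, 0.0, -2.0, 1.0])

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardized version of X

R = np.corrcoef(X, rowvar=False)                   # correlation matrix of X
C_Z = np.cov(Z, rowvar=False)                      # covariance matrix of Z

print(np.allclose(R, C_Z))                         # the two PCAs start from the same matrix
</syntaxhighlight>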


== Table of symbols and abbreviations ==
{| class="wikitable"
! Symbol !! Meaning !! Dimensions !! Indices
|-
| <math>\mathbf{u} = \{ u_j \}</math>
| vector of empirical [[mean|mean]]s, one mean for each column ''j'' of the data matrix
| <math> p \times 1</math>
| <math> j = 1 \ldots p </math>
|-
| <math>\mathbf{s} = \{ s_j \}</math>
| vector of empirical [[standard deviation|standard deviation]]s, one standard deviation for each column ''j'' of the data matrix
| <math> p \times 1</math>
| <math> j = 1 \ldots p </math>
|-
| <math>\mathbf{B} = \{ B_{ij} \}</math>
| deviations from the mean of each column ''j'' of the data matrix
| <math> n \times p</math>
| <math> i = 1 \ldots n </math> <br /> <math> j = 1 \ldots p </math>
|-
| <math>\mathbf{Z} = \{ Z_{ij} \} </math>
| [[z-score|z-score]]s, computed using the mean and standard deviation of each column ''j'' of the data matrix
| <math> n \times p</math>
| <math> i = 1 \ldots n </math> <br /> <math> j = 1 \ldots p </math>
|-
| <math>\mathbf{C} = \{ C_{jj'} \} </math>
| [[covariance matrix|covariance matrix]]
| <math> p \times p </math>
| <math> j = 1 \ldots p </math> <br /><math> j' = 1 \ldots p </math>
|-
| <math>\mathbf{R} = \{ R_{jj'} \} </math>
| [[correlation matrix|correlation matrix]]
| <math> p \times p </math>
| <math> j = 1 \ldots p </math> <br /><math> j' = 1 \ldots p </math>
|-
| <math> \mathbf{V} = \{ V_{jj'} \} </math>
| matrix consisting of the set of all [[eigenvectors|eigenvectors]] of '''C''', one eigenvector per column
| <math> p \times p </math>
| <math> j = 1 \ldots p </math> <br /><math> j' = 1 \ldots p </math>
|-
| <math>\mathbf{D} = \{ D_{jj'} \} </math>
| [[diagonal matrix|diagonal matrix]] consisting of the set of all [[eigenvalues|eigenvalues]] of '''C''' along its [[principal diagonal|principal diagonal]], and 0 for all other elements (note <math>\mathbf{\Lambda}</math> is used above)
| <math> p \times p </math>
| <math> j = 1 \ldots p </math> <br /><math> j' = 1 \ldots p </math>
|}

Latest revision as of 08:06, 14 April 2024

The principal components of a collection of points in a real coordinate space are a sequence of [math]p[/math] unit vectors, where the [math]i[/math]-th vector is the direction of a line that best fits the data while being orthogonal to the first [math]i-1[/math] vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest.

In data analysis, the first principal component of a set of [math]p[/math] variables, presumed to be jointly normally distributed, is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through [math]p[/math] iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.

PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The [math]i[/math]-th principal component can be taken as a direction orthogonal to the first [math]i-1[/math] principal components that maximizes the variance of the projected data.

For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis.

Intuition

PCA can be thought of as fitting a [math]p[/math]-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.

To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.

Details

PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[1]

Consider an [math]n \times p[/math] data matrix, [math]\mathbf{X}[/math], with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the [math]n[/math] rows represents a different repetition of the experiment, and each of the [math]p[/math] columns gives a particular kind of feature (say, the results from a particular sensor).

Mathematically, the transformation is defined by a set of size [math]l[/math] of [math]p[/math]-dimensional vectors of weights or coefficients [math]\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)} [/math] that map each row vector [math]\mathbf{x}_{(i)}[/math] of [math]\mathbf{X}[/math] to a new vector of principal component scores [math]\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}[/math], given by

[[math]]{t_{k}}_{(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)} \qquad \mathrm{for} \qquad i = 1,\dots,n \qquad k = 1,\dots,l [[/math]]

in such a way that the individual variables [math]\mathbf{t}_1, \dots, \mathbf{t}_l[/math] of [math]\mathbf{t}[/math] considered over the data set successively inherit the maximum possible variance from [math]\mathbf{X}[/math], with each coefficient vector [math]\mathbf{w}[/math] constrained to be a unit vector (where [math]l[/math] is usually selected to be strictly less than [math]p[/math] to reduce dimensionality).

First component

In order to maximize variance, the first weight vector w(1) thus has to satisfy

[[math]]\mathbf{w}_{(1)} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i (t_1)^2_{(i)} \right\} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w} \right)^2 \right\}[[/math]]

Equivalently, writing this in matrix form gives

[[math]]\mathbf{w}_{(1)} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{Xw} \right\|^2 \right\} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w} \right\}[[/math]]

Since [math]\mathbf{w}_{(1)}[/math] has been defined to be a unit vector, it equivalently also satisfies

[[math]]\mathbf{w}_{(1)} = \arg\max \left\{ \frac{\mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w}}{\mathbf{w}^\mathsf{T} \mathbf{w}} \right\}[[/math]]

A standard result for a positive semidefinite matrix such as [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math] is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when [math]\mathbf{w}[/math] is the corresponding eigenvector.

With [math]\mathbf{w}_{(1)}[/math] found, the first principal component of a data vector [math]\mathbf{x}_i[/math] can then be given as a score [math]t_{(i)}=\mathbf{x}_{(i)}[/math][math]w_{(1)}[/math] in the transformed co-ordinates, or as the corresponding vector in the original variables, {[math]x_{(i)} \cdot w_{(1)}[/math]} [math]w_{(1)}[/math].

Further components

The [math]k[/math]-th component can be found by subtracting the first [math]k-1[/math] principal components from [math]\mathbf{X}[/math]:

[[math]]\mathbf{\hat{X}}_k = \mathbf{X} - \sum_{s = 1}^{k - 1} \mathbf{X} \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\mathsf{T}} [[/math]]

and then finding the weight vector which extracts the maximum variance from this new data matrix

[[math]]\mathbf{w}_{(k)} = \mathop{\operatorname{arg\,max}}_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{\hat{X}}_{k} \mathbf{w} \right\|^2 \right\} = \arg\max \left\{ \tfrac{\mathbf{w}^\mathsf{T} \mathbf{\hat{X}}_{k}^\mathsf{T} \mathbf{\hat{X}}_{k} \mathbf{w}}{\mathbf{w}^{\mathsf{T}} \mathbf{w}} \right\}[[/math]]

It turns out that this gives the remaining eigenvectors of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math], with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math].

The [math]k[/math]-th principal component of a data vector [math]\mathbf{x}_{(i)}[/math] can therefore be given as a score [math]t_{k(i)} = \mathbf{x}_{(i)}\cdot \mathbf{w}_{(k)}[/math] in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i)w(k)} w(k), where [math]\mathbf{w}_{(k)}[/math] is the [math]k[/math]th eigenvector of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math]. The full principal components decomposition of [math]\mathbf{X}[/math] can therefore be given as

[[math]]\mathbf{T} = \mathbf{X} \mathbf{W}[[/math]]

where [math]\mathbf{W}[/math] is a [math]p[/math]-by-[math]p[/math] matrix of weights whose columns are the eigenvectors of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math]. Columns of [math]\mathbf{W}[/math] multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in Factor analysis.

Covariances

[math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math] itself can be recognized as proportional to the empirical sample covariance matrix of the dataset [math]\mathbf{X}^{\mathsf{T}}[/math].[1]:30–31

The sample covariance [math]Q[/math] between two of the different principal components over the dataset is given by:

[[math]]\begin{align*} Q(\mathrm{PC}_{(j)}, \mathrm{PC}_{(k)}) & \propto (\mathbf{X}\mathbf{w}_{(j)})^\mathsf{T} (\mathbf{X}\mathbf{w}_{(k)}) \\ & = \mathbf{w}_{(j)}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{w}_{(k)} \\ & = \mathbf{w}_{(j)}^\mathsf{T} \lambda_{(k)} \mathbf{w}_{(k)} \\ & = \lambda_{(k)} \mathbf{w}_{(j)}^\mathsf{T} \mathbf{w}_{(k)} \end{align*}[[/math]]

where the eigenvalue property of [math]\mathbf{w}_k[/math] has been used to move from line 2 to line 3. However eigenvectors [math]\mathbf{w}_j[/math] and [math]\mathbf{w}_k[/math] corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.

Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.

In matrix form, the empirical covariance matrix for the original variables can be written

[[math]]\mathbf{Q} \propto \mathbf{X}^\mathsf{T} \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^\mathsf{T}[[/math]]

The empirical covariance matrix between the principal components becomes

[[math]]\mathbf{W}^\mathsf{T} \mathbf{Q} \mathbf{W} \propto \mathbf{W}^\mathsf{T} \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^\mathsf{T} \mathbf{W} = \mathbf{\Lambda} [[/math]]

where [math]\mathbf{\Lambda}[/math] is the diagonal matrix of eigenvalues [math]\lambda_{(k)}[/math] of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math]. [math]\lambda_{(k)}[/math] is equal to the sum of the squares over the dataset associated with each component [math]k[/math], that is,

[[math]]\lambda_{(k)} = \sum_i t_{k(i)}^2 = \sum_i (\mathbf{x}_i \cdot \mathbf{w}_k)^2.[[/math]]

Dimensionality reduction

The transformation [math]\mathbf{T}=\mathbf{X}\mathbf{W}[/math] maps a data vector [math]\mathbf{x}_i[/math] from an original space of [math]p[/math] variables to a new space of [math]p[/math] variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first [math]L[/math] principal components, produced by using only the first [math]L[/math] eigenvectors, gives the truncated transformation

[[math]]\mathbf{T}_L = \mathbf{X} \mathbf{W}_L[[/math]]

where the matrix [math]\mathbf{T}_L[/math]now has [math]n[/math] rows but only [math]L[/math] columns. In other words, PCA learns a linear transformation [math] t = W_L^\mathsf{T} x, x \in \mathbb{R}^p, t \in \mathbb{R}^L,[/math] where the columns of [math]p \times L[/math] matrix [math]W_L[/math] form an orthogonal basis for the [math]L[/math] features (the components of representation t) that are decorrelated.[2] By construction, of all the transformed data matrices with only [math]L[/math] columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error [math]\|\mathbf{T}\mathbf{W}^{\mathsf{T}} - \mathbf{T}_L\mathbf{W}^{\mathsf{T}}_L\|_2^2[/math] or [math]\|\mathbf{X} - \mathbf{X}_L\|_2^2[/math].

Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting [math]L=2[/math] and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.

Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.

Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of [math]\mathbf{T}[/math] will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix [math]\mathbf{W}[/math], which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[3]

Singular value decomposition

The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of [math]\mathbf{X}[/math],

[[math]]\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^{\mathsf{T}}[[/math]]

Here [math]\mathbf{\Sigma}[/math] is an [math]n[/math]-by-[math]p[/math] rectangular diagonal matrix of positive numbers [math]\sigma_{(k)}[/math], called the singular values of [math]\mathbf{X}[/math]; [math]\mathbf{U}[/math] is an [math]n[/math]-by-[math]n[/math] matrix whose columns are orthogonal unit vectors of length [math]n[/math], called the left singular vectors of [math]\mathbf{X}[/math]; and [math]\mathbf{W}[/math] is a [math]p[/math]-by-[math]p[/math] matrix whose columns are orthogonal unit vectors of length [math]p[/math], called the right singular vectors of [math]\mathbf{X}[/math].
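
For concreteness, a small NumPy sketch (synthetic data, assumed already centred) of this factorization and the shapes involved:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))            # n = 6, p = 4 (synthetic, assumed centred)
U, s, Wt = np.linalg.svd(X, full_matrices=True)

Sigma = np.zeros_like(X)               # n x p rectangular diagonal matrix
np.fill_diagonal(Sigma, s)             # place the singular values on the diagonal

# U is n x n, W (returned transposed as Wt) is p x p, and X = U Sigma W^T
assert np.allclose(X, U @ Sigma @ Wt)
</syntaxhighlight>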

In terms of this factorization, the matrix [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math] can be written

[[math]]\begin{align*} \mathbf{X}^{\mathsf{T}}\mathbf{X} & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{U}^\mathsf{T} \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma} \mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\hat{\Sigma}}^2 \mathbf{W}^\mathsf{T} \end{align*}[[/math]]

where [math] \mathbf{\hat{\Sigma}} [/math] is the square diagonal matrix containing the singular values of [math]\mathbf{X}[/math] with the excess zeros removed, which satisfies [math] \mathbf{\hat{\Sigma}}^2=\mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma} [/math]. Comparison with the eigenvector factorization of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math] establishes that the right singular vectors [math]\mathbf{W}[/math] of [math]\mathbf{X}[/math] are equivalent to the eigenvectors of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math], while the singular values [math]\sigma_{(k)}[/math] of [math]\mathbf{X}[/math] are equal to the square roots of the eigenvalues [math]\lambda_{(k)}[/math] of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math].
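
This relationship can be checked numerically; the snippet below (again a sketch with synthetic, centred data) confirms that the squared singular values of [math]\mathbf{X}[/math] equal the eigenvalues of [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math]:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X = X - X.mean(axis=0)                         # centred synthetic data

_, s, _ = np.linalg.svd(X, full_matrices=False)
eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]    # eigenvalues of X^T X, descending

assert np.allclose(s**2, eigvals)              # sigma_k equals sqrt(lambda_k)
</syntaxhighlight>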

Using the singular value decomposition, the score matrix [math]\mathbf{T}[/math] can be written

[[math]]\begin{align*} \mathbf{T} & = \mathbf{X} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma} \end{align*}[[/math]]

so each column of [math]\mathbf{T}[/math] is given by one of the left singular vectors of [math]\mathbf{X}[/math] multiplied by the corresponding singular value. This form is also the polar decomposition of [math]\mathbf{T}[/math].
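
A short numerical check of this identity (synthetic, centred data; NumPy assumed):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))
X = X - X.mean(axis=0)                # centred synthetic data

U, s, Wt = np.linalg.svd(X, full_matrices=False)
T_projection = X @ Wt.T               # T = X W
T_svd = U * s                         # T = U Sigma: scale column k of U by sigma_k

assert np.allclose(T_projection, T_svd)
</syntaxhighlight>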

Efficient algorithms exist to calculate the SVD of [math]\mathbf{X}[/math] without having to form the matrix [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math], so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required.

As with the eigen-decomposition, a truncated [math]n \times L [/math] score matrix [math]\mathbf{T}_L[/math] can be obtained by considering only the first [math]L[/math] largest singular values and their singular vectors:

[[math]]\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L [[/math]]

The truncation of a matrix, whether the data matrix [math]\mathbf{X}[/math] or the score matrix [math]\mathbf{T}[/math], using a truncated singular value decomposition in this way produces a matrix that is the nearest possible matrix of rank [math]L[/math] to the original, in the sense that the difference between the two has the smallest possible Frobenius norm, a result known as the Eckart–Young theorem (1936).
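
The following sketch (illustrative only, with synthetic data) computes the rank-[math]L[/math] reconstruction [math]\mathbf{X}_L = \mathbf{T}_L\mathbf{W}_L^\mathsf{T}[/math] from the truncated SVD and verifies the Eckart–Young property that the Frobenius error equals the square root of the sum of the squared discarded singular values:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 8))
X = X - X.mean(axis=0)                           # centred synthetic data

U, s, Wt = np.linalg.svd(X, full_matrices=False)
L = 3
X_L = (U[:, :L] * s[:L]) @ Wt[:L]                # rank-L reconstruction T_L W_L^T

# Eckart-Young: the Frobenius error of the best rank-L approximation equals
# the square root of the sum of the squared discarded singular values.
err = np.linalg.norm(X - X_L, 'fro')
assert np.isclose(err, np.sqrt(np.sum(s[L:] ** 2)))
</syntaxhighlight>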

Further considerations

The singular values (in [math]\mathbf{\Sigma}[/math]) are the square roots of the eigenvalues of the matrix [math]\mathbf{X}^\mathsf{T}\mathbf{X}[/math]. Each eigenvalue is proportional to the portion of the "variance" (more correctly, of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions therefore tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements compared, when applicable, to the discrete cosine transform, and in particular to the DCT-II, which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.
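
As a small numerical illustration of the statement about eigenvalues (synthetic data; B denotes the column-centred data, matching the table of symbols below):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 6))                    # synthetic data
B = X - X.mean(axis=0)                           # deviations from the column means

eigvals = np.linalg.eigvalsh(B.T @ B)[::-1]      # eigenvalues, descending
total_sq_dist = np.sum(B ** 2)                   # sum of squared distances from the mean

assert np.isclose(eigvals.sum(), total_sq_dist)
explained = eigvals / eigvals.sum()              # share of "variance" per component
print(np.round(explained, 3))
</syntaxhighlight>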

PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (which are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
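
The sensitivity to scaling can be reproduced with a small sketch (synthetic data; the factor 100 and the noise level are arbitrary choices), comparing PCA on the raw centred data with PCA on standardized, unit-variance data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(size=500)
x2 = x1 + 0.05 * rng.normal(size=500)     # two highly correlated variables
X = np.column_stack([100.0 * x1, x2])     # the first variable rescaled by 100

B = X - X.mean(axis=0)

# PCA on the (auto)covariance matrix: the first loading vector is dominated
# by the rescaled variable.
_, _, Wt_cov = np.linalg.svd(B, full_matrices=False)

# PCA on standardized data, i.e. on the (auto)correlation matrix: the two
# loadings have roughly equal magnitude (close to 1/sqrt(2)).
Z = B / B.std(axis=0, ddof=1)
_, _, Wt_corr = np.linalg.svd(Z, full_matrices=False)

print(np.round(Wt_cov[0], 3))
print(np.round(Wt_corr[0], 3))
</syntaxhighlight>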

Mean subtraction (a.k.a. "mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.[4]
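
A brief sketch of the effect of mean centring (synthetic data; the offset and the per-column variances are arbitrary): without mean subtraction the leading direction tracks the data mean, whereas after centring it is the direction of maximum variance.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(8)
n = 200
X = rng.normal(size=(n, 3)) * np.array([1.0, 3.0, 1.0])   # largest variance along axis 2
X = X + np.array([10.0, 0.0, 0.0])                         # large mean offset along axis 1

# Without mean subtraction, the leading right singular vector tracks the mean.
_, _, Wt_raw = np.linalg.svd(X, full_matrices=False)

# With mean subtraction, it is the direction of maximum variance (axis 2 here).
B = X - X.mean(axis=0)
_, _, Wt_centred = np.linalg.svd(B, full_matrices=False)

print(np.round(np.abs(Wt_raw[0]), 2))       # roughly (1, 0, 0): the mean direction
print(np.round(np.abs(Wt_centred[0]), 2))   # roughly (0, 1, 0): the high-variance axis
</syntaxhighlight>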

Table of symbols and abbreviations

{| class="wikitable"
! Symbol !! Meaning !! Dimensions !! Indices
|-
| [math]\mathbf{X} = \{ X_{ij} \}[/math] || data matrix, consisting of the set of all data vectors, one vector per row || [math] n \times p[/math] || [math] i = 1 \ldots n [/math], [math] j = 1 \ldots p [/math]
|-
| [math]n [/math] || the number of row vectors in the data set || [math]1 \times 1[/math] || scalar
|-
| [math]p [/math] || the number of elements in each row vector (dimension) || [math]1 \times 1[/math] || scalar
|-
| [math]L [/math] || the number of dimensions in the dimensionally reduced subspace, [math] 1 \le L \le p [/math] || [math]1 \times 1[/math] || scalar
|-
| [math]\mathbf{u} = \{ u_j \}[/math] || vector of empirical means, one mean for each column j of the data matrix || [math] p \times 1[/math] || [math] j = 1 \ldots p [/math]
|-
| [math]\mathbf{s} = \{ s_j \}[/math] || vector of empirical standard deviations, one standard deviation for each column j of the data matrix || [math] p \times 1[/math] || [math] j = 1 \ldots p [/math]
|-
| [math]\mathbf{h} = \{ h_i \}[/math] || vector of all 1's || [math] 1 \times n[/math] || [math] i = 1 \ldots n [/math]
|-
| [math]\mathbf{B} = \{ B_{ij} \}[/math] || deviations from the mean of each column j of the data matrix || [math] n \times p[/math] || [math] i = 1 \ldots n [/math], [math] j = 1 \ldots p [/math]
|-
| [math]\mathbf{Z} = \{ Z_{ij} \} [/math] || z-scores, computed using the mean and standard deviation of each column j of the data matrix || [math] n \times p[/math] || [math] i = 1 \ldots n [/math], [math] j = 1 \ldots p [/math]
|-
| [math]\mathbf{C} = \{ C_{jj'} \} [/math] || covariance matrix || [math] p \times p [/math] || [math] j = 1 \ldots p [/math], [math] j' = 1 \ldots p [/math]
|-
| [math]\mathbf{R} = \{ R_{jj'} \} [/math] || correlation matrix || [math] p \times p [/math] || [math] j = 1 \ldots p [/math], [math] j' = 1 \ldots p [/math]
|-
| [math] \mathbf{V} = \{ V_{jj'} \} [/math] || matrix consisting of the set of all eigenvectors of C, one eigenvector per column || [math] p \times p [/math] || [math] j = 1 \ldots p [/math], [math] j' = 1 \ldots p [/math]
|-
| [math]\mathbf{D} = \{ D_{jj'} \} [/math] || diagonal matrix consisting of the set of all eigenvalues of C along its principal diagonal, and 0 for all other elements (note [math]\mathbf{\Lambda}[/math] used above) || [math] p \times p [/math] || [math] j = 1 \ldots p [/math], [math] j' = 1 \ldots p [/math]
|-
| [math]\mathbf{W} = \{ W_{jl} \} [/math] || matrix of basis vectors, one vector per column, where each basis vector is one of the eigenvectors of C, and where the vectors in [math]\mathbf{W}[/math] are a subset of those in V || [math] p \times L[/math] || [math] j = 1 \ldots p [/math], [math] l = 1 \ldots L[/math]
|-
| [math]\mathbf{T} = \{ T_{il} \} [/math] || matrix consisting of [math]n[/math] row vectors, where each vector is the projection of the corresponding data vector from matrix [math]\mathbf{X}[/math] onto the basis vectors contained in the columns of matrix [math]\mathbf{W}[/math] || [math] n \times L[/math] || [math] i = 1 \ldots n [/math], [math] l = 1 \ldots L[/math]
|}

References

  1. Jolliffe, I. T. (2002). Principal Component Analysis. Springer Series in Statistics. New York: Springer-Verlag. doi:10.1007/b98835. ISBN 978-0-387-95442-4.
  2. Bengio, Y. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8): 1798–1828. doi:10.1109/TPAMI.2013.50. PMID 23787338.
  3. Forkman, J., Josse, J., Piepho, H. P. (2019). "Hypothesis tests for principal component analysis when variables are standardized". Journal of Agricultural, Biological, and Environmental Statistics 24 (2): 289–308. doi:10.1007/s13253-019-00355-5.
  4. Miranda, A. A., Le Borgne, Y. A., Bontempi, G. (2008). "New Routes from Minimal Approximation Error to Principal Components". Neural Processing Letters 27 (3). Springer.

Wikipedia References