I ran PCA on 25 variables and selected the top 7 PCs using
prc <- prcomp(pollutions, center = TRUE, scale. = TRUE, retx = TRUE)
I have then done varimax rotation on those components.
varimax7 <- varimax(prc$rotation[,1:7])
Now I wish to apply the varimax rotation to the PCA scores themselves (they are not part of the varimax object, which contains only the loadings matrix and the rotation matrix). I read that to do this you multiply the transpose of the rotation matrix by the transpose of the data, so I would have done this:
newData <- t(varimax7$rotmat) %*% t(prc$x[,1:7])
But that doesn't make sense: the dimensions of the two transposed matrices above are $7\times 7$ and $7 \times 16933$ respectively, so the product has only $7$ rows rather than $16933$ rows. Does anyone know what I am doing wrong here, or what my final line should be? Do I just need to transpose back afterwards?
"Rotations" are an approach developed in factor analysis; there, rotations (such as varimax) are applied to loadings, not to eigenvectors of the covariance matrix. Loadings are eigenvectors scaled by the square roots of the corresponding eigenvalues. After the varimax rotation, the loading vectors are no longer orthogonal (even though the rotation is called "orthogonal"), so one cannot simply compute orthogonal projections of the data onto the rotated loading directions.
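This point can be checked numerically with a small sketch in R (random toy data; all variable names here are illustrative):

```r
set.seed(1)
X <- scale(matrix(rnorm(100 * 5), 100, 5))   # toy standardized data, 100 x 5
pca <- prcomp(X)

# Loadings = eigenvectors scaled by sqrt(eigenvalues); prcomp's sdev are the
# square roots of the eigenvalues of the covariance matrix
loadings <- pca$rotation %*% diag(pca$sdev)

# Columns of the unrotated loadings are orthogonal: crossprod() is diagonal
round(crossprod(loadings), 6)

# After varimax rotation of the first three loading columns, the columns are
# no longer orthogonal: crossprod() acquires nonzero off-diagonal entries
rotated <- loadings[, 1:3] %*% varimax(loadings[, 1:3])$rotmat
round(crossprod(rotated), 6)
```

Note that the varimax rotation matrix itself is orthogonal; it is the unequal scaling of the loading columns by the singular values that makes the rotated columns non-orthogonal.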
@FTusell's answer assumes that varimax rotation is applied to the eigenvectors (not to loadings). This would be pretty unconventional. Please see my detailed account of PCA+varimax for details: Is PCA followed by a rotation (such as varimax) still PCA? Briefly, if we look at the SVD of the data matrix $X=USV^\top$, then to rotate the loadings means inserting $RR^\top$ for some rotation matrix $R$ as follows: $X=(UR)(R^\top SV^\top).$
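This identity is easy to verify numerically; a minimal sketch in R with a random data matrix and a random orthogonal $R$:

```r
set.seed(2)
X <- matrix(rnorm(20 * 4), 20, 4)
s <- svd(X)                                   # X = U S V'
U <- s$u; S <- diag(s$d); V <- s$v

# Insert R R' for a random orthogonal R: the product is unchanged
R <- qr.Q(qr(matrix(rnorm(16), 4, 4)))        # random 4 x 4 orthogonal matrix
X_rebuilt <- (U %*% R) %*% (t(R) %*% S %*% t(V))
max(abs(X - X_rebuilt))                       # numerically zero
```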
If rotation is applied to loadings (as it usually is), then there are at least three easy ways to compute varimax-rotated PCs in R:

1. They are readily available via the psych::principal function (demonstrating that this is indeed the standard approach). Note that it returns standardized scores, i.e. all PCs have unit variance.

2. One can manually use the varimax function to rotate the loadings, and then use the new rotated loadings to obtain the scores; one needs to multiply the data by the transposed pseudo-inverse of the rotated loadings (see the formulas in this answer by @ttnphns). This will also yield standardized scores.

3. One can use the varimax function to rotate the loadings, and then use the $rotmat rotation matrix to rotate the standardized scores obtained with prcomp.
All three methods yield the same result:
irisX <- iris[,1:4]      # Iris data
ncomp <- 2

pca_iris_rotated <- psych::principal(irisX, rotate="varimax", nfactors=ncomp, scores=TRUE)
print(pca_iris_rotated$scores[1:5,])  # Scores returned by principal()

pca_iris <- prcomp(irisX, center=TRUE, scale.=TRUE)
rawLoadings     <- pca_iris$rotation[,1:ncomp] %*% diag(pca_iris$sdev, ncomp, ncomp)
rotatedLoadings <- varimax(rawLoadings)$loadings
invLoadings     <- t(pracma::pinv(rotatedLoadings))
scores          <- scale(irisX) %*% invLoadings
print(scores[1:5,])                   # Scores computed via rotated loadings

scores <- scale(pca_iris$x[,1:2]) %*% varimax(rawLoadings)$rotmat
print(scores[1:5,])                   # Scores computed via rotating the scores
This yields three identical outputs:
1 -1.083475  0.9067262
2 -1.377536 -0.2648876
3 -1.419832  0.1165198
4 -1.471607 -0.1474634
5 -1.095296  1.0949536
Note that the varimax function in R uses normalize = TRUE and eps = 1e-5 parameters by default (see the documentation). One might want to change these parameters (decrease the eps tolerance and take care of Kaiser normalization) when comparing the results to other software such as SPSS. I thank @GottfriedHelms for bringing this to my attention. [Note: these parameters work when passed to the varimax function, but do not work when passed to the psych::principal function. This appears to be a bug that will be fixed.]
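For concreteness, a minimal sketch of passing these parameters to varimax (the loadings matrix here is random and purely illustrative):

```r
set.seed(4)
L <- matrix(rnorm(10 * 3), 10, 3)    # illustrative loadings matrix
# Defaults are normalize = TRUE, eps = 1e-5; a tighter tolerance and explicit
# control of Kaiser normalization can help when matching SPSS-style output
vm <- varimax(L, normalize = FALSE, eps = 1e-10)
vm$rotmat                            # 3 x 3 orthogonal rotation matrix
```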
You need to use the loadings matrix:

x <- matrix(rnorm(600), 60, 10)
prc <- prcomp(x, center=TRUE, scale.=TRUE)
varimax7 <- varimax(prc$rotation[,1:7])
newData <- scale(x) %*% varimax7$loadings
$rotmat is the orthogonal matrix that produces the new loadings from the unrotated ones.
EDIT as of Feb 12, 2015:
As rightly pointed out below by @amoeba (see also his/her previous post as well as another post from @ttnphns), this answer is not correct. Consider an $n\times m$ data matrix $X$. Its singular value decomposition is $$X = USV^T$$ where the columns of $V$ are the (normalized) eigenvectors of $X^TX$. Now, a rotation is a change of coordinates and amounts to writing the above equality as: $$X = (UST)(T^TV^T) = U^*V^*$$ with $T$ an orthogonal matrix chosen to make $V^*$ close to sparse (maximum contrast between entries, loosely speaking). If that were all, which it is not, one could post-multiply the equality above by the transpose of $V^*$ to obtain the scores $U^*$ as $X(V^*)^T$. But of course we never rotate all PCs. Rather, we consider a subset of $k<m$ components which still provides a decent rank-$k$ approximation of $X$, $$X \approx (U_kS_k)(V_k^T)$$ so the rotated solution is now $$X \approx (U_kS_kT_k)(T_k^TV_k^T) = U_k^*V_k^*$$ where now $V_k^*$ is a $k\times m$ matrix. We can no longer simply multiply $X$ by the transpose of $V_k^*$; rather, we need to resort to one of the solutions described by @amoeba.
In other words, the solution I proposed is only correct in the particular case $k=m$, where it would be useless and nonsensical.
Heartfelt thanks go to @amoeba for making clear this matter to me; I have been living with this misconception for years.
One point where the note above departs from @amoeba's post is that he/she seems to absorb $S$ into $V$ when forming the loadings $L$. I think in PCA it is more common to keep $V$'s columns of norm 1 and absorb $S$ into the principal component values. In fact, usually those are presented as linear combinations $v_i^TX$ $(i=1,\ldots,m)$ of the original (centered, perhaps scaled) variables, subject to $\|v_i\|=1$. Either way is acceptable, I think, and so is everything in between (as in biplot analysis).
FURTHER EDIT Feb. 12, 2015
As pointed out by @amoeba, even though $V_k^*$ is rectangular, the solution I proposed might still be acceptable: $V_k^*(V_k^*)^T$ is the identity matrix and $X(V_k^*)^T \approx U_k^*$. So it all seems to hinge on which definition of the scores one prefers.
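A numeric sketch of this last point in R (random data, with an arbitrary orthogonal $T_k$ standing in for the varimax rotation):

```r
set.seed(3)
n <- 50; m <- 6; k <- 3
X <- matrix(rnorm(n * m), n, m)
s <- svd(X)
Uk <- s$u[, 1:k]; Sk <- diag(s$d[1:k]); Vk <- s$v[, 1:k]

Tk      <- qr.Q(qr(matrix(rnorm(k * k), k, k)))  # arbitrary k x k rotation
Vk_star <- t(Tk) %*% t(Vk)                       # rotated k x m "loadings" block
Uk_star <- Uk %*% Sk %*% Tk                      # rotated scores

# V*_k (V*_k)' is the k x k identity, so X (V*_k)' recovers U*_k
max(abs(Vk_star %*% t(Vk_star) - diag(k)))       # ~ 0
max(abs(X %*% t(Vk_star) - Uk_star))             # ~ 0
```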