
I am a biologist. My experiment's output contains a very large number of features (stored as columns) across 563 rows. There are 8603 feature columns, which is extremely high-dimensional.

So when I try to run a PCA analysis on it in R, it gives an "out of memory" error.

I also tried running princomp on slices of the data, but that does not seem to work for our approach.

I tried using the script given at the link...

http://www.r-bloggers.com/introduction-to-feature-selection-for-bioinformaticians-using-r-correlation-matrix-filters-pca-backward-selection/

But it still does not work :(

I am trying to use the following code:

bumpus <- read.table("http://www.ndsu.nodak.edu/ndsu/doetkott/introsas/rawdata/bumpus.html", 
                     skip=20, nrows=49, 
                     col.names=c("id","total","alar","head","humerus","sternum"))

boxplot(bumpus, main="Boxplot of Bumpus' data") ## in this step it is showing the ERROR

# we first standardize the data:
bumpus.scaled <- data.frame( apply(bumpus,2,scale) )
boxplot(bumpus.scaled, main="Boxplot of standardized Bumpus' data")

pca.res <- prcomp(bumpus.scaled, retx=TRUE)
pca.res

# note:
# PC.1 is some kind of average of all the measurements 
#    => measure of size of the bird
# PC.2 has a negative weight for 'sternum' 
#    and positive weights for 'alar', 'head' and 'humerus'
#    => measure of shape of the bird

# first two principal components:
pca.res$x[,1:2]
plot(pca.res$x[,1:2], pch="", main="PC.1 and PC.2 for Bumpus' data (blue=survived, red=died)")
text(pca.res$x[,1:2], labels=c(1:49), col=c(rep("blue",21),rep("red",28)))
abline(v=0, lty=2)
abline(h=0, lty=2)

# compare to segment plot:
windows()
palette(rainbow(12, s = 0.6, v = 0.75)) 
stars(bumpus, labels=c(1:49), nrow=6, key.loc=c(20,-1), 
      main="Segment plot of Bumpus' data", draw.segment=TRUE) 

# compare to biplot:
windows()
biplot(pca.res, scale=0)
# what do the arrows mean?
# consider the arrow for sternum:
abline(0, pca.res$rotation[5,2]/pca.res$rotation[5,1])
# consider the arrow for head:
abline(0, pca.res$rotation[3,2]/pca.res$rotation[3,1])

But the second line

boxplot(bumpus, main="Boxplot of Bumpus' data") ## shows the ERROR

The error is

Error: cannot allocate vector of size 1.4 Mb

In addition: There were 27 warnings (use warnings() to see them)

Please help!


1 Answer


When the number of features is huge, or exceeds the number of observations, it is advisable to compute the principal components from the transposed dataset. That is especially true in your case, since the default approach would mean computing an 8603 x 8603 covariance matrix, which by itself already consumes about 500 MB of memory (oh well, that is not that much, but hey...).
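Just to sanity-check that figure, a quick back-of-the-envelope sketch in R:

p <- 8603
# A dense p x p matrix of doubles takes p^2 * 8 bytes; for p = 8603 that is
# roughly 565 MB, before any temporary copies that cov()/princomp() create.
p^2 * 8 / 2^20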

Assuming that the rows of your matrix X correspond to observations and its columns to features, center the data and then perform PCA on the transpose of the centered X. There will not be more eigenpairs than the number of observations anyway. Finally, multiply each resulting eigenvector by X^T. You do not need to do the latter for the eigenvalues (see below for a detailed explanation):

What you want

This code demonstrates the implementation of PCA on the transposed dataset and compares its results with those of prcomp:

pca.reduced <- function(X, center=TRUE, retX=TRUE) {
  # Note that the data must first be centered on the *original* dimensions
  # because the centering of the 'transposed covariance' is meaningless for
  # the dataset. This is also why Sigma must be computed dependent on N
  # instead of simply using cov().
  if (center) {
    mu <- colMeans(X)
    X <- sweep(X, 2, mu, `-`)
  }
  # From now on we're looking at the transpose of X:
  Xt <- t(X)
  aux <- svd(Xt)
  V <- Xt %*% aux$v
  # Normalize the columns of V.
  V <- apply(V, 2, function(x) x / sqrt(sum(x^2)))
  # Done.
  list(X = if (retX) X %*% V else NULL,
       V = V,
       sd = aux$d / sqrt(nrow(X)-1),
       mean = if (center) mu else NULL)
}

# Example data (low-dimensional, but sufficient for this example):
X <- cbind(rnorm(1000), rnorm(1000) * 5, rnorm(1000) * 3)

original   <- prcomp(X, scale=FALSE)
transposed <- pca.reduced(X)

# See what happens:    
> print(original$sdev)
[1] 4.6468136 2.9240382 0.9681769
> print(transposed$sd)
[1] 4.6468136 2.9240382 0.9681769
> 
> print(original$rotation)
               PC1           PC2          PC3
[1,] -0.0055505001  0.0067322416  0.999961934
[2,] -0.9999845292 -0.0004024287 -0.005547916
[3,]  0.0003650635 -0.9999772572  0.006734371
> print(transposed$V)
              [,1]          [,2]         [,3]
[1,]  0.0055505001  0.0067322416 -0.999961934
[2,]  0.9999845292 -0.0004024287  0.005547916
[3,] -0.0003650635 -0.9999772572 -0.006734371
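To sketch how this scales to the dimensions described in the question (563 observations, 8603 features), here is a usage example on simulated stand-in data (the matrix expr below is random, not real measurements):

# Simulated stand-in for the poster's data: 563 observations x 8603 features.
set.seed(1)
expr <- matrix(rnorm(563 * 8603), nrow = 563, ncol = 8603)

res <- pca.reduced(expr)   # internally works on the 8603 x 563 transpose
dim(res$V)                 # 8603 x 563: at most n principal axes
dim(res$X)                 # 563 x 563:  scores for each observation
plot(res$X[, 1:2], xlab = "PC1", ylab = "PC2",
     main = "First two principal components")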

Details

To see why it is possible to work with the transposed matrix, consider the following:

The general form of an eigenvalue equation is

          A x = λ x                               (1)

Without loss of generality, let M be a centered copy of the original dataset X. Substituting M^T M for A yields

          M^T M x = λ x                           (2)

Multiplying this equation by M yields

          M M^T M x = λ M x                       (3)

The subsequent substitution y = M x yields

          M M^T y = λ y                           (4)

It can already be seen that y corresponds to an eigenvector of the "covariance" matrix of the transposed dataset (note that M M^T is not actually a true covariance matrix, since the dataset X was centered along its columns rather than its rows; also, the scaling must be done by the number of samples (rows of M) rather than the number of features (columns of M, i.e. rows of M^T)).

It can also be seen that the eigenvalues of M M^T and M^T M are identical.

Finally, multiplying once more by M^T results in

          (M^T M) M^T y = λ M^T y                 (5)

where M^T M is the original covariance matrix.

From equation (5) it follows that M^T y is an eigenvector of M^T M with eigenvalue λ.
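A small numeric sanity check of this argument, sketched on a random column-centered 4 x 3 matrix:

set.seed(42)
M  <- scale(matrix(rnorm(12), nrow = 4, ncol = 3), scale = FALSE)  # centered columns
e1 <- eigen(M %*% t(M))   # eigenproblem of the "transposed" matrix (4 x 4)
e2 <- eigen(t(M) %*% M)   # eigenproblem of the original matrix     (3 x 3)

# The non-zero eigenvalues coincide (equation (4) vs. equation (2)):
round(e1$values[1:3], 6)
round(e2$values, 6)

# M^T y is an eigenvector of M^T M (equation (5)); compare after normalizing
# each column and ignoring the arbitrary sign:
V <- t(M) %*% e1$vectors[, 1:3]
V <- apply(V, 2, function(x) x / sqrt(sum(x^2)))
round(abs(V), 6)
round(abs(e2$vectors), 6)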
