
I have a fairly large data frame with about 10 million rows. It has columns x and y, and I want to compute

hypot <- function(x) {sqrt(x[1]^2 + x[2]^2)}

for each row. Using apply for this takes a lot of time (about 5 minutes, extrapolated from smaller sizes) and a lot of memory.
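For concreteness, the apply call being described is presumably of roughly this form (the exact invocation is not shown, so treat it as an assumption):

# assumed usage, not shown in the original question:
# apply hypot row-wise over the two relevant columns
result <- apply(my_data[, c("x", "y")], 1, hypot)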

That seemed like too much time and memory to me, so I tried a few different things:

  • Compiling the hypot function cut the running time by about 10%.
  • Using functions from plyr dramatically increased the running time.

What is the fastest way to do this?


3 Answers


How about with(my_data, sqrt(x^2 + y^2))?

set.seed(101)
d <- data.frame(x=runif(1e5),y=runif(1e5))

library(rbenchmark)

Two different per-row functions, one of which takes advantage of vectorization:

hypot <- function(x) sqrt(x[1]^2+x[2]^2)
hypot2 <- function(x) sqrt(sum(x^2))

Also try compiled versions of these:

library(compiler)
chypot <- cmpfun(hypot)
chypot2 <- cmpfun(hypot2)

benchmark(sqrt(d[,1]^2+d[,2]^2),
          with(d,sqrt(x^2+y^2)),
          apply(d,1,hypot),
          apply(d,1,hypot2),
          apply(d,1,chypot),
          apply(d,1,chypot2),
          replications=50)

Results:

                       test replications elapsed relative user.self sys.self
5       apply(d, 1, chypot)           50  61.147  244.588    60.480    0.172
6      apply(d, 1, chypot2)           50  33.971  135.884    33.658    0.172
3        apply(d, 1, hypot)           50  63.920  255.680    63.308    0.364
4       apply(d, 1, hypot2)           50  36.657  146.628    36.218    0.260
1 sqrt(d[, 1]^2 + d[, 2]^2)           50   0.265    1.060     0.124    0.144
2  with(d, sqrt(x^2 + y^2))           50   0.250    1.000     0.100    0.144

As expected the with() solution and the column-indexing solution à la Tyler Rinker are essentially identical; hypot2 is twice as fast as the original hypot (but still about 150 times slower than the vectorized solutions). As already pointed out by the OP, compilation doesn't help very much.

Answered 2012-12-20T19:21:26.360

While Ben Bolker's answer is comprehensive, I will point out some other reasons to avoid apply on data.frames.

apply will convert your data.frame to a matrix. This creates a copy (a waste of time and memory) and may also cause unintended type conversions.
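A quick illustration of the type-conversion pitfall, using a hypothetical mixed-type data frame (not from the original post):

df2 <- data.frame(x = 1:3, y = 4:6, label = c("a", "b", "c"))

# apply() calls as.matrix() first, so the numeric columns are silently
# coerced to character along with 'label'
typeof(as.matrix(df2))                        # "character"
try(apply(df2, 1, function(row) row["x"]^2))  # fails: non-numeric argument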

Given that you have 10 million rows of data, I would suggest looking at the data.table package, which will let you do things efficiently in terms of both memory and time.


For example, using tracemem

x <- apply(d,1, hypot2)
tracemem[0x2f2f4410 -> 0x2f31b8b8]: as.matrix.data.frame as.matrix apply 

This is even worse if you then assign to a column in d

d$x <- apply(d,1, hypot2)
tracemem[0x2f2f4410 -> 0x2ee71cb8]: as.matrix.data.frame as.matrix apply 
tracemem[0x2f2f4410 -> 0x2fa9c878]: 
tracemem[0x2fa9c878 -> 0x2fa9c3d8]: $<-.data.frame $<- 
tracemem[0x2fa9c3d8 -> 0x2fa9c1b8]: $<-.data.frame $<- 

4 copies! With 10 million rows, that will probably come back and bite you at some point.

If we use with, there is no copying involved if we assign the result to a vector:

y <- with(d, sqrt(x^2 + y^2))

But there will be if we assign to a column in the data.frame d

d$y <- with(d, sqrt(x^2 + y^2))
tracemem[0x2fa9c1b8 -> 0x2faa00d8]: 
tracemem[0x2faa00d8 -> 0x2faa0f48]: $<-.data.frame $<- 
tracemem[0x2faa0f48 -> 0x2faa0d08]: $<-.data.frame $<- 

Now, if you use data.table and := to assign by reference (no copying)

library(data.table)
DT <- data.table(d)

tracemem(DT)
[1] "<0x2d67a9a0>"

DT[,y := sqrt(x^2 + y^2)]

No copies!


Perhaps I will be corrected here, but another memory issue to consider is that sqrt(x^2 + y^2) will create 4 temporary variables internally: x^2, y^2, x^2 + y^2, and then sqrt(x^2 + y^2).

The following will be slower, but involves only two variables being created:

 DT[, rowid := .I] # previous option: DT[, rowid := seq_len(nrow(DT))]
 DT[, y2 := sqrt(x^2 + y^2), by = rowid]
Answered 2012-12-20T23:04:05.167

R is vectorised, so you could use the following, plugging in your own matrix of course:

X = t(matrix(1:4, 2, 2))^2
X
     [,1] [,2]
[1,]    1    4
[2,]    9   16

rowSums(X)^0.5

Nice and efficient :)
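Applied to a data frame like the one in the question (assuming it contains only the numeric columns x and y, as in the setup d used above), this would look something like:

# assuming d holds only the numeric columns x and y
sqrt(rowSums(d^2))   # rowSums() also accepts a numeric data frame

Note that rowSums() coerces a data frame to a matrix internally, so the with() approach from Ben Bolker's answer avoids that extra copy.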

Answered 2012-12-21T01:53:31.097