46

I have a data.frame that I want to write out. My data.frame's dimensions are 256 rows x 65,536 columns. What is a faster alternative to write.csv?


6 Answers

74

data.table::fwrite() was contributed by Otto Seiskari and is available in versions 1.9.8+. Matt has made additional enhancements on top (including parallelization) and wrote an article about it. Please report any issues on the tracker.
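
For reference, a minimal call looks like this (illustrative data and file name):

library(data.table)  # needs data.table >= 1.9.8
DT <- data.table(id = 1:5, value = letters[1:5])
fwrite(DT, "DT.csv") # drop-in, much faster replacement for write.csv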

First, here's a comparison on the same dimensions used by @chase above (i.e., a very large number of columns: 65,000 columns (!) x 256 rows), together with fwrite and write_feather, so that we have some consistency across machines. Note the huge difference that compress=FALSE makes in base R. (A sketch for reproducing this setup follows the table.)

# -----------------------------------------------------------------------------
# function  | object type |  output type | compress= | Runtime | File size |
# -----------------------------------------------------------------------------
# save      |      matrix |    binary    |   FALSE   |    0.3s |    134MB  |
# save      |  data.frame |    binary    |   FALSE   |    0.4s |    135MB  |
# feather   |  data.frame |    binary    |   FALSE   |    0.4s |    139MB  |
# fwrite    |  data.table |    csv       |   FALSE   |    1.0s |    302MB  |
# save      |      matrix |    binary    |   TRUE    |   17.9s |     89MB  |
# save      |  data.frame |    binary    |   TRUE    |   18.1s |     89MB  |
# write.csv |      matrix |    csv       |   FALSE   |   21.7s |    302MB  |
# write.csv |  data.frame |    csv       |   FALSE   |  121.3s |    302MB  |
# -----------------------------------------------------------------------------
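
A minimal sketch of the setup behind this table, assuming the same runif data @chase used (see the answer further down); object and file names are illustrative, and exact timings will vary by machine:

library(data.table)
library(feather)
m  <- matrix(runif(256 * 65536), nrow = 256)  # 256 rows x 65,536 columns
df <- as.data.frame(m)
system.time(save(m,  file = "m.RData",  compress = FALSE))  # save, matrix
system.time(save(df, file = "df.RData", compress = FALSE))  # save, data.frame
system.time(write_feather(df, "df.feather"))                # feather
system.time(fwrite(as.data.table(df), "fwrite.csv"))        # fwrite (parallel)
system.time(write.csv(m,  "matrix.csv"))                    # write.csv, matrix
system.time(write.csv(df, "dataframe.csv"))                 # write.csv, data.frame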

Note that fwrite() runs in parallel. The timings shown here are on a 13-inch MacBook Pro with 2 cores and 1 thread/core (+2 virtual threads via hyperthreading), 512GB SSD, 256KB/core L2 cache, and 4MB L4 cache. Depending on your system spec, YMMV.
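
Thread usage can be checked and tuned; here's a small sketch using data.table's documented thread controls (DT is any data.table):

library(data.table)
DT <- data.table(x = 1:10, y = runif(10))
getDTthreads()                      # threads fwrite will use by default
setDTthreads(2)                     # restrict data.table to 2 threads globally
fwrite(DT, "out.csv", nThread = 2)  # or override per call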

I also reran the benchmark against relatively more likely (and bigger) data:

library(data.table)
NN <- 5e6 # at this number of rows, the .csv output is ~800Mb on my machine
set.seed(51423)
DT <- data.table(
  str1 = sample(sprintf("%010d",1:NN)), #ID field 1
  str2 = sample(sprintf("%09d",1:NN)),  #ID field 2
  # varying length string field--think names/addresses, etc.
  str3 = replicate(NN,paste0(sample(LETTERS,sample(10:30,1),T), collapse="")),
  # factor-like string field with 50 "levels"
  str4 = sprintf("%05d",sample(sample(1e5,50),NN,T)),
  # factor-like string field with 17 levels, varying length
  str5 = sample(replicate(17,paste0(sample(LETTERS, sample(15:25,1),T),
      collapse="")),NN,T),
  # lognormally distributed numeric
  num1 = round(exp(rnorm(NN,mean=6.5,sd=1.5)),2),
  # 3 binary strings
  str6 = sample(c("Y","N"),NN,T),
  str7 = sample(c("M","F"),NN,T),
  str8 = sample(c("B","W"),NN,T),
  # right-skewed (integer type)
  int1 = as.integer(ceiling(rexp(NN))),
  num2 = round(exp(rnorm(NN,mean=6,sd=1.5)),2),
  # lognormal numeric that can be positive or negative
  num3 = (-1)^sample(2,NN,T)*round(exp(rnorm(NN,mean=6,sd=1.5)),2))

# -------------------------------------------------------------------------------
# function  |   object   | out |        other args         | Runtime  | File size |
# -------------------------------------------------------------------------------
# fwrite    | data.table | csv |      quote = FALSE        |   1.7s   |  523.2MB  |
# fwrite    | data.frame | csv |      quote = FALSE        |   1.7s   |  523.2MB  |
# feather   | data.frame | bin |     no compression        |   3.3s   |  635.3MB  |
# save      | data.frame | bin |     compress = FALSE      |  12.0s   |  795.3MB  |
# write.csv | data.frame | csv |    row.names = FALSE      |  28.7s   |  493.7MB  |
# save      | data.frame | bin |     compress = TRUE       |  48.1s   |  190.3MB  |
# -------------------------------------------------------------------------------

So fwrite is about 2x faster than feather in this test. This was run on the same machine mentioned above, with fwrite running in parallel on 2 cores.

feather also seems to be quite a fast binary format, but it has no compression yet.


Here's an attempt to show how fwrite compares with respect to scale:

NB: the benchmark has been updated by running base R's save() with compress = FALSE (since feather also is not compressed).

[Plot: "Relative Speed of fwrite (turbo) vs. rest" — median time relative to fwrite (turbo), by number of rows; see the replication code below]

So fwrite is the fastest of all at these data sizes (running on 2 cores), plus it creates a .csv that can easily be viewed, inspected, and passed to grep, sed, etc.
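
For instance, the file can be sanity-checked straight from R with standard shell tools (illustrative commands, assuming a Unix-like shell):

system("head -3 fwrite_turbo.csv")       # eyeball the header and first rows
system("grep -c ',Y,' fwrite_turbo.csv") # count rows matching a pattern
system("sed -n '2p' fwrite_turbo.csv")   # print just the second line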

Code for replication:

require(data.table)
require(microbenchmark)
require(feather)
ns <- as.integer(10^seq(2, 6, length.out = 25))
DTn <- function(nn)
    data.table(
          str1 = sample(sprintf("%010d",1:nn)),
          str2 = sample(sprintf("%09d",1:nn)),
          str3 = replicate(nn,paste0(sample(LETTERS,sample(10:30,1),T), collapse="")),
          str4 = sprintf("%05d",sample(sample(1e5,50),nn,T)),
          str5 = sample(replicate(17,paste0(sample(LETTERS, sample(15:25,1),T), collapse="")),nn,T),
          num1 = round(exp(rnorm(nn,mean=6.5,sd=1.5)),2),
          str6 = sample(c("Y","N"),nn,T),
          str7 = sample(c("M","F"),nn,T),
          str8 = sample(c("B","W"),nn,T),
          int1 = as.integer(ceiling(rexp(nn))),
          num2 = round(exp(rnorm(nn,mean=6,sd=1.5)),2),
          num3 = (-1)^sample(2,nn,T)*round(exp(rnorm(nn,mean=6,sd=1.5)),2))

count <- data.table(n = ns,
                    c = c(rep(1000, 12),
                          rep(100, 6),
                          rep(10, 7)))

mbs <- lapply(ns, function(nn){
  print(nn)
  set.seed(51423)
  DT <- DTn(nn)
  microbenchmark(times = count[n==nn,c],
               write.csv=write.csv(DT, "writecsv.csv", quote=FALSE, row.names=FALSE),
               save=save(DT, file = "save.RData", compress=FALSE),
               fwrite=fwrite(DT, "fwrite_turbo.csv", quote=FALSE, sep=","),
               feather=write_feather(DT, "feather.feather"))})

png("microbenchmark.png", height=600, width=600)
par(las=2, oma = c(1, 0, 0, 0))
matplot(ns, t(sapply(mbs, function(x) {
  y <- summary(x)[,"median"]
  y/y[3]})),
  main = "Relative Speed of fwrite (turbo) vs. rest",
  xlab = "", ylab = "Time Relative to fwrite (turbo)",
  type = "l", lty = 1, lwd = 2, 
  col = c("red", "blue", "black", "magenta"), xaxt = "n", 
  ylim=c(0,25), xlim=c(0, max(ns)))
axis(1, at = ns, labels = prettyNum(ns, ","))
mtext("# Rows", side = 1, las = 1, line = 5)
legend("right", lty = 1, lwd = 3, 
       legend = c("write.csv", "save", "feather"),
       col = c("red", "blue", "magenta"))
dev.off()
Answered 2016-04-07T02:11:03.930
27

If all of your columns are of the same class, convert to a matrix before writing out; this provides a nearly 6x speedup. Also, you can look into using write.matrix() from package MASS, though it did not prove faster in this example. Maybe I didn't set something up properly:

#Fake data
m <- matrix(runif(256*65536), nrow = 256)
#AS a data.frame
system.time(write.csv(as.data.frame(m), "dataframe.csv"))
#----------
#   user  system elapsed 
# 319.53   13.65  333.76 

#As a matrix
system.time(write.csv(m, "matrix.csv"))
#----------
#   user  system elapsed 
#  52.43    0.88   53.59 

#Using write.matrix()
require(MASS)
system.time(write.matrix(m, "writematrix.csv"))
#----------
#   user  system elapsed 
# 113.58   59.12  172.75 

EDIT

To address the concern raised below that the results above are unfair to data.frames, here are some more results and timings to show that the overall message is still "convert your data object to a matrix if possible. If it's not possible, deal with it. Alternatively, if timing is critical, reconsider why you need to write out a 200MB+ file in CSV format in the first place":

#This is a data.frame
m2 <- as.data.frame(matrix(runif(256*65536), nrow = 256))
#This is still 6x slower
system.time(write.csv(m2, "dataframe.csv"))
#   user  system elapsed 
# 317.85   13.95  332.44
#This even includes the overhead in converting to as.matrix in the timing 
system.time(write.csv(as.matrix(m2), "asmatrix.csv"))
#   user  system elapsed 
#  53.67    0.92   54.67 

So, nothing has really changed. To confirm this is reasonable, consider the relative time cost of as.data.frame():

m3 <- as.matrix(m2)
system.time(as.data.frame(m3))
#   user  system elapsed 
#   0.77    0.00    0.77 

So, there is no big deal or misrepresentation of the message, as the comments below would have it. If you're still not convinced that using write.csv() on large data.frames is a bad idea performance-wise, consult the manual under Note:

write.table can be slow for data frames with large numbers (hundreds or more) of
columns: this is inevitable as each column could be of a different class and so must be
handled separately. If they are all of the same class, consider using a matrix instead.
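
A small helper in that spirit (hypothetical; the name write_csv_fast and the class check are mine, not from the manual) might look like:

# Take the fast matrix path only when every column shares a single class
write_csv_fast <- function(df, file) {
  col_classes <- vapply(df, function(col) class(col)[1], character(1))
  if (length(unique(col_classes)) == 1L) {
    write.csv(as.matrix(df), file)  # homogeneous columns: ~6x faster here
  } else {
    write.csv(df, file)             # mixed classes: no matrix shortcut
  }
}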

Finally, if you're still losing sleep over saving things faster, consider moving to a native RData object:

system.time(save(m2, file = "thisisfast.RData"))
#   user  system elapsed 
#  21.67    0.12   21.81
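
(The saved object can be restored later with base R's load(), which recreates m2 under its original name:)

load("thisisfast.RData")  # restores m2 into the current environment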
Answered 2012-05-08T20:19:58.613
12

Another option is to use the feather file format.

df <- as.data.frame(matrix(runif(256*65536), nrow = 256))

system.time(feather::write_feather(df, "df.feather"))
#>   user  system elapsed 
#>  0.237   0.355   0.617 

Feather is a binary file format designed to make reading and writing efficient. It's designed to work across languages: there are currently R and Python clients, and a Julia client is in the works.

For comparison, here's how long saveRDS takes:

system.time(saveRDS(df, "df.rds"))
#>   user  system elapsed 
#> 17.363   0.307  17.856

Now, this is a somewhat unfair comparison because saveRDS compresses the data by default, and here the data is incompressible because it's completely random. Turning compression off makes saveRDS significantly faster:

system.time(saveRDS(df, "df.rds", compress = FALSE))
#>   user  system elapsed 
#>  0.181   0.247   0.473     

Indeed, it's now slightly faster than feather. So why use feather? Well, it's usually faster than readRDS(), and you typically write data relatively few times compared to the number of times you read it:

system.time(readRDS("df.rds"))
#>   user  system elapsed 
#>  0.198   0.090   0.287 

system.time(feather::read_feather("df.feather"))
#>   user  system elapsed 
#>  0.125   0.060   0.185 
Answered 2016-04-07T18:26:05.093
3

Package fst

A more recent option for very fast reading and writing of data files is the fst package. fst generates files in a binary format.

Use write.fst(dat, "file.fst", compress = 0), where compress can range from 0 (no compression) to 100 (maximum compression). Data can be read back into R with dat = read.fst("file.fst"). Based on the timings listed on the package website, it's faster than feather, data.table, and base R's readRDS/saveRDS.
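
A minimal round trip along the lines described above (illustrative data; the compress value is just an example):

library(fst)
dat <- data.frame(x = runif(1e6), y = sample(letters, 1e6, replace = TRUE))
write.fst(dat, "file.fst", compress = 0)  # 0 = no compression, 100 = maximum
dat2 <- read.fst("file.fst")              # read the data back into R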

The package's development site warns that the fst data format is still evolving and that fst should therefore not be used for long-term data storage.

Answered 2017-11-30T07:04:09.030
0

You can also try the "readr" package's read_rds (compare data.table::fread) and write_rds (compare data.table::fwrite).

Here's a simple example with my dataset (1,133 rows and 429,499 columns):

Write the dataset:

fwrite(rankp2, file="rankp2_429499.txt", col.names=T, row.names=F, quote=F, sep="\t")
write_rds(rankp2, "rankp2_429499.rds")

Read the dataset (1,133 rows and 429,499 columns):

system.time(fread("rankp2_429499.txt", sep="\t", header=T, fill=TRUE))
#   user  system elapsed
# 42.391   0.526  42.949

system.time(read_rds("rankp2_429499.rds"))
#   user  system elapsed
#  2.157   0.388   2.547

Hope it helps.

Answered 2017-04-01T02:25:25.847
0

I think you should use fwrite().

It's much faster and helped me a lot. Its full signature is:

fwrite(x, file = "", append = FALSE, quote = "auto",
  sep = ",", sep2 = c("","|",""),
  eol = if (.Platform$OS.type=="windows") "\r\n" else "\n",
  na = "", dec = ".", row.names = FALSE, col.names = TRUE,
  qmethod = c("double","escape"),
  logical01 = getOption("datatable.logical01", FALSE),  # due to change to TRUE; see NEWS
  logicalAsInt = logical01,  # deprecated
  dateTimeAs = c("ISO","squash","epoch","write.csv"),
  buffMB = 8L, nThread = getDTthreads(),
  showProgress = interactive(),
  verbose = getOption("datatable.verbose", FALSE))
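
In practice most of those arguments can stay at their defaults; a minimal sketch (illustrative object and file names) is just:

library(data.table)
x <- data.table(a = 1:3, b = c("u", "v", "w"))
fwrite(x, "x.csv")                 # sensible defaults, runs in parallel
fwrite(x, "x.csv", append = TRUE)  # append rows; header is not repeated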

https://jangorecki.gitlab.io/data.table/library/data.table/html/fwrite.html

Answered 2018-08-29T17:05:44.407