So I ended up comparing the two approaches with microbenchmark, because this one felt a bit odd to use:
# needs library(rgdal), library(rgeos), and library(dplyr)
mc <- montreal %>%
  gCentroid(byid=TRUE) %>%
  data.frame %>%
  bind_cols(., data_frame(name=montreal[["NOM"]]))
I tried it on two different datasets:
world <- readOGR("data/world.json", "OGRGeoJSON")

wmbm = microbenchmark(
  base = world %>%
    gCentroid(byid=TRUE) %>%
    data.frame %>%
    cbind(., name=world[["name"]]),
  dplyr = world %>%
    gCentroid(byid=TRUE) %>%
    data.frame %>%
    bind_cols(., data_frame(name=world[["name"]])),
  times=100
)
The microbenchmark results:
Unit: milliseconds
  expr      min       lq     mean   median       uq      max neval
  base 13.78396 14.08301 14.21357 14.12023 14.16435 20.04362   100
 dplyr 13.87098 14.10680 14.25245 14.14330 14.18020 17.63248   100
montreal <- readOGR("data/limadmin.json", "OGRGeoJSON")

lmbm = microbenchmark(
  base = montreal %>%
    gCentroid(byid=TRUE) %>%
    data.frame %>%
    cbind(., name=montreal[["NOM"]]),
  dplyr = montreal %>%
    gCentroid(byid=TRUE) %>%
    data.frame %>%
    bind_cols(., data_frame(name=montreal[["NOM"]])),
  times=100
)
The microbenchmark results:
Unit: milliseconds
  expr      min       lq     mean   median       uq      max neval
  base 1.597957 1.628723 1.736709 1.651747 1.686554 3.091738   100
 dplyr 1.621092 1.642678 1.756978 1.659041 1.739707 3.751866   100
No real conclusion to draw here: the median timings differ by only a few hundredths of a millisecond in both tests. Even though it looks marginally slower, I think I'll stick with the dplyr-esque solution for consistency.