I am building document-feature matrices with quanteda from different data sources. Constructing a dfm from parliamentary speech data and from Facebook data takes only a few minutes each, but compiling a dfm from a Twitter dataset takes more than 7 hours. All three datasets are roughly the same size (60 MB).

R is up to date (version 3.5.3), as are RStudio (version 1.3.923) and quanteda (version 2.0.1). I am working on a 2018 MacBook Pro (macOS version 10.14.5).

Running exactly the same code on another machine with an older version of quanteda (version 1.5.2) takes only a few minutes rather than several hours.

Unfortunately, I cannot provide a reproducible example because the data cannot be shared.

Do you have any idea what the problem might be, and how I could work around it?

Here are the sessionInfo(), the code, and the output from the problem machine, where creating the dfm takes more than 7 hours:

> sessionInfo()
R version 3.5.3 (2019-03-11)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Mojave 10.14.5

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK:   /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] quanteda_2.0.1  forcats_0.5.0   stringr_1.4.0   dplyr_0.8.5     purrr_0.3.3     readr_1.3.1     tidyr_1.0.2    
[8] tibble_3.0.0    ggplot2_3.3.0   tidyverse_1.3.0

loaded via a namespace (and not attached):
[1] tinytex_0.20       tidyselect_1.0.0   xfun_0.12          haven_2.2.0        lattice_0.20-40    colorspace_1.4-1  
[7] vctrs_0.2.4        generics_0.0.2     yaml_2.2.1         rlang_0.4.5        pillar_1.4.3       glue_1.3.2        
[13] withr_2.1.2        DBI_1.1.0          dbplyr_1.4.2       modelr_0.1.6       readxl_1.3.1       lifecycle_0.2.0   
[19] munsell_0.5.0      gtable_0.3.0       cellranger_1.1.0   rvest_0.3.5        fansi_0.4.1        broom_0.5.5       
[25] Rcpp_1.0.4         scales_1.1.0       backports_1.1.5    RcppParallel_5.0.0 jsonlite_1.6.1     fs_1.3.2          
[31] fastmatch_1.1-0    stopwords_1.0      hms_0.5.3          stringi_1.4.6      grid_3.5.3         cli_2.0.2         
[37] tools_3.5.3        magrittr_1.5       crayon_1.3.4       pkgconfig_2.0.3    ellipsis_0.3.0     Matrix_1.2-18     
[43] data.table_1.12.8  xml2_1.3.0         reprex_0.3.0       lubridate_1.7.4    assertthat_0.2.1   httr_1.4.1        
[49] rstudioapi_0.11    R6_2.4.1           nlme_3.1-145       compiler_3.5.3    

> dtmTW <- dfm(corpTW, groups = "user.id",
+              remove = stopwords("de"), 
+              tolower = TRUE,
+              remove_punct = TRUE,
+              remove_numbers = TRUE,
+              remove_twitter = TRUE, 
+              remove_url = TRUE,
+              dictionary = myDict,
+              verbose = TRUE)
Creating a dfm from a corpus input...
  ...lowercasing
  ...found 886,166 documents, 543,035 features
  ...grouping texts
  ...applying a dictionary consisting of 1 key
  ...removed 0 features
  ...complete, elapsed time:  25338 seconds.
  Finished constructing a 408 x 1 sparse dfm.
  Warning message:
 'remove_twitter' is deprecated; for FALSE, use 'what = "word"' instead. 
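
To narrow down where the time goes, I am considering splitting the call into quanteda's explicit tokens/dfm steps and timing each stage separately. This is only a sketch, not a confirmed fix: it assumes that tokens(), tokens_tolower(), tokens_remove(), tokens_lookup(), dfm() and dfm_group() together reproduce the behaviour of the combined dfm() call above, and that grouping by the docvar name "user.id" still works in dfm_group() under 2.0.1.

library(quanteda)

# Sketch: the same pipeline as above, broken into individual steps so each
# stage can be timed on its own (corpTW, myDict and the "user.id" docvar
# are the same objects as in the dfm() call above).
system.time(toksTW <- tokens(corpTW,
                             remove_punct   = TRUE,
                             remove_numbers = TRUE,
                             remove_url     = TRUE))
system.time(toksTW <- tokens_tolower(toksTW))
system.time(toksTW <- tokens_remove(toksTW, stopwords("de")))
system.time(toksTW <- tokens_lookup(toksTW, dictionary = myDict))
system.time(dtmTW  <- dfm(toksTW))
system.time(dtmTW  <- dfm_group(dtmTW, groups = "user.id"))

If one of these steps dominates the elapsed time, that should at least show which part of the pipeline changed between the two quanteda versions.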

And here are the sessionInfo(), the code, and the output from the machine that creates the same dfm in just a few minutes:

R version 3.6.1 (2019-07-05)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6

Matrix products: default
BLAS:   /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib

 Random number generation:
 RNG:     Mersenne-Twister 
 Normal:  Inversion 
 Sample:  Rounding 

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] quanteda_1.5.2  forcats_0.4.0   stringr_1.4.0   dplyr_0.8.4     purrr_0.3.3    
 [6] readr_1.3.1     tidyr_1.0.0     tibble_2.1.3    ggplot2_3.2.1   tidyverse_1.3.0

> dtmTW <- dfm(corpTW, groups = "user.id",
+              remove = stopwords("de"), 
+              tolower = TRUE,
+              remove_punct = TRUE,
+              remove_numbers = TRUE,
+              remove_twitter = TRUE, 
+              remove_url = TRUE,
+              dictionary = myDict, 
+              verbose = TRUE)
Creating a dfm from a corpus input...
   ... lowercasing
   ... found 886,166 documents, 471,981 features
   ... grouping texts
   ... applying a dictionary consisting of 1 key
   ... removed 0 features
   ... created a 408 x 1 sparse dfm
   ... complete. 
Elapsed time: 108 seconds.