I am currently trying to use the doParallel package to parallelize my RSelenium web scraper (running on Docker). I found this post (Speed up web scraping using multiplie Rselenium browsers) and have copied the answer provided by @hdharrison here:
library(RSelenium)
library(rvest)
library(magrittr)
library(foreach)
library(doParallel)
# using docker run -d -p 4445:4444 selenium/standalone-chrome:3.5.3
# in windows
URLsPar <- c("https://stackoverflow.com/", "https://github.com/",
             "http://www.bbc.com/", "http://www.google.com",
             "https://www.r-project.org/", "https://cran.r-project.org",
             "https://twitter.com/", "https://www.facebook.com/")
appHTML <- c()
(cl <- (detectCores() - 1) %>% makeCluster) %>% registerDoParallel
# open a remoteDriver for each node on the cluster
clusterEvalQ(cl, {
  library(RSelenium)
  remDr <- remoteDriver(remoteServerAddr = "192.168.99.100", port = 4445L,
                        browserName = "chrome")
  remDr$open()
})
ws <- foreach(x = 1:length(URLsPar),
              .packages = c("rvest", "magrittr", "RSelenium")) %dopar% {
  print(URLsPar[x])
  remDr$navigate(URLsPar[x])
  remDr$getTitle()[[1]]
}
> ws
[[1]]
[1] "Stack Overflow - Where Developers Learn, Share, & Build Careers"
[[2]]
[1] "The world's leading software development platform · GitHub"
[[3]]
[1] "BBC - Homepage"
[[4]]
[1] "Google"
[[5]]
[1] "R: The R Project for Statistical Computing"
[[6]]
[1] "The Comprehensive R Archive Network"
[[7]]
[1] "Twitter. It's what's happening."
[[8]]
[1] "Facebook - Log In or Sign Up"
# close browser on each node
clusterEvalQ(cl, {
remDr$close()
})
stopImplicitCluster()
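For what it's worth, the same per-node-state pattern runs fine for me when I strip out Selenium, so the foreach/doParallel mechanics themselves seem to work. This is just a minimal sketch I put together (worker count hard-coded to 2, and the `tag` variable is my own stand-in for `remDr`):

```r
# Minimal per-node-state pattern, without Selenium: give each worker its own
# persistent object via clusterEvalQ(), then use foreach() across the cluster.
library(foreach)
library(doParallel)

cl <- makeCluster(2)
registerDoParallel(cl)

# analogous to opening remDr on each node above
clusterEvalQ(cl, {
  tag <- sprintf("worker-%d", Sys.getpid())
})

res <- foreach(x = 1:4, .combine = c) %dopar% {
  # tag is visible here on each worker, just like remDr in the scraper
  x^2
}

stopCluster(cl)
res
# [1]  1  4  9 16
```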
This looks like exactly the solution I am looking for, but when I run it I get the following error message:

Error in checkForRemoteErrors(lapply(cl, recvResult)) :
  3 nodes produced errors; first error: Undefined error in httr call. httr output: Failed to connect to 192.168.99.100 port 4445: Connection refused
Here is the output of `docker ps`:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f2d62f6b293b selenium/standalone-chrome:3.5.3 "/opt/bin/entry_poin…" 36 minutes ago Up 35 minutes 0.0.0.0:4445->4444/tcp recursing_austin
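To rule out a plain networking problem, I can check from R whether the Selenium server is reachable at all. A minimal sketch (the `selenium_url()` helper is just something I wrote for this check; the host and port are the ones used in the `remoteDriver()` call above):

```r
# Sanity check: can R reach the Selenium server outside the cluster?
library(httr)

# build the standard WebDriver status endpoint URL for a given host/port
selenium_url <- function(host, port) {
  sprintf("http://%s:%d/wd/hub/status", host, port)
}

status_url <- selenium_url("192.168.99.100", 4445L)
res <- tryCatch(GET(status_url, timeout(5)), error = function(e) e)
if (inherits(res, "error")) {
  message("Cannot reach Selenium server: ", conditionMessage(res))
} else {
  message("Selenium server responded with HTTP ", status_code(res))
}
```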
I understand that I have to open a new browser for each core, and I think that is where the problem lies: when I reduce the number of cores, fewer errors are produced.
Please let me know if I can provide more details! Thanks in advance!