For each entry listed on the page below (the CSRC common_list page read in the code), I need to open the entry and then crawl the URL of the Excel file linked at the bottom left of the entry's page.
How can I do this with an R web-scraping package such as rvest? Many thanks in advance.
library(rvest)
# Start by reading an HTML page with read_html():
common_list <- read_html("http://www.csrc.gov.cn/csrc/c100121/common_list.shtml")
common_list %>%
  # extract the anchor (link) nodes
  rvest::html_nodes("a") %>%
  # extract the link text
  rvest::html_text() -> webtxt
# inspect
head(webtxt)
My first question: how do I set html_nodes correctly so that I get the URL of each entry's page?
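For reference, a minimal sketch of one way this could look using rvest/xml2 only. The bare "a" selector and the .xls/.xlsx suffix filter are guesses, since the page markup is not shown here, and would likely need to be narrowed to the actual list and download-link containers:

library(rvest)
library(xml2)

base_url    <- "http://www.csrc.gov.cn/csrc/c100121/common_list.shtml"
common_list <- read_html(base_url)

# Step 1: take the href attribute (not the link text) of every anchor and
# resolve relative paths against the page URL.
hrefs <- common_list %>%
  rvest::html_nodes("a") %>%        # all anchors; ideally narrow this selector to the entry list
  rvest::html_attr("href")
hrefs      <- hrefs[!is.na(hrefs)]
entry_urls <- xml2::url_absolute(hrefs, base = base_url)
entry_urls <- entry_urls[grepl("^http", entry_urls)]

# Step 2 (demo on the first few entries only): open each entry page and keep
# the links whose href looks like an Excel file.
excel_urls <- lapply(head(entry_urls, 3), function(u) {
  links <- read_html(u) %>%
    rvest::html_nodes("a") %>%
    rvest::html_attr("href")
  links <- links[!is.na(links) & grepl("\\.xlsx?$", links, ignore.case = TRUE)]
  xml2::url_absolute(links, base = u)
})

head(entry_urls)
excel_urls

Note that if the entries are injected by JavaScript (which the move to RSelenium in the update suggests), read_html() on the list URL will not see them; the same html_attr("href") step would then have to be run on the HTML returned by the Selenium session (e.g. via remDr$getPageSource()).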
Update:
> driver
$client
[1] "No sessionInfo. Client browser is mostly likely not opened."
$server
PROCESS 'file105483d2b3a.bat', running, pid 37512.
> remDr
$remoteServerAddr
[1] "localhost"
$port
[1] 4567
$browserName
[1] "chrome"
$version
[1] ""
$platform
[1] "ANY"
$javascript
[1] TRUE
$nativeEvents
[1] TRUE
$extraCapabilities
list()
When I run remDr$navigate(url), I get:
Error in checkError(res) :
Undefined error in httr call. httr output: length(url) == 1 is not TRUE
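As a hedged note on this error: the httr check length(url) == 1 is not TRUE usually means navigate() was handed something other than a single URL string (for example a whole character vector, or a NULL/empty value), and the "$client" line above also hints that no browser session has been opened yet. A minimal sketch, assuming remDr is the remoteDriver printed above and entry_urls is the (illustrative) vector of links from the rvest step:

library(RSelenium)

# Assumptions: remDr is the remoteDriver shown in the update; entry_urls is
# the character vector of entry links scraped earlier (illustrative name).
remDr$open()                 # open a browser session first ("No sessionInfo" suggests none is open)

stopifnot(is.character(entry_urls), length(entry_urls) >= 1)

for (u in entry_urls) {
  remDr$navigate(u)          # navigate() takes exactly one URL string, hence the length(url) == 1 check
  Sys.sleep(1)               # give each page a moment to load
}

remDr$close()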