I am building a crawler with the CrawlSpider class in Scrapy. I suspect the link extractor keeps looping over the same links. Is there a way to limit the link extractor and reject links that have already been crawled? Can this be done without a regex in the deny argument?
My Rules look like this:
    rules = (
        # Rule(SgmlLinkExtractor(allow='profile'), follow=True),
        Rule(SgmlLinkExtractor(deny=r'feedback\.html'), callback='parse_item', follow=True),
    )
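One idea I have sketched but not tested is the Rule's process_links hook, which Scrapy calls with each batch of extracted links before they are followed. The dedupe_links method and the seen set below are names I made up:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

    class MySpider(CrawlSpider):
        # name, allowed_domains, start_urls as in my real spider
        seen = set()  # URLs already handed back to the crawler (my own bookkeeping)

        rules = (
            Rule(SgmlLinkExtractor(deny=r'feedback\.html'),
                 callback='parse_item',
                 process_links='dedupe_links',  # filter each batch of extracted links
                 follow=True),
        )

        def dedupe_links(self, links):
            # keep only links not seen on an earlier page
            fresh = [link for link in links if link.url not in self.seen]
            self.seen.update(link.url for link in fresh)
            return fresh

That said, as far as I know Scrapy's scheduler already drops duplicate requests by default via its dupe filter, so if the same pages really are being fetched repeatedly, the URLs probably differ somehow (query strings, fragments, trailing slashes).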
And my parse_item is:
    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        element = hxs.select('//table[@id="profilehead"]/tr/td/a/@href').extract()
        try:
            # append the first extracted profile link to urls.txt
            with open('urls.txt', 'a') as f:
                f.write(element[0] + '\n')
        except IndexError:
            # site doesn't have a link to another website
            pass
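On my side I could also make sure urls.txt doesn't collect repeats even if a page gets parsed more than once. A rough sketch, where written_urls is a set() I would add to the spider:

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        element = hxs.select('//table[@id="profilehead"]/tr/td/a/@href').extract()
        # record each outbound link only once
        if element and element[0] not in self.written_urls:
            self.written_urls.add(element[0])  # written_urls = set() on the spider class
            with open('urls.txt', 'a') as f:
                f.write(element[0] + '\n')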