I'm using Scrapy to crawl a site and fetch all of its pages, but my current rules still let through unwanted URLs, such as comment links like "http://www.example.com/some-article/comment-page-1", in addition to the main post URLs. What can I add to the rules to exclude these unwanted items? Here is my current code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something with the response
        pass
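
The only direction I've come up with so far is passing a deny pattern to the link extractor. Below is a minimal sketch of what I mean, assuming the deny argument of SgmlLinkExtractor filters out URLs matching those patterns even when they also match an allow pattern (the comment-page-\d+ regex is just my guess at the pattern for these comment links):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpiderWithDeny(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    # deny should drop any link whose URL matches comment-page-<number>,
    # even if it also matches the allow pattern
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+'], deny=[r'comment-page-\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+'], deny=[r'comment-page-\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # same parsing as before
        pass

Is deny the right mechanism for this, or is there a cleaner way to express the exclusion in the allow pattern itself?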