
I'm having trouble running the CrawlSpider example from the Scrapy documentation. It seems to crawl fine, but I can't get it to output to a CSV file (or any other file).

So, my question is: can I use this:

scrapy crawl dmoz -o items.csv

Or do I have to create an item pipeline?

Update, now with code:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from targets.item import TargetsItem

class MySpider(CrawlSpider):
    name = 'abc'
    allowed_domains = ['ididntuseexample.com']
    start_urls = ['http://www.ididntuseexample.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(LinkExtractor(allow=('ididntuseexample.com', ))),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        item = TargetsItem()
        item['title'] = response.xpath('//h2/a/text()').extract()  # this pulled down data in scrapy shell
        item['link'] = response.xpath('//h2/a/@href').extract()    # this pulled down data in scrapy shell
        return item

1 Answer


Rules are the mechanism CrawlSpider uses to follow links. The links are defined with a LinkExtractor, which basically indicates which links to extract from each crawled page (such as the ones listed in start_urls). You can then pass a callback that will be called on each extracted link, or, more precisely, on the page downloaded by following that link.
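As a minimal sketch of the two common forms a rule can take (the allow patterns here are made up for illustration):

# A rule with no callback: its links are extracted and followed,
# since follow defaults to True when no callback is given.
Rule(LinkExtractor(allow=(r'category\.php', )))

# A rule with a callback: each downloaded page is passed to parse_item;
# follow defaults to False here, so set follow=True to keep crawling.
Rule(LinkExtractor(allow=(r'item\.php', )), callback='parse_item', follow=True)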

Your rule has to call parse_item. So, replace:

Rule(LinkExtractor(allow=('ididntuseexample.com', ))),

with:

Rule(LinkExtractor(allow=('ididntuseexample.com',)), callback='parse_item'),

This rule defines that parse_item will be called for every link whose href matches ididntuseexample.com. I suspect that what you want in the link extractor is not the domain, but a pattern for the links you want to follow and scrape.
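For instance, if the pages you actually want to scrape sat under a hypothetical /products/ path (allowed_domains already restricts the crawl to your domain, so the domain doesn't need to appear in allow), the rules could look something like:

rules = (
    # Rules are tried in order and a link is handled by the first
    # extractor that matches it, so the scraping rule goes first.
    Rule(LinkExtractor(allow=(r'/products/', )), callback='parse_item'),
    # Everything else is just followed in search of more links.
    Rule(LinkExtractor(), follow=True),
)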

Here's a basic example that crawls Hacker News to retrieve the title and the first lines of the first comment for all the news items on the front page.

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class HackerNewsItem(scrapy.Item):
    title = scrapy.Field()
    comment = scrapy.Field()

class HackerNewsSpider(CrawlSpider):
    name = 'hackernews'
    allowed_domains = ['news.ycombinator.com']
    start_urls = [
        'https://news.ycombinator.com/'
    ]
    rules = (
        # Follow any item link and call parse_item.
        Rule(LinkExtractor(allow=('item.*', )), callback='parse_item'),
    )

    def parse_item(self, response):
        item = HackerNewsItem()
        # Get the title
        item['title'] = response.xpath('//*[contains(@class, "title")]/a/text()').extract()
        # Get the first words of the first comment
        item['comment'] = response.xpath('(//*[contains(@class, "comment")])[1]/font/text()').extract()
        return item
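And to come back to the original question: once a rule routes pages to a callback that returns items, the command-line feed export works without any item pipeline. Assuming this spider lives inside a Scrapy project, something along the lines of:

scrapy crawl hackernews -o items.csv

will write the scraped items to items.csv (Scrapy infers the format from the extension, so -o items.json works the same way).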
answered 2014-10-23 at 21:32