I'm having trouble running the CrawlSpider example from the Scrapy documentation. It seems to crawl fine, but I can't get it to output to a CSV file (or to any other file).
So, my question is: can I use this:
scrapy crawl dmoz -o items.csv
or do I have to create an item pipeline?
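If a pipeline is the answer, my understanding from the docs is that it would look roughly like the sketch below (the class name, output file name, and module path are my placeholders, not code from my project):

    # Hypothetical sketch of an item pipeline that writes items to CSV.
    # Class name and file name are placeholders.
    import csv

    class CsvWriterPipeline(object):

        def open_spider(self, spider):
            self.file = open('items_pipeline.csv', 'w')
            self.writer = csv.writer(self.file)
            self.writer.writerow(['title', 'link'])  # header row

        def close_spider(self, spider):
            self.file.close()

        def process_item(self, item, spider):
            # Write one row per scraped item, then pass the item along.
            self.writer.writerow([item.get('title'), item.get('link')])
            return item

As far as I can tell it would also need to be enabled in settings.py with something like ITEM_PIPELINES = {'targets.pipelines.CsvWriterPipeline': 300} (the module path is a guess at my project layout). But that seems like a lot of work if -o already does this.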
Update, now with code:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from targets.item import TargetsItem

class MySpider(CrawlSpider):
    name = 'abc'
    allowed_domains = ['ididntuseexample.com']
    start_urls = ['http://www.ididntuseexample.com']
    rules = (
        # Follow every link whose URL matches 'ididntuseexample.com'
        # (no callback is given, so follow defaults to True).
        Rule(LinkExtractor(allow=('ididntuseexample.com', ))),
    )
    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        item = TargetsItem()
        item['title'] = response.xpath('//h2/a/text()').extract()  # this pulled down data in scrapy shell
        item['link'] = response.xpath('//h2/a/@href').extract()  # this pulled down data in scrapy shell
        return item
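For reference, I haven't posted targets/item.py, but it only defines the two fields used in parse_item, essentially like this (assuming the standard scrapy.Item pattern):

    # Assumed definition of TargetsItem, matching the fields used above.
    import scrapy

    class TargetsItem(scrapy.Item):
        title = scrapy.Field()
        link = scrapy.Field()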