I have written a spider that seems to be running fine, but I am not sure how to save the data it is collecting.
The spider starts at TheScienceForum, scrapes the main forum page and creates one item per forum. It then moves on to every individual forum page (passing the items along with each request), appending the title of every thread to the matching forum item. The code is as follows:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request

from individualProject.items import ProjectItem


class TheScienceForum(BaseSpider):
    name = "TheScienceForum.com"
    allowed_domains = ["www.thescienceforum.com"]
    start_urls = ["http://www.thescienceforum.com"]

    def parse(self, response):
        # Build one item per forum listed on the main page.
        Sel = HtmlXPathSelector(response)
        forumNames = Sel.select('//h2[@class="forumtitle"]/a/text()').extract()
        items = []
        for forumName in forumNames:
            item = ProjectItem()
            item['name'] = forumName
            items.append(item)

        # Follow every forum link, carrying the full list of items along in meta.
        forums = Sel.select('//h2[@class="forumtitle"]/a/@href').extract()
        itemDict = {}
        itemDict['items'] = items
        for forum in forums:
            yield Request(url=forum, meta=itemDict, callback=self.addThreadNames)

    def addThreadNames(self, response):
        # For the item whose name matches this forum, add the thread titles on this page.
        items = response.meta['items']
        Sel = HtmlXPathSelector(response)
        currentForum = Sel.select('//h1/span[@class="forumtitle"]/text()').extract()
        for item in items:
            if currentForum == item['name']:
                item['thread'] += Sel.select('//h3[@class="threadtitle"]/a/text()').extract()
        self.log(items)

        # Follow the "next page" link for this forum, still passing the items in meta.
        itemDict = {}
        itemDict['items'] = items
        threadPageNavs = Sel.select('//span[@class="prev_next"]/a[@rel="next"]/@href').extract()
        for threadPageNav in threadPageNavs:
            yield Request(url=threadPageNav, meta=itemDict, callback=self.addThreadNames)
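For reference, ProjectItem in individualProject/items.py is (roughly) nothing more than the two fields used above:

from scrapy.item import Item, Field

class ProjectItem(Item):
    name = Field()    # forum title taken from the main page
    thread = Field()  # thread titles collected for that forum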
It seems that because I never simply return the item objects (I only ever yield new requests), the data never gets persisted anywhere. I tried using the following JSON pipeline:
import json

class JsonWriterPipeline(object):

    def __init__(self):
        self.file = open('items.jl', 'wb')

    def process_item(self, item, spider):
        # write each item as one JSON line
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
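The pipeline is enabled in settings.py roughly like this (the module path reflects my project layout; older Scrapy versions take a plain list for ITEM_PIPELINES, newer ones a dict mapping the class path to an order value):

# settings.py -- register the pipeline so Scrapy actually calls it
ITEM_PIPELINES = ['individualProject.pipelines.JsonWriterPipeline']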
I also ran the spider with the following command:
scrapy crawl TheScienceForum.com -o items.json -t json
But so far nothing has worked. Where might I be going wrong?
Any ideas or constructive criticism are warmly welcomed.
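One thing I have been wondering about (but have not verified) is whether the items themselves need to be yielded at some point, for example once a forum's last page has been reached. A rough sketch of the kind of change I mean at the end of addThreadNames (hypothetical, not part of my current spider):

# hypothetical: once there is no further "next" page, yield the matching item
# so the pipeline / feed export actually receives it
threadPageNavs = Sel.select('//span[@class="prev_next"]/a[@rel="next"]/@href').extract()
if not threadPageNavs and currentForum:
    for item in items:
        if item['name'] == currentForum[0]:
            yield item
for threadPageNav in threadPageNavs:
    yield Request(url=threadPageNav, meta={'items': items}, callback=self.addThreadNames)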