Hi, I am using Scrapy to fetch some HTML pages. I have written my spider, and in my spider.py file I extract the data I need from the pages. In my pipeline.py file I want to write all that data to a CSV file whose name is created dynamically from the spider's name. Below is my pipeline.py code.
pipeline.py:
    import csv
    from datetime import datetime

    from scrapy import log, signals
    from scrapy.xlib.pydispatch import dispatcher

    class examplepipeline(object):

        def __init__(self):
            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)

        def spider_opened(self, spider):
            log.msg("opened spider %s at time %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
            self.exampleCsv = csv.writer(open("%s(%s).csv" % (spider.name, datetime.now().strftime("%d/%m/%Y,%H-%M-%S")), "wb"),
                                         delimiter=',', quoting=csv.QUOTE_MINIMAL)
            self.exampleCsv.writerow(['Listing Name', 'Address', 'Pincode', 'Phone', 'Website'])

        def process_item(self, item, spider):
            log.msg("Processing item " + item['title'], level=log.DEBUG)
            self.exampleCsv.writerow([item['listing_name'].encode('utf-8'),
                                      item['address_1'].encode('utf-8'),
                                      [i.encode('utf-8') for i in item['pincode']],
                                      item['phone'].encode('utf-8'),
                                      [i.encode('utf-8') for i in item['web_site']]])
            return item

        def spider_closed(self, spider):
            log.msg("closed spider %s at %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
Result:
--- <exception caught here> ---
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 133, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/xlib/pydispatch/robustapply.py", line 47, in robustApply
return receiver(*arguments, **named)
File "/home/local/user/example/example/pipelines.py", line 19, in spider_opened
self.examplecsv = csv.writer(open("%s(%s).csv"% (spider.name,datetime.now().strftime("%d/%m/%Y,%H-%M-%S")), "wb"),
exceptions.IOError: [Errno 2] No such file or directory: 'example(27/07/2012,10-30-40).csv'
Here the spider's name is actually example.
I don't understand what is wrong with the code above. It should create the CSV file dynamically using the spider name, but instead it raises the error shown. Can anyone tell me what is going on here?
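For reference, the failing open() call can be reproduced outside Scrapy. A minimal sketch (the filename is copied from the traceback above; the helper function is just illustrative):

```python
import errno

def try_open(path):
    """Try to open `path` for writing; return the errno on failure, None on success."""
    try:
        with open(path, "w"):
            return None
    except IOError as e:  # IOError is an alias of OSError on Python 3
        return e.errno

# The '/' characters produced by strftime("%d/%m/%Y") are treated as directory
# separators, so open() first looks for a directory named 'example(27', which
# does not exist -- hence errno 2 (ENOENT, "No such file or directory").
print(try_open("example(27/07/2012,10-30-40).csv") == errno.ENOENT)  # → True
```

A date format without slashes, such as strftime("%d-%m-%Y"), produces a plain filename that open() can create normally.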