I am using the Scrapy framework; the following is my spider.py code:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

class Example(BaseSpider):
    name = "example"
    allowed_domains = {"http://www.example.com"}
    start_urls = [
        "http://www.example.com/servlet/av/search&SiteName=page1"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        hrefs = hxs.select('//table[@class="knxa"]/tr/td/a/@href').extract()
        # hrefs holds all the extracted href values; I copy them into
        # forwarding_hrefs as plain UTF-8 strings
        forwarding_hrefs = []
        for i in hrefs:
            forwarding_hrefs.append(i.encode('utf-8'))
        return Request('http://www.example.com/servlet/av/search&SiteName=page2',
                       meta={'forwarding_hrefs': response.meta['forwarding_hrefs']},
                       callback=self.parseJob)

    def parseJob(self, response):
        print response, ">>>>>>>>>>>"
Result:
2012-07-18 17:29:15+0530 [example] DEBUG: Crawled (200) <GET http://www.example.com/servlet/av/search&SiteName=page1> (referer: None)
2012-07-18 17:29:15+0530 [MemorialReqionalHospital] ERROR: Spider error processing <GET http://www.example.com/servlet/av/search&SiteName=page2>
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/twisted/internet/base.py", line 1167, in mainLoop
    self.runUntilCurrent()
  File "/usr/lib64/python2.7/site-packages/twisted/internet/base.py", line 789, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 361, in callback
    self._startRunCallbacks(result)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 455, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 542, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/local/user/project/example/example/spiders/example_spider.py", line 36, in parse
    meta={'forwarding_hrefs': response.meta['forwarding_hrefs']},
exceptions.KeyError: 'forwarding_hrefs'
What I am trying to do is collect all the href values from
http://www.example.com/servlet/av/search&SiteName=page1
in the parse method and pass them along, as forwarding_hrefs, in the next request to
http://www.example.com/servlet/av/search&SiteName=page2
so that I can use the list in the next method, parseJob. There I also want to append the href values found on page2 to forwarding_hrefs, then loop over forwarding_hrefs and generate a request for each href. That was my idea, but it fails with the error shown above. What is wrong with the code? As I understand it, meta is meant for copying things between requests, so can anyone tell me how to copy the forwarding_hrefs list from the parse method to the parseJob method? In short, my intention is to pass the forwarding_hrefs list from one method (parse) to the other (parseJob).
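To make the intent concrete, here is a rough sketch of the flow I am aiming for (a sketch only, not tested against my real site; the ExampleSketch/parseItem names are placeholders I made up, and the URLs and the "knxa" table class are just the ones from my code above):

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

class ExampleSketch(BaseSpider):
    name = "example_sketch"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/servlet/av/search&SiteName=page1"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        hrefs = hxs.select('//table[@class="knxa"]/tr/td/a/@href').extract()
        forwarding_hrefs = [h.encode('utf-8') for h in hrefs]
        # hand the list collected on page1 to the next callback via meta
        return Request('http://www.example.com/servlet/av/search&SiteName=page2',
                       meta={'forwarding_hrefs': forwarding_hrefs},
                       callback=self.parseJob)

    def parseJob(self, response):
        # take the list back out of meta, add page2's hrefs to it,
        # then issue one request per collected href
        forwarding_hrefs = response.meta['forwarding_hrefs']
        hxs = HtmlXPathSelector(response)
        forwarding_hrefs += [h.encode('utf-8') for h in
                             hxs.select('//table[@class="knxa"]/tr/td/a/@href').extract()]
        for href in forwarding_hrefs:
            # (if the hrefs are relative, they would need to be joined
            # with response.url before building the Request)
            yield Request(href, callback=self.parseItem)

    def parseItem(self, response):
        # placeholder: whatever each individual href page should produce
        print response, ">>>>>>>>>>>"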
I hope I have explained this clearly; sorry if not, please let me know....
Thanks in advance.