I'm new to both Python and Scrapy. I want to scrape data from Wikipedia, but I haven't had any success. Every time I run scrapy crawl wiki, I get: "TypeError: 'WikipediaItem' object does not support item assignment". How can I fix this so that I can successfully fetch the details from Wikipedia?
Anyway, here is my code:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from wikipedia.items import WikipediaItem

class WikipediaItem(BaseSpider):
    name = "wiki"
    allowed_domains = ["wikipedia.org"]
    start_urls = ["http://en.wikipedia.org/wiki/Main_Page"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//table[@id="mp-upper"]/tr')
        items = []
        for site in sites:
            item = WikipediaItem()
            item['title'] = site.select('.//a[@class="MainPageBG"]/text()').extract()
            item['link'] = site.select('.//a[@class="MainPageBG"]').extract()
            item['details'] = site.select('.//p/text()').extract()
            items.append(item)
        return items
This is the output I get:
2013-04-18 23:56:54+0800 [scrapy] INFO: Scrapy 0.14.4 started (bot: wikipedia)
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Enabled item pipelines:
2013-04-18 23:56:54+0800 [wiki] INFO: Spider opened
2013-04-18 23:56:54+0800 [wiki] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-04-18 23:56:54+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-04-18 23:56:56+0800 [wiki] DEBUG: Crawled (200) <GET http://en.wikipedia.org/wiki/Main_Page> (referer: None)
2013-04-18 23:56:56+0800 [wiki] ERROR: Spider error processing <GET http://en.wikipedia.org/wiki/Main_Page>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1178, in mainLoop
    self.runUntilCurrent()
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 800, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 368, in callback
    self._startRunCallbacks(result)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 464, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 551, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/jean/wiki/wikipedia/spiders/wikipedia_spider.py", line 17, in parse
    item['title'] = row.select('.//a[@class="MainPageBG"]/text()').extract()
exceptions.TypeError: 'WikipediaItem' object does not support item assignment
2013-04-18 23:56:56+0800 [wiki] INFO: Closing spider (finished)
2013-04-18 23:56:56+0800 [wiki] INFO: Dumping spider stats:
{'downloader/request_bytes': 215,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 17762,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 4, 18, 15, 56, 56, 244255),
'scheduler/memory_enqueued': 1,
'spider_exceptions/TypeError': 1,
'start_time': datetime.datetime(2013, 4, 18, 15, 56, 54, 592948)}
2013-04-18 23:56:56+0800 [wiki] INFO: Spider closed (finished)
2013-04-18 23:56:56+0800 [scrapy] INFO: Dumping global stats:
{'memusage/max': 28065792, 'memusage/startup': 28065792}
And here is my items.py:
from scrapy.item import Item, Field

class WikipediaItem(Item):
    title = Field()
    link = Field()
    details = Field()
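
For reference, my understanding from the Scrapy docs is that an Item declared like the one above supports dict-style assignment on its declared fields, which is exactly what my parse() method attempts. Here is a minimal sketch of what I expected to work, run in a plain Python shell (assuming the items.py above is importable as wikipedia.items):

from wikipedia.items import WikipediaItem

item = WikipediaItem()
item['title'] = [u'Some title']        # 'title' is a declared Field, so assignment should be allowed
item['link'] = [u'/wiki/Some_page']
item['details'] = [u'Some details']
print item                             # prints the populated item

So I don't understand why the same assignment fails inside my spider.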