I am using Python 2.7 on Mac OS X Lion 10.7.5.

I initially installed MySQLdb both with pip-2.7 install MySQL-python and by downloading the source, then running python2.7 setup.py build and python2.7 setup.py install. I tried these different methods against both 32-bit and 64-bit installs of MySQL, with the matching architecture each time, but to no avail.
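For reference, this is the quick check I run under each interpreter to see whether it can find a module at all; it is plain stdlib Python, nothing specific to my setup:

```python
# Minimal check: can the interpreter running this script import a module?
def can_import(name):
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# Run once under each python2.7 (system, MacPorts) to compare, e.g.:
# print(can_import("MySQLdb"))
```

Running it under each python2.7 on the machine shows which interpreters actually see MySQLdb.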
My solution was to install MacPorts, and then use MacPorts to install MySQL and MySQL-python (MySQLdb). I use Wing IDE to develop my code, so I switched it to the MacPorts version of Python, and import MySQLdb works there. I also switched my terminal's default python to this MacPorts version, and verified it is the default by invoking python from the command line: the correct version starts up.
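To pin down which interpreter a given context (Wing, the terminal, or Scrapy itself) is actually running, I print the interpreter path and the module search path; this is just a stdlib sketch:

```python
import sys

# Which binary is running this code, and where does it look for modules?
print(sys.executable)
for entry in sys.path:
    print(entry)
```

Comparing this output between contexts shows whether they are really the same Python.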
So now the problem: I am using Scrapy to crawl movie web pages for information. My pipeline directs the scraped data into a database using the aforementioned MySQLdb module. When I go to the command line, cd into my project, and run scrapy crawl MySpider, I get the following error:
raise ImportError, "Error loading object '%s': %s" % (path, e)
ImportError: Error loading object 'BoxOfficeMojo.pipelines.BoxofficemojoPipeline': No module named MySQLdb.cursors
I have checked and made sure that I can import MySQLdb.cursors from a python2.7 shell, so I suspect the problem is which version of Python Scrapy is using...
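Based on the traceback, Scrapy is launched through /usr/local/bin/scrapy, so that script's shebang line decides which Python runs, regardless of my shell default. A sketch of reading it (the /usr/local/bin/scrapy path is the one from my traceback):

```python
def shebang(path):
    # Return the shebang line of a launcher script, or None if it has none.
    with open(path) as f:
        first = f.readline().rstrip("\n")
    return first if first.startswith("#!") else None

# e.g. print(shebang("/usr/local/bin/scrapy"))
```

If the shebang points at the system Python rather than the MacPorts one, that would explain the mismatch.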
:::::UPDATE:::::
Here is the full traceback:
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 4, in <module>
    execute()
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 76, in _run_print_help
    func(*a, **kw)
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/Library/Python/2.7/site-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/Library/Python/2.7/site-packages/scrapy/command.py", line 33, in crawler
    self._crawler.configure()
  File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 41, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "/Library/Python/2.7/site-packages/scrapy/core/engine.py", line 63, in __init__
    self.scraper = Scraper(crawler)
  File "/Library/Python/2.7/site-packages/scrapy/core/scraper.py", line 66, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/Library/Python/2.7/site-packages/scrapy/middleware.py", line 50, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/Library/Python/2.7/site-packages/scrapy/middleware.py", line 29, in from_settings
    mwcls = load_object(clspath)
  File "/Library/Python/2.7/site-packages/scrapy/utils/misc.py", line 39, in load_object
    raise ImportError, "Error loading object '%s': %s" % (path, e)
ImportError: Error loading object 'BoxOfficeMojo.pipelines.BoxofficemojoPipeline': No module named MySQLdb.cursors
:::::UPDATE 2:::::
Here is my current path:
$PATH
-bash: /opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:~/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin: No such file or directory
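Since /opt/local/bin (MacPorts) comes first, the shell should resolve python to the MacPorts build before anything in /usr/bin. A quick way I can sanity-check the search order from inside Python:

```python
import os

# Print PATH entries in search order; the first directory containing
# an executable named "python" is the one the shell will use.
for i, d in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    print(i, d)
```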
::ALSO::
I added the line below to the code in the hope of fixing the problem (it is the location of py27-mysql, i.e. MySQLdb), but I get the same error:
import sys; sys.path.append("/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
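One thing I am unsure about is the ordering: an append only helps if it runs before the failing import, and prepending would let the MacPorts packages shadow any stale system copy. A sketch of what I mean (the path is the MacPorts site-packages directory from above):

```python
import sys

macports_site = ("/opt/local/Library/Frameworks/Python.framework"
                 "/Versions/2.7/lib/python2.7/site-packages")

# Prepend, and do it before "import MySQLdb.cursors" anywhere in the module,
# so this directory is searched first.
if macports_site not in sys.path:
    sys.path.insert(0, macports_site)
```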
::ALSO #2::
Here is the code for my pipeline. I don't know whether it works, since I keep getting the import error, but thought it might help:
from scrapy import log
from twisted.enterprise import adbapi
import time
import sys
# This must run before the MySQLdb import below to have any effect
sys.path.append("/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
import MySQLdb.cursors


class BoxofficemojoPipeline(object):

    def __init__(self):
        print('init')
        self.dbpool = adbapi.ConnectionPool('MySQLdb', db='testdb', user='testuser',
                                            passwd='test',
                                            cursorclass=MySQLdb.cursors.DictCursor,
                                            charset='utf8', use_unicode=True)

    def process_item(self, item, spider):
        print('process')
        query = self.dbpool.runInteraction(self._conditional_insert, item)
        query.addErrback(self.handle_error)
        return item

    def _conditional_insert(self, tx, item):
        print('conditional insert')
        # Create the record if it doesn't exist already.
        # This whole block runs on its own thread.
        tx.execute("select * from example_movie where url = %s", (item['url'],))
        result = tx.fetchone()
        if result:
            log.msg("Item already stored in db: %s" % item, level=log.DEBUG)
        else:
            # `release` is a reserved word in MySQL, so it is backquoted here
            tx.execute("insert into example_movie (title, url, gross, `release`) values (%s, %s, %s, %s)",
                       (item['title'].encode('utf-8'), item['url'].encode('utf-8'),
                        item['gross'].encode('utf-8'), item['release'].encode('utf-8')))
            log.msg("Item stored in db: %s" % item, level=log.DEBUG)

    def handle_error(self, e):
        print('handle_error')
        log.err(e)