
I'm trying to override some settings of a spider that I call from a script, but the settings don't seem to take effect:

from scrapy import log
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from someproject.spiders import SomeSpider

spider = SomeSpider()
overrides = {
    'LOG_ENABLED': True,
    'LOG_STDOUT': True,
}
settings = get_project_settings()
settings.overrides.update(overrides)
log.start()
crawler = CrawlerProcess(settings)
crawler.install()
crawler.configure()
crawler.crawl(spider)
crawler.start()

In the spider:

from scrapy.spider import BaseSpider

class SomeSpider(BaseSpider):

    def __init__(self):
        self.start_urls = [ 'http://somedomain.com' ]

    def parse(self, response):
        print 'some test' # won't print anything
        exit(0) # will normally exit failing the crawler

By defining LOG_ENABLED and LOG_STDOUT I would expect to see the 'some test' string in the log. Moreover, I can't seem to redirect the log to a LOG_FILE, nor get some of the other settings I've tried to take effect.

I must be doing something wrong... please help. =D


2 Answers


Use log.msg('some test') instead of print to write to the log.
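
A minimal sketch of what that might look like inside the spider, assuming the old pre-1.0 scrapy.log API that the question uses (later Scrapy versions replaced it with the standard logging module); the spider name here is made up:

from scrapy import log
from scrapy.spider import BaseSpider

class SomeSpider(BaseSpider):
    name = 'somespider'
    start_urls = ['http://somedomain.com']

    def parse(self, response):
        # log.msg goes through Scrapy's log machinery, so it honours
        # LOG_ENABLED / LOG_STDOUT / LOG_FILE, unlike a bare print
        log.msg('some test', level=log.INFO)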

Answered 2013-11-08T02:57:15.513

You may need to start Twisted's reactor after starting the crawler:

from twisted.internet import reactor
#...other imports...

#...setup crawler...
crawler.start()
reactor.run()
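
Note that on newer Scrapy (>= 1.0) this manual reactor handling isn't needed: CrawlerProcess starts and stops the reactor itself. A rough sketch of the question's script under that API (module and spider names taken from the question; Settings.set replaces the removed settings.overrides dict):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from someproject.spiders import SomeSpider

settings = get_project_settings()
settings.set('LOG_ENABLED', True)   # Settings.set() replaces settings.overrides
settings.set('LOG_STDOUT', True)
process = CrawlerProcess(settings)
process.crawl(SomeSpider)           # pass the spider class, not an instance
process.start()                     # runs the reactor; blocks until done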

Related question / more code: Scrapy crawl from script always blocks script execution after scraping
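
For the pre-1.0 API the question uses, the pattern from the docs of that era was roughly the following sketch: connect the spider_closed signal to reactor.stop so the script exits once the crawl finishes (otherwise reactor.run() blocks forever):

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

from someproject.spiders import SomeSpider

spider = SomeSpider()
settings = get_project_settings()
crawler = Crawler(settings)
# stop the reactor (and hence the script) when the spider closes
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # blocks until reactor.stop() fires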

Answered 2014-03-19T01:06:43.307