6

I have designed a crawler that contains two spiders, built with Scrapy.
These spiders run independently, fetching their input data from a database.

We run these spiders through the reactor. Since we know the reactor cannot be restarted, we
give the second spider roughly 500+ links to crawl. When we do that, we run into a port error, i.e. Scrapy binds only a single port:

Error caught on signal handler: <bound method ?.start_listening of <scrapy.telnet.TelnetConsole instance at 0x0467B440>>
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 1070, in _inlineCallbacks
result = g.send(result)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\core\engine.py", line 75, in start
result = g.send(result)
yield self.signals.send_catch_log_deferred(signal=signals.engine_started)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\signalmanager.py", line 23, in send_catch_log_deferred
return signal.send_catch_log_deferred(*a, **kw)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\utils\signal.py", line 53, in send_catch_log_deferred
*arguments, **named)
--- <exception caught here> ---
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 137, in maybeDeferred
result = f(*args, **kw)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\xlib\pydispatch\robustapply.py", line 47, in robustApply
return receiver(*arguments, **named)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\telnet.py", line 47, in start_listening
self.port = listen_tcp(self.portrange, self.host, self)
File "C:\Python27\lib\site-packages\scrapy-0.16.5-py2.7.egg\scrapy\utils\reactor.py", line 14, in listen_tcp
return reactor.listenTCP(x, factory, interface=host)
File "C:\Python27\lib\site-packages\twisted\internet\posixbase.py", line 489, in listenTCP
p.startListening()
File "C:\Python27\lib\site-packages\twisted\internet\tcp.py", line 980, in startListening
raise CannotListenError(self.interface, self.port, le)
twisted.internet.error.CannotListenError: Couldn't listen on 0.0.0.0:6073: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted.

So what is going wrong here, and what is the best way to handle this situation? Please help...

PS: I increased the port range in the settings, but it always defaults to 6073.

4

2 Answers

7

The simplest way is to disable the Telnet console by adding this to your settings.py:

EXTENSIONS = {
   'scrapy.telnet.TelnetConsole': None
}

See also http://doc.scrapy.org/en/latest/topics/settings.html#extensions for a list of the extensions enabled by default.
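If you still want the console for some runs, an alternative sketch is to turn it off with `TELNETCONSOLE_ENABLED`, or widen the `[min, max]` port range via `TELNETCONSOLE_PORT` so concurrent crawlers can each grab a free port. Both settings are documented Scrapy settings; the range below is an illustrative example, not a recommended value:

```python
# settings.py -- alternatives to removing the extension entirely.
TELNETCONSOLE_ENABLED = False        # switch the telnet console off, or...
# TELNETCONSOLE_PORT = [6023, 6123]  # ...let Scrapy pick a free port from
                                     # a wider [min, max] range instead
```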

answered 2013-07-05T14:53:42.723
2

Your problem can be solved by running fewer concurrent crawlers. Here is a recipe I wrote for making requests sequentially: this particular class runs only one crawler at a time, but the changes needed to run them in batches (say, 10 at a time) are trivial.

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor


class SequentialCrawlManager(object):
    """Start spiders sequentially"""

    def __init__(self, spider, websites):
        self.spider = spider
        self.websites = websites
        # load the project settings once; a fresh Crawler is built per site
        self.settings = get_project_settings()
        self.current_site_idx = 0

    def next_site(self):
        if self.current_site_idx < len(self.websites):
            self.crawler = Crawler(self.settings)
            # wait for one spider to finish before starting the next one
            self.crawler.signals.connect(self.next_site,
                                         signal=signals.spider_closed)
            self.crawler.configure()
            spider = self.spider()  # pass per-site arguments if desired
            self.crawler.crawl(spider)
            self.crawler.start()
            self.current_site_idx += 1
        else:
            reactor.stop()  # required for the program to terminate

    def start(self):
        log.start()
        self.next_site()
        reactor.run()  # blocking call
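The core idea above is callback chaining: each crawler, when it closes, fires the callback that launches the next one, and the last one stops the reactor. Stripped of Scrapy and Twisted, the same pattern looks like this minimal, self-contained sketch (all names here are hypothetical, not Scrapy API):

```python
class SequentialRunner(object):
    """Run jobs one at a time; each finished job triggers the next."""

    def __init__(self, jobs):
        self.jobs = list(jobs)
        self.results = []

    def _next(self):
        # In the Scrapy version this method is invoked by the
        # spider_closed signal rather than called directly.
        if self.jobs:
            job = self.jobs.pop(0)
            self.results.append(job())
            self._next()
        # else: this is where reactor.stop() would go

    def run(self):
        self._next()
        return self.results


print(SequentialRunner([lambda: "site-1 done", lambda: "site-2 done"]).run())
```

Because only one crawler is alive at a time, only one telnet console port is ever bound, which is why this also sidesteps the `CannotListenError` from the question.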

answered 2014-05-14T18:43:24.063