I am running Scrapyd and I see a strange issue when scheduling 4 spiders at the same time.

2012-02-06 15:27:17+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,2,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,3,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:18+0100 [Launcher] Process started: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545 
2012-02-06 15:27:19+0100 [Launcher] Process finished: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545 
2012-02-06 15:27:23+0100 [Launcher] Process started: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546 
2012-02-06 15:27:24+0100 [Launcher] Process finished: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546 
2012-02-06 15:27:28+0100 [Launcher] Process started: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547 
2012-02-06 15:27:29+0100 [Launcher] Process finished: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547 
2012-02-06 15:27:33+0100 [Launcher] Process started: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549 
2012-02-06 15:27:35+0100 [Launcher] Process finished: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549 

I have already configured Scrapyd with these settings:

[scrapyd]
max_proc = 10

Why doesn't Scrapyd run the spiders concurrently, as fast as they are scheduled?
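For reference, the four POSTs in the log above could come from a loop like this (a minimal sketch against Scrapyd's schedule.json API, written with the current requests API rather than the 0.10.1 shown in the log; the project and spider names are taken from the log, and 6800 is Scrapyd's default port):

import requests

for spider in ('spider_1', 'spider_2', 'spider_3', 'spider_4'):
    # schedule.json takes the project and spider name as form data
    resp = requests.post('http://127.0.0.1:6800/schedule.json',
                         data={'project': 'thz', 'spider': spider})
    print(resp.text)  # e.g. {"status": "ok", "jobid": "..."}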

2 Answers

I solved this problem by editing scrapyd/app.py at line 30.

Change timer = TimerService(5, poller.poll) to timer = TimerService(0.1, poller.poll).
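For context, here is roughly what that part of scrapyd/app.py looks like (a sketch based on the scrapyd source of that era; the surrounding lines may differ in your version):

from twisted.application.internet import TimerService
from scrapyd.poller import QueuePoller

# ... inside scrapyd's application setup ...
poller = QueuePoller(config)
# Poll the spider queues every 0.1 seconds instead of every 5 seconds,
# so newly scheduled jobs are launched almost immediately.
timer = TimerService(0.1, poller.poll)  # was: TimerService(5, poller.poll)
timer.setServiceParent(app)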

Edit: AliBZ's comment below about the configuration settings is a better way to change the polling frequency.
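That presumably refers to Scrapyd's poll_interval option, which sets the queue polling interval in seconds without patching the source; a sketch of the config, assuming your Scrapyd version supports it (the 0.1 value mirrors the code change above):

[scrapyd]
max_proc = 10
poll_interval = 0.1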

Answered 2012-02-13T16:03:17.327

In my experience with scrapyd, it does not run a spider the instant you schedule it. It usually waits a little while, until the current spider is up and running, and only then starts the next spider process (scrapy crawl).

So scrapyd starts the processes one after another, until the max_proc count is reached.

From your log I can see that each of your spiders runs for about 1 second, so each one finishes before the next is even launched. I think you would see all of your spiders running simultaneously if they ran for at least 30 seconds.

Answered 2012-02-06T18:11:58.823