I'm new to twisted, and I'm trying to build an asynchronous client that fetches some URLs and saves the result for each URL to a separate file. When I run the program against a small number of servers, say 10, the reactor loop ends correctly and the program terminates. But when I run it against, for example, the Alexa top 2500, the program starts fetching URLs and then never terminates. I have set a timeout, but it doesn't work; I believe there must be some open sockets that never trigger either an error or a success callback. My goal is that once every page has been fetched, or the per-connection timeout has expired, the program terminates and closes all open file descriptors.
Sorry, the code indentation was lost when I copied and pasted; I have now checked and fixed it. The code is the minimum needed to give an example. Note that my problem is that the reactor does not stop when I start the program with a large number of sites to crawl.
#!/usr/bin/env python
from pprint import pformat
import sys

import twisted.internet.defer
from twisted.internet import reactor
from twisted.internet.protocol import Protocol
from twisted.web.client import Agent
from twisted.web.http_headers import Headers


class PrinterClient(Protocol):
    def __init__(self, whenFinished, output):
        self.whenFinished = whenFinished
        self.output = output

    def dataReceived(self, data):
        self.output.write('%s' % (data,))

    def connectionLost(self, reason):
        print 'Finished:', reason.getErrorMessage()
        self.output.write('Finished: %s\n' % (reason.getErrorMessage(),))
        self.output.write('#########end########%s\n' % (reason.getErrorMessage(),))
        self.whenFinished.callback(None)


def handleResponse(r, output, url):
    output.write('############start############\n')
    output.write('%s\n' % (url,))
    output.write("version=%s\ncode=%s\nphrase='%s'\n"
                 % (r.version, r.code, r.phrase))
    for k, v in r.headers.getAllRawHeaders():
        output.write("%s: %s\n" % (k, '\n  '.join(v)))
    whenFinished = twisted.internet.defer.Deferred()
    r.deliverBody(PrinterClient(whenFinished, output))
    return whenFinished


def handleError(reason):
    print reason


def getPage(url, output):
    print "Requesting %s" % (url,)
    # connectTimeout is an Agent constructor argument; assigning
    # d._connectTimeout on the returned Deferred has no effect.
    agent = Agent(reactor, connectTimeout=10)
    d = agent.request(
        'GET',
        url,
        Headers({'User-Agent': ['Mozilla/4.0 (Windows XP 5.1) Java/1.6.0_26']}),
        None)
    d.addCallback(handleResponse, output, url)
    d.addErrback(handleError)
    return d


if __name__ == '__main__':
    semaphore = twisted.internet.defer.DeferredSemaphore(500)
    dl = list()
    ipset = set()
    queryset = set(['http://www.google.com',
                    'http://www.google1.com',
                    'http://www.google2.com'])  # up to 2500 sites
    filemap = {}
    for q in queryset:
        fpos = q.split('http://')[1].split(':')[0]
        if fpos not in filemap:
            # filemap started out empty, so indexing it directly raised
            # KeyError; open one output file per host instead.
            filemap[fpos] = open(fpos, 'w')
        dl.append(semaphore.run(getPage, q, filemap[fpos]))
    dl = twisted.internet.defer.DeferredList(dl)
    dl.addCallbacks(lambda x: reactor.stop(), handleError)
    reactor.run()
    for k in filemap:
        filemap[k].close()
Thanks, 捷波.