
Normally, downloading a file from a server looks like this:

import urllib2

fp = open(file, 'wb')
req = urllib2.urlopen(url)
for line in req:
    fp.write(line)
fp.close()

With this approach the download simply has to run to completion; if the process stops or is interrupted, the download has to start over from the beginning. So I would like my program to be able to pause and resume downloads. How can I implement that? Thanks.


2 Answers


The web server must support the Range request header in order to allow pausing and resuming downloads:

Range: <unit>=<range-start>-<range-end>

The client can then issue a request with the Range header if it only wants to retrieve the specified bytes, for example:

Range: bytes=0-1023

In that case the server may respond with a plain 200 OK, indicating that it does not support Range requests, or it may respond with 206 Partial Content, like this:

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Length: 1024
Content-Range: bytes 0-1023/2048

Response body.... the first 1024 bytes of the file
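
In urllib2 terms (the module used in the question), such a range request could look roughly like the sketch below; the URL is only a placeholder, and a server that ignores the header will simply answer 200 OK with the whole file:

import urllib2

# Hypothetical URL, used here only for illustration.
url = "http://example.com/bigfile.zip"

# Ask the server for the first 1024 bytes of the file.
req = urllib2.Request(url, headers={"Range": "bytes=0-1023"})
resp = urllib2.urlopen(req)

if resp.getcode() == 206:
    # The server honoured the Range header and sent only the requested slice.
    print "Partial content:", resp.info().getheader("Content-Range")
else:
    # A 200 OK means the Range header was ignored and the whole file was sent.
    print "Range not supported; the full file was returned"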

See:

answered 2012-09-03T09:18:35.867

In Python you can do it like this:

import urllib, os

class myURLOpener(urllib.FancyURLopener):
    """Create sub-class in order to overide error 206.  This error means a
       partial file is being sent,
       which is ok in this case.  Do nothing with this error.
    """
    def http_error_206(self, url, fp, errcode, errmsg, headers, data=None):
        pass

loop = 1
dlFile = "2.6Distrib.zip"
existSize = 0
myUrlclass = myURLOpener()
if os.path.exists(dlFile):
    outputFile = open(dlFile,"ab")
    existSize = os.path.getsize(dlFile)
    #If the file exists, then only download the remainder
    myUrlclass.addheader("Range","bytes=%s-" % (existSize))
else:
    outputFile = open(dlFile,"wb")

webPage = myUrlclass.open("http://localhost/%s" % dlFile)

#If the file exists, but we already have the whole thing, don't download again
if int(webPage.headers['Content-Length']) == existSize:
    loop = 0
    print "File already downloaded"

numBytes = 0
while loop:
    data = webPage.read(8192)
    if not data:
        break
    outputFile.write(data)
    numBytes = numBytes + len(data)

webPage.close()
outputFile.close()

for k,v in webPage.headers.items():
    print k, "=", v
print "copied", numBytes, "bytes from", webPage.url

You can find the source here: http://code.activestate.com/recipes/83208-resuming-download-of-a-file/

It only works for HTTP downloads.

answered 2012-09-03T08:04:54.037