
I am downloading a file over HTTP and showing the progress with urllib and the following code, which works fine:

import sys
from urllib import urlretrieve

def dlProgress(count, blockSize, totalSize):
  percent = int(count*blockSize*100/totalSize)
  sys.stdout.write("\r" + "progress" + "...%d%%" % percent)
  sys.stdout.flush()

urlretrieve('http://example.com/file.zip', '/tmp/localfile', reporthook=dlProgress)

Now I would also like to restart the download if it is too slow (say, less than 1 MB in 15 seconds). How can I do that?


3 Answers


This should work. It computes the actual download rate and aborts if it is too low.

import sys
from urllib import urlretrieve
import time

url = "http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz" # 14.135.620 Byte
startTime = time.time()

class TooSlowException(Exception):
    pass

def convertBToMb(bytes):
    """converts Bytes to Megabytes"""
    bytes = float(bytes)
    megabytes = bytes / 1048576
    return megabytes


def dlProgress(count, blockSize, totalSize):
    global startTime

    alreadyLoaded = count*blockSize
    timePassed = time.time() - startTime
    transferRate = convertBToMb(alreadyLoaded) / timePassed # mbytes per second
    transferRate *= 60 # mbytes per minute

    percent = int(alreadyLoaded*100/totalSize)
    sys.stdout.write("\r" + "progress" + "...%d%%" % percent)
    sys.stdout.flush()

    if transferRate < 4 and timePassed > 2: # download will be slow at the beginning, hence wait 2 seconds
        print "\ndownload too slow! retrying..."
        time.sleep(1) # let's not hammer the server
        raise TooSlowException

def main():
    try:
        urlretrieve(url, '/tmp/localfile', reporthook=dlProgress)

    except TooSlowException:
        global startTime
        startTime = time.time()
        main()

if __name__ == "__main__":
    main()
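
One note on the retry: calling main() recursively works, but every restart adds a stack frame, so a connection that stays slow for a long time could eventually hit Python's recursion limit. A minimal iterative variant of the sketch above, assuming the same url, dlProgress and TooSlowException:

def main():
    global startTime
    while True:
        startTime = time.time()       # reset the rate measurement for each attempt
        try:
            urlretrieve(url, '/tmp/localfile', reporthook=dlProgress)
            return                    # download finished, stop retrying
        except TooSlowException:
            print "restarting download..."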
answered 2012-08-23T16:56:41.657

Something like this:

import signal
import time

class Timeout(Exception):
    pass

def try_one(func, t=3):
    def timeout_handler(signum, frame):
        raise Timeout()

    old_handler = signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(t)  # trigger SIGALRM after t seconds

    try:
        t1 = time.clock()
        func()
        t2 = time.clock()

    except Timeout:
        print('{} timed out after {} seconds'.format(func.__name__, t))
        return None
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

    return t2 - t1

Call try_one with the function you want to time out and the timeout in seconds:

try_one(downloader,15)
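
Here downloader is just a placeholder for whatever zero-argument function performs the download; it is not defined in the answer. A minimal sketch of how it might be wired up, assuming the urlretrieve call and dlProgress hook from the question (and noting that signal.SIGALRM is only available on Unix):

from urllib import urlretrieve

def downloader():
    # zero-argument wrapper so try_one() can call it without arguments
    urlretrieve('http://example.com/file.zip', '/tmp/localfile', reporthook=dlProgress)

# keep retrying until one attempt finishes within 15 seconds
while try_one(downloader, 15) is None:
    pass

Note that this bounds the total time of each attempt at 15 seconds rather than measuring the transfer rate, so large files would need a larger timeout.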

Alternatively, you can do this:

import socket
socket.setdefaulttimeout(15)
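
A default socket timeout fires only when no data at all arrives for 15 seconds; it does not measure throughput. With it set, a stalled urlretrieve raises an exception that the caller can catch and retry. A minimal sketch, assuming Python 2 (where socket errors, including socket.timeout, are subclasses of IOError) and the dlProgress hook from the question:

import socket
from urllib import urlretrieve

socket.setdefaulttimeout(15)    # any read that stalls for 15 seconds raises socket.timeout

while True:
    try:
        urlretrieve('http://example.com/file.zip', '/tmp/localfile', reporthook=dlProgress)
        break                   # finished without stalling
    except IOError:             # covers socket.timeout raised inside urllib
        print "download stalled, restarting..."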
answered 2012-08-23T13:27:31.333

Holy mackerel! Use the tools!

import urllib2, sys, socket, time, os

def url_tester(url = "http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz"):
    file_name = url.split('/')[-1]
    u = urllib2.urlopen(url,None,1)     # Note the timeout to urllib2...
    file_size = int(u.info().getheaders("Content-Length")[0])
    print ("\nDownloading: {} Bytes: {:,}".format(file_name, file_size))

    with open(file_name, 'wb') as f:
        file_size_dl = 0
        block_sz = 1024*4
        time_outs = 0
        status = ''     # so the final 'Done!' line has something to pad over even if nothing was read
        while True:
            try:
                buffer = u.read(block_sz)
            except socket.timeout:
                if time_outs > 3:   # file has not had activity in max seconds...
                    print "\n\n\nsorry -- try back later"
                    os.unlink(file_name)
                    raise
                else:              # start counting time outs...
                    print "\nHmmm... little issue... I'll wait a couple of seconds"
                    time.sleep(3)
                    time_outs+=1
                    continue

            if not buffer:   # end of the download             
                sys.stdout.write('\rDone!'+' '*len(status)+'\n\n')
                sys.stdout.flush()
                break

            file_size_dl += len(buffer)
            f.write(buffer)
            status = '{:20,} Bytes [{:.2%}] received'.format(file_size_dl, 
                                           file_size_dl * 1.0 / file_size)
            sys.stdout.write('\r'+status)
            sys.stdout.flush()

    return file_name 

This prints the status as expected. If I unplug the Ethernet cable, I get:

 Downloading: Python-2.7.3.tgz Bytes: 14,135,620
             827,392 Bytes [5.85%] received


sorry -- try back later

If I unplug the cable and plug it back in within 12 seconds, I get:

Downloading: Python-2.7.3.tgz Bytes: 14,135,620
             716,800 Bytes [5.07%] received
Hmmm... little issue... I'll wait a couple of seconds

Hmmm... little issue... I'll wait a couple of seconds
Done! 

The file downloaded successfully.

You can see that urllib2 supports both timeouts and reconnecting. If you disconnect and stay disconnected for 3 * 4 seconds == 12 seconds, it will time out for good and raise a fatal exception. That exception can be handled as well.
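
That fatal exception is the re-raised socket.timeout, so an outer loop can catch it and start the whole download over. A minimal sketch of such a wrapper (download_with_restarts and max_restarts are hypothetical names, assuming url_tester as defined above):

import socket
import time

def download_with_restarts(url, max_restarts=5):
    # restart the whole download if url_tester gives up after its internal retries
    for attempt in range(max_restarts):
        try:
            return url_tester(url)
        except socket.timeout:
            print "restart %d of %d..." % (attempt + 1, max_restarts)
            time.sleep(3)
    raise IOError("giving up after %d restarts" % max_restarts)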

answered 2012-08-23T15:55:40.513