
I'm new to Python programming. My question is: how can I download several files at the same time — not one file after another, but simultaneously, from a directory on an FTP server? Right now I use this script, but I don't know how to restructure the code:

    filenames = []
    ftp.retrlines("NLST", filenames.append)
    print filenames
    print path
    for filename in filenames:
        local_filename = filename
        print filename
        print local_filename
        f = open(local_filename, "wb")

        s = ftp.size(local_filename)
        sMB = s/(1024*1024)
        print "file name: " + local_filename + "\nfile size: " + str(sMB) + " MB"
        ftp.retrbinary("RETR %s" % local_filename, f.write)
        f.close() # close each file as soon as it is downloaded
    print "\n Done :) "
    time.sleep(2)
    ftp.quit() # close the connection
    time.sleep(5)

It works fine, but it is not what I need.


1 Answer


You could use multiple threads or processes. Make sure to create a new ftplib.FTP object in each thread — FTP connections cannot be shared between threads. The simplest way (code-wise) is to use multiprocessing.Pool:

#!/usr/bin/env python
from multiprocessing.dummy import Pool # use threads
try:
    from urllib import urlretrieve
except ImportError: # Python 3
    from urllib.request import urlretrieve

def download(url):
    url = url.strip()
    try:
        return urlretrieve(url, url2filename(url)), None
    except Exception as e:
        return None, e

if __name__ == "__main__":
    p = Pool(20) # specify number of concurrent downloads
    print(p.map(download, open('urls'))) # perform parallel downloads

where urls is a file containing the ftp urls of the files to download, e.g., ftp://example.com/path/to/file, and url2filename() extracts the filename part from the url, e.g.:

import os
import posixpath
try:
    from urlparse import urlsplit
    from urllib import unquote
except ImportError: # Python 3
    from urllib.parse import urlsplit, unquote

def url2filename(url, encoding='utf-8'):
    """Return basename corresponding to url.

    >>> print url2filename('http://example.com/path/to/dir%2Ffile%C3%80?opt=1')
    fileÀ
    """
    urlpath = urlsplit(url).path 
    basename = posixpath.basename(unquote(urlpath))
    if os.path.basename(basename) != basename:
        raise ValueError(url)  # reject 'dir%5Cbasename.ext' on Windows
    return basename
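Since the question is specifically about ftp, here is a minimal sketch of the same Pool pattern talking to ftplib directly, with one new FTP connection per worker as noted above. The host name, anonymous login, and placeholder urls are assumptions for illustration:

```python
#!/usr/bin/env python
import posixpath
from ftplib import FTP
from multiprocessing.dummy import Pool  # thread pool
try:
    from urlparse import urlsplit
except ImportError:  # Python 3
    from urllib.parse import urlsplit

def split_url(url):
    """Split an ftp url into (host, remote_path)."""
    parts = urlsplit(url.strip())
    return parts.hostname, parts.path

def download(url):
    host, path = split_url(url)
    local_name = posixpath.basename(path)  # or url2filename(url) to handle %-escapes
    try:
        ftp = FTP(host)  # a new connection per worker; FTP objects are not shared
        ftp.login()      # anonymous login (assumption)
        try:
            with open(local_name, 'wb') as f:
                ftp.retrbinary('RETR %s' % path, f.write)
        finally:
            ftp.quit()
        return local_name, None
    except Exception as e:
        return None, e

# Usage (placeholder urls):
#   urls = ['ftp://example.com/path/to/file1', 'ftp://example.com/path/to/file2']
#   print(Pool(4).map(download, urls))
```

It is called the same way as the first example; each worker opens, uses, and closes its own connection.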

Answered 2013-05-11T20:17:37.707