
I am trying to download all of the images on a particular Wikipedia page. Here is the code snippet:

from bs4 import BeautifulSoup as bs
import urllib2
import urlparse
from urllib import urlretrieve

site="http://en.wikipedia.org/wiki/Pune"
hdr= {'User-Agent': 'Mozilla/5.0'}
outpath=""
req = urllib2.Request(site,headers=hdr)
page = urllib2.urlopen(req)
soup =bs(page)
tag_image=soup.findAll("img")
for image in tag_image:
    print "Image: %(src)s" % image
    urlretrieve(image["src"], "/home/mayank/Desktop/test")

After running the program, I get the following traceback:

Image: //upload.wikimedia.org/wikipedia/commons/thumb/0/04/Pune_Montage.JPG/250px-Pune_Montage.JPG
Traceback (most recent call last):
  File "download_images.py", line 15, in <module>
    urlretrieve(image["src"], "/home/mayank/Desktop/test")
  File "/usr/lib/python2.7/urllib.py", line 93, in urlretrieve
    return _urlopener.retrieve(url, filename, reporthook, data)
  File "/usr/lib/python2.7/urllib.py", line 239, in retrieve
    fp = self.open(url, data)
  File "/usr/lib/python2.7/urllib.py", line 207, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.7/urllib.py", line 460, in open_file
    return self.open_ftp(url)
  File "/usr/lib/python2.7/urllib.py", line 543, in open_ftp
    ftpwrapper(user, passwd, host, port, dirs)
  File "/usr/lib/python2.7/urllib.py", line 864, in __init__
    self.init()
  File "/usr/lib/python2.7/urllib.py", line 870, in init
    self.ftp.connect(self.host, self.port, self.timeout)
  File "/usr/lib/python2.7/ftplib.py", line 132, in connect
    self.sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib/python2.7/socket.py", line 571, in create_connection
    raise err
IOError: [Errno ftp error] [Errno 111] Connection refused

Can someone help me figure out what is causing this error?


1 Answer


`//` is shorthand for "the current protocol" (a protocol-relative URL). Wikipedia uses this shorthand in its `src` attributes, so you have to make the scheme explicit yourself: prepend `http:`, otherwise `urlretrieve` sees a scheme-less `//host/path` URL and falls back to FTP, which is what produces the `Connection refused` error:

for image in tag_image:
    src = "http:" + image["src"]  # prepend the scheme to the protocol-relative URL
    urlretrieve(src, "/home/mayank/Desktop/test")
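A more robust way to normalize these `src` values is to resolve each one against the page URL with `urljoin`, which handles protocol-relative, absolute, and relative paths uniformly instead of string-concatenating the scheme. A minimal sketch (Python 3, where `urlparse` lives in `urllib.parse`; the second sample path is a made-up illustration, not taken from the actual page):

```python
from urllib.parse import urljoin

page_url = "http://en.wikipedia.org/wiki/Pune"

# urljoin resolves scheme-relative ("//host/..."), absolute, and
# page-relative src values against the page URL in one step.
srcs = [
    "//upload.wikimedia.org/wikipedia/commons/thumb/0/04/Pune_Montage.JPG/250px-Pune_Montage.JPG",
    "/static/images/logo.png",  # hypothetical relative src, for illustration
]
for src in srcs:
    print(urljoin(page_url, src))
    # -> http://upload.wikimedia.org/wikipedia/commons/thumb/0/04/Pune_Montage.JPG/250px-Pune_Montage.JPG
    # -> http://en.wikipedia.org/static/images/logo.png
```

The resolved URLs can then be passed straight to `urlretrieve` (or `urllib.request.urlretrieve` in Python 3).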
answered 2013-04-20T08:54:57.333