
I need to scrape some pages and images from these sites. When the image is a *.jpg I have no problem, but the sites also have *.svg images, and I need those too.

Has anyone done this before?

This is the shell output with the error:

2013-01-18 14:44:10-0600 [crawler] DEBUG: Image (downloaded): Downloaded image from <GET http://page/image.svg> referred in <None>
2013-01-18 14:44:10-0600 [crawler] Unhandled Error

Traceback (most recent call last):
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 576, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 381, in callback
    self._startRunCallbacks(result)
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 489, in _startRunCallbacks
    self._runCallbacks()
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 576, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
--- <exception caught here> ---
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/Scrapy-0.16.3-py2.7.egg/scrapy/contrib/pipeline/images.py", line 199, in media_downloaded
    checksum = self.image_downloaded(response, request, info)
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/Scrapy-0.16.3-py2.7.egg/scrapy/contrib/pipeline/images.py", line 252, in image_downloaded
    for key, image, buf in self.get_images(response, request, info):
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/Scrapy-0.16.3-py2.7.egg/scrapy/contrib/pipeline/images.py", line 261, in get_images
    orig_image = Image.open(StringIO(response.body))
  File "/virtualenvs/asd/local/lib/python2.7/site-packages/PIL/Image.py", line 1980, in open
    raise IOError("cannot identify image file")
exceptions.IOError: cannot identify image file
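(For anyone hitting the same traceback: `Image.open` comes from PIL, which only decodes raster formats such as JPEG, PNG and GIF; an SVG is XML text, so the images pipeline cannot identify it. A minimal sketch of sniffing SVG bytes before they reach PIL; the helper name is mine, not part of Scrapy or PIL:)

```python
def looks_like_svg(body):
    """Return True if the raw response body appears to be an SVG document."""
    head = body.lstrip()[:256].lower()
    # SVG is XML text: either the file starts with <svg directly,
    # or with an <?xml prolog followed by the <svg root element
    return head.startswith(b"<svg") or (head.startswith(b"<?xml") and b"<svg" in head)
```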

Thanks! (Sorry for my bad English.)


1 Answer


In case it helps anyone else, I was able to solve this as follows.

In item.py, add these attributes to the item:

      body = Field()
      url = Field()

In the spider (inside def parse()), add the following code:

import urllib2 

(...)

    # select each img url
    relative_urls = info.select('tr/td/a[@class="image"]/img/@src').extract()

    for relative_url in relative_urls:
        # rebuild the full-size .svg url from the thumbnail path
        relative_url = relative_url.split("svg")[0][2:-1] + ".svg"
        relative_url = ''.join(relative_url.split("/thumb")).strip()

        relative_url = "http://" + relative_url

        # download the svg and save it under its own file name
        svg_data = urllib2.urlopen(relative_url).read()
        filename = relative_url.split("/")[-1]
        with open("%s/%s" % ('/home/user/virtualenvs', filename), "wb") as f:
            f.write(svg_data)

This worked for me.
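The thumbnail-to-SVG URL rewrite above can be checked on its own. A small sketch, using a hypothetical MediaWiki-style thumbnail path (the example URL below is an assumption, not taken from the original sites):

```python
def thumb_to_svg_url(src):
    """Rewrite a protocol-relative thumbnail src to the full-size .svg URL."""
    url = src.split("svg")[0][2:-1] + ".svg"    # drop the "//" prefix and the thumb suffix
    url = ''.join(url.split("/thumb")).strip()  # remove the /thumb path segment
    return "http://" + url

# hypothetical MediaWiki-style thumbnail src
src = "//upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Example.svg/120px-Example.svg.png"
print(thumb_to_svg_url(src))
# http://upload.wikimedia.org/wikipedia/commons/a/a4/Example.svg
```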

(Obviously the code could be split between the spider and a pipeline.)
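A rough sketch of that split: the spider would fill the item's `url` and `body` fields, and a pipeline like the one below would write the bytes to disk. The class name and constructor here are my own illustration, not from the original answer, and the wiring through the project settings is omitted:

```python
import os

class SvgWriterPipeline(object):
    """Hypothetical pipeline: writes item['body'] to disk under the
    file name taken from item['url']."""

    def __init__(self, store_dir):
        self.store_dir = store_dir

    def process_item(self, item, spider):
        # derive the file name from the last path segment of the url
        filename = item['url'].rsplit('/', 1)[-1]
        with open(os.path.join(self.store_dir, filename), "wb") as f:
            f.write(item['body'])
        return item
```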

Answered 2013-01-23T14:05:53.360