
I'm new to Python and Scrapy, so I'm not sure I've picked the best approach; my goal is to grab two (or more) different images from a single page and give each image a different file name.

How should I set up the pipelines: one combined pipeline, or separate pipelines? So far I've tried separate pipelines but can't get them to work. The first image downloads and is renamed perfectly, but the second isn't downloaded at all (error message below).

I'm practising on this page: http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING

allabolagspider.py

from urlparse import urljoin

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from allabolag.items import AllabolagItem


class allabolagspider(CrawlSpider):
    name = "allabolagspider"
    # allowed_domains = ["byralistan.se"]
    start_urls = [
        "http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING"
    ]

    pipelines = ['AllabolagPipeline', 'AllabolagPipeline2']

    rules = (
        Rule(LinkExtractor(allow="http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING"), callback='parse_link'),
    )

    def parse_link(self, response):
        for sel in response.xpath('//*[@class="reportTable"]'):
            image = AllabolagItem()
            tmptitle = response.xpath('.//tr[2]/td[2]/table//tr[13]/td/span/text()').extract()
            tmptitle.insert(0, "logo-")
            image['title'] = ["".join(tmptitle)]
            rel = response.xpath('.//tr[5]/td[2]/div[1]/div/a/img/@src').extract()
            image['image_urls'] = [urljoin(response.url, rel[0])]
            yield image

        for sel in response.xpath('//*[@class="mainWindow"]'):
            image2 = AllabolagItem()
            tmptitle2 = response.xpath('./div[2]/div[1]/ul/li[6]/a/text()').extract()
            tmptitle2.insert(0, "hej-")
            image2['title2'] = ["".join(tmptitle2)]
            rel2 = response.xpath('./div[3]/div[1]/a/img/@src').extract()
            image2['image_urls2'] = [urljoin(response.url, rel2[0])]
            yield image2

settings.py

BOT_NAME = 'allabolag'

SPIDER_MODULES = ['allabolag.spiders']
NEWSPIDER_MODULE = 'allabolag.spiders'

DOWNLOAD_DELAY = 2.5
CONCURRENT_REQUESTS = 250

USER_AGENT = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"

ITEM_PIPELINES = {
    'allabolag.pipelines.AllabolagPipeline': 1,
    'allabolag.pipelines.AllabolagPipeline2': 2,
}

IMAGES_STORE = 'Imagesfolder'

pipelines.py

import scrapy
from scrapy.pipelines.images import ImagesPipeline
import sqlite3 as lite
from allabolag import settings
from allabolag import items
con = None

class AllabolagPipeline(ImagesPipeline):
    def set_filename(self, response):
        return 'full/{0}.jpg'.format(response.meta['title'][0])

    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url, meta={'title': item['title']})

    def get_images(self, response, request, info):
        for key, image, buf in super(AllabolagPipeline, self).get_images(response, request, info):
            key = self.set_filename(response)
            yield key, image, buf

class AllabolagPipeline2(ImagesPipeline):
    def set_filename(self, response):
        return 'full/{0}.jpg'.format(response.meta['title2'][0])

    def get_media_requests(self, item, info):
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})

    def get_images(self, response, request, info):
        for key, image, buf in super(AllabolagPipeline2, self).get_images(response, request, info):
            key = self.set_filename(response)
            yield key, image, buf

Copied and pasted from the terminal:

2016-03-08 22:15:58 [scrapy] ERROR: Error processing {'image_urls': [u'http://www.allabolag.se/img/prv/2798135.JPG'],
 'images': [{'checksum': 'a567ec7c2bd99fd7eb20db42229a1bf9',
             'path': 'full/0280bf8228087cd571e86f43859552f9880e558a.jpg',
             'url': 'http://www.allabolag.se/img/prv/2798135.JPG'}],
 'title': [u'logo-UTDELNINGSADRESS']}
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-15.5.0-py2.7-macosx-10.6-intel.egg/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-1.0.3-py2.7.egg/scrapy/pipelines/media.py", line 45, in process_item
    dlist = [self._process_request(r, info) for r in requests]
  File "/Users/VickieB/Documents/Scrapy/Test1/tutorial/tandlakare/allabolag/pipelines.py", line 36, in get_media_requests
    for image_url2 in item['image_urls2']:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-1.0.3-py2.7.egg/scrapy/item.py", line 56, in __getitem__
    return self._values[key]
KeyError: 'image_urls2'

1 Answer


There may be a few errors I haven't spotted, but I can explain one of them... A KeyError usually means a dictionary lookup failed. Here it means that at some point during execution an item (a dict-like object) with no 'image_urls2' key was passed to def get_media_requests(self, item, info):. That happens because every pipeline listed in ITEM_PIPELINES processes every item the spider yields, so AllabolagPipeline2 also receives the items that only carry 'image_urls' and 'title'.

Changing get_media_requests to the following will show you when that happens and should let the script keep running:

def get_media_requests(self, item, info):
    if "image_urls2" not in item:
        print("ERROR - 'image_urls2' NOT IN ITEM/DICT")
    else:
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})

If you're lazy, or don't care about the occasional missing value, you can wrap the whole thing in a try/except like this:

def get_media_requests(self, item, info):
    try:
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})
    except Exception as e:
        print(str(e))
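
As for the original question of one combined pipeline versus two separate ones: since every pipeline in ITEM_PIPELINES sees every item, a single ImagesPipeline that only acts on the fields an item actually carries sidesteps the KeyError altogether. The code below is only a minimal, untested sketch of that idea: the CombinedImagesPipeline name is hypothetical, the image_urls/title and image_urls2/title2 fields are taken from the question, and it overrides file_path() (the standard hook for naming downloaded files) instead of get_images().

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class CombinedImagesPipeline(ImagesPipeline):
    """Hypothetical single pipeline handling both image/title field pairs."""

    # (url field, name field) pairs taken from the items in the question.
    FIELD_PAIRS = [('image_urls', 'title'), ('image_urls2', 'title2')]

    def get_media_requests(self, item, info):
        for url_field, name_field in self.FIELD_PAIRS:
            # Skip pairs the current item does not define, so items that only
            # carry 'image_urls' no longer raise a KeyError here.
            if url_field in item and name_field in item:
                for url in item[url_field]:
                    yield scrapy.Request(url, meta={'name': item[name_field][0]})

    def file_path(self, request, response=None, info=None):
        # Name the file after the value passed through request.meta
        # instead of the default hash-based path.
        return 'full/{0}.jpg'.format(request.meta['name'])

With something like this, ITEM_PIPELINES would contain only the single combined pipeline entry instead of two.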
Answered 2016-03-08T22:43:27.447