
I'm starting out with Scrapy in order to download files from websites automatically. As a test, I want to download the jpg files from this website. My code is based on the introductory tutorial and the Files and Images Pipeline tutorial on the Scrapy website.

Here is my code:

In settings.py, I added the following lines:

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}

IMAGES_STORE = '/home/lucho/Scrapy/jpg/'

My items.py file is:

import scrapy

class JpgItem(scrapy.Item):
    image_urls = scrapy.Field()  # URLs to download; filled in by the spider
    images = scrapy.Field()      # download results; filled in by ImagesPipeline

My pipeline file is:

import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem

class JpgPipeline(object):
    def process_item(self, item, spider):
        return item

    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item

Finally, my spider file is:

import scrapy
from ..items import JpgItem

class JpgSpider(scrapy.Spider):
    name = "jpg"
    allowed_domains = ["http://www.kevinsmedia.com"]
    start_urls = [
        "http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/"
    ]

def init_request(self):
    #"""This function is called before crawling starts."""
    return Request(url=self.login_page, callback=self.parse)

def parse(self, response):
    item = JpgItem()
    return item

Ideally, I would like to download all the jpgs without having to specify the exact URL of every file I need.

The output of "scrapy crawl jpg" is:

2015-12-08 19:19:30 [scrapy] INFO: Scrapy 1.0.3.post6+g2d688cd started (bot: jpg)
2015-12-08 19:19:30 [scrapy] INFO: Optional features available: ssl, http11
2015-12-08 19:19:30 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'jpg.spiders', 'SPIDER_MODULES': ['jpg.spiders'], 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'BOT_NAME': 'jpg'}
2015-12-08 19:19:30 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-08 19:19:30 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-08 19:19:30 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-08 19:19:30 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2015-12-08 19:19:30 [scrapy] INFO: Spider opened
2015-12-08 19:19:30 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-08 19:19:30 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-12-08 19:19:31 [scrapy] DEBUG: Crawled (200) <GET http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/> (referer: None)
2015-12-08 19:19:31 [scrapy] DEBUG: Scraped from <200 http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/>
{'images': []}
2015-12-08 19:19:31 [scrapy] INFO: Closing spider (finished)
2015-12-08 19:19:31 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 254,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2975,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 12, 8, 22, 19, 31, 294139),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 12, 8, 22, 19, 30, 619918)}
2015-12-08 19:19:31 [scrapy] INFO: Spider closed (finished)

Although there seem to be no errors, the program does not retrieve any jpg files. In case it matters, I'm using Ubuntu.


1 Answer


You have not defined parse() in your JpgSpider class.

Update. Now that I can see the URL in your update, it looks like this isn't a problem you should be attacking with scrapy. WGET may be more appropriate; take a look at the answers here. In particular, look at the first comment on the top answer to see how to use the file extension to restrict which files you download (-A jpg).
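For reference, a minimal sketch of that kind of wget invocation. Only -A jpg comes from the linked comment; the recursion flags are assumptions about what you want (-r recurses into links, -np stops it climbing to the parent directory, -nd drops all files into the current directory):

wget -r -np -nd -A jpg http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/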

Update 2: The parse() routine can get the album-art URLs from the <a> tags using this code:

part_urls = response.xpath('//a[contains(., "AlbumArt")]/@href')

This will return a list of partial URLs; you need to add the root URL of the page you are parsing, which you can get from response.url. There are several % codes in the URLs I looked at, and they may be a problem, but try it anyway. Once you have the list of full URLs, put them into item[]:

item['image_urls'] = full_urls
yield item

This should get scrapy to download the images automatically, so you can delete your middleware and let scrapy do the heavy lifting.
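Putting the pieces together, here is a minimal sketch of the whole spider, with parse() indented inside the class. Using response.urljoin() to resolve the partial hrefs is an assumption on my part (it also handles relative paths), and I haven't tested it against the site:

import scrapy
from ..items import JpgItem

class JpgSpider(scrapy.Spider):
    name = "jpg"
    allowed_domains = ["kevinsmedia.com"]  # bare domain, no http:// scheme
    start_urls = [
        "http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/"
    ]

    def parse(self, response):
        # Partial hrefs from the <a> tags that mention "AlbumArt"
        part_urls = response.xpath('//a[contains(., "AlbumArt")]/@href')
        # Resolve each partial URL against the page URL (response.url)
        full_urls = [response.urljoin(u) for u in part_urls.extract()]
        item = JpgItem()
        item['image_urls'] = full_urls  # the stock ImagesPipeline reads this field
        yield item

With ITEM_PIPELINES pointing at scrapy.pipelines.images.ImagesPipeline as in your settings.py, yielding an item with image_urls filled in is all that's needed; the custom JpgPipeline can go.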

answered 2015-12-07T09:32:15.990