I am starting out with Scrapy so I can download files from websites automatically. As a test, I want to download the jpg files from this site. My code is based on the introductory tutorial and the Files and Images Pipeline tutorial on the Scrapy website.
My code is as follows:
In settings.py, I added these lines:
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = '/home/lucho/Scrapy/jpg/'
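One thing I am not sure about: these settings enable the stock ImagesPipeline, so I suspect the custom JpgPipeline shown below is never actually used. If it is supposed to run instead, I assume it would have to be registered in its place (the project module is jpg, judging by the log below), something like:

ITEM_PIPELINES = {'jpg.pipelines.JpgPipeline': 1}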
My items.py file is:
import scrapy

class JpgItem(scrapy.Item):
    image_urls = scrapy.Field()
    images = scrapy.Field()
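A detail I noticed while writing this up: item_completed in my pipeline stores item['image_paths'], but JpgItem does not declare that field, and as far as I know scrapy.Item raises a KeyError for undeclared fields. If the custom pipeline were active, I assume the item would need one more field:

import scrapy

class JpgItem(scrapy.Item):
    image_urls = scrapy.Field()
    images = scrapy.Field()
    image_paths = scrapy.Field()  # only needed because item_completed stores paths here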
My pipeline file is:
import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem

class JpgPipeline(object):
    def process_item(self, item, spider):
        return item

    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
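Rereading the Files and Images Pipeline docs, their example subclasses ImagesPipeline, whereas my class inherits from object, so I suspect get_media_requests and item_completed are never called by Scrapy. A minimal sketch of what I think the class should look like (same method bodies as above):

import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem

class JpgPipeline(ImagesPipeline):
    # Subclassing ImagesPipeline means Scrapy invokes these two hooks;
    # process_item() is inherited, so my stub above would be unnecessary.
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item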
Finally, my spider file is:
import scrapy
from ..items import JpgItem

class JpgSpider(scrapy.Spider):
    name = "jpg"
    allowed_domains = ["http://www.kevinsmedia.com"]
    start_urls = [
        "http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/"
    ]

    def init_request(self):
        # """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.parse)

    def parse(self, response):
        item = JpgItem()
        return item
(Ideally, I would like to download all of the jpg files without having to specify the exact URL of each file.)
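My suspicion is that parse() has to fill in item['image_urls'] itself; right now it returns an empty item, which matches the {'images': []} in the log below. A minimal sketch of what I imagine parse() should do, assuming the page is a plain directory listing whose <a href> links point at the files:

def parse(self, response):
    item = JpgItem()
    # Collect every link on the page that ends in .jpg, made absolute
    item['image_urls'] = [
        response.urljoin(href)
        for href in response.xpath('//a/@href').extract()
        if href.lower().endswith('.jpg')
    ]
    return item

I also wonder whether allowed_domains should be just "www.kevinsmedia.com" rather than a full URL with the scheme.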
The output of "scrapy crawl jpg" is:
2015-12-08 19:19:30 [scrapy] INFO: Scrapy 1.0.3.post6+g2d688cd started (bot: jpg)
2015-12-08 19:19:30 [scrapy] INFO: Optional features available: ssl, http11
2015-12-08 19:19:30 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'jpg.spiders', 'SPIDER_MODULES': ['jpg.spiders'], 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'BOT_NAME': 'jpg'}
2015-12-08 19:19:30 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-08 19:19:30 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-08 19:19:30 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-08 19:19:30 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2015-12-08 19:19:30 [scrapy] INFO: Spider opened
2015-12-08 19:19:30 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-08 19:19:30 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-12-08 19:19:31 [scrapy] DEBUG: Crawled (200) <GET http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/> (referer: None)
2015-12-08 19:19:31 [scrapy] DEBUG: Scraped from <200 http://www.kevinsmedia.com/km/mp3z/Fluke/Risotto/>
{'images': []}
2015-12-08 19:19:31 [scrapy] INFO: Closing spider (finished)
2015-12-08 19:19:31 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 254,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 2975,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 12, 8, 22, 19, 31, 294139),
'item_scraped_count': 1,
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 12, 8, 22, 19, 30, 619918)}
2015-12-08 19:19:31 [scrapy] INFO: Spider closed (finished)
Although there seem to be no errors, the program does not retrieve the jpg files; the log shows one item scraped, but with 'images': []. In case it is relevant, I am running this on Ubuntu.