I am interested in scraping a website. First, I would like to get the links from this section of the site:

<div id="lo-mas-visto">
<div class="item seccion-nacion">
<span class="fecha">04:21</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/articulo/nacion/politica/2017/04/25/tras-videoescandalo-cae-candidata-de-morena">Tras videoescándalo cae candidata de Morena</a>
</span>
</div>
<div class="item seccion-nacion">
<span class="fecha">02:01</span>
<span class="title">
</div>
<div class="item seccion-mundo">
<span class="fecha">03:20</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/articulo/mundo/2017/04/25/trump-se-rinde-no-incluye-muro-en-presupuesto">Trump se rinde; no incluye muro en presupuesto </a>
</span>
</div>
<div class="item seccion-nacion">
<span class="fecha">02:02</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/entrada-de-opinion/columna/hector-de-mauleon/nacion/2017/04/25/amlo-y-el-dinero-sucio">AMLO y el dinero sucio</a>
</span>
</div>
<div class="item seccion-nacion">
<span class="fecha">02:06</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/entrada-de-opinion/columna/salvador-garcia-soto/nacion/2017/04/25/morena-y-los-nuevos">Morena y los nuevos videoescándalos</a>
</span>
</div>
<div class="item seccion-metropoli">
<span class="fecha">01:09</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/articulo/metropoli/edomex/2017/04/25/delfina-gomez-y-su-equipo-se-premian-con-bonos">Delfina Gómez y su equipo se premian con bonos</a>
</span>
</div>
<div class="item seccion-metropoli">
<span class="fecha">01:06</span>
<span class="title">
<a href="http://www.eluniversal.com.mx/articulo/metropoli/cdmx/2017/04/25/van-tras-mujer-que-prostituye-niñas-de-secundaria-y-prepa">Van tras mujer que prostituye a niñas de secundaria y prepa</a>
</span>
</div>
</div>
So I tried to implement a spider to extract that information:

# -*- coding: utf-8 -*-
import scrapy
from scrapy_splash import SplashRequest


class OpinionsSpider(scrapy.Spider):
    name = "news"
    allowed_domains = ["http://www.eluniversal.com.mx/"]
    start_urls = ['http://www.eluniversal.com.mx/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse, args={'wait': 0.5})

    def parse(self, response):
        #print(response.body)
        item = {
            'url': response.xpath(".//*[@id='lo-mas-visto']//div//span//a")
        }
        yield item

The problem is that I am not getting the URLs from that object:

user@MacBook-Pro-de-User-3:~/PycharmProjects$ scrapy runspider opinions.py
2017-04-27 17:09:28 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot)
2017-04-27 17:09:28 [scrapy.utils.log] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2017-04-27 17:09:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-04-27 17:09:29 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-27 17:09:29 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-27 17:09:29 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-27 17:09:29 [scrapy.core.engine] INFO: Spider opened
2017-04-27 17:09:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-27 17:09:29 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-27 17:09:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.eluniversal.com.mx/> (referer: None)
2017-04-27 17:09:29 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.eluniversal.com.mx/>
{'url': []}
2017-04-27 17:09:29 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-27 17:09:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 220,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 53389,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 27, 22, 9, 29, 901099),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 27, 22, 9, 29, 143023)}
2017-04-27 17:09:29 [scrapy.core.engine] INFO: Spider closed (finished)

How can I fix the spider so that it gets the links, and how can I then bootstrap a pagination scheme?...

1 Answer

From the comments, I think your problem is that you don't have a Scrapy project and that Splash is not configured correctly. Your spider does not render JavaScript, so you cannot reach the links you need.

Your XPath is also missing /@href at the end if you want to extract the links. A very useful tool if you want to test XPath expressions: videlibri
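
A minimal sketch of the corrected callback, assuming the page structure shown in the question; /@href selects the attribute node and .extract() turns the selector list into plain strings:

    def parse(self, response):
        # /@href picks the href attribute of each matched <a>;
        # .extract() converts the SelectorList into a list of strings
        item = {
            'url': response.xpath(".//*[@id='lo-mas-visto']//div//span//a/@href").extract()
        }
        yield item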

You first need to create a Scrapy project (startproject).
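
For example (the project name here is just a placeholder):

scrapy startproject news_crawler
cd news_crawler

Then put the spider in the project's spiders/ directory and run it with scrapy crawl news (the name attribute of your spider) instead of scrapy runspider, so the project settings are picked up.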

Then you need to configure Splash correctly so that you can render the JavaScript on the site and get the links you want. Here is the documentation you need: scrapy_splash
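
A sketch of the setup described in the scrapy_splash README, assuming Splash is running locally on its default port 8050 (for example via docker run -p 8050:8050 scrapinghub/splash):

# settings.py of the Scrapy project
SPLASH_URL = 'http://localhost:8050'  # address of your running Splash instance (assumption)

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

With this in place, the SplashRequest in your spider is routed through Splash, and the JavaScript-rendered HTML comes back in the response.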

Hope this helps.

answered 2017-04-27T22:51:04.330