
I cannot follow the link and retrieve the value.

With the code below I am able to scrape the first link, but after that it never follows the second link (the callback function is never called).

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class ScrapyOrgSpider(BaseSpider):
    name = "scrapy"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/abcd"]


    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        res1 = Request("http://www.example.com/follow", self.a_1)
        print res1

    def a_1(self, response1):
        hxs2 = HtmlXPathSelector(response1)
        print hxs2.select("//a[@class='channel-link']").extract()[0]
        return response1

2 Answers


The parse function must return the request, not just print it.

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    res1 = Request("http://www.example.com/follow", callback=self.a_1)
    print res1  # if you want
    return res1
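
If more than one link needs to be followed, parse() can also be written as a generator that yields one Request per extracted href. A minimal sketch, assuming the same old BaseSpider/HtmlXPathSelector API used above and a placeholder //a/@href expression that is not taken from the question:

from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
import urlparse  # Python 2 stdlib, matching the print-statement style above


def parse(self, response):
    hxs = HtmlXPathSelector(response)
    # yield one Request per extracted href; urljoin resolves relative links
    for href in hxs.select("//a/@href").extract():
        yield Request(urlparse.urljoin(response.url, href), callback=self.a_1)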
answered 2012-11-22T19:48:00.113

You forgot to return the request from your parse() method. Try this code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class ScrapyOrgSpider(BaseSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/abcd"]

    def parse(self, response):
        self.log('@@ Original response: %s' % response)
        req = Request("http://www.example.com/follow", callback=self.a_1)
        self.log('@@ Next request: %s' % req)
        return req

    def a_1(self, response):
        hxs = HtmlXPathSelector(response)
        self.log('@@ extraction: %s' %
            hxs.select("//a[@class='channel-link']").extract())

Log output:

2012-11-22 12:20:06-0600 [scrapy] INFO: Scrapy 0.17.0 started (bot: oneoff)
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled item pipelines:
2012-11-22 12:20:06-0600 [example.com] INFO: Spider opened
2012-11-22 12:20:06-0600 [example.com] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-11-22 12:20:07-0600 [example.com] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example/> from <GET http://www.example.com/abcd>
2012-11-22 12:20:07-0600 [example.com] DEBUG: Crawled (200) <GET http://www.iana.org/domains/example/> (referer: None)
2012-11-22 12:20:07-0600 [example.com] DEBUG: @@ Original response: <200 http://www.iana.org/domains/example/>
2012-11-22 12:20:07-0600 [example.com] DEBUG: @@ Next request: <GET http://www.example.com/follow>
2012-11-22 12:20:07-0600 [example.com] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example/> from <GET http://www.example.com/follow>
2012-11-22 12:20:08-0600 [example.com] DEBUG: Crawled (200) <GET http://www.iana.org/domains/example/> (referer: http://www.iana.org/domains/example/)
2012-11-22 12:20:08-0600 [example.com] DEBUG: @@ extraction: []
2012-11-22 12:20:08-0600 [example.com] INFO: Closing spider (finished)
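
The empty extraction at the end of the log is expected here: example.com redirects to the IANA placeholder page, which has no channel-link anchors. If the goal is to get the extracted value out of the spider rather than only logging it, a_1 could return an item; a minimal sketch, assuming a hypothetical ChannelItem class that is not part of the original code:

from scrapy.item import Item, Field
from scrapy.selector import HtmlXPathSelector


class ChannelItem(Item):
    # hypothetical item holding the first channel-link anchor
    link = Field()


def a_1(self, response):
    hxs = HtmlXPathSelector(response)
    links = hxs.select("//a[@class='channel-link']").extract()
    if links:
        item = ChannelItem()
        item['link'] = links[0]
        return item  # a returned item is passed on to the item pipelines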
answered 2012-11-22T18:23:11.473