
Hi, how can I get my crawler to work? I can log in, but nothing happens afterwards and nothing actually gets scraped. I have also been reading the Scrapy docs, but I really don't understand the rules used for crawling. Why does nothing happen after "Successfully logged in. Let's start crawling!"?

I also had this rule at the end of my else statement, but removed it because it was never even reached, sitting inside my else block. So I moved it to the top of the start_requests() method, but that raised an error, so I removed my rules entirely.

    rules = (
        Rule(extractor, callback='parse_item', follow=True),
    )

My code:

from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from linkedconv.items import LinkedconvItem

class LinkedPySpider(CrawlSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
   # start_urls = ["http://www.linkedin.com/csearch/results?type=companies&keywords=&pplSearchOrigin=GLHD&pageKey=member-home&search=Search#facets=pplSearchOrigin%3DFCTD%26keywords%3D%26search%3DSubmit%26facet_CS%3DC%26facet_I%3D80%26openFacets%3DJO%252CN%252CCS%252CNFR%252CF%252CCCR%252CI"]
    start_urls = ["http://www.linkedin.com/csearch/results"]


    def start_requests(self):
        yield Request(
            url=self.login_page,
            callback=self.login,
            dont_filter=True,
        )

    # def init_request(self):
    #     """This function is called before crawling starts."""
    #     return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(
            response,
            formdata={'session_key': 'myemail@gmail.com', 'session_password': 'mypassword'},
            callback=self.check_login_response,
        )

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..
            self.log('Hi, this is an item page! %s' % response.url)
            return
        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse_item(self, response):
        self.log("\n\n\n We got data! \n\n\n")
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//ol[@id='result-set']/li")
        items = []
        for site in sites:
            item = LinkedconvItem()
            item['title'] = site.select('h2/a/text()').extract()
            item['link'] = site.select('h2/a/@href').extract()
            items.append(item)
        return items

My output:

C:\Users\ye831c\Documents\Big Data\Scrapy\linkedconv>scrapy crawl LinkedPy
2013-07-12 13:39:40-0500 [scrapy] INFO: Scrapy 0.16.5 started (bot: linkedconv)
2013-07-12 13:39:40-0500 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled item pipelines:
2013-07-12 13:39:41-0500 [LinkedPy] INFO: Spider opened
2013-07-12 13:39:41-0500 [LinkedPy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-07-12 13:39:41-0500 [LinkedPy] DEBUG: Crawled (200) <GET https://www.linkedin.com/uas/login> (referer: None)
2013-07-12 13:39:42-0500 [LinkedPy] DEBUG: Redirecting (302) to <GET http://www.linkedin.com/nhome/> from <POST https://www.linkedin.com/uas/login-submit>
2013-07-12 13:39:45-0500 [LinkedPy] DEBUG: Crawled (200) <GET http://www.linkedin.com/nhome/> (referer: https://www.linkedin.com/uas/login)
2013-07-12 13:39:45-0500 [LinkedPy] DEBUG:


    Successfully logged in. Let's start crawling!



2013-07-12 13:39:45-0500 [LinkedPy] DEBUG: Hi, this is an item page! http://www.linkedin.com/nhome/
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Closing spider (finished)
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 1670,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 2,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 65218,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 2,
     'downloader/response_status_count/302': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 7, 12, 18, 39, 45, 136000),
     'log_count/DEBUG': 11,
     'log_count/INFO': 4,
     'request_depth_max': 1,
     'response_received_count': 2,
     'scheduler/dequeued': 3,
     'scheduler/dequeued/memory': 3,
     'scheduler/enqueued': 3,
     'scheduler/enqueued/memory': 3,
     'start_time': datetime.datetime(2013, 7, 12, 18, 39, 41, 50000)}
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Spider closed (finished)

1 Answer


Right now, the crawl ends in check_login_response() because Scrapy has not been told to do anything more:

  • The first request, to the login page, is made by start_requests(): OK
  • The second request POSTs the login information: OK
  • That response is parsed by check_login_response()... and that's it

Indeed, check_login_response() returns nothing. To keep the crawl going, you need to return Request instances that tell Scrapy which pages to fetch next (see the Scrapy documentation on spider callbacks).

So, inside check_login_response(), you need to return a Request instance for the start page containing the links you want to crawl next, probably one of the URLs you put in start_urls.

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..
            return Request(url='http://linkedin.com/page/containing/links')

By default, if you do not set a callback on a Request, the spider calls its parse() method on the response (http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spider.BaseSpider.parse).
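To make that concrete, here is one possible shape for the callback, reusing the start_urls already defined on the spider. This is only a sketch; the dont_filter=True flag is an assumption added so the duplicate filter does not skip pages the spider may already have touched:

    def check_login_response(self, response):
        """Check login and, on success, hand the start pages to the crawler."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # No explicit callback on these Requests: each response goes to
            # the spider's default parse() method, which on a CrawlSpider
            # applies the class-level rules.
            return [Request(url=url, dont_filter=True) for url in self.start_urls]
        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")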

In your case, that will automatically call CrawlSpider's built-in parse() method for you, which applies the Rules you defined in order to fetch the next pages.

You must define your rules in the rules attribute of your CrawlSpider subclass, at the same class level as name and allowed_domains.

http://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example provides example rules. The main idea is that you tell the extractor which kind of absolute URLs within the page you are interested in, using allow. If you do not set allow in your SgmlLinkExtractor, it will match all links.

In your case, each of these rules should use parse_item() as the callback for those links.
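Putting the pieces together, a minimal sketch of what the class-level attributes could look like. The allow pattern r'/csearch/results' is an illustrative guess at the result-page URLs, not something taken from LinkedIn's actual markup:

    class LinkedPySpider(CrawlSpider):
        name = 'LinkedPy'
        allowed_domains = ['linkedin.com']
        start_urls = ["http://www.linkedin.com/csearch/results"]

        # rules lives at class level, next to name and allowed_domains.
        # Note that CrawlSpider implements its own parse() to apply these
        # rules, which is why the callback is parse_item and not parse.
        rules = (
            Rule(SgmlLinkExtractor(allow=(r'/csearch/results',)),
                 callback='parse_item',
                 follow=True),
        )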

Good luck parsing LinkedIn pages, though; I think a lot of what is in the pages is generated via Javascript and may not be in the HTML content that Scrapy fetches.

Answered 2013-07-13T14:13:10.553