
I want to scrape the pages shown below, but they require authentication. I tried the code below, but it reports 0 pages crawled. I can't figure out what the problem is. Can someone help?

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule
from kappaal.items import KappaalItem

class KappaalCrawler(InitSpider):
    name = "initkappaal"
    allowed_domains = ["http://www.kappaalphapsi1911.com/"]
    login_page = 'http://www.kappaalphapsi1911.com/login.aspx'
    #login_page = 'https://kap.site-ym.com/Login.aspx'
    start_urls = ["http://www.kappaalphapsi1911.com/search/newsearch.asp?cdlGroupID=102044"]

    rules = ( Rule(SgmlLinkExtractor(allow= r'-\w$'), callback='parseItems', follow=True), )
    #rules = ( Rule(SgmlLinkExtractor(allow=("*", ),restrict_xpaths=("//*[contains(@id, 'SearchResultsGrid')]",)) , callback="parseItems", follow= True), )

    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
                    formdata={'u': 'username', 'p': 'password'},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in.
        """
        if "Member Search Results" in response.body:
            self.log("Successfully logged in. Let's start crawling!")
            # Now the crawling can begin..
            self.initialized()
        else:
            self.log("Bad times :(")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parseItems(self, response):
        hxs = HtmlXPathSelector(response)
        members = hxs.select('/html/body/form/div[3]/div/table/tbody/tr/td/div/table[2]/tbody')
        print members
        items = []
        for member in members:
            item = KappaalItem()
            item['Name'] = member.select("//a/text()").extract()
            item['MemberLink'] = member.select("//a/@href").extract()
            #item['EmailID'] = 
            #print item['Name'], item['MemberLink']
            items.append(item)
        return items

After running the scraper I get the following output:

2013-01-23 07:08:23+0530 [scrapy] INFO: Scrapy 0.16.3 started (bot: kappaal)
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled item pipelines:
2013-01-23 07:08:23+0530 [initkappaal] INFO: Spider opened
2013-01-23 07:08:23+0530 [initkappaal] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-01-23 07:08:26+0530 [initkappaal] DEBUG: Crawled (200) <GET https://kap.site-ym.com/Login.aspx> (referer: None)
2013-01-23 07:08:26+0530 [initkappaal] DEBUG: Filtered offsite request to 'kap.site-ym.com': <GET https://kap.site-ym.com/search/all.asp?bst=Enter+search+criteria...&p=P%40ssw0rd&u=9900146>
2013-01-23 07:08:26+0530 [initkappaal] INFO: Closing spider (finished)
2013-01-23 07:08:26+0530 [initkappaal] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 231,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 23517,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2013, 1, 23, 1, 38, 26, 194000),
         'log_count/DEBUG': 8,
         'log_count/INFO': 4,
         'request_depth_max': 1,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2013, 1, 23, 1, 38, 23, 542000)}
2013-01-23 07:08:26+0530 [initkappaal] INFO: Spider closed (finished)

I don't understand why it doesn't authenticate and then parse the start URLs as intended.


4 Answers


Also make sure you have cookies enabled, so that the session stays logged in once you authenticate:

COOKIES_ENABLED = True
COOKIES_DEBUG = True

in your settings.py file.

answered 2013-01-24T12:21:19.060

I fixed it like this:

def start_requests(self):
    # Kick off the crawl with the login request instead of start_urls.
    return self.init_request()

def init_request(self):
    return [Request(url=self.login_page, callback=self.login)]

def login(self, response):
    # Fill in and submit the login form found on the login page.
    return FormRequest.from_response(response, formdata={'username': 'username', 'password': 'password'}, callback=self.check_login_response)

def check_login_response(self, response):
    # A "Logout" link only shows up for an authenticated session.
    if "Logout" in response.body:
        for url in self.start_urls:
            yield self.make_requests_from_url(url)
    else:
        self.log("Could not log in...")

By overriding start_requests, you make sure the login flow finishes before the actual crawling begins.

I'm using this with a CrawlSpider and it works perfectly! Hope it helps.
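
For context, here is a minimal sketch of how those methods sit inside a CrawlSpider (Scrapy 0.16-era imports; the allowed_domains entries are my assumption, since allowed_domains expects bare domain names rather than URLs, and the log above shows the post-login request to kap.site-ym.com being filtered as offsite):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request, FormRequest

class KappaalCrawler(CrawlSpider):
    name = "initkappaal"
    # Bare domain names, not URLs; kap.site-ym.com added because the log
    # shows the post-login request to that host being dropped as offsite.
    allowed_domains = ["kappaalphapsi1911.com", "kap.site-ym.com"]
    login_page = "http://www.kappaalphapsi1911.com/login.aspx"
    start_urls = ["http://www.kappaalphapsi1911.com/search/newsearch.asp?cdlGroupID=102044"]
    rules = (Rule(SgmlLinkExtractor(allow=r'-\w$'), callback="parseItems", follow=True),)

    # start_requests, init_request, login and check_login_response go here
    # exactly as above, together with the original parseItems callback.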

answered 2013-01-24T13:10:59.927

This may not be the answer you're looking for, but I feel your pain...

I ran into the same problem and felt the documentation was lacking for Scrapy. I ended up using mechanize to log in. If you figure it out with scrapy, great; if not, mechanize is pretty straightforward.
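
For reference, a minimal sketch of that mechanize approach (the form index and the 'username'/'password' field names are assumptions; match them to the actual login form):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # don't let robots.txt block the login page
br.open("http://www.kappaalphapsi1911.com/login.aspx")
br.select_form(nr=0)         # assumes the login form is the first on the page
br["username"] = "username"  # hypothetical field names
br["password"] = "password"
response = br.submit()
print response.read()        # HTML of the logged-in landing page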

answered 2013-01-23T21:57:24.183

Okay, so I can see a couple of issues. However, I can't test the code since I don't have the username and password. Is there a dummy account available for testing purposes?

  1. InitSpider does not execute rules, so while they won't cause a problem, they should be removed.
  2. check_login_response needs to return something.

To wit:

def check_login_response(self, response):
    """Check the response returned by a login request to see if we are
    successfully logged in.
    """
    if "Member Search Results" in response.body:
        self.log("Successfully logged in. Let's start crawling!")
        # Now the crawling can begin..
        return self.initialized()
    else:
        self.log("Bad times :(")
        # Something went wrong, we couldn't log in, so nothing happens.
        return
answered 2013-01-23T17:11:41.800