
I am writing a scraper that should extract all the links from an initial web page if the page has any of a given set of keywords in its metadata, follow those links whose URL contains "http", and repeat the process twice, so the crawl depth will be 2. This is my code:

from scrapy.spider import Spider
from scrapy import Selector
from socialmedia.items import SocialMediaItem
from scrapy.contrib.spiders import Rule, CrawlSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(Spider):
    name = 'smm'
    allowed_domains = ['*']
    start_urls = ['http://en.wikipedia.org/wiki/Social_media']
    rules = (
             Rule(SgmlLinkExtractor(allow=()), callback="parse_items", follow= True),
             )
    def parse_items(self, response):
        items = []
        #Define keywords present in metadata to scrape the webpage
        keywords = ['social media','social business','social networking','social marketing','online marketing','social selling',
            'social customer experience management','social cxm','social cem','social crm','google analytics','seo','sem',
            'digital marketing','social media manager','community manager']
        for link in response.xpath("//a"):
            item = SocialMediaItem()
            #Extract webpage keywords 
            metakeywords = link.xpath('//meta[@name="keywords"]').extract()
            #Compare keywords and extract if one of the defined keywords is present in the metadata
            if (keywords in metaKW for metaKW in metakeywords):
                    item['SourceTitle'] = link.xpath('/html/head/title').extract()
                    item['TargetTitle'] = link.xpath('text()').extract()
                    item['link'] = link.xpath('@href').extract()
                    outbound = str(link.xpath('@href').extract())
                    if 'http' in outbound:
                        items.append(item)
        return items

But I get this error:

    Traceback (most recent call last):
      File "C:\Anaconda\lib\site-packages\twisted\internet\base.py", line 1201, in mainLoop
        self.runUntilCurrent()
      File "C:\Anaconda\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 382, in callback
        self._startRunCallbacks(result)
      File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 490, in _startRunCallbacks
        self._runCallbacks()
    --- <exception caught here> ---
      File "C:\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "C:\Anaconda\lib\site-packages\scrapy\spider.py", line 56, in parse
        raise NotImplementedError
    exceptions.NotImplementedError: 

Could you help me make it follow the links that contain http in their URL? Thanks!

Dani


2 Answers


There are two main reasons why the rules are being ignored here:

  • You need to use CrawlSpider, not the plain Spider; rules are only processed by a CrawlSpider.
  • A plain Spider dispatches every response to parse(), which your class never defines, and that is exactly what raises NotImplementedError. Either rename parse_items() to parse(), or switch to CrawlSpider so the rule's callback parse_items() is actually used (see the sketch below).
answered 2014-12-12T13:14:24.543

In your code, change class MySpider(Spider): to class MySpider(CrawlSpider):.

answered 2017-03-04T19:48:00.943