
I'm new to dynamic scraper and I'm working through the open_news example to learn it. I've set everything up, but it keeps showing me the same error: dynamic_scraper.models.DoesNotExist: RequestPageType matching query does not exist.

2015-11-20 18:45:11+0000 [article_spider] ERROR: Spider error processing <GET https://en.wikinews.org/wiki/Main_page>
Traceback (most recent call last):
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/Twisted-15.4.0-py2.7-linux-x86_64.egg/twisted/internet/base.py", line 825, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/Twisted-15.4.0-py2.7-linux-x86_64.egg/twisted/internet/task.py", line 645, in _tick
    taskObj._oneWorkUnit()
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/Twisted-15.4.0-py2.7-linux-x86_64.egg/twisted/internet/task.py", line 491, in _oneWorkUnit
    result = next(self._iterator)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 57, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 96, in iter_errback
    yield next(it)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output
    for x in result:
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/dynamic_scraper/spiders/django_spider.py", line 378, in parse
    rpt = self.scraper.get_rpt_for_scraped_obj_attr(url_elem.scraped_obj_attr)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/dynamic_scraper/models.py", line 98, in get_rpt_for_scraped_obj_attr
    return self.requestpagetype_set.get(scraped_obj_attr=soa)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/Django-1.8.5-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/home/suz/social-network-sujit/local/lib/python2.7/site-packages/Django-1.8.5-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
dynamic_scraper.models.DoesNotExist: RequestPageType matching query does not exist.

2 Answers


This is caused by missing "REQUEST PAGE TYPES". Each entry under "SCRAPER ELEMS" must have its own "REQUEST PAGE TYPE".

To fix this, follow these steps:

  1. Log in to the admin page (usually http://localhost:8000/admin/)
  2. Go to Home › Dynamic_Scraper › Scraper › Wikinews Scraper (Article)
  3. Click "Add another Request page type" under "REQUEST PAGE TYPES"
  4. Create 4 "REQUEST PAGE TYPES" in total, one for each of "(base (Article))", "(title (Article))", "(description (Article))" and "(url (Article))"

"Request page type" settings:

All "Content type" fields are "HTML".

All "Request type" fields are "Request".

All "Method" fields are "GET".

For "Page type", just assign them in order:

(base (Article)) | Main Page

(title (Article)) | Detail Page 1

(description (Article)) | Detail Page 2

(url (Article)) | Detail Page 3
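For reference, the four assignments above can be written down as plain data and checked at a glance. This is a hedged sketch: the settings dictionary simply mirrors the admin-form values listed above, and the commented-out ORM lines are an assumption based on the model and field names visible in the traceback (dynamic_scraper.models.RequestPageType, scraped_obj_attr), not a verified API.

```python
# Page-type assignments from the steps above, as plain data.
# Each tuple is (SCRAPER ELEMS entry, assigned Page type).
PAGE_TYPE_ASSIGNMENTS = [
    ("base (Article)", "Main Page"),
    ("title (Article)", "Detail Page 1"),
    ("description (Article)", "Detail Page 2"),
    ("url (Article)", "Detail Page 3"),
]

# All four entries share these admin-form settings.
COMMON_SETTINGS = {
    "content_type": "HTML",
    "request_type": "Request",
    "method": "GET",
}

for attr, page_type in PAGE_TYPE_ASSIGNMENTS:
    print("%-24s -> %s" % (attr, page_type))

# Inside `python manage.py shell` the records could be created roughly
# like this (hypothetical sketch; check dynamic_scraper/models.py for
# the real field names before running anything):
#
#   from dynamic_scraper.models import RequestPageType
#   RequestPageType.objects.create(scraper=my_scraper,
#                                  scraped_obj_attr=my_attr,
#                                  page_type=...)
```

Creating the entries through the admin as described above remains the straightforward route; the data block is only meant to make the required mapping explicit.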

After completing the steps above, the "DoesNotExist: RequestPageType" error should be fixed.

However, a new error will appear: "ERROR: Mandatory elem title missing!"

To resolve it, I suggest changing the "REQUEST PAGE TYPE" of every entry in "SCRAPER ELEMS" to "Main Page", including "title (Article)".

Then change the XPaths as follows:

(base (Article)) | //td[@class="l_box"]

(title (Article)) | span[@class="l_title"]/a/@title

(description (Article)) | p/span[@class="l_summary"]/text()

(url (Article)) | span[@class="l_title"]/a/@href
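The relative XPaths above can be sanity-checked outside the admin with a quick stdlib script. This is a sketch: the HTML fragment below is a hypothetical stand-in that mimics the structure these XPaths target (a td.l_box containing the title link and summary), not the live Wikinews markup.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking one list item the base XPath
# //td[@class="l_box"] would select on the main page.
SAMPLE = """
<td class="l_box">
  <span class="l_title"><a href="/wiki/Some_article" title="Some article">Some article</a></span>
  <p><span class="l_summary">A short summary.</span></p>
</td>
"""

# The parsed <td> plays the role of one "base (Article)" match;
# the other three XPaths are evaluated relative to it.
base = ET.fromstring(SAMPLE)

# title (Article): span[@class="l_title"]/a/@title
title = base.find('span[@class="l_title"]/a').get("title")

# description (Article): p/span[@class="l_summary"]/text()
description = base.find('p/span[@class="l_summary"]').text

# url (Article): span[@class="l_title"]/a/@href
url = base.find('span[@class="l_title"]/a').get("href")

print(title, url, description)
```

This mirrors how the spider applies the element XPaths: the base XPath selects a node, and the title/description/url expressions are resolved relative to each selected node rather than to the whole document.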

Finally, run scrapy crawl article_spider -a id=1 -a do_action=yes at the command prompt. You should now be able to scrape "Articles", which you can view under Home › Open_News › Articles.

Hope this helps~

Answered 2015-12-03T03:15:28.623

I may be late, but I hope my solution helps anyone who runs into this later.

@alan-nala's solution works well. However, it essentially skips detail page scraping.

Here is how you can take full advantage of detail page scraping.

First, go to Home › Dynamic_Scraper › Scrapers › Wikinews Scraper (Article) and add a new entry under Request page types.

Second, make sure your elements in SCRAPER ELEMS are configured accordingly.

Now you can run the manual scraping command from the documentation:

scrapy crawl article_spider -a id=1 -a do_action=yes

You may well run into the error @alan-nala mentioned:

"ERROR: Mandatory elem title missing!"

Note from the error screenshot that there was a message indicating the script was "calling DP2 URL for ..."

Finally, go back to SCRAPER ELEMS and change the Request page type of the "title (Article)" element to "Detail Page 2" instead of "Detail Page 1".

Save your settings and run the scrapy command again.

Note: your "Detail Page" number may differ.

By the way, I have also put together a short tutorial hosted on GitHub, in case you need more details on this topic.

Answered 2020-05-17T17:49:36.093