
Hi, I'm trying to use CrawlSpider, and I created my own deny rules:

class MySpider(CrawlSpider): 
    name = "craigs" 
    allowed_domains = ["careers-cooperhealth.icims.com"] 
    start_urls = ["careers-cooperhealth.icims.com"] 
    d= [0-9] 
    path_deny_base = [ '.(login)', '.(intro)', '(candidate)', '(referral)', '(reminder)', '(/search)',] 
    rules = (Rule (SgmlLinkExtractor(deny = path_deny_base, 
                                     allow=('careers-cooperhealth.icims.com/jobs/…;*')), 
                                     callback="parse_items", 
                                     follow= True), ) 

My spider still crawled pages like https://careers-cooperhealth.icims.com/jobs/22660/registered-nurse-prn/login, where the login page should not have been crawled. What's wrong here?


1 Answer


Just change the rules this way (no dots and no parentheses):

deny = ['login', 'intro', 'candidate', 'referral', 'reminder', 'search']
allow = ['jobs']

rules = (
    Rule(SgmlLinkExtractor(deny=deny,
                           allow=allow,
                           restrict_xpaths=('*')),
         callback="parse_items",
         follow=True),
)

This means that the extracted links will contain no login, intro, and so on; only links containing jobs will be extracted.
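Under the hood, these deny and allow values are treated as regular expressions that are searched for anywhere in each extracted URL, with deny taking precedence over allow. Here is a rough sketch of that filtering idea in plain Python, not Scrapy's exact implementation (the second URL is invented for illustration):

import re

deny = ['login', 'intro', 'candidate', 'referral', 'reminder', 'search']
allow = ['jobs']

def accepted(url):
    # Keep a URL only if some allow pattern matches it
    # and no deny pattern matches it.
    if not any(re.search(p, url) for p in allow):
        return False
    return not any(re.search(p, url) for p in deny)

urls = [
    "https://careers-cooperhealth.icims.com/jobs/22660/registered-nurse-prn/login",
    "https://careers-cooperhealth.icims.com/jobs/22661/registered-nurse-prn",
]
for url in urls:
    print "%s -> %s" % (url, accepted(url))  # the login URL is rejected

This is also why the original patterns failed to behave as expected: in a regex, a dot matches any character and parentheses only create a group, so plain substrings like 'login' are what you want here.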

Here's the whole spider code that crawls the link https://careers-cooperhealth.icims.com/jobs/intro?hashed=0 and prints "YAHOO!":

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):
    name = "craigs" 
    allowed_domains = ["careers-cooperhealth.icims.com"] 
    start_urls = ["https://careers-cooperhealth.icims.com"]

    deny = ['login', 'intro', 'candidate', 'referral', 'reminder', 'search']
    allow = ['jobs']

    rules = (
        Rule(SgmlLinkExtractor(deny=deny,
                               allow=allow,
                               restrict_xpaths=('*')),
             callback="parse_items",
             follow=True),
    )

    def parse_items(self, response):
        print "YAHOO!"
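To try it, save the spider inside a Scrapy project and start it by the name defined above (craigs), using Scrapy's standard crawl command:

scrapy crawl craigs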

Hope that helps.

Answered 2013-08-28T08:27:20.950