
The spider code below creates output files named like k.1375093834.0.txt. What I want are filenames of the form kickstarter.com.1375093834.0.txt.

Any suggested code changes would be much appreciated.

import time

from scrapy import log
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class shnurl(CrawlSpider):
    name = "shnurl"
    #start_urls = [
    # "http://www.blogger.com"
    # ]
    rules = [
        Rule(SgmlLinkExtractor(), follow=True, callback="parse")
    ]

    def __init__(self, *args, **kwargs):

        #Initialize the parent class.
        super(shnurl, self).__init__(*args, **kwargs)

        #Get the start URL from the command line.
        self.start_urls = [kwargs.get('start_url')]

        #Create a results file based on the start_url + current time.
        self.fname = '{0}.{1}.{2}'.format(self.start_url[12], time.time(),'txt')
        self.fileout = open(self.fname, 'w+')

        #Create a logfile based on the start_url + current time.
        #Log file stores the errors, debug & info prints.
        logfname = '{0}.{1}.{2}'.format(self.start_url[12], time.time(),'log')
        #log.start(logfile='./runtime.log', loglevel=log.INFO)
        log.start(logfile=logfname, loglevel=log.INFO)
        self.log('Output will be written to: {0}'.format(self.fname), log.INFO)
        #End of constructor

Usage:

scrapy crawl shnurl -a start_url="https://www.kickstarter.com"
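
Reproducing just the start_url part of the filename expression in a Python shell shows why only a single character ends up in the name:

    >>> start_url = "https://www.kickstarter.com"
    >>> start_url[12]    # a single character, not the host name
    'k'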

1 Answer


Assuming I have understood the question correctly, you want to take a slice of start_url, but you have written it incorrectly. Put a colon after the 12 inside the square brackets, as shown below, and that will fix the problem:

    self.fname = '{0}.{1}.{2}'.format(self.start_url[12:], time.time(),'txt')
    logfname = '{0}.{1}.{2}'.format(self.start_url[12:], time.time(),'log')
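
With that change, start_url[12:] evaluates to 'kickstarter.com' for the URL above, so the output file becomes kickstarter.com.<timestamp>.txt as you wanted. Note that the hardcoded offset of 12 only works for URLs that begin with "https://www."; if you want something less brittle, a rough sketch based on urlparse (using a hypothetical host_label helper, not part of your original code) could look like this:

    from urlparse import urlparse  # on Python 3: from urllib.parse import urlparse

    def host_label(url):
        """Return the URL's host without a leading 'www.', e.g. 'kickstarter.com'."""
        host = urlparse(url).netloc          # 'www.kickstarter.com'
        if host.startswith('www.'):
            host = host[len('www.'):]
        return host

    self.fname = '{0}.{1}.{2}'.format(host_label(self.start_url), time.time(), 'txt')
    logfname = '{0}.{1}.{2}'.format(host_label(self.start_url), time.time(), 'log')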
Answered 2013-07-29T11:11:07.817