
I am crawling a website that returns a list of urls. Example: scrapy crawl xyz_spider -o urls.csv

It works perfectly fine right now. What I want is a fresh urls.csv on each run instead of having the data appended to the file. Is there any parameter I can pass to enable that?


3 Answers


I usually handle custom file exports by running Scrapy as a Python script and opening the file before calling the spider class. This gives you much more flexibility for manipulating and formatting your csv files, and even lets you run them as an extension of a web application or in the cloud. Something along the lines of:

import csv

from scrapy.crawler import CrawlerProcess

if __name__ == '__main__':
    process = CrawlerProcess()

    # 'wb' truncates Output.csv, so every run starts from an empty file
    with open('Output.csv', 'wb') as output_file:
        mywriter = csv.writer(output_file)
        process.crawl(Spider_Class, start_urls=start_urls)
        process.start()  # blocks until the crawl finishes
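The snippet leaves out how Spider_Class actually uses the writer. One way is to pass it through process.crawl(), whose extra keyword arguments are forwarded to the spider's constructor. A minimal sketch, where the spider name, selector, and writer argument are my assumptions rather than part of the answer:

import scrapy

class Spider_Class(scrapy.Spider):
    name = 'xyz_spider'  # placeholder name

    def __init__(self, start_urls=None, writer=None, *args, **kwargs):
        super(Spider_Class, self).__init__(*args, **kwargs)
        self.start_urls = start_urls or []
        self.writer = writer  # the csv.writer opened by the calling script

    def parse(self, response):
        # Write one row per extracted link; since the file was opened
        # fresh, no rows from previous runs survive.
        for href in response.css('a::attr(href)').extract():
            self.writer.writerow([response.urljoin(href)])

The crawl call above would then become process.crawl(Spider_Class, start_urls=start_urls, writer=mywriter).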
answered 2016-11-01T03:33:18.983

Unfortunately, Scrapy cannot do this at the moment.
There is a proposed enhancement on GitHub, though: https://github.com/scrapy/scrapy/issues/547

However, you can easily redirect the output to standard output and redirect that to a file:

scrapy crawl myspider -t json --nolog -o - > output.json

-o - means output to minus, and minus in this case means standard output.
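If you would rather drive this from Python than from a shell, the same stdout trick can be wrapped in subprocess. A minimal sketch; 'myspider' and the filenames are placeholders:

import subprocess

# '-o -' sends the exported items to stdout; capturing stdout into a
# file opened in 'wb' mode gives a freshly truncated file on each run.
with open('output.json', 'wb') as f:
    subprocess.check_call(
        ['scrapy', 'crawl', 'myspider', '-t', 'json', '--nolog', '-o', '-'],
        stdout=f)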
You can also create an alias that removes the file before running scrapy, e.g.:

alias sc='rm -f output.csv && scrapy crawl myspider -o output.csv'
answered 2016-10-30T10:31:39.967

You can open the file in write mode and immediately close it, which wipes the file's contents:

import scrapy

# RestaurantDetailItem is this project's Item subclass; the import path
# below is assumed, adjust it to wherever your items module lives.
from items import RestaurantDetailItem


class RestaurantDetailSpider(scrapy.Spider):

    # Opening restaurantsLink.csv in 'w' mode and closing it right away
    # truncates it, wiping whatever a previous run left behind.
    file = open('./restaurantsLink.csv', 'w')
    file.close()
    urls = list(open('./restaurantsLink.csv'))
    urls = urls[1:]  # drop the csv header row
    print "Url List Found : " + str(len(urls))

    name = "RestaurantDetailSpider"
    start_urls = urls

    def safeStr(self, obj):
        # Coerce to str, tolerating non-ASCII text
        try:
            if obj is None:
                return obj
            return str(obj)
        except UnicodeEncodeError:
            return obj.encode('utf8', 'ignore').decode('utf8')

    def parse(self, response):
        try:
            detail = RestaurantDetailItem()

            HEADING = self.safeStr(response.css('#HEADING::text').extract_first())
            if HEADING is not None:
                if ',' in HEADING:
                    HEADING = "'" + HEADING + "'"
                detail['Name'] = HEADING

            CONTACT_INFO = self.safeStr(response.css('.directContactInfo *::text').extract_first())
            if CONTACT_INFO is not None:
                if ',' in CONTACT_INFO:
                    CONTACT_INFO = "'" + CONTACT_INFO + "'"
                detail['Phone'] = CONTACT_INFO

            # .extract() always returns a list, so test for emptiness
            ADDRESS_LIST = response.css('.headerBL .address *::text').extract()
            if ADDRESS_LIST:
                ADDRESS = ', '.join([self.safeStr(x) for x in ADDRESS_LIST])
                ADDRESS = ADDRESS.replace(',', '')
                detail['Address'] = ADDRESS

            EMAIL = self.safeStr(response.css('#RESTAURANT_DETAILS .detailsContent a::attr(href)').extract_first())
            if EMAIL is not None:
                EMAIL = EMAIL.replace('mailto:', '')
                detail['Email'] = EMAIL

            TYPE_LIST = response.css('.rating_and_popularity .header_links *::text').extract()
            if TYPE_LIST:
                TYPE = ', '.join([self.safeStr(x) for x in TYPE_LIST])
                TYPE = TYPE.replace(',', '')
                detail['Type'] = TYPE

            yield detail
        except Exception as e:
            print "Error occurred: " + str(e)
            yield None

scrapy crawl RestaurantMainSpider -t csv -o restaurantsLink.csv

This creates the restaurantsLink.csv file which I then use in the next spider, RestaurantDetailSpider.
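The answer does not show RestaurantMainSpider itself. A minimal sketch of what it might look like; the start URL and the selector are assumptions, not part of the original answer:

import scrapy

class RestaurantMainSpider(scrapy.Spider):
    # Hypothetical listing spider: collects restaurant detail-page links.
    # Yielding dicts with a 'url' key makes '-o restaurantsLink.csv'
    # write a csv with a 'url' header row, which the detail spider skips.
    name = "RestaurantMainSpider"
    start_urls = ['http://example.com/restaurants']  # placeholder

    def parse(self, response):
        for href in response.css('a.restaurant::attr(href)').extract():
            yield {'url': response.urljoin(href)}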

So you can run the following command: it removes and recreates restaurantsLink.csv, which the spider above consumes, and the file gets overwritten every time you run the spider:

rm restaurantsLink.csv && scrapy crawl RestaurantMainSpider -o restaurantsLink.csv -t csv
answered 2018-02-21T15:45:33.193