
I have a very large website with many URLs I want to crawl. Is there a way to tell Scrapy to ignore a list of URLs?

Right now I store all the URLs in a DB column. I'd like to be able to restart the spider and pass that long list (24k rows) to Scrapy so it knows to skip the URLs it has already seen.

Is there a way to do this?

from scrapy import Spider
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor()


class MySpider(Spider):
    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
        'DOWNLOAD_DELAY': 1.5,
        'DEPTH_LIMIT': 0,
        'JOBDIR': 'jobs/scrapy_1'
    }

    name = None
    allowed_domains = []
    start_urls = []

    def parse(self, response):
        for link in le.extract_links(response):
            yield response.follow(link.url, self.parse)
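One way to make Scrapy itself skip a list of URLs, rather than filtering inside parse, is a custom dupe filter seeded from that list and enabled through the DUPEFILTER_CLASS setting. A minimal sketch, assuming pymysql; the connection settings, table name, and the load_scraped_urls helper are placeholders, not from the question:

import pymysql
from scrapy.dupefilters import RFPDupeFilter


def load_scraped_urls():
    # Hypothetical helper: pull the already-scraped URLs out of MySQL.
    # Connection settings and table name are placeholders.
    conn = pymysql.connect(host="localhost", user="user",
                           password="password", db="yourdb")
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT url FROM your_table")
            return {row[0] for row in cursor.fetchall()}
    finally:
        conn.close()


class SkipListDupeFilter(RFPDupeFilter):
    """Normal fingerprint de-duplication, plus a skip list loaded from the DB."""

    skip_urls = None

    def request_seen(self, request):
        if SkipListDupeFilter.skip_urls is None:
            # Loaded lazily on first use so importing the module stays cheap.
            SkipListDupeFilter.skip_urls = load_scraped_urls()
        if request.url in SkipListDupeFilter.skip_urls:
            return True
        return super().request_seen(request)

It would be enabled with DUPEFILTER_CLASS = 'myproject.dupefilters.SkipListDupeFilter' in settings.py or custom_settings. Note that requests generated from start_urls carry dont_filter=True, so only followed links pass through the filter.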

2 Answers


You have to store the scraped URLs somewhere. I usually do this in MySQL, and then when I restart the scraper I ignore them like this:

import scrapy
from scrapy import Request


class YourSpider(scrapy.Spider):

    def parse(self, response):
        # `cursor` is a MySQL cursor (a DictCursor, so rows come back as dicts)
        # and `le` is a LinkExtractor, both set up elsewhere in the spider.
        cursor.execute("SELECT url FROM table")

        already_scraped = tuple(a['url'] for a in cursor.fetchall())

        for link in le.extract_links(response):
            if link.url not in already_scraped:
                yield Request(...)
            else:
                self.logger.error("%s is already scraped" % link.url)
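Running that SELECT inside every parse() call re-fetches the whole 24k-row list for each response. A minimal sketch of loading it once when the spider is created instead (pymysql, the connection settings, and the table name are placeholders, not from the answer):

import pymysql
import scrapy
from scrapy.linkextractors import LinkExtractor


class YourSpider(scrapy.Spider):
    name = "your_spider"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.le = LinkExtractor()
        # Load the already-scraped URLs once, instead of re-querying
        # MySQL for every response.
        conn = pymysql.connect(host="localhost", user="user",
                               password="password", db="yourdb")
        try:
            with conn.cursor() as cursor:
                cursor.execute("SELECT url FROM your_table")
                self.already_scraped = {row[0] for row in cursor.fetchall()}
        finally:
            conn.close()

    def parse(self, response):
        for link in self.le.extract_links(response):
            if link.url in self.already_scraped:
                self.logger.info("%s is already scraped", link.url)
                continue
            yield response.follow(link.url, self.parse)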
answered 2019-02-21T16:14:06.633

Check the information in the database:

import scrapy
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor()


class YourSpider(scrapy.Spider):

    def check_duplicate_post_links(self, links):
        # self.cursor is a MySQL cursor set up elsewhere in the spider.
        new_links = []
        for link in links:
            sql = 'SELECT id FROM your_table WHERE url = %s'
            self.cursor.execute(sql, (link.url,))
            duplicate_db = self.cursor.fetchall()

            if duplicate_db:
                self.logger.error("error url duplicated: {}".format(link.url))
            else:
                new_links.append(link)

        return new_links

    def parse(self, response):
        links = le.extract_links(response)
        new_links = self.check_duplicate_post_links(links)

        if len(new_links) > 0:
            for link in new_links:
                # Add your information (YourScrapyItem is assumed to be
                # defined in the project's items.py).
                item = YourScrapyItem()
                item['url'] = link.url

                yield item
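For this check to keep working across restarts, the URLs that do get scraped also have to be written back to that table. A minimal sketch of an item pipeline doing the insert (pymysql, the connection settings, and the table name are placeholders); it would be enabled through the ITEM_PIPELINES setting:

import pymysql


class StoreScrapedUrlPipeline:
    """Inserts each scraped URL into MySQL so later runs can skip it."""

    def open_spider(self, spider):
        # Placeholder connection settings.
        self.conn = pymysql.connect(host="localhost", user="user",
                                    password="password", db="yourdb")

    def close_spider(self, spider):
        self.conn.close()

    def process_item(self, item, spider):
        with self.conn.cursor() as cursor:
            cursor.execute("INSERT INTO your_table (url) VALUES (%s)",
                           (item['url'],))
        self.conn.commit()
        return item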
answered 2019-02-25T21:24:02.370