
I'm trying to recursively parse all the categories and their nested subcategories from this webpage, ultimately arriving at the innermost pages from which I want to grab all the product titles.

The script can follow the steps described above. However, when it collects the titles from the result pages by traversing every next page, it ends up with fewer items than are actually available.

This is what I've written:

import scrapy
from urllib.parse import urljoin


class mySpider(scrapy.Spider):
    name = "myspider"

    start_urls = ['https://www.phoenixcontact.com/online/portal/gb?1dmy&urile=wcm%3apath%3a/gben/web/main/products/subcategory_pages/Cables_P-10/e3a9792d-bafa-4e89-8e3f-8b1a45bd2682']
    headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"}

    def parse(self, response):
        # Grab the session cookie from the response headers and reuse it downstream.
        cookie = response.headers.getlist('Set-Cookie')[1].decode().split(";")[0]
        for item in response.xpath("//div[./h3[contains(.,'Category')]]/ul/li/a/@href").getall():
            item_link = response.urljoin(item.strip())
            if "/products/list_pages/" in item_link:
                # Product list page: scrape the product links from it.
                yield scrapy.Request(item_link, headers=self.headers, meta={'cookiejar': cookie}, callback=self.parse_all_links)
            else:
                # Another (sub)category page: recurse into it.
                yield scrapy.Request(item_link, headers=self.headers, meta={'cookiejar': cookie}, callback=self.parse)


    def parse_all_links(self, response):
        # Collect the individual product page links from the current list page.
        for item in response.css("[class='pxc-sales-data-wrp'][data-product-key] h3 > a[href][onclick]::attr(href)").getall():
            target_link = response.urljoin(item.strip())
            yield scrapy.Request(target_link, headers=self.headers, meta={'cookiejar': response.meta['cookiejar']}, callback=self.parse_main_content)

        # Follow the pagination link, resolving it against the page's <base> href.
        next_page = response.css("a.pxc-pager-next::attr(href)").get()
        if next_page:
            base_url = response.css("base::attr(href)").get()
            next_page_link = urljoin(base_url, next_page)
            yield scrapy.Request(next_page_link, headers=self.headers, meta={'cookiejar': response.meta['cookiejar']}, callback=self.parse_all_links)


    def parse_main_content(self, response):
        # The product title is the page's <h1>.
        item = response.css("h1::text").get()
        print(item)

How can I get all of the available titles within that category?

The script gets a different number of results every time I run it.


1 Answer


Your main problem is that you need to use a separate cookiejar for each "/products/list_pages/" category in order to fetch the next pages correctly. I used a class variable cookie as the jar id for this (see my code) and got the same result (4293 items) across several runs.
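For reference, this is the Scrapy mechanism the fix relies on: the built-in CookiesMiddleware keeps one cookie session per distinct value of the cookiejar request meta key, and that value must be re-passed on every follow-up request, otherwise the request falls back to the default jar. A minimal sketch of the pattern (the spider name, URLs, and the a.next selector are made up for illustration):

import scrapy

class SessionDemoSpider(scrapy.Spider):
    # Hypothetical spider, only to illustrate the 'cookiejar' meta key.
    name = "session_demo"
    start_urls = ["https://example.com/list/1", "https://example.com/list/2"]

    def start_requests(self):
        # Each distinct 'cookiejar' value creates an isolated cookie session.
        for i, url in enumerate(self.start_urls):
            yield scrapy.Request(url, meta={'cookiejar': i}, callback=self.parse)

    def parse(self, response):
        next_url = response.css("a.next::attr(href)").get()
        if next_url:
            # Re-pass the same jar id so pagination stays in its own session.
            yield response.follow(next_url,
                                  meta={'cookiejar': response.meta['cookiejar']},
                                  callback=self.parse)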

Here is my code (I don't download the individual product pages; I just read the product titles from the product lists):

import scrapy


class mySpider(scrapy.Spider):
    name = "phoenixcontact"

    start_urls = ['https://www.phoenixcontact.com/online/portal/gb?1dmy&urile=wcm%3apath%3a/gben/web/main/products/subcategory_pages/Cables_P-10/e3a9792d-bafa-4e89-8e3f-8b1a45bd2682']
    headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"}
    cookie = 1  # next unused cookiejar id

    def parse(self, response):
        # cookie = response.headers.getlist('Set-Cookie')[1].decode().split(";")[0]
        for item in response.xpath("//div[./h3[contains(.,'Category')]]/ul/li/a/@href").getall():
            item_link = response.urljoin(item.strip())
            if "/products/list_pages/" in item_link:
                # Give every product list its own cookiejar id so that
                # pagination in one list doesn't clobber another list's session.
                cookie = self.cookie
                self.cookie += 1
                yield scrapy.Request(item_link, headers=self.headers, meta={'cookiejar': cookie}, callback=self.parse_all_links, cb_kwargs={'page_number': 1})
            else:
                yield scrapy.Request(item_link, headers=self.headers, callback=self.parse)


    def parse_all_links(self, response, page_number):
        # for item in response.css("[class='pxc-sales-data-wrp'][data-product-key] h3 > a[href][onclick]::attr(href)").getall():
        for item in response.xpath('//div[@data-product-key]//h3//a'):
            target_link = response.urljoin(item.xpath('./@href').get())
            item_title = item.xpath('./text()').get()
            yield {'title': item_title}
            # yield scrapy.Request(target_link,headers=self.headers,meta={'cookiejar': response.meta['cookiejar']},callback=self.parse_main_content)

        next_page = response.css("a.pxc-pager-next::attr(href)").get()
        if next_page:
            next_page_link = response.urljoin(next_page)
            # Keep using the same cookiejar while paginating through this list.
            yield scrapy.Request(next_page_link, headers=self.headers, meta={'cookiejar': response.meta['cookiejar']}, callback=self.parse_all_links, cb_kwargs={'page_number': page_number + 1})
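One way to run this spider and export the collected titles, assuming the class above is importable as mySpider (the output file name is a placeholder) and a Scrapy version that supports the FEEDS setting (2.1+):

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    # Write every yielded {'title': ...} item to a JSON file.
    "FEEDS": {"titles.json": {"format": "json"}},
})
process.crawl(mySpider)
process.start()  # blocks until the crawl finishes

When the crawl ends, the item_scraped_count entry in Scrapy's stats log is a quick way to verify that you get the same total (4293) on every run.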